author    brkirch <brkirch@users.noreply.github.com>  2023-07-25 03:03:06 -0400
committer brkirch <brkirch@users.noreply.github.com>  2023-08-13 10:06:25 -0400
commit    2489252099c299bed49a9d4a39a4ead73b6b6f10
tree      98950f6d50e0a93d08dcb7134db93c49777c460f
parent    87dd685224b5f7dbbd832fc73cc08e7e470c9f28
`torch.empty` can create issues; use `torch.zeros`
On MPS, passing a tensor created with `torch.empty()` to `torch.baddbmm()` can cause NaNs in the returned tensor, even though `beta=0` means the values of that tensor should be ignored entirely. With a tensor of shape [1,1,1] there is a negligible performance difference between `torch.empty()` and `torch.zeros()` anyway, so it's better to just use `torch.zeros()` here and avoid creating issues unnecessarily.
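
To make the failure mode concrete, here is a minimal sketch (the shapes and variable names are illustrative assumptions, not the repo's actual tensors): with `beta=0`, `torch.baddbmm()` computes `alpha * (query @ key^T)` and should ignore the values of its first argument, which is exactly why an uninitialized `torch.empty()` buffer leaking NaNs on MPS is surprising.

    import torch

    # Illustrative shapes only (assumption): batch 2, 4 tokens, head dim 8.
    query = torch.randn(2, 4, 8)
    key = torch.randn(2, 4, 8)
    scale = 8 ** -0.5

    # With beta=0 the result is alpha * (query @ key^T); the [1,1,1] tensor
    # only supplies device and dtype. On MPS, a torch.empty() buffer here
    # has been observed to leak NaNs into the output, so torch.zeros() is
    # the safe choice.
    attn_weights = torch.baddbmm(
        torch.zeros(1, 1, 1, device=query.device, dtype=query.dtype),
        query,
        key.transpose(1, 2),
        alpha=scale,
        beta=0,
    )
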
Diffstat (limited to 'modules/sub_quadratic_attention.py')
-rw-r--r--  modules/sub_quadratic_attention.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/modules/sub_quadratic_attention.py b/modules/sub_quadratic_attention.py
index 497568eb..ae4ee4bb 100644
--- a/modules/sub_quadratic_attention.py
+++ b/modules/sub_quadratic_attention.py
@@ -58,7 +58,7 @@ def _summarize_chunk(
scale: float,
) -> AttnChunk:
attn_weights = torch.baddbmm(
- torch.empty(1, 1, 1, device=query.device, dtype=query.dtype),
+ torch.zeros(1, 1, 1, device=query.device, dtype=query.dtype),
query,
key.transpose(1,2),
alpha=scale,
@@ -121,7 +121,7 @@ def _get_attention_scores_no_kv_chunking(
scale: float,
) -> Tensor:
attn_scores = torch.baddbmm(
- torch.empty(1, 1, 1, device=query.device, dtype=query.dtype),
+ torch.zeros(1, 1, 1, device=query.device, dtype=query.dtype),
query,
key.transpose(1,2),
alpha=scale,
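
As a sanity check outside the patch (a sketch, not part of the commit), the `torch.zeros()` form with `beta=0` should be numerically identical to a plain scaled batched matmul:

    import torch

    query = torch.randn(2, 4, 8)
    key = torch.randn(2, 4, 8)
    scale = 8 ** -0.5

    # baddbmm with beta=0 reduces to alpha * bmm(query, key^T).
    via_baddbmm = torch.baddbmm(
        torch.zeros(1, 1, 1, device=query.device, dtype=query.dtype),
        query,
        key.transpose(1, 2),
        alpha=scale,
        beta=0,
    )
    via_bmm = scale * torch.bmm(query, key.transpose(1, 2))
    assert torch.allclose(via_baddbmm, via_bmm)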