Don't use torch.backends.cuda.matmul.allow_tf32 in inductor cache key (#159480)
Summary: Per https://github.com/pytorch/pytorch/pull/158209, torch.backends.cuda.matmul.allow_tf32 is deprecated; the inductor cache key should read torch.backends.cuda.matmul.fp32_precision instead.

Fixes https://github.com/pytorch/pytorch/issues/159440

Test Plan: CI

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159480
Approved by: https://github.com/xmfan, https://github.com/oulgen
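For context, a minimal sketch of the API migration this commit tracks. The string values below ("ieee", "tf32") are assumptions based on the new fp32_precision control, not something stated in this commit:

    import torch

    # Deprecated boolean toggle (see https://github.com/pytorch/pytorch/pull/158209):
    # torch.backends.cuda.matmul.allow_tf32 = True

    # Newer string-valued control; assumed values include "ieee" (strict FP32)
    # and "tf32" (allow TF32 tensor cores for FP32 matmuls).
    torch.backends.cuda.matmul.fp32_precision = "tf32"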
Committed by: PyTorch MergeBot
Parent: 25343b343e
Commit: af39144a93
@@ -818,7 +818,7 @@ class FxGraphHashDetails:
         # Global settings affecting matmul codegen.
         self.cuda_matmul_settings = (
-            torch.backends.cuda.matmul.allow_tf32,
+            torch.backends.cuda.matmul.fp32_precision,
             torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction,
             torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction,
         )
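Why this matters for caching: compiled FX graphs are looked up by a hash of these details, so any matmul precision setting that changes codegen must participate in the key. A minimal sketch of the idea; matmul_settings_key is illustrative, not inductor's real API:

    import hashlib
    import torch

    def matmul_settings_key() -> str:
        # Mirrors the tuple hashed in FxGraphHashDetails above: if any of
        # these settings change, the key changes, and artifacts compiled
        # under the old settings are not reused.
        settings = (
            torch.backends.cuda.matmul.fp32_precision,
            torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction,
            torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction,
        )
        return hashlib.sha256(repr(settings).encode()).hexdigest()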