Don't use torch.backends.cuda.matmul.allow_tf32 in inductor cache key (#159480)

Summary: Per https://github.com/pytorch/pytorch/pull/158209, torch.backends.cuda.matmul.allow_tf32 is deprecated; the replacement is torch.backends.cuda.matmul.fp32_precision, so the inductor cache key should use the new setting.

Fixes https://github.com/pytorch/pytorch/issues/159440

Test Plan: CI

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159480
Approved by: https://github.com/xmfan, https://github.com/oulgen
Author: Sam Larsen
Date: 2025-07-30 08:33:05 -07:00
Committed by: PyTorch MergeBot
parent 25343b343e
commit af39144a93


@@ -818,7 +818,7 @@ class FxGraphHashDetails:
         # Global settings affecting matmul codegen.
         self.cuda_matmul_settings = (
-            torch.backends.cuda.matmul.allow_tf32,
+            torch.backends.cuda.matmul.fp32_precision,
             torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction,
             torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction,
         )
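The reason these globals belong in the hash is that two compilations with different matmul precision settings must not share a cache entry. A minimal sketch of that idea, using a hypothetical `cache_key` helper rather than the actual inductor hashing code (the real `FxGraphHashDetails` pickles and hashes far more state):

```python
import hashlib

def cache_key(settings: tuple) -> str:
    # Hypothetical stand-in for inductor's cache-key hashing: any change in
    # the settings tuple must produce a different key.
    return hashlib.sha256(repr(settings).encode()).hexdigest()

# Illustrative values only. fp32_precision is a string-valued setting
# (e.g. "ieee" vs. "tf32"), unlike the deprecated boolean allow_tf32.
key_ieee = cache_key(("ieee", False, False))
key_tf32 = cache_key(("tf32", False, False))

assert key_ieee != key_tf32  # different precision => different cache entry
```

Because the key now reads `fp32_precision` instead of the deprecated `allow_tf32` flag, the deprecation warning from #158209 is no longer triggered on every cache-key computation.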