[Inductor] Support fallback for all gemm like ops (#165755)

Summary: Fill the op_overload field for the bmm aten ExternKernelChoice so it can be converted properly in the wrapper_fxir backend.
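For context, a minimal sketch of the pattern this change applies, assuming the same op_overload keyword shown in the diff below is available on ExternKernelChoice and that the analogous gemm-like choices (e.g. mm) follow suit; the names here are illustrative, not lifted from this PR:

    # Sketch only: attach the matching aten OpOverload to an extern choice so a
    # wrapper backend emitting FX IR can resolve the fallback to a callable op.
    import torch
    from torch._inductor.select_algorithm import ExternKernelChoice

    aten = torch.ops.aten

    # Without op_overload, the FX IR wrapper has no OpOverload to call for the
    # extern fallback; supplying it lets the choice be converted like bmm below.
    aten_mm = ExternKernelChoice(torch.mm, "at::mm_out", op_overload=aten.mm.out)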

Reviewed By: StellarrZ

Differential Revision: D84840948

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165755
Approved by: https://github.com/blaine-rister
Author: Nan Zhang
Date: 2025-10-17 21:08:29 +00:00
Committed by: PyTorch MergeBot
Parent: ab65498d71
Commit: 8cb2fb44f2


@@ -119,7 +119,7 @@ bmm_template = TritonTemplate(
     cache_codegen_enabled_for_template=True,
 )
-aten_bmm = ExternKernelChoice(torch.bmm, "at::bmm_out")
+aten_bmm = ExternKernelChoice(torch.bmm, "at::bmm_out", op_overload=aten.bmm.out)
 aten_bmm_dtype = ExternKernelChoice(
     torch.bmm,
     "at::_bmm_out_dtype_cuda",