Mirror of https://github.com/pytorch/pytorch.git (synced 2025-10-20 12:54:11 +08:00)
Don't use C++ CIA decomps if there's a Python one (#164970)
Some more context at https://github.com/pytorch/pytorch/pull/164939

The basic point here is that Python decomps are guaranteed to be functional, whereas C++ ones are not. If we have a Python decomp, we should prefer it over the C++ one. This currently doesn't matter too much, as CIA decomps will get functionalized, but it matters after the quoted PR, because we now run these decompositions very late (to make it easy for things like aot_eager to get the fused versions of operators in proxy tensor).

Signed-off-by: Edward Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164970
Approved by: https://github.com/bdhirsh
commit 8b2137e74a
parent a70ef954b9
committed by PyTorch MergeBot
@@ -53,7 +53,7 @@ class CustomDecompTable(dict[torch._ops.OperatorBase, Callable]):
         self.decomp_table = _core_aten_decompositions_post_autograd()
 
         for op in _collect_all_valid_cia_ops_for_aten_namespace():
-            if op not in PRESERVED_ATEN_CIA_OPS:
+            if op not in PRESERVED_ATEN_CIA_OPS and op not in self.decomp_table:
                 self.decomp_table[op] = _get_decomp_for_cia(op)
 
         # This is to track the *pending* deleted custom ops that haven't been materialized yet
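To make the effect of the new guard concrete, below is a minimal, self-contained sketch of the selection rule in plain Python. The op names and the two decomp dicts are illustrative stand-ins, not PyTorch's real registries; only the added "op not in decomp_table" condition mirrors the patch.

    # Decomps authored in Python: guaranteed to be functional
    # (they never mutate their inputs). Hypothetical entries.
    python_decomps = {
        "aten::upsample_nearest2d": lambda x: "functional Python decomp",
    }

    # CIA (CompositeImplicitAutograd) decomps backed by C++ kernels:
    # not guaranteed to be functional. Hypothetical entries.
    cpp_cia_decomps = {
        "aten::upsample_nearest2d": lambda x: "C++ CIA decomp",
        "aten::linalg_matrix_norm": lambda x: "C++ CIA decomp",
    }

    preserved_ops = set()  # stand-in for PRESERVED_ATEN_CIA_OPS

    # Python decomps seed the table first, as in the patched __init__.
    decomp_table = dict(python_decomps)

    for op, decomp in cpp_cia_decomps.items():
        # Register the C++ CIA decomp only if the op is not preserved AND
        # no Python decomp already claimed it -- the newly added condition.
        if op not in preserved_ops and op not in decomp_table:
            decomp_table[op] = decomp

    # The Python decomp wins where both exist; C++ only fills the gaps.
    assert decomp_table["aten::upsample_nearest2d"](None) == "functional Python decomp"
    assert decomp_table["aten::linalg_matrix_norm"](None) == "C++ CIA decomp"

Without the added "and op not in self.decomp_table" condition, the loop would overwrite the functional Python entry with the possibly non-functional C++ one, which becomes observable once these decompositions run too late to be functionalized.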