[MTIA Aten Backend][2/n] Migrate clamp ops (clamp.out / clamp_min.out / clamp_max.out) from out-of-tree to in-tree (#154015)
Summary:

# Context

See the first PR: https://github.com/pytorch/pytorch/pull/153670

# This PR

1. Migrates three clamp ops from out-of-tree to in-tree. The three had to move together because clamp.out calls all three dispatch stubs, and those same stubs are also called by the other two ops (a toy model of this shared-stub constraint is sketched below the commit metadata):
   - clamp.out
   - clamp_min.out
   - clamp_max.out
2. Enables structured kernel codegen for MTIA, which clamp needs.
3. Introduces the `--mtia` flag to torchgen so that OSS builds do not codegen MTIA code. (Otherwise we got link errors such as `lib/libtorch_cpu.so: undefined reference to at::detail::empty_mtia`.)

Differential Revision: D74674418

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154015
Approved by: https://github.com/albanD, https://github.com/nautsimon
committed by PyTorch MergeBot
parent bcb2125f0a
commit 0d62fd5c3c
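Why the three ops move as a unit: clamp.out is implemented on top of the same per-device dispatch stubs that back clamp_min.out and clamp_max.out, so registering a new backend for one of them without the others leaves dispatch incomplete. Below is a minimal, self-contained Python sketch of that stub pattern; the names (DispatchStub, clamp_out, "mtia") are toy stand-ins for ATen's C++ machinery, not the real implementation.

    from typing import Callable, Dict

    class DispatchStub:
        """Toy stand-in for ATen's per-device kernel table of the same name."""
        def __init__(self, name: str) -> None:
            self.name = name
            self.kernels: Dict[str, Callable] = {}

        def register(self, device: str, fn: Callable) -> None:
            self.kernels[device] = fn

        def __call__(self, device: str, *args):
            if device not in self.kernels:
                raise RuntimeError(f"{self.name}: no kernel registered for {device!r}")
            return self.kernels[device](*args)

    clamp_min_stub = DispatchStub("clamp_min_stub")
    clamp_max_stub = DispatchStub("clamp_max_stub")

    def clamp_out(device: str, xs, lo, hi):
        # clamp is built from the min/max stubs, so a backend that registers
        # only one of them breaks clamp at dispatch time.
        return clamp_max_stub(device, clamp_min_stub(device, xs, lo), hi)

    # Register both stubs for the hypothetical "mtia" device, then dispatch.
    clamp_min_stub.register("mtia", lambda xs, lo: [max(x, lo) for x in xs])
    clamp_max_stub.register("mtia", lambda xs, hi: [min(x, hi) for x in xs])
    print(clamp_out("mtia", [-2.0, 0.5, 3.0], 0.0, 1.0))  # -> [0.0, 0.5, 1.0]

Dropping either register call reproduces the shape of the problem: clamp_out raises at dispatch even though its own code is fine, which is why migrating clamp.out in-tree forced clamp_min.out and clamp_max.out along with it.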
@@ -72,7 +72,7 @@ def define_targets(rules):
             "--install_dir=$(RULEDIR)",
             "--source-path aten/src/ATen",
             "--aoti_install_dir=$(RULEDIR)/torch/csrc/inductor/aoti_torch/generated"
-        ] + (["--static_dispatch_backend CPU"] if rules.is_cpu_static_dispatch_build() else []))
+        ] + (["--static_dispatch_backend CPU"] if rules.is_cpu_static_dispatch_build() else []) + ["--mtia"])
 
     gen_aten_outs_cuda = (
         GENERATED_H_CUDA + GENERATED_CPP_CUDA + GENERATED_AOTI_CUDA_CPP +
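On the new torchgen flag itself: the commit message only says `--mtia` gates MTIA code generation so that a build which never links an MTIA runtime does not end up referencing symbols like `at::detail::empty_mtia`. The sketch below shows one plausible wiring, assuming an argparse-style CLI; the flag name matches the PR, but the surrounding generator code is hypothetical and is not torchgen's actual implementation.

    import argparse

    def main() -> None:
        parser = argparse.ArgumentParser(description="toy ATen-style code generator")
        # Mirrors the --mtia option this PR adds to torchgen; everything else
        # here is a stand-in for the real generator.
        parser.add_argument("--mtia", action="store_true",
                            help="also emit MTIA backend registration code")
        args = parser.parse_args()

        backends = ["CPU", "CUDA"]
        if args.mtia:  # MTIA codegen is opt-in under this assumption
            backends.append("MTIA")

        for backend in backends:
            # Stand-in for writing out Register<Backend>.cpp and related sources.
            print(f"would generate Register{backend}.cpp")

    if __name__ == "__main__":
        main()

Invoked without the flag, the toy generator emits only CPU and CUDA sources; with --mtia it also emits the MTIA ones, matching the kind of argument-list toggle the Bazel change above threads through to the generator.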