[AOTI] Update AOTInductor tutorial (#163808)

Summary: Remove the BC-breaking warning. Add inductor_configs to the example code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163808
Approved by: https://github.com/yushangdi
Author: Bin Bao
Date: 2025-09-24 15:37:23 -07:00
Committed by: PyTorch MergeBot
parent f1260c9b9a
commit 48a852b7ae


@@ -2,11 +2,6 @@
 # AOTInductor: Ahead-Of-Time Compilation for Torch.Export-ed Models
-```{warning}
-AOTInductor and its related features are in prototype status and are
-subject to backwards compatibility breaking changes.
-```
 AOTInductor is a specialized version of
 [TorchInductor](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747),
 designed to process exported PyTorch models, optimize them, and produce shared libraries as well
@@ -73,6 +68,10 @@ with torch.no_grad():
         # [Optional] Specify the generated shared library path. If not specified,
         # the generated artifact is stored in your system temp directory.
         package_path=os.path.join(os.getcwd(), "model.pt2"),
+        # [Optional] Specify Inductor configs.
+        # The max_autotune option turns on more extensive kernel autotuning for
+        # better performance.
+        inductor_configs={"max_autotune": True},
     )
 ```
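For reference, a minimal end-to-end sketch of how the updated example fits together, assuming the `torch._inductor.aoti_compile_and_package` and `torch._inductor.aoti_load_package` APIs behave as in current PyTorch releases; the `Model` class, tensor shapes, and variable names below are illustrative placeholders rather than part of the tutorial diff:

```python
import os

import torch


class Model(torch.nn.Module):
    # Toy stand-in for the tutorial's example model (hypothetical).
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(10, 10)

    def forward(self, x):
        return self.linear(x)


model = Model().eval()
example_inputs = (torch.randn(8, 10),)

with torch.no_grad():
    # AOTInductor consumes a torch.export-ed program.
    ep = torch.export.export(model, example_inputs)
    package_path = torch._inductor.aoti_compile_and_package(
        ep,
        # [Optional] Where to write the compiled .pt2 package;
        # defaults to the system temp directory.
        package_path=os.path.join(os.getcwd(), "model.pt2"),
        # [Optional] Inductor configs; max_autotune enables more extensive
        # kernel autotuning at the cost of longer compile time.
        inductor_configs={"max_autotune": True},
    )

# Load the compiled package and run it like a regular callable.
compiled = torch._inductor.aoti_load_package(package_path)
print(compiled(*example_inputs).shape)
```

The `inductor_configs` dict generally accepts the same options exposed under `torch._inductor.config`; `max_autotune` trades longer compile time for faster generated kernels.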