Converting .rst files to .md files (#155377)

Fixes #155036
This pull request updates the documentation for several modules by transitioning from .rst to .md format, improving readability and usability. It introduces new Markdown files for the documentation of torch.ao.ns._numeric_suite, torch.ao.ns._numeric_suite_fx, AOTInductor, the AOTInductor Minifier, and the torch.compiler API reference.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155377
Approved by: https://github.com/svekars

Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
dggaytan
2025-06-13 22:54:27 +00:00
committed by PyTorch MergeBot
parent 799443605b
commit 3003c681ef
9 changed files with 489 additions and 487 deletions

View File

@@ -0,0 +1,16 @@
(torch_ao_ns_numeric_suite)=
# torch.ao.ns._numeric_suite
```{warning}
This module is an early prototype and is subject to change.
```
```{eval-rst}
.. currentmodule:: torch.ao.ns._numeric_suite
```
```{eval-rst}
.. automodule:: torch.ao.ns._numeric_suite
   :members:
   :member-order: bysource
```
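
In practice, the module is used through its comparison helpers. Below is a minimal, hypothetical sketch: it assumes the documented `compare_weights` API and two pre-existing models, `float_model` and `qmodel` (the quantized counterpart of `float_model`), which are placeholders here.

```python
import torch.ao.ns._numeric_suite as ns

# `float_model` and `qmodel` are assumed to already exist: the original float
# module and its quantized counterpart.
wt_compare_dict = ns.compare_weights(float_model.state_dict(), qmodel.state_dict())

# Each entry maps a layer name to its float and quantized weight tensors.
for key, value in wt_compare_dict.items():
    print(key, value["float"].shape, value["quantized"].shape)
```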

View File

@@ -1,13 +0,0 @@
.. _torch_ao_ns_numeric_suite:
torch.ao.ns._numeric_suite
--------------------------
.. warning ::
This module is an early prototype and is subject to change.
.. currentmodule:: torch.ao.ns._numeric_suite
.. automodule:: torch.ao.ns._numeric_suite
:members:
:member-order: bysource

View File

@@ -0,0 +1,39 @@
(torch_ao_ns_numeric_suite_fx)=
# torch.ao.ns._numeric_suite_fx
```{warning}
This module is an early prototype and is subject to change.
```
```{eval-rst}
.. automodule:: torch.ao.ns._numeric_suite_fx
   :members:
   :member-order: bysource
```
---
# torch.ao.ns.fx.utils
```{warning}
This module is an early prototype and is subject to change.
```
```{eval-rst}
.. currentmodule:: torch.ao.ns.fx.utils
```
```{eval-rst}
.. function:: compute_sqnr(x, y)
```
```{eval-rst}
.. function:: compute_normalized_l2_error(x, y)
```
```{eval-rst}
.. function:: compute_cosine_similarity(x, y)
```
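
As a quick illustration of these utilities, here is a small sketch (assuming the three functions above accept plain tensors, as their signatures suggest) that compares a tensor with a slightly perturbed copy of itself:

```python
import torch
from torch.ao.ns.fx.utils import (
    compute_cosine_similarity,
    compute_normalized_l2_error,
    compute_sqnr,
)

x = torch.randn(4, 8)
y = x + 0.01 * torch.randn(4, 8)  # a slightly noisy copy of x

print(compute_sqnr(x, y))                 # higher SQNR means y is closer to x
print(compute_normalized_l2_error(x, y))  # lower error means y is closer to x
print(compute_cosine_similarity(x, y))    # values near 1 mean the tensors are similar
```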

View File

@@ -1,26 +0,0 @@
.. _torch_ao_ns_numeric_suite_fx:
torch.ao.ns._numeric_suite_fx
-----------------------------
.. warning ::
This module is an early prototype and is subject to change.
.. currentmodule:: torch.ao.ns._numeric_suite_fx
.. automodule:: torch.ao.ns._numeric_suite_fx
:members:
:member-order: bysource
torch.ao.ns.fx.utils
--------------------------------------
.. warning ::
This module is an early prototype and is subject to change.
.. currentmodule:: torch.ao.ns.fx.utils
.. autofunction:: torch.ao.ns.fx.utils.compute_sqnr(x, y)
.. autofunction:: torch.ao.ns.fx.utils.compute_normalized_l2_error(x, y)
.. autofunction:: torch.ao.ns.fx.utils.compute_cosine_similarity(x, y)

View File

@@ -0,0 +1,212 @@
# AOTInductor: Ahead-Of-Time Compilation for Torch.Export-ed Models
```{warning}
AOTInductor and its related features are in prototype status and are
subject to backwards compatibility breaking changes.
```
AOTInductor is a specialized version of
[TorchInductor](https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747),
designed to process exported PyTorch models, optimize them, and produce shared libraries as well
as other relevant artifacts.
These compiled artifacts are specifically crafted for deployment in non-Python environments,
which are frequently employed for inference deployments on the server side.
In this tutorial, you will gain insight into the process of taking a PyTorch model, exporting it,
compiling it into an artifact, and conducting model predictions using C++.
## Model Compilation
To compile a model using AOTInductor, we first need to use
{func}`torch.export.export` to capture a given PyTorch model into a
computational graph. {ref}`torch.export <torch.export>` provides soundness
guarantees and a strict specification for the captured IR, which AOTInductor
relies on.
We will then use {func}`torch._inductor.aoti_compile_and_package` to compile the
exported program using TorchInductor, and save the compiled artifacts into one
package.
```{note}
If you have a CUDA-enabled device on your machine and you installed PyTorch with CUDA support,
the following code will compile the model into a shared library for CUDA execution.
Otherwise, the compiled artifact will run on the CPU. For better performance during CPU inference,
it is suggested to enable freezing by setting `export TORCHINDUCTOR_FREEZING=1`
before running the Python script below. The same workflow also applies to environments
with an Intel® GPU.
```
```python
import os
import torch


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 16)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(16, 1)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        x = self.sigmoid(x)
        return x


with torch.no_grad():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = Model().to(device=device)
    example_inputs = (torch.randn(8, 10, device=device),)
    batch_dim = torch.export.Dim("batch", min=1, max=1024)
    # [Optional] Specify the first dimension of the input x as dynamic.
    exported = torch.export.export(model, example_inputs, dynamic_shapes={"x": {0: batch_dim}})
    # [Note] In this example we directly feed the exported module to aoti_compile_and_package.
    # Depending on your use case, e.g. if your training platform and inference platform
    # are different, you may choose to save the exported model using torch.export.save and
    # then load it back using torch.export.load on your inference platform to run AOT compilation.
    output_path = torch._inductor.aoti_compile_and_package(
        exported,
        # [Optional] Specify the generated shared library path. If not specified,
        # the generated artifact is stored in your system temp directory.
        package_path=os.path.join(os.getcwd(), "model.pt2"),
    )
```
In this example, the `Dim` parameter marks the first dimension of the input variable `x` as dynamic.
If the path and name of the compiled artifact are left unspecified, the resulting shared library is
stored in a temporary directory; to access that path from the C++ side, you can save it to a file for
later retrieval within the C++ code. Here, the artifact is instead packaged as `model.pt2` in the
current working directory.
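
If you do rely on the temporary-directory default, a minimal sketch for recording the returned path (the file name `aoti_model_path.txt` is just an illustrative choice, not part of any API):

```python
# Record where the compiled artifact was written so another process, e.g. the C++
# example below, can pick it up later.
with open("aoti_model_path.txt", "w") as f:
    f.write(output_path)
```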
## Inference in Python
There are multiple ways to deploy the compiled artifact for inference; one of them is using Python.
We provide a convenient Python utility API, {func}`torch._inductor.aoti_load_package`, for loading
and running the artifact, as shown in the following example:
```python
import os
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch._inductor.aoti_load_package(os.path.join(os.getcwd(), "model.pt2"))
print(model(torch.randn(8, 10, device=device)))
```
The input at inference time should have the same size, dtype, and stride as the input at export time.
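
As a sanity check, here is a small sketch (not part of the original tutorial) that compares a candidate input against the export-time example before calling the loaded model. Only the batch dimension was exported as dynamic in this tutorial, so the remaining dimensions and the dtype must match exactly:

```python
import torch

def check_against_export_input(example: torch.Tensor, candidate: torch.Tensor) -> None:
    # The dtype must match the export-time input exactly.
    assert candidate.dtype == example.dtype, "dtype differs from the export-time input"
    # Dimension 0 was exported as dynamic, so only the remaining dimensions must match.
    assert candidate.shape[1:] == example.shape[1:], "non-dynamic dimensions differ"

example = torch.randn(8, 10)
check_against_export_input(example, torch.randn(64, 10))  # OK: only the batch dimension changed
```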
## Inference in C++
Next, we use the following example C++ file `inference.cpp` to load the compiled artifact,
enabling us to conduct model predictions directly within a C++ environment.
```cpp
#include <iostream>
#include <vector>
#include <torch/torch.h>
#include <torch/csrc/inductor/aoti_package/model_package_loader.h>
int main() {
    c10::InferenceMode mode;

    torch::inductor::AOTIModelPackageLoader loader("model.pt2");
    // Assume running on CUDA
    std::vector<torch::Tensor> inputs = {torch::randn({8, 10}, at::kCUDA)};
    std::vector<torch::Tensor> outputs = loader.run(inputs);
    std::cout << "Result from the first inference:" << std::endl;
    std::cout << outputs[0] << std::endl;

    // The second inference uses a different batch size and it works because we
    // specified that dimension as dynamic when compiling model.pt2.
    std::cout << "Result from the second inference:" << std::endl;
    // Assume running on CUDA
    std::cout << loader.run({torch::randn({1, 10}, at::kCUDA)})[0] << std::endl;

    return 0;
}
```
For building the C++ file, you can make use of the provided `CMakeLists.txt` file, which
automates the process of invoking `python model.py` for AOT compilation of the model and compiling
`inference.cpp` into an executable binary named `aoti_example`.
```cmake
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
project(aoti_example)
find_package(Torch REQUIRED)
add_executable(aoti_example inference.cpp model.pt2)
add_custom_command(
    OUTPUT model.pt2
    COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/model.py
    DEPENDS model.py
)
target_link_libraries(aoti_example "${TORCH_LIBRARIES}")
set_property(TARGET aoti_example PROPERTY CXX_STANDARD 17)
```
Assuming the directory structure resembles the following, you can execute the commands below
to build the binary. Note that the `CMAKE_PREFIX_PATH` variable is required for CMake to
locate the LibTorch library and must be set to an absolute path. Your path may differ from
the one shown in this example.
```
aoti_example/
    CMakeLists.txt
    inference.cpp
    model.py
```
```bash
$ mkdir build
$ cd build
$ CMAKE_PREFIX_PATH=/path/to/python/install/site-packages/torch/share/cmake cmake ..
$ cmake --build . --config Release
```
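
If you are unsure where the Torch CMake configuration lives in your Python environment, one way to find the value for `CMAKE_PREFIX_PATH` (assuming a standard pip or conda installation of PyTorch) is to ask PyTorch itself:

```python
import torch

# Prints a path such as /path/to/site-packages/torch/share/cmake,
# which can be passed to CMAKE_PREFIX_PATH.
print(torch.utils.cmake_prefix_path)
```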
After the `aoti_example` binary has been generated in the `build` directory, executing it will
display results akin to the following:
```bash
$ ./aoti_example
Result from the first inference:
0.4866
0.5184
0.4462
0.4611
0.4744
0.4811
0.4938
0.4193
[ CUDAFloatType{8,1} ]
Result from the second inference:
0.4883
0.4703
[ CUDAFloatType{2,1} ]
```
## Troubleshooting
Below are some useful tools for debugging AOT Inductor.
```{toctree}
:caption: Debugging Tools
:maxdepth: 1
logging
torch.compiler_aot_inductor_minifier
```
To enable runtime checks on inputs, set the environment variable `AOTI_RUNTIME_CHECK_INPUTS` to 1. This will raise a `RuntimeError` if the inputs to the compiled model differ in size, data type, or strides from those used during export.
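
For example, here is a minimal sketch (assuming the variable is read by the AOTInductor runtime in the same process, and that `model.pt2` was exported with float32 inputs as in the tutorial above) that triggers the check with a mismatched dtype:

```python
import os

# Must be set before the packaged model is loaded and run.
os.environ["AOTI_RUNTIME_CHECK_INPUTS"] = "1"

import torch

model = torch._inductor.aoti_load_package("model.pt2")
try:
    # float64 input, while the model was exported with float32 inputs.
    model(torch.randn(8, 10, dtype=torch.float64))
except RuntimeError as e:
    print("Runtime input check failed:", e)
```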
## API Reference
```{eval-rst}
.. autofunction:: torch._inductor.aoti_compile_and_package
.. autofunction:: torch._inductor.aoti_load_package
```

View File

@@ -1,221 +0,0 @@
AOTInductor: Ahead-Of-Time Compilation for Torch.Export-ed Models
=================================================================
.. warning::
AOTInductor and its related features are in prototype status and are
subject to backwards compatibility breaking changes.
AOTInductor is a specialized version of
`TorchInductor <https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747>`__
, designed to process exported PyTorch models, optimize them, and produce shared libraries as well
as other relevant artifacts.
These compiled artifacts are specifically crafted for deployment in non-Python environments,
which are frequently employed for inference deployments on the server side.
In this tutorial, you will gain insight into the process of taking a PyTorch model, exporting it,
compiling it into an artifact, and conducting model predictions using C++.
Model Compilation
---------------------------
To compile a model using AOTInductor, we first need to use
:func:`torch.export.export` to capture a given PyTorch model into a
computational graph. :ref:`torch.export <torch.export>` provides soundness
guarantees and a strict specification on the IR captured, which AOTInductor
relies on.
We will then use :func:`torch._inductor.aoti_compile_and_package` to compile the
exported program using TorchInductor, and save the compiled artifacts into one
package.
.. note::
If you have a CUDA-enabled device on your machine and you installed PyTorch with CUDA support,
the following code will compile the model into a shared library for CUDA execution.
Otherwise, the compiled artifact will run on CPU. For better performance during CPU inference,
it is suggested to enable freezing by setting ``export TORCHINDUCTOR_FREEZING=1``
before running the Python script below. The same behavior works in an environment with Intel®
GPU as well.
.. code-block:: python
import os
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.fc1 = torch.nn.Linear(10, 16)
self.relu = torch.nn.ReLU()
self.fc2 = torch.nn.Linear(16, 1)
self.sigmoid = torch.nn.Sigmoid()
def forward(self, x):
x = self.fc1(x)
x = self.relu(x)
x = self.fc2(x)
x = self.sigmoid(x)
return x
with torch.no_grad():
device = "cuda" if torch.cuda.is_available() else "cpu"
model = Model().to(device=device)
example_inputs=(torch.randn(8, 10, device=device),)
batch_dim = torch.export.Dim("batch", min=1, max=1024)
# [Optional] Specify the first dimension of the input x as dynamic.
exported = torch.export.export(model, example_inputs, dynamic_shapes={"x": {0: batch_dim}})
# [Note] In this example we directly feed the exported module to aoti_compile_and_package.
# Depending on your use case, e.g. if your training platform and inference platform
# are different, you may choose to save the exported model using torch.export.save and
# then load it back using torch.export.load on your inference platform to run AOT compilation.
output_path = torch._inductor.aoti_compile_and_package(
exported,
# [Optional] Specify the generated shared library path. If not specified,
# the generated artifact is stored in your system temp directory.
package_path=os.path.join(os.getcwd(), "model.pt2"),
)
In this illustrative example, the ``Dim`` parameter is employed to designate the first dimension of
the input variable "x" as dynamic. Notably, the path and name of the compiled library remain unspecified,
resulting in the shared library being stored in a temporary directory.
To access this path from the C++ side, we save it to a file for later retrieval within the C++ code.
Inference in Python
---------------------------
There are multiple ways to deploy the compiled artifact for inference, and one of that is using Python.
We have provided a convinient utility API in Python :func:`torch._inductor.aoti_load_package` for loading
and running the artifact, as shown in the following example:
.. code-block:: python
import os
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch._inductor.aoti_load_package(os.path.join(os.getcwd(), "model.pt2"))
print(model(torch.randn(8, 10, device=device)))
The input at inference time should have the same size, dtype, and stride as the input at export time.
Inference in C++
---------------------------
Next, we use the following example C++ file ``inference.cpp`` to load the compiled artifact,
enabling us to conduct model predictions directly within a C++ environment.
.. code-block:: cpp
#include <iostream>
#include <vector>
#include <torch/torch.h>
#include <torch/csrc/inductor/aoti_package/model_package_loader.h>
int main() {
c10::InferenceMode mode;
torch::inductor::AOTIModelPackageLoader loader("model.pt2");
// Assume running on CUDA
std::vector<torch::Tensor> inputs = {torch::randn({8, 10}, at::kCUDA)};
std::vector<torch::Tensor> outputs = loader.run(inputs);
std::cout << "Result from the first inference:"<< std::endl;
std::cout << outputs[0] << std::endl;
// The second inference uses a different batch size and it works because we
// specified that dimension as dynamic when compiling model.pt2.
std::cout << "Result from the second inference:"<< std::endl;
// Assume running on CUDA
std::cout << loader.run({torch::randn({1, 10}, at::kCUDA)})[0] << std::endl;
return 0;
}
For building the C++ file, you can make use of the provided ``CMakeLists.txt`` file, which
automates the process of invoking ``python model.py`` for AOT compilation of the model and compiling
``inference.cpp`` into an executable binary named ``aoti_example``.
.. code-block:: cmake
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
project(aoti_example)
find_package(Torch REQUIRED)
add_executable(aoti_example inference.cpp model.pt2)
add_custom_command(
OUTPUT model.pt2
COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/model.py
DEPENDS model.py
)
target_link_libraries(aoti_example "${TORCH_LIBRARIES}")
set_property(TARGET aoti_example PROPERTY CXX_STANDARD 17)
Provided the directory structure resembles the following, you can execute the subsequent commands
to construct the binary. It is essential to note that the ``CMAKE_PREFIX_PATH`` variable
is crucial for CMake to locate the LibTorch library, and it should be set to an absolute path.
Please be mindful that your path may vary from the one illustrated in this example.
.. code-block:: shell
aoti_example/
CMakeLists.txt
inference.cpp
model.py
.. code-block:: shell
$ mkdir build
$ cd build
$ CMAKE_PREFIX_PATH=/path/to/python/install/site-packages/torch/share/cmake cmake ..
$ cmake --build . --config Release
After the ``aoti_example`` binary has been generated in the ``build`` directory, executing it will
display results akin to the following:
.. code-block:: shell
$ ./aoti_example
Result from the first inference:
0.4866
0.5184
0.4462
0.4611
0.4744
0.4811
0.4938
0.4193
[ CUDAFloatType{8,1} ]
Result from the second inference:
0.4883
0.4703
[ CUDAFloatType{2,1} ]
Troubleshooting
---------------------------
Below are some useful tools for debugging AOT Inductor.
.. toctree::
:caption: Debugging Tools
:maxdepth: 1
logging
torch.compiler_aot_inductor_minifier
To enable runtime checks on inputs, set the environment variable `AOTI_RUNTIME_CHECK_INPUTS` to 1. This will raise a `RuntimeError` if the inputs to the compiled model differ in size, data type, or strides from those used during export.
API Reference
-------------
.. autofunction:: torch._inductor.aoti_compile_and_package
.. autofunction:: torch._inductor.aoti_load_package

View File

@@ -0,0 +1,215 @@
# AOTInductor Minifier
If you encounter an error while using AOTInductor APIs such as
`torch._inductor.aoti_compile_and_package`, `torch._inductor.aoti_load_package`,
or while running the model loaded by `aoti_load_package` on some inputs, you can use the AOTInductor Minifier
to create a minimal `nn.Module` that reproduces the error by setting `from torch._inductor import config; config.aot_inductor.dump_aoti_minifier = True`.
At a high level, there are two steps in using the minifier:
- Set `from torch._inductor import config; config.aot_inductor.dump_aoti_minifier = True` or set the environment variable `DUMP_AOTI_MINIFIER=1`. Running the failing script then produces a `minifier_launcher.py` script. The output directory is configurable by setting `torch._dynamo.config.debug_dir_root` to a valid directory name.
- Run the `minifier_launcher.py` script. If the minifier runs successfully, it generates runnable Python code in `repro.py` that reproduces the exact error.
## Example Code
Here is sample code that generates an error because we injected an error into `relu` with
`torch._inductor.config.triton.inject_relu_bug_TESTING_ONLY = "compile_error"`.
```python
import torch
from torch._inductor import config as inductor_config


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 16)
        self.relu = torch.nn.ReLU()
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.sigmoid(x)
        return x


inductor_config.aot_inductor.dump_aoti_minifier = True
torch._inductor.config.triton.inject_relu_bug_TESTING_ONLY = "compile_error"

with torch.no_grad():
    model = Model().to("cuda")
    example_inputs = (torch.randn(8, 10).to("cuda"),)
    ep = torch.export.export(model, example_inputs)
    package_path = torch._inductor.aoti_compile_and_package(ep)
    compiled_model = torch._inductor.aoti_load_package(package_path)
    result = compiled_model(*example_inputs)
```
The code above generates the following error:
```text
RuntimeError: Failed to import /tmp/torchinductor_shangdiy/fr/cfrlf4smkwe4lub4i4cahkrb3qiczhf7hliqqwpewbw3aplj5g3s.py
SyntaxError: invalid syntax (cfrlf4smkwe4lub4i4cahkrb3qiczhf7hliqqwpewbw3aplj5g3s.py, line 29)
```
This is because we injected an error into `relu`, so the generated Triton kernel looks like the one below. Note that it contains `compile error!`
instead of `relu`, which is why we get a `SyntaxError`.
```
@triton.jit
def triton_poi_fused_addmm_relu_sigmoid_0(in_out_ptr0, in_ptr0, xnumel, XBLOCK : tl.constexpr):
    xnumel = 128
    xoffset = tl.program_id(0) * XBLOCK
    xindex = xoffset + tl.arange(0, XBLOCK)[:]
    xmask = xindex < xnumel
    x2 = xindex
    x0 = xindex % 16
    tmp0 = tl.load(in_out_ptr0 + (x2), xmask)
    tmp1 = tl.load(in_ptr0 + (x0), xmask, eviction_policy='evict_last')
    tmp2 = tmp0 + tmp1
    tmp3 = compile error!
    tmp4 = tl.sigmoid(tmp3)
    tl.store(in_out_ptr0 + (x2), tmp4, xmask)
```
Since we have `torch._inductor.config.aot_inductor.dump_aoti_minifier=True`, we also see an additional line indicating where `minifier_launcher.py` has
been written to. The output directory is configurable by setting
`torch._dynamo.config.debug_dir_root` to a valid directory name.
```text
W1031 16:21:08.612000 2861654 pytorch/torch/_dynamo/debug_utils.py:279] Writing minified repro to:
W1031 16:21:08.612000 2861654 pytorch/torch/_dynamo/debug_utils.py:279] /data/users/shangdiy/pytorch/torch_compile_debug/run_2024_10_31_16_21_08_602433-pid_2861654/minifier/minifier_launcher.py
```
## Minifier Launcher
The `minifier_launcher.py` file contains the following code. The `exported_program` contains the inputs to `torch._inductor.aoti_compile_and_package`.
The `command='minify'` parameter means the script will run the minifier to create a minimal graph module that reproduces the error. Alternatively, you can
use `command='run'` to just compile, load, and run the loaded model (without running the minifier).
```python
import torch
import torch._inductor.inductor_prims
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
import torch.fx.experimental._config
torch._inductor.config.triton.inject_relu_bug_TESTING_ONLY = 'compile_error'
torch._inductor.config.aot_inductor.dump_aoti_minifier = True
isolate_fails_code_str = None
# torch version: 2.6.0a0+gitcd9c6e9
# torch cuda version: 12.0
# torch git version: cd9c6e9408dd79175712223895eed36dbdc84f84
# CUDA Info:
# nvcc: NVIDIA (R) Cuda compiler driver
# Copyright (c) 2005-2023 NVIDIA Corporation
# Built on Fri_Jan__6_16:45:21_PST_2023
# Cuda compilation tools, release 12.0, V12.0.140
# Build cuda_12.0.r12.0/compiler.32267302_0
# GPU Hardware Info:
# NVIDIA PG509-210 : 8
exported_program = torch.export.load('/data/users/shangdiy/pytorch/torch_compile_debug/run_2024_11_06_13_52_35_711642-pid_3567062/minifier/checkpoints/exported_program.pt2')
# print(exported_program.graph)
config_patches={}
if __name__ == '__main__':
    from torch._dynamo.repro.aoti import run_repro
    with torch.no_grad():
        run_repro(exported_program, config_patches=config_patches, accuracy=False, command='minify', save_dir='/data/users/shangdiy/pytorch/torch_compile_debug/run_2024_11_06_13_52_35_711642-pid_3567062/minifier/checkpoints', check_str=None)
```
If we keep the `command='minify'` option and run the script, we get the following output:
```text
...
W1031 16:48:08.938000 3598491 torch/_dynamo/repro/aoti.py:89] Writing checkpoint with 3 nodes to /data/users/shangdiy/pytorch/torch_compile_debug/run_2024_10_31_16_48_02_720863-pid_3598491/minifier/checkpoints/3.py
W1031 16:48:08.975000 3598491 torch/_dynamo/repro/aoti.py:101] Copying repro file for convenience to /data/users/shangdiy/pytorch/repro.py
Wrote minimal repro out to repro.py
```
If you get an `AOTIMinifierError` when running `minifier_launcher.py`, please report a bug [here](https://github.com/pytorch/pytorch/issues/new?assignees=&labels=&projects=&template=bug-report.yml).
## Minified Result
The `repro.py` looks like this. Notice that the exported program is printed at the top of the file, and it contains only the relu node. The minifier successfully reduced the graph to the op that raises the error.
```python
# from torch.nn import *
# class Repro(torch.nn.Module):
#     def __init__(self) -> None:
#         super().__init__()
#     def forward(self, linear):
#         relu = torch.ops.aten.relu.default(linear); linear = None
#         return (relu,)
import torch
from torch import tensor, device
import torch.fx as fx
from torch._dynamo.testing import rand_strided
from math import inf
import torch._inductor.inductor_prims
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
import torch.fx.experimental._config
torch._inductor.config.generate_intermediate_hooks = True
torch._inductor.config.triton.inject_relu_bug_TESTING_ONLY = 'compile_error'
torch._inductor.config.aot_inductor.dump_aoti_minifier = True
isolate_fails_code_str = None
# torch version: 2.6.0a0+gitcd9c6e9
# torch cuda version: 12.0
# torch git version: cd9c6e9408dd79175712223895eed36dbdc84f84
# CUDA Info:
# nvcc: NVIDIA (R) Cuda compiler driver
# Copyright (c) 2005-2023 NVIDIA Corporation
# Built on Fri_Jan__6_16:45:21_PST_2023
# Cuda compilation tools, release 12.0, V12.0.140
# Build cuda_12.0.r12.0/compiler.32267302_0
# GPU Hardware Info:
# NVIDIA PG509-210 : 8
exported_program = torch.export.load('/data/users/shangdiy/pytorch/torch_compile_debug/run_2024_11_25_13_59_33_102283-pid_3658904/minifier/checkpoints/exported_program.pt2')
# print(exported_program.graph)
config_patches={'aot_inductor.package': True}
if __name__ == '__main__':
    from torch._dynamo.repro.aoti import run_repro
    with torch.no_grad():
        run_repro(exported_program, config_patches=config_patches, accuracy=False, command='run', save_dir='/data/users/shangdiy/pytorch/torch_compile_debug/run_2024_11_25_13_59_33_102283-pid_3658904/minifier/checkpoints', check_str=None)
```

View File

@@ -1,221 +0,0 @@
AOTInductor Minifier
===========================
If you encounter an error while using AOT Inductor APIs such as
``torch._inductor.aoti_compile_and_package``, ``torch._indcutor.aoti_load_package``,
or running the loaded model of ``aoti_load_package`` on some inputs, you can use the AOTInductor Minifier
to create a minimal nn.Module that reproduce the error by setting ``from torch._inductor import config; config.aot_inductor.dump_aoti_minifier = True``.
One a high-level, there are two steps in using the minifier:
- Set ``from torch._inductor import config; config.aot_inductor.dump_aoti_minifier = True`` or set the environment variable ``DUMP_AOTI_MINIFIER=1``. Then running the script that errors would produce a ``minifier_launcher.py`` script. The output directory is configurable by setting ``torch._dynamo.config.debug_dir_root`` to a valid directory name.
- Run the ``minifier_launcher.py`` script. If the minifier runs successfully, it generates runnable python code in ``repro.py`` which reproduces the exact error.
Example Code
---------------------------
Here is sample code which will generate an error because we injected an error on relu with
``torch._inductor.config.triton.inject_relu_bug_TESTING_ONLY = "compile_error"``.
.. code-block:: py
import torch
from torch._inductor import config as inductor_config
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.fc1 = torch.nn.Linear(10, 16)
self.relu = torch.nn.ReLU()
self.sigmoid = torch.nn.Sigmoid()
def forward(self, x):
x = self.fc1(x)
x = self.relu(x)
x = self.sigmoid(x)
return x
inductor_config.aot_inductor.dump_aoti_minifier = True
torch._inductor.config.triton.inject_relu_bug_TESTING_ONLY = "compile_error"
with torch.no_grad():
model = Model().to("cuda")
example_inputs = (torch.randn(8, 10).to("cuda"),)
ep = torch.export.export(model, example_inputs)
package_path = torch._inductor.aoti_compile_and_package(ep)
compiled_model = torch._inductor.aoti_load_package(package_path)
result = compiled_model(*example_inputs)
The code above generates the following error:
::
RuntimeError: Failed to import /tmp/torchinductor_shangdiy/fr/cfrlf4smkwe4lub4i4cahkrb3qiczhf7hliqqwpewbw3aplj5g3s.py
SyntaxError: invalid syntax (cfrlf4smkwe4lub4i4cahkrb3qiczhf7hliqqwpewbw3aplj5g3s.py, line 29)
This is because we injected an error on relu, and so the generated triton kernel looks like below. Note that we have ``compile error!``
instead if ``relu``, so we get a ``SyntaxError``.
.. code-block::
@triton.jit
def triton_poi_fused_addmm_relu_sigmoid_0(in_out_ptr0, in_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 128
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x2 = xindex
x0 = xindex % 16
tmp0 = tl.load(in_out_ptr0 + (x2), xmask)
tmp1 = tl.load(in_ptr0 + (x0), xmask, eviction_policy='evict_last')
tmp2 = tmp0 + tmp1
tmp3 = compile error!
tmp4 = tl.sigmoid(tmp3)
tl.store(in_out_ptr0 + (x2), tmp4, xmask)
Since we have ``torch._inductor.config.aot_inductor.dump_aoti_minifier=True``, we also see an additional line indicating where ``minifier_launcher.py`` has
been written to. The output directory is configurable by setting
``torch._dynamo.config.debug_dir_root`` to a valid directory name.
::
W1031 16:21:08.612000 2861654 pytorch/torch/_dynamo/debug_utils.py:279] Writing minified repro to:
W1031 16:21:08.612000 2861654 pytorch/torch/_dynamo/debug_utils.py:279] /data/users/shangdiy/pytorch/torch_compile_debug/run_2024_10_31_16_21_08_602433-pid_2861654/minifier/minifier_launcher.py
Minifier Launcher
---------------------------
The ``minifier_launcher.py`` file has the following code. The ``exported_program`` contains the inputs to ``torch._inductor.aoti_compile_and_package``.
The ``command='minify'`` parameter means the script will run the minifier to create a minimal graph module that reproduce the error. Alternatively, you set
use ``command='run'`` to just compile, load, and run the loaded model (without running the minifier).
.. code-block:: py
import torch
import torch._inductor.inductor_prims
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
import torch.fx.experimental._config
torch._inductor.config.triton.inject_relu_bug_TESTING_ONLY = 'compile_error'
torch._inductor.config.aot_inductor.dump_aoti_minifier = True
isolate_fails_code_str = None
# torch version: 2.6.0a0+gitcd9c6e9
# torch cuda version: 12.0
# torch git version: cd9c6e9408dd79175712223895eed36dbdc84f84
# CUDA Info:
# nvcc: NVIDIA (R) Cuda compiler driver
# Copyright (c) 2005-2023 NVIDIA Corporation
# Built on Fri_Jan__6_16:45:21_PST_2023
# Cuda compilation tools, release 12.0, V12.0.140
# Build cuda_12.0.r12.0/compiler.32267302_0
# GPU Hardware Info:
# NVIDIA PG509-210 : 8
exported_program = torch.export.load('/data/users/shangdiy/pytorch/torch_compile_debug/run_2024_11_06_13_52_35_711642-pid_3567062/minifier/checkpoints/exported_program.pt2')
# print(exported_program.graph)
config_patches={}
if __name__ == '__main__':
from torch._dynamo.repro.aoti import run_repro
with torch.no_grad():
run_repro(exported_program, config_patches=config_patches, accuracy=False, command='minify', save_dir='/data/users/shangdiy/pytorch/torch_compile_debug/run_2024_11_06_13_52_35_711642-pid_3567062/minifier/checkpoints', check_str=None)
Suppose we kept the ``command='minify'`` option, and run the script, we would get the following output:
::
...
W1031 16:48:08.938000 3598491 torch/_dynamo/repro/aoti.py:89] Writing checkpoint with 3 nodes to /data/users/shangdiy/pytorch/torch_compile_debug/run_2024_10_31_16_48_02_720863-pid_3598491/minifier/checkpoints/3.py
W1031 16:48:08.975000 3598491 torch/_dynamo/repro/aoti.py:101] Copying repro file for convenience to /data/users/shangdiy/pytorch/repro.py
Wrote minimal repro out to repro.py
If you get an ``AOTIMinifierError`` when running ``minifier_launcher.py``, please report a bug `here <https://github.com/pytorch/pytorch/issues/new?assignees=&labels=&projects=&template=bug-report.yml>`__.
Minified Result
---------------------------
The ``repro.py`` looks like this. Notice that the exported program is printed at the top of the file, and it contains only the relu node. The minifier successfully reduced the graph to the op that raises the
error.
.. code-block:: py
# from torch.nn import *
# class Repro(torch.nn.Module):
# def __init__(self) -> None:
# super().__init__()
# def forward(self, linear):
# relu = torch.ops.aten.relu.default(linear); linear = None
# return (relu,)
import torch
from torch import tensor, device
import torch.fx as fx
from torch._dynamo.testing import rand_strided
from math import inf
import torch._inductor.inductor_prims
import torch._dynamo.config
import torch._inductor.config
import torch._functorch.config
import torch.fx.experimental._config
torch._inductor.config.generate_intermediate_hooks = True
torch._inductor.config.triton.inject_relu_bug_TESTING_ONLY = 'compile_error'
torch._inductor.config.aot_inductor.dump_aoti_minifier = True
isolate_fails_code_str = None
# torch version: 2.6.0a0+gitcd9c6e9
# torch cuda version: 12.0
# torch git version: cd9c6e9408dd79175712223895eed36dbdc84f84
# CUDA Info:
# nvcc: NVIDIA (R) Cuda compiler driver
# Copyright (c) 2005-2023 NVIDIA Corporation
# Built on Fri_Jan__6_16:45:21_PST_2023
# Cuda compilation tools, release 12.0, V12.0.140
# Build cuda_12.0.r12.0/compiler.32267302_0
# GPU Hardware Info:
# NVIDIA PG509-210 : 8
exported_program = torch.export.load('/data/users/shangdiy/pytorch/torch_compile_debug/run_2024_11_25_13_59_33_102283-pid_3658904/minifier/checkpoints/exported_program.pt2')
# print(exported_program.graph)
config_patches={'aot_inductor.package': True}
if __name__ == '__main__':
from torch._dynamo.repro.aoti import run_repro
with torch.no_grad():
run_repro(exported_program, config_patches=config_patches, accuracy=False, command='run', save_dir='/data/users/shangdiy/pytorch/torch_compile_debug/run_2024_11_25_13_59_33_102283-pid_3658904/minifier/checkpoints', check_str=None)

View File

@@ -1,14 +1,14 @@
```{eval-rst}
.. currentmodule:: torch.compiler
.. automodule:: torch.compiler
```
.. _torch.compiler_api:
(torch.compiler_api)=
# torch.compiler API reference
torch.compiler API reference
============================
For a quick overview of ``torch.compiler``, see :ref:`torch.compiler_overview`.
For a quick overview of `torch.compiler`, see {ref}`torch.compiler_overview`.
```{eval-rst}
.. autosummary::
    :toctree: generated
    :nosignatures:
@@ -25,3 +25,4 @@ For a quick overview of ``torch.compiler``, see :ref:`torch.compiler_overview`.
    is_compiling
    is_dynamo_compiling
    is_exporting
```