```{eval-rst}
.. currentmodule:: torch.compiler

.. automodule:: torch.compiler
```

(torch.compiler_api)=

# torch.compiler API reference

For a quick overview of `torch.compiler`, see {ref}`torch.compiler_overview`.

```{eval-rst}
.. autosummary::
    :toctree: generated
    :nosignatures:

    compile
    reset
    allow_in_graph
    substitute_in_graph
    assume_constant_result
    list_backends
    disable
    set_stance
    set_enable_guard_collectives
    cudagraph_mark_step_begin
    is_compiling
    is_dynamo_compiling
    is_exporting
    skip_guard_on_inbuilt_nn_modules_unsafe
    skip_guard_on_all_nn_modules_unsafe
    keep_tensor_guards_unsafe
    skip_guard_on_globals_unsafe
    nested_compile_region
```
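
Each entry above is documented on its generated page. As a rough sketch of how a few of these APIs fit together (the module, inputs, and stance string below are just illustrative examples, not part of the reference itself):

```python
import torch

# An arbitrary module to compile; any nn.Module or plain function works.
model = torch.nn.Linear(8, 8)

# torch.compile wraps the module so its forward pass is traced and optimized.
compiled = torch.compile(model)
out = compiled(torch.randn(2, 8))

# Functions decorated with torch.compiler.disable always run eagerly,
# even when reached from compiled code.
@torch.compiler.disable
def debug_hook(x):
    print("running eagerly")
    return x

# set_stance changes how torch.compile behaves globally; "force_eager" is one
# of the documented stances and makes compiled functions fall back to eager.
torch.compiler.set_stance("force_eager")
compiled(torch.randn(2, 8))           # runs eagerly under this stance
torch.compiler.set_stance("default")  # restore normal compilation

# reset clears compilation caches so subsequent calls recompile from scratch.
torch.compiler.reset()
```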