pytorch/docs/source/torch.compiler_api.md
Edward Z. Yang 17eb649d55 Implement guard collectives (optimized version) (#156562)
This is a remix of https://github.com/pytorch/pytorch/pull/155558

Instead of mediating guard collectives via a config option, this version exposes them through a `set_stance`-like API. The motivation is that checking the config value on every entry into torch.compile is apparently quite expensive, according to the functorch_maml_omniglot benchmark, so this approach makes entry a bit cheaper.
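
A rough sketch of the intended usage (illustrative only, not taken from the PR; it assumes `set_enable_guard_collectives` returns the previous setting so it can be restored, and that guard collectives only have an effect in a distributed run with an initialized process group):

```python
import torch

def fn(x):
    return x.sin() + x.cos()

# Enable guard collectives through the set_stance-style toggle rather
# than a config option, so entering torch.compile does not pay for a
# config lookup. We assume the setter returns the previous value.
prev = torch.compiler.set_enable_guard_collectives(True)
try:
    compiled = torch.compile(fn)
    out = compiled(torch.randn(8))
finally:
    # Restore the prior setting once the synchronized region is done.
    torch.compiler.set_enable_guard_collectives(prev)
```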

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/156562
Approved by: https://github.com/Microve
2025-06-24 04:59:49 +00:00

(torch.compiler_api)=

# torch.compiler API reference

For a quick overview of `torch.compiler`, see {ref}`torch.compiler_overview`.

```{eval-rst}
.. currentmodule:: torch.compiler
.. automodule:: torch.compiler

.. autosummary::
    :toctree: generated
    :nosignatures:

     compile
     reset
     allow_in_graph
     substitute_in_graph
     assume_constant_result
     list_backends
     disable
     set_stance
     set_enable_guard_collectives
     cudagraph_mark_step_begin
     is_compiling
     is_dynamo_compiling
     is_exporting
     skip_guard_on_inbuilt_nn_modules_unsafe
     skip_guard_on_all_nn_modules_unsafe
     keep_tensor_guards_unsafe
     skip_guard_on_globals_unsafe
     nested_compile_region
```