Files
pytorch/torch/_export/config.py
Tugsbayasgalan Manlaibaatar 4661200125 [RELAND v2] Close some sources of fake tensors (#164372)
Changelog:

1. When we run into an operation we didn't proxy, we end up emitting fake tensor constants. We now error under a config flag, and we disable that flag for some internal users. The reason we want to error is that it signals a coverage problem we need to address, but at the same time we don't want to be disruptive to already-working flows.
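
A minimal sketch of the escape hatch, assuming the standard `.patch` context manager that `install_config_module` attaches to config modules (the module `M` and inputs are hypothetical, not from this PR):

    import torch
    import torch._export.config as export_config

    class M(torch.nn.Module):
        def forward(self, x):
            return x + 1

    # Scoped opt-out for a flow known to hit an un-proxied op,
    # mirroring what the internal-user escape hatch does.
    with export_config.patch(error_on_lifted_constant_tensors=False):
        ep = torch.export.export(M(), (torch.randn(2),))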

2. The previous attribute-mutation detection logic in non-strict mode didn't account for nested module structure. This fixes a silent-incorrectness issue when exporting esm and qwen in non-strict mode, as well as some torchbench models like levit_128 and demucs.
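
For illustration, a toy version of the pattern (hypothetical modules, not from the PR): the mutated attribute lives on a nested submodule, so a scan of only the top-level module's attributes missed it.

    import torch

    class Inner(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.step = 0  # plain Python attribute, not a registered buffer

        def forward(self, x):
            self.step += 1  # mutation happens one level down in the module tree
            return x + self.step

    class Outer(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.inner = Inner()

        def forward(self, x):
            # A flat scan of Outer's own attributes sees no change;
            # inner.step changing during tracing went unnoticed.
            return self.inner(x)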

3. The previous logic also didn't work when a container attribute is mutated: the old approach pytree-flattened the old and new attribute values side by side, and the differing number of leaves caused a length mismatch. We now handle this gracefully.
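
Again purely illustrative (hypothetical module): appending to a list attribute changes its pytree leaf count, so flattening the before and after values produced lists of different lengths.

    import torch

    class Recorder(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.seen = []  # container attribute

        def forward(self, x):
            # Growing the list: pytree-flattening [] (before) vs.
            # [<tensor>] (after) yields different leaf counts.
            self.seen.append(x.detach())
            return x * 2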

Differential Revision: [D83673054](https://our.internmc.facebook.com/intern/diff/D83673054)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164372
Approved by: https://github.com/avikchaudhuri
2025-10-02 18:58:52 +00:00

37 lines
1.1 KiB
Python

"""
Configuration module for torch.export.export.
This module contains various configuration flags and settings that control torch.export's
behavior, including:
- Runtime behavior flags
- Debugging and development options
"""
import sys
from typing import Any, TYPE_CHECKING

from torch.utils._config_module import install_config_module

# This flag controls whether we use the new functional tracer. It
# should be True in the long term.
use_new_tracer_experimental = False

# This flag controls whether we instrument fake tensor creation to
# track potential leaks. It is off by default, but users can turn it
# on to debug leaks.
detect_non_strict_fake_tensor_leaks = False

# Error on a known pre-dispatch/non-strict tracing limitation. This
# type of error usually happens when we encounter an op that we don't
# know how to proxy, resulting in untracked fake tensors.
error_on_lifted_constant_tensors = True

if TYPE_CHECKING:
    from torch.utils._config_typing import *  # noqa: F401, F403

    def _make_closure_patcher(**changes: Any) -> Any: ...


install_config_module(sys.modules[__name__])
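
A small usage sketch (hypothetical caller code, not part of the file above) showing the two ways such flags are typically driven, assuming the attribute access and `.patch` helper that `install_config_module` sets up:

    import torch._export.config as export_config

    # Direct assignment flips a flag for the rest of the process.
    export_config.detect_non_strict_fake_tensor_leaks = True

    # Scoped override via the generated `.patch` context manager.
    with export_config.patch(use_new_tracer_experimental=True):
        ...  # run an export under the experimental tracer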