Files
pytorch/torch/_dynamo/logging.py
Michael Lazos a1c46e5f8f component-level configurable logging for dynamo, inductor, aot (#94858)
Summary:

Adds NNC-like logging that is configured through the env var `TORCH_LOGS`
Examples:
`TORCH_LOGS="dynamo,guards" python script.py` - prints dynamo logs at level INFO with guards of all functions that are compiled

`TORCH_LOGS="+dynamo,guards,graph" python script.py` - prints dynamo logs at level DEBUG with guards and graphs (in tabular) format of all graphs that are compiled

[More examples with full output](https://gist.github.com/mlazos/b17f474457308ce15e88c91721ac1cce)

Implementation:
The implementation parses the log settings from the environment, finds any components (dynamo, aot, inductor) or other loggable artifacts (guards, graph, etc.), and generates a log_state object. This object contains all of the enabled artifacts and a mapping from qualified log name to level. `_init_logs` then adds handlers to the highest-level (registered) logs, and sets any artifact logger to level DEBUG if its artifact is enabled.
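For intuition, here is a minimal, self-contained sketch of that parsing step. The function name and the component/artifact tables below are illustrative only; the real logic lives in torch._logging._internal and handles more cases (validation, per-log levels, etc.):

```python
import logging
import os

# Illustrative tables only -- the real names come from torch._logging._registrations.
COMPONENTS = {
    "dynamo": "torch._dynamo",
    "aot": "torch._functorch.aot_autograd",
    "inductor": "torch._inductor",
}
ARTIFACTS = {"guards", "graph", "bytecode"}


def parse_log_settings(settings):
    """Turn e.g. '+dynamo,guards' into a (qualified name -> level, artifacts) pair."""
    log_levels = {}
    artifacts = set()
    for token in settings.split(","):
        token = token.strip()
        # A leading "+" bumps the component from INFO to DEBUG.
        level = logging.DEBUG if token.startswith("+") else logging.INFO
        name = token.lstrip("+")
        if name in COMPONENTS:
            log_levels[COMPONENTS[name]] = level
        elif name in ARTIFACTS:
            artifacts.add(name)
        else:
            raise ValueError(f"Invalid log setting: {name}")
    return log_levels, artifacts


log_levels, artifacts = parse_log_settings(os.environ.get("TORCH_LOGS", "+dynamo,guards"))
```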

Note: `set_logs` is an alternative API for manipulating the log_state, but if `TORCH_LOGS` is present in the environment, the environment settings take priority.
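A hedged sketch of the programmatic path, assuming the keyword names accepted by `torch._logging.set_logs` match the registered component and artifact names (components take levels, artifacts take booleans):

```python
import logging

import torch._logging

# Roughly equivalent to TORCH_LOGS="+dynamo,guards" -- unless TORCH_LOGS is
# set in the environment, in which case the env settings win, per the note above.
torch._logging.set_logs(dynamo=logging.DEBUG, guards=True)
```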

Adding a new log:
To add a new log, a dev should add their log name to `torch._logging._registrations` (there are examples there already); see the sketch below.
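As a sketch only (check the existing entries in `torch/_logging/_registrations.py` for the exact `register_log` signature), a new registration might look like:

```python
# torch/_logging/_registrations.py -- "mybackend" is a hypothetical name
from torch._logging._internal import register_log

# Associate the user-facing name "mybackend" with its qualified logger name,
# so TORCH_LOGS="mybackend" (or "+mybackend") can control it.
register_log("mybackend", "torch._mybackend")
```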

Adding a new artifact:
To add a new artifact, a dev should likewise add their artifact name to `torch._logging._registrations`.
Additionally, wherever the artifact is logged, `torch._logging.getArtifactLogger(__name__, <artifact_name>)` should be used instead of a standard logger.
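Putting both steps together, a hedged sketch ("my_artifact" is hypothetical, and the `register_artifact` signature should be checked against the existing entries in `_registrations.py`):

```python
# torch/_logging/_registrations.py (sketch)
from torch._logging._internal import register_artifact

register_artifact("my_artifact", "Prints my new debug artifact")  # signature: see existing entries

# At the logging site, request the artifact logger instead of logging.getLogger:
from torch._logging import getArtifactLogger

my_artifact_log = getArtifactLogger(__name__, "my_artifact")


def emit_debug_info():
    # Only emitted when "my_artifact" is enabled, e.g. TORCH_LOGS="my_artifact"
    my_artifact_log.debug("artifact payload goes here")
```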

[design doc](https://docs.google.com/document/d/1ZRfTWKa8eaPq1AxaiHrq4ASTPouzzlPiuquSBEJYwS8/edit#)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94858
Approved by: https://github.com/ezyang
2023-03-18 04:17:31 +00:00

70 lines
1.8 KiB
Python

import itertools
import logging

from torch.hub import _Faketqdm, tqdm

# Disable progress bar by default, not in dynamo config because otherwise get a circular import
disable_progress = True


# Return all loggers that torchdynamo/torchinductor is responsible for
def get_loggers():
    return [
        logging.getLogger("torch.fx.experimental.symbolic_shapes"),
        logging.getLogger("torch._dynamo"),
        logging.getLogger("torch._inductor"),
    ]


# Set the level of all loggers that torchdynamo is responsible for
def set_loggers_level(level):
    """Write current log level"""
    for logger in get_loggers():
        logger.setLevel(level)


def get_loggers_level():
    """Read current log level"""
    return get_loggers()[0].level


# Creates a logging function that logs a message with a step # prepended.
# get_step_logger should be lazily called (i.e. at runtime, not at module-load time)
# so that step numbers are initialized properly. e.g.:
#
#     @functools.lru_cache(None)
#     def _step_logger():
#         return get_step_logger(logging.getLogger(...))
#
#     def fn():
#         _step_logger()(logging.INFO, "msg")
_step_counter = itertools.count(1)

# Update num_steps if more phases are added: Dynamo, AOT, Backend
# This is very inductor centric
# _inductor.utils.has_triton() gives a circular import error here
if not disable_progress:
    try:
        import triton  # noqa: F401

        num_steps = 3
    except ImportError:
        num_steps = 2
    pbar = tqdm(total=num_steps, desc="torch.compile()", delay=0)


def get_step_logger(logger):
    if not disable_progress:
        pbar.update(1)
        if not isinstance(pbar, _Faketqdm):
            pbar.set_postfix_str(f"{logger.name}")

    step = next(_step_counter)

    def log(level, msg):
        logger.log(level, f"Step {step}: {msg}")

    return log
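To make the commented usage in the file concrete, here is a runnable version of the lazy pattern it describes (the logger name is just for illustration):

```python
import functools
import logging

from torch._dynamo.logging import get_step_logger


@functools.lru_cache(None)
def _step_logger():
    # Lazy: the step number is claimed when this first runs,
    # not at module-import time.
    return get_step_logger(logging.getLogger("torch._dynamo"))


def fn():
    _step_logger()(logging.INFO, "msg")  # logs "Step N: msg"
```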