[dynamo] Refactor transform() so that instruction translator can be used as a tracing function. [2/n] (#160815)

We are refactoring Dynamo's convert_frame code so that modularized pieces can be shared between different compiler frontends (e.g. torch.compile, precompile and torch.export).

This PR follows the previous one, which separates out the part that runs the instruction translator on a given frame and returns a DynamoTracerOutput.

The end result is a free function that runs the instruction translator independently. A follow-up diff will wrap this low-level function.
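The shape of such a free function can be sketched roughly as follows. This is an illustrative sketch only: the function name `trace_frame` and the `tracer_factory` parameter are hypothetical, not the actual API introduced by this PR; `DynamoTracerOutput` mirrors the class shown in the diff below.

```python
from typing import Any, Optional


class DynamoTracerOutput:
    """Plain result object capturing tracer state (mirrors the diff below)."""

    def __init__(self, tracer: Any, error: Optional[Any] = None) -> None:
        self.error_on_graph_break = tracer.error_on_graph_break
        self.is_tracing_resume_prologue = tracer.is_tracing_resume_prologue
        # On error, no usable output graph is produced.
        self.output_graph = None if error else tracer.output


def trace_frame(frame: Any, tracer_factory: Any) -> DynamoTracerOutput:
    # Hypothetical free function: run the instruction translator on a frame
    # and package the result, independent of convert_frame's driver loop.
    tracer = tracer_factory(frame)
    try:
        tracer.run()
    except Exception as e:
        return DynamoTracerOutput(tracer, error=e)
    return DynamoTracerOutput(tracer)
```

The point of this factoring is that the tracing step returns a plain data object rather than mutating convert_frame state, so other frontends can call it directly.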

Differential Revision: [D80388694](https://our.internmc.facebook.com/intern/diff/D80388694/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160815
Approved by: https://github.com/anijain2305
ghstack dependencies: #160814
zhxchen17
2025-08-17 19:48:38 -07:00
committed by PyTorch MergeBot
parent 72e4786d16
commit 599f639ddb
3 changed files with 126 additions and 71 deletions


@@ -2263,6 +2263,22 @@ class OutputGraph(OutputGraphGuardsState):
        return self.nn_modules[node.target]  # type: ignore[index]


class DynamoTracerOutput:
    error_on_graph_break: bool
    is_tracing_resume_prologue: bool
    output_graph: Optional[OutputGraph]

    def __init__(
        self, tracer: "InstructionTranslatorBase", error: Optional[Any] = None
    ) -> None:
        self.error_on_graph_break = tracer.error_on_graph_break
        self.is_tracing_resume_prologue = tracer.is_tracing_resume_prologue
        if error:
            self.output_graph = None
        else:
            self.output_graph = tracer.output
err_epilogue = (
"With the current config, we will graph break "
"(and fall back to eager-mode PyTorch) on all ops "