a7f3bdf550
[Dynamo][Better Engineering] Type coverage for torch/_dynamo/utils.py (#159580)
As part of the better engineering effort, we would like to improve our type support to improve the dev experience in dynamo
This PR adds strict typing support to `torch/_dynamo/utils.py`
Running
```
mypy torch/_dynamo/utils.py --linecount-report /tmp/coverage_log
```
|          | Lines Annotated | Lines Total | % lines covered | Funcs Annotated | Funcs Total | % funcs covered |
| -------- | ------- | -------- | ------- | ------- | ------- | ------- |
| Main  |  2163 | 4792 | 45.14% | 121 | 268 | 45.15% |
| This PR | 4818 | 4818 | 100.00% | 268 | 268 | 100.00% |
| Delta    | +2655 | +26 | +54.84% | +147 | 0 | +54.85% |
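The flavor of change involved can be sketched with a small standalone example (the helper below is hypothetical, not taken from the PR): an untyped utility gains explicit parameter, local, and return annotations of the kind `mypy --strict` requires.

```python
from typing import Optional

# Hypothetical helper in the style of torch/_dynamo/utils.py (not taken
# from the PR): annotations on parameters, locals, and the return type
# are the kind of additions strict typing requires.
def count_calls(fn_names: list[str], cache: Optional[dict[str, int]] = None) -> dict[str, int]:
    """Count how often each function name appears (illustrative only)."""
    counts: dict[str, int] = cache if cache is not None else {}
    for name in fn_names:
        counts[name] = counts.get(name, 0) + 1
    return counts

print(count_calls(["add", "mul", "add"]))  # {'add': 2, 'mul': 1}
```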
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159580 
Approved by: https://github.com/williamwen42  
2025-08-04 21:51:53 +00:00

e4b123b5e4
Revert direct updates (#159654)
						reverts:
```
commit 5711a8f06948eeee56ed5f53f171fa519f78491c (tag: trunk/5711a8f06948eeee56ed5f53f171fa519f78491c, origin/main, main)
Author: Jovian Anthony Jaison <38627145+jovianjaison@users.noreply.github.com >
Date:   Fri Aug 1 09:32:52 2025 -0700
    Update test_utils.py
commit b4b71d011ed07a41c2086ff0dec2988a63662877 (tag: trunk/b4b71d011ed07a41c2086ff0dec2988a63662877)
Author: Jovian Anthony Jaison <38627145+jovianjaison@users.noreply.github.com >
Date:   Fri Aug 1 09:27:54 2025 -0700
    Update utils.py
commit 52376b9b6fbf9fe24f5d82038dc520f0c64b6f8d (tag: trunk/52376b9b6fbf9fe24f5d82038dc520f0c64b6f8d)
Author: Jovian Anthony Jaison <38627145+jovianjaison@users.noreply.github.com >
Date:   Fri Aug 1 09:26:05 2025 -0700
```
(commits pushed directly to main by mistake)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159654 
Approved by: https://github.com/atalman  
2025-08-01 16:54:51 +00:00

b4b71d011e
Update utils.py
2025-08-01 09:27:54 -07:00

cb4f41e125
Revert "[dynamo] [guard] Add caching for inside torch.compile.disable function to avoid unnecessary recompilation. (#157566)"
						This reverts commit 8e07c9870d07c5a318ab21bb16b3fa27576851e6.
Reverted https://github.com/pytorch/pytorch/pull/157566  on behalf of https://github.com/yangw-dev  due to failed an odd internal test, please reach out to metamate to fix it, D79112610 ([comment](https://github.com/pytorch/pytorch/pull/157566#issuecomment-3141840110 )) 
2025-08-01 01:27:45 +00:00

2b1ae29960
[Dynamo][Better Engineering] Add typing annotations to guard and source (#158397) (#159491)
						Summary:
X-link: https://github.com/pytorch/executorch/pull/12986 
As part of better engineering week, we would like to improve our type support to improve the dev experience in dynamo
This PR adds strict typing support to a critical set of files for dynamo, `source.py` and the base `_guards.py`
Running
```
mypy torch/_dynamo/source.py torch/_guards.py --linecount-report /tmp/coverage_log
```
|          | Lines Annotated | Lines Total | % lines covered | Funcs Annotated | Funcs Total | % funcs covered |
| -------- | ------- | -------- | ------- | ------- | ------- | ------- |
| Main  |  1227 | 2208 | 55.57% | 207 | 362 | 57.18% |
| This PR | 2217 | 2217 | 100.00% | 362 | 362 | 100.00% |
| Delta    | +990 | +9 | +44.43% | +155 | 0 | +42.82% |
cc jgong5 mingfeima XiaobingSuper sanchitintel ashokei jingxu10 jerryzh168 voznesenskym penguinwu EikanWang Guobing-Chen zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben
Test Plan:
Imported from GitHub, without a `Test Plan:` line.
Rollback Plan:
Reviewed By: JacobSzwejbka, yangw-dev
Differential Revision: D79199389
Pulled By: Lucaskabela
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159491 
Approved by: https://github.com/anijain2305 , https://github.com/yangw-dev  
2025-07-30 22:57:50 +00:00

d987a6f7f0
Revert "[Dynamo][Better Engineering] Add typing annotations to guard and source (#158397)"
						This reverts commit abcb24f4de11f8fedf2c2c9ff53b6092ef42306d.
Reverted https://github.com/pytorch/pytorch/pull/158397  on behalf of https://github.com/yangw-dev  due to Suggested to fix failing internal signals on D78911890 ([comment](https://github.com/pytorch/pytorch/pull/158397#issuecomment-3133823766 )) 
2025-07-29 19:49:40 +00:00

c55e72bea1
[Re-land][Inductor] Support native Inductor as backend for MTIA (#159211)
The previous [diff/PR](https://github.com/pytorch/pytorch/pull/158526) was reverted due to this docstring lint error:
<img width="1736" height="722" alt="image" src="https://github.com/user-attachments/assets/216b1720-4002-48da-b5f3-32b5d48aaa54" />
I didn't add the docstring because I thought I wasn't supposed to add a docstring to an EXISTING function.
So this diff/PR is an exact copy of the previous one, except for adding the docstring.
-------------
This diff/PR includes the changes to support native Inductor integration for MTIA. The goal is to support `torch.compile(backend="inductor")` for MTIA. Inductor should generate code (Triton kernel + Python wrapper code) similar to CUDA, and the Triton kernels can be launched eagerly.
The changes include:
- Add MTIA device interfaces used by Dynamo and Inductor, including APIs on device, stream, event, etc.
- Add required torch.mtia APIs, like is_bf16_supported, memory_allocated, set_stream_by_id, etc.
- MTIA specific codegen logic, for example, loading MTIA dynamic_library.
- Other necessary changes to integrate with Inductor codegen, following other devices like CUDA, XPU.
- Integrate with the [empty_strided_mtia](https://www.internalfb.com/code/fbsource/[0d017d3a4a1bdff7253f9c66a9f38e77bd62166b]/fbcode/caffe2/aten/src/ATen/native/mtia/EmptyTensor.cpp?lines=49%2C63%2C71%2C74%2C78 ) API that we’ve added for the new MTIA ATen backend.
- A change in Inductor runtime to avoid re-initialize MTIADriver.
- BUCK changes to include ATen-mtia in Inductor, and to use -USE_MTIA preprocessor flag.
- Update `test_mnist_e2e.py` to cover native Inductor as backend, using the `--use_native_inductor` flag.
- Add a personal script (`scripts/anwang/run_native_inductor_script.py`) for testing purposes.
Note:
- This approach (option 3) aims to provide a PyTorch-native approach to Inductor integration for MTIA, minimizing the onboarding overhead. The downside of this approach is that it doesn't leverage MTIA-specific graph optimization and is limited by eager launch overhead.
- MTIA will support another approach (option 2) to provide the best performance, based on WrapperFxCodegen. We should be able to reuse the fundamental changes of this diff for option 2, like the device interfaces, stream/event APIs, etc., especially as WrapperFxCodegen inherits PythonWrapperCodegen.
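The device-interface surface described above can be sketched without torch (all names below are illustrative stand-ins, not the actual MTIA classes):

```python
# Illustrative sketch of the device-interface shape Dynamo/Inductor expect
# a backend to provide (device/stream/event APIs); names are stand-ins,
# not the actual MTIA classes.
class FakeStream:
    def __init__(self, stream_id: int) -> None:
        self.stream_id = stream_id

class FakeDeviceInterface:
    """Minimal stand-in for a per-device interface registered with Dynamo."""

    def __init__(self) -> None:
        self._current_device = 0
        self._streams: dict[int, FakeStream] = {0: FakeStream(0)}

    def is_bf16_supported(self) -> bool:
        # Real backends query hardware capabilities here.
        return True

    def current_device(self) -> int:
        return self._current_device

    def set_stream_by_id(self, stream_id: int) -> FakeStream:
        # Create-or-fetch, mirroring stream lookup by id.
        return self._streams.setdefault(stream_id, FakeStream(stream_id))

iface = FakeDeviceInterface()
print(iface.set_stream_by_id(3).stream_id)  # 3
```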
Internal:
References:
- [post for context](https://fb.workplace.com/groups/mtiasw/permalink/1718377262384606/ )
- [Inductor integration discussion(option 1/2/3)](https://docs.google.com/document/d/1p6363OXtVIRv1hPoaKlRSK3j-iir3QIbDd5bjyqCNig/edit?tab=t.0#heading=h.7s4ns6wcnhmb )
- [Project design doc(option 3)](https://docs.google.com/document/d/1jXUmhgoV9WvkMf-bcY3Od_kK9K_RDOdgHdt1LoQ5Tc4/edit?tab=t.0#heading=h.y43gwdqlv46w )
- [early prototyping diff](https://www.internalfb.com/diff/D75110196 )
- [MPS integration PR](https://github.com/pytorch/pytorch/pull/153959 )
- [empty_strided_xpu PR](https://github.com/pytorch/pytorch/pull/126678 )
Differential Revision: [D79040806](https://our.internmc.facebook.com/intern/diff/D79040806/ )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159211 
Approved by: https://github.com/eellison , https://github.com/blaine-rister , https://github.com/jansel  
2025-07-29 17:03:24 +00:00

14d67eec05
Revert "[dynamo][fsdp] Consistent behavior of int attributes (#157262)"
						This reverts commit 9b4d938f04c95cebe0fbd96974f64c935567e039.
Reverted https://github.com/pytorch/pytorch/pull/157262  on behalf of https://github.com/ZainRizvi  due to This was reverted internally. Somehow this PR didn't get reverted alongside it. See D78772867. To validate your fixes internally, you can follow the instructions here: https://fburl.com/fixing-ghfirst-reverts  ([comment](https://github.com/pytorch/pytorch/pull/157262#issuecomment-3128148475 )) 
2025-07-28 16:58:27 +00:00

8e07c9870d
[dynamo] [guard] Add caching for inside torch.compile.disable function to avoid unnecessary recompilation. (#157566)
Calling a function decorated with `torch._dynamo.disable` inside a compiled region always triggered recompilation, because the disabled function is passed as an argument to the generated `resume_in_xx` function. In the previous implementation it was always a new object, so the `ID_MATCH` guard always failed and triggered recompilation.
Fixes https://github.com/pytorch/pytorch/issues/157399 
@xmfan
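The failure mode and the fix can be illustrated without torch (a sketch only; the real guard machinery differs):

```python
import functools

# Sketch of the caching idea (not the real guard machinery): keep one
# stable wrapper per disabled function, so an identity (ID_MATCH-style)
# check sees the same object on every call instead of a fresh one.
_disable_cache: dict[int, object] = {}

def disable(fn):
    cached = _disable_cache.get(id(fn))
    if cached is None:
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        _disable_cache[id(fn)] = wrapper
        cached = wrapper
    return cached

def helper(x: int) -> int:
    return x + 1

# Without the cache, disable(helper) would be a new object each time and
# an identity guard would fail, forcing recompilation.
print(disable(helper) is disable(helper))  # True
```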
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157566 
Approved by: https://github.com/mlazos , https://github.com/anijain2305  
2025-07-28 12:44:22 +00:00

fe0ff12dab
Revert "[Inductor] Support native Inductor as backend for MTIA (#158526)"
						This reverts commit cd68559d0451185f8521912c23e77b83d76b87cf.
Reverted https://github.com/pytorch/pytorch/pull/158526  on behalf of https://github.com/facebook-github-bot  due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/158526#issuecomment-3122186057 )) 
2025-07-26 17:58:00 +00:00

cd68559d04
[Inductor] Support native Inductor as backend for MTIA (#158526)
This diff/PR includes the changes to support native Inductor integration for MTIA. The goal is to support `torch.compile(backend="inductor")` for MTIA. Inductor should generate code (Triton kernel + Python wrapper code) similar to CUDA, and the Triton kernels can be launched eagerly.
The changes include:
- Add MTIA device interfaces used by Dynamo and Inductor, including APIs on device, stream, event, etc.
- Add required torch.mtia APIs, like is_bf16_supported, memory_allocated, set_stream_by_id, etc.
- MTIA specific codegen logic, for example, loading MTIA dynamic_library.
- Other necessary changes to integrate with Inductor codegen, following other devices like CUDA, XPU.
- Integrate with the [empty_strided_mtia](https://www.internalfb.com/code/fbsource/[0d017d3a4a1bdff7253f9c66a9f38e77bd62166b]/fbcode/caffe2/aten/src/ATen/native/mtia/EmptyTensor.cpp?lines=49%2C63%2C71%2C74%2C78 ) API that we’ve added for the new MTIA ATen backend.
- A change in Inductor runtime to avoid re-initialize MTIADriver.
- BUCK changes to include ATen-mtia in Inductor, and to use -USE_MTIA preprocessor flag.
- Update `test_mnist_e2e.py` to cover native Inductor as backend, using the `--use_native_inductor` flag.
- Add a personal script (`scripts/anwang/run_native_inductor_script.py`) for testing purposes.
Note:
- This approach (option 3) aims to provide a PyTorch-native approach to Inductor integration for MTIA, minimizing the onboarding overhead. The downside of this approach is that it doesn't leverage MTIA-specific graph optimization and is limited by eager launch overhead.
- MTIA will support another approach (option 2) to provide the best performance, based on WrapperFxCodegen. We should be able to reuse the fundamental changes of this diff for option 2, like the device interfaces, stream/event APIs, etc., especially as WrapperFxCodegen inherits PythonWrapperCodegen.
Internal:
References:
- [post for context](https://fb.workplace.com/groups/mtiasw/permalink/1718377262384606/ )
- [Inductor integration discussion(option 1/2/3)](https://docs.google.com/document/d/1p6363OXtVIRv1hPoaKlRSK3j-iir3QIbDd5bjyqCNig/edit?tab=t.0#heading=h.7s4ns6wcnhmb )
- [Project design doc(option 3)](https://docs.google.com/document/d/1jXUmhgoV9WvkMf-bcY3Od_kK9K_RDOdgHdt1LoQ5Tc4/edit?tab=t.0#heading=h.y43gwdqlv46w )
- [early prototyping diff](https://www.internalfb.com/diff/D75110196 )
- [MPS integration PR](https://github.com/pytorch/pytorch/pull/153959 )
- [empty_strided_xpu PR](https://github.com/pytorch/pytorch/pull/126678 )
Differential Revision: [D78458745](https://our.internmc.facebook.com/intern/diff/D78458745/ )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158526 
Approved by: https://github.com/blaine-rister , https://github.com/jansel , https://github.com/eellison  
2025-07-26 08:16:34 +00:00

abcb24f4de
[Dynamo][Better Engineering] Add typing annotations to guard and source (#158397)
As part of better engineering week, we would like to improve our type support to improve the dev experience in dynamo
This PR adds strict typing support to a critical set of files for dynamo, `source.py` and the base `_guards.py`
Running
```
mypy torch/_dynamo/source.py torch/_guards.py --linecount-report /tmp/coverage_log
```
|          | Lines Annotated | Lines Total | % lines covered | Funcs Annotated | Funcs Total | % funcs covered |
| -------- | ------- | -------- | ------- | ------- | ------- | ------- |
| Main  |  1227 | 2208 | 55.57% | 207 | 362 | 57.18% |
| This PR | 2217 | 2217 | 100.00% | 362 | 362 | 100.00% |
| Delta    | +990 | +9 | +44.43% | +155 | 0 | +42.82% |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158397 
Approved by: https://github.com/anijain2305  
2025-07-24 15:55:18 +00:00

9b4d938f04
[dynamo][fsdp] Consistent behavior of int attributes (#157262)
						Reimpl of https://github.com/pytorch/pytorch/pull/150954 
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157262 
Approved by: https://github.com/bdhirsh  
2025-07-22 11:26:54 +00:00

9498d95b9c
[Dynamo][BetterEngineering] Type trace_rules.py (#158679)
As part of better engineering week, we would like to improve our type support to improve the dev experience in dynamo
This PR adds strict typing support to a core file, `trace_rules.py`
Running
```
mypy torch/_dynamo/trace_rules.py   --linecount-report /tmp/coverage_log
```
|          | Lines Annotated | Lines Total | % lines covered | Funcs Annotated | Funcs Total | % funcs covered |
| -------- | ------- | -------- | ------- | ------- | ------- | ------- |
| Main  |  2564 | 3997 | 64.15% | 34 | 53 | 64.15% |
| This PR | 4022 | 4022 | 100.00% | 53 | 53 | 100.00% |
| Delta    | +1458 | +25 | +35.85% | +19 | 0 | +35.85% |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158679 
Approved by: https://github.com/williamwen42  
2025-07-21 22:12:59 +00:00

b1a0c34dd3
[pt2 event logging] add configurable prefix (#157678)
						Summary:
# Why
make experiments easier to find
# What
- dynamo config to provide a prefix
- use the prefix when sending data to scuba through the self.id_ field
Test Plan:
```
# code edited to set the prefix as `coconutruben-02`
buck2 run mode/opt scripts/coconutruben/torchmm:experiment 2>&1 | tee /tmp/epx040
```
on scuba
```
| autotune_dtypes | autotune_offset | autotune_shape | autotune_strides | event | run_id |
| -----| -----| -----| -----| -----| ----- |
| "torch.float16, torch.float16" | "0, 0" | "4096x3008, 3008x2048" | "[3008, 1], [2048, 1]" | "mm_template_autotuning" | "coconutruben-02-e6bdccc5-6dcf-4d68-9a04-b34f2c6d94fd" |
| "torch.float16, torch.float16" | "0, 0" | "4096x3008, 3008x2048" | "[3008, 1], [2048, 1]" | "mm_template_autotuning" | "coconutruben-02-14165153-5842-4eaa-9e6c-3b0cbc016375" |
```
Rollback Plan:
Differential Revision: D77837550
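The prefixing described under "What" can be sketched as follows (illustrative; the actual config knob and field name in the PR may differ):

```python
import uuid
from typing import Optional

def make_run_id(prefix: Optional[str]) -> str:
    """Sketch of prefixing the event run id sent to the event table
    (illustrative; the actual config knob and field name may differ)."""
    base = str(uuid.uuid4())
    return f"{prefix}-{base}" if prefix else base

# Experiments become easy to find by filtering run_id on the prefix.
print(make_run_id("coconutruben-02"))
```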
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157678 
Approved by: https://github.com/stashuk-olek  
2025-07-21 20:41:03 +00:00

22920c9138
Grab bag of (mostly) typing improvements (#158075)
						Collects some scattershot improvements made while attempting to enable training for AOTInductor. Non-typing changes are:
1. Swapping a few custom searches for the output node in an FX graph for calling `graph.output_node()`.
2. Removing two unused parameters from `torch.export._unlift._unlift`.
3. Switching handles to constants in `cpp_wrapper_cpu` to use C++ references for memory efficiency.
4. Cleaning out unused, unexported imports from `torch/export/__init__.py`, and adding one missing export to `__all__`.
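Item 1 can be sketched without torch, using stand-in classes: a manual scan for the output node is replaced with the graph's own accessor, as `torch.fx.Graph.output_node()` provides for real FX graphs.

```python
# Stand-in classes (not torch.fx) to illustrate the refactor: replace a
# manual scan for the output node with the graph's own accessor, as
# torch.fx.Graph.output_node() does for real FX graphs.
class Node:
    def __init__(self, op: str, name: str) -> None:
        self.op = op
        self.name = name

class Graph:
    def __init__(self, nodes: list[Node]) -> None:
        self.nodes = nodes

    def output_node(self) -> Node:
        # FX graphs end with exactly one output node.
        assert self.nodes and self.nodes[-1].op == "output"
        return self.nodes[-1]

g = Graph([Node("placeholder", "x"), Node("call_function", "add"), Node("output", "out")])

# Before: a custom search loop
found = next(n for n in g.nodes if n.op == "output")
# After: one accessor
print(g.output_node() is found)  # True
```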
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158075 
Approved by: https://github.com/Skylion007  
2025-07-21 19:17:01 +00:00

89850bbc07
[Dynamo] Use proper sources for constructing dataclass defaults (#157993)
						Partially fixes https://github.com/pytorch/pytorch/issues/154009 
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157993 
Approved by: https://github.com/williamwen42 , https://github.com/anijain2305  
2025-07-18 21:51:40 +00:00

94995eba07
[Log] add a hook for recompile user context (#157961)
Users may want compile-related but customized logging info in dynamo_compile. One example is logging the current training iteration index when a recompilation happens. In general, the current training iteration index is not available to the compiler, since the same compiled function may be called multiple times in one training iteration. With this change, the user can provide the training iteration index through a user hook, and torch.compile logs it when recompilation happens.
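The hook pattern described above can be sketched as follows (names are illustrative, not the exact torch API): user code keeps the iteration index up to date, and the compiler invokes the hook when it recompiles.

```python
# Sketch of the user-hook pattern (illustrative names, not the exact
# torch API): user code updates the iteration index, and the compiler
# calls the hook at recompilation time to get extra context to log.
current_iteration = 0

def recompile_user_context() -> dict[str, int]:
    # User-provided hook: expose whatever context should be logged.
    return {"train_iter": current_iteration}

def log_on_recompile(hook) -> str:
    # Stand-in for the dynamo_compile logging a recompile would trigger.
    return f"recompile user context: {hook()}"

current_iteration = 42
print(log_on_recompile(recompile_user_context))
```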
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157961 
Approved by: https://github.com/masnesral  
2025-07-11 03:41:33 +00:00

e517066f41
Revert "[dynamo][fsdp] Consistent behavior of int attributes (#157262)"
						This reverts commit 178fe7aa98987111a73534375099f4ad255e8b59.
Reverted https://github.com/pytorch/pytorch/pull/157262  on behalf of https://github.com/huydhn  due to This fails some internal tests and needs to be relanded ([comment](https://github.com/pytorch/pytorch/pull/157262#issuecomment-3059463896 )) 
2025-07-10 23:11:18 +00:00

82765dad16
Fix logging of config_suppress_errors and config_inline_inbuilt_nn_modules (#157947)
						Currently ~50% of the time we fail or crash before logging metrics, so moving where this is logged will let us have more comprehensive (less-null) data.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157947 
Approved by: https://github.com/masnesral , https://github.com/jovianjaison  
2025-07-10 12:05:43 +00:00

178fe7aa98
[dynamo][fsdp] Consistent behavior of int attributes (#157262)
						Reimpl of https://github.com/pytorch/pytorch/pull/150954 
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157262 
Approved by: https://github.com/bdhirsh  
2025-07-08 22:11:33 +00:00

e49acfc5c5
[list] Raise exception in invalid list method call (#156148)
						Pull Request resolved: https://github.com/pytorch/pytorch/pull/156148 
Approved by: https://github.com/zou3519 
ghstack dependencies: #153969  
2025-07-07 14:51:10 +00:00

0e7f02fe2e
[Dynamo] [FrozensetSubclass] Add support for user defined frozensets (#154263)
						Pull Request resolved: https://github.com/pytorch/pytorch/pull/154263 
Approved by: https://github.com/williamwen42 
ghstack dependencies: #153150 , #152991 , #154539 , #153553 , #154063 , #154064 , #154065 , #154066  
2025-07-04 00:46:05 +00:00

22abe6ded4
[Dynamo] [SetSubclass] Add support for user defined sets (#153553)
						Pull Request resolved: https://github.com/pytorch/pytorch/pull/153553 
Approved by: https://github.com/williamwen42 , https://github.com/zou3519 
ghstack dependencies: #153150 , #152991 , #154539  
2025-07-04 00:45:25 +00:00

e7167dbacf
[Set] Support sets in VariableBuilder (#153150)
						Pull Request resolved: https://github.com/pytorch/pytorch/pull/153150 
Approved by: https://github.com/zou3519  
2025-07-04 00:45:03 +00:00

8c0df6fe17
Revert "[dynamo][fsdp] Consistent behavior of int attributes (#157262)"
						This reverts commit 42b48ee67229286127390000f103a11dfc8901f5.
Reverted https://github.com/pytorch/pytorch/pull/157262  on behalf of https://github.com/jeanschmidt  due to Newly introduced tests are red in internal runs, check D77593713 ([comment](https://github.com/pytorch/pytorch/pull/157262#issuecomment-3026944993 )) 
2025-07-02 08:30:39 +00:00

42b48ee672
[dynamo][fsdp] Consistent behavior of int attributes (#157262)
						Reimpl of https://github.com/pytorch/pytorch/pull/150954 
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157262 
Approved by: https://github.com/bdhirsh  
2025-06-30 22:32:52 +00:00

dcb8982969
[dynamo] move error_on_graph_break out of config (#156762)
						error_on_graph_break doesn't need to be in config, so we move it out. It should make the functorch_maml_omniglot regression less severe.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156762 
Approved by: https://github.com/jansel 
ghstack dependencies: #154283 , #154289 , #154782  
2025-06-26 21:40:38 +00:00

1b2146fc6d
[BE][4/16] fix typos in torch/ (torch/_dynamo/) (#156314)
						Pull Request resolved: https://github.com/pytorch/pytorch/pull/156314 
Approved by: https://github.com/jingsh 
ghstack dependencies: #156313  
2025-06-23 02:57:19 +00:00

5b427c92a8
Revert "[BE][4/16] fix typos in torch/ (torch/_dynamo/) (#156314)"
This reverts commit ead741c5fb0036e0fc95b79d4fe1af3a426e1306.
Reverted https://github.com/pytorch/pytorch/pull/156314 on behalf of https://github.com/atalman due to failures in export/test_torchbind.py::TestCompileTorchbind::test_compile_error_on_input_aliasing_contents_backend_aot_eager ([GH job link](https://github.com/pytorch/pytorch/actions/runs/15804799771/job/44548489912), HUD commit c95f7fa874, [comment](https://github.com/pytorch/pytorch/pull/156313#issuecomment-2994171213))
2025-06-22 12:31:57 +00:00

ead741c5fb
[BE][4/16] fix typos in torch/ (torch/_dynamo/) (#156314)
						Pull Request resolved: https://github.com/pytorch/pytorch/pull/156314 
Approved by: https://github.com/jingsh 
ghstack dependencies: #156313  
2025-06-22 08:43:18 +00:00

554b568040
Add internal use only utility to allow externally visible side effects within HOPs (#155715)
						Pull Request resolved: https://github.com/pytorch/pytorch/pull/155715 
Approved by: https://github.com/zou3519  
2025-06-21 03:55:28 +00:00

b2fc9cfea1
[precompile] Add CompilePackage to serialize dynamo states. (#155118)
Adding a per-torch.compile() CompilePackage object which tracks the dynamo artifact. CompilePackage is considered a low-level component and should not be directly exposed to end users. It has the following interface:
1. `CompilePackage.__init__()` which optionally takes previously serialized dynamo states.
     a. when the `dynamo` argument is None, it will construct a brand new CompilePackage object.
     b. when the `dynamo` argument is not None, it will load a pre-compiled dynamo state.
2. `package.save()` which dumps the dynamo states into _DynamoCacheEntry.
3. `package.install(backends)` which will handle all the side-effectful global scope updates with compiled functions and resume functions.
This diff focuses on the low-level mechanism for precompile. It is left to a higher-level interface to use these APIs to build a more user-facing frontend.
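The three-part interface above can be sketched with stand-in classes (illustrative only; CompilePackage is an internal torch component and its real signatures differ):

```python
from typing import Optional

# Sketch of the described interface (illustrative; CompilePackage is an
# internal torch component and its real signatures differ).
class FakeCacheEntry:
    def __init__(self, state: dict) -> None:
        self.state = state

class FakeCompilePackage:
    def __init__(self, dynamo: Optional[FakeCacheEntry] = None) -> None:
        # (a) dynamo is None: construct a brand new package
        # (b) dynamo is not None: load a pre-compiled state
        self._state = dict(dynamo.state) if dynamo is not None else {}
        self.installed_backends: list[str] = []

    def save(self) -> FakeCacheEntry:
        # dump the dynamo states into a cache entry
        return FakeCacheEntry(dict(self._state))

    def install(self, backends: list[str]) -> None:
        # in the real implementation: side-effectful global-scope updates
        # with compiled functions and resume functions
        self.installed_backends.extend(backends)

pkg = FakeCompilePackage()
entry = pkg.save()
reloaded = FakeCompilePackage(dynamo=entry)
reloaded.install(["inductor"])
print(reloaded.installed_backends)  # ['inductor']
```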
Differential Revision: [D75956538](https://our.internmc.facebook.com/intern/diff/D75956538/ )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155118 
Approved by: https://github.com/jamesjwu 
Co-authored-by: James Wu <jjwu@meta.com > 
2025-06-13 13:54:10 +00:00

d1947a8707
Migrate from lru_cache to cache (#155613)
						Pull Request resolved: https://github.com/pytorch/pytorch/pull/155613 
Approved by: https://github.com/ezyang 
ghstack dependencies: #155612  
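The migration is mechanical: since Python 3.9, `functools.cache` is a thin alias for `lru_cache(maxsize=None)`, so behavior is unchanged.

```python
import functools

@functools.lru_cache(maxsize=None)  # before
def fib_old(n: int) -> int:
    return n if n < 2 else fib_old(n - 1) + fib_old(n - 2)

@functools.cache  # after: identical unbounded cache, less ceremony
def fib_new(n: int) -> int:
    return n if n < 2 else fib_new(n - 1) + fib_new(n - 2)

print(fib_old(20), fib_new(20))  # 6765 6765
```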
2025-06-11 19:44:18 +00:00

c4b93e6579
Replace frame_traced_fn hook with get_traced_code() util (#155249)
						#153622  introduced a hook for getting the relevant code objects after frame tracing. The idea is to have vLLM use this instead of monkey-patching `inline_call_()` to determine the source code files to hash. Unfortunately, the hook runs too late; the vLLM backend needs access to the set of source code filenames while it's running.
This PR replaces the newly-added hook with a utility function that a backend can call to get this information. I've made the change in vLLM and can verify that this allows the information to be queried at the right time.
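The hook-vs-utility distinction can be sketched as follows (illustrative; the real torch utility added by this PR differs in detail): a post-tracing hook fires too late, while a query function can be called while the backend is still running.

```python
# Sketch of the hook-vs-utility distinction (illustrative; the real
# torch utility added by this PR differs in detail). A post-tracing hook
# fires too late, while a query function can be called mid-compilation.
_traced_code_files: list[str] = []

def record_traced_file(filename: str) -> None:
    # called as frames are inlined during tracing
    _traced_code_files.append(filename)

def get_traced_code() -> list[str]:
    """Backend-callable query: source files traced so far."""
    return list(_traced_code_files)

def backend_compile() -> list[str]:
    # a vLLM-style backend can hash these while it is still running
    return get_traced_code()

record_traced_file("model.py")
print(backend_compile())  # ['model.py']
```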
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155249 
Approved by: https://github.com/zou3519  
2025-06-10 22:40:58 +00:00

067fd0b3ab
[dynamo][cleanup] Simplify disabling of the helper functions on tensor properties (#155259)
						Pull Request resolved: https://github.com/pytorch/pytorch/pull/155259 
Approved by: https://github.com/zhxchen17  
2025-06-06 19:44:40 +00:00

271ca679a8
[reland][dynamo] Record the pre-graph bytecode using fast record function event (#154974)
						reland of https://github.com/pytorch/pytorch/pull/154769 
@diff-train-skip-merge
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154974 
Approved by: https://github.com/Lucaskabela , https://github.com/jansel  
2025-06-06 13:11:03 +00:00

e01fde8213
Revert "[reland][dynamo] Record the pre-graph bytecode using fast record function event (#154974)"
						This reverts commit bee9c70c5d4b681ec1f2adf92eca1205b372634a.
Reverted https://github.com/pytorch/pytorch/pull/154974  on behalf of https://github.com/malfet  due to broken inductor tests ([comment](https://github.com/pytorch/pytorch/pull/154974#issuecomment-2944370617 )) 
						
						
					 
					
						2025-06-05 13:36:21 +00:00 
						 
				 
			
				
					
						
					 
					
						
						
							
						
						bee9c70c5d 
					 
					
						
						
							
							[reland][dynamo] Record the pre-graph bytecode using fast record function event ( #154974 )  
						
						... 
						
						
						
						reland of https://github.com/pytorch/pytorch/pull/154769 
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154974 
Approved by: https://github.com/Lucaskabela , https://github.com/jansel  
						
						
					 
					
						2025-06-05 07:25:04 +00:00 
						 
				 
			
				
					
						
					 
					
						
						
							
						
						c881f2ddf3 
					 
					
						
						
							
							[reland][dynamo] Mark a vt unspecialized nn module variable source earlier ( #155099 )  
						
						... 
						
						
						
						Reland of https://github.com/pytorch/pytorch/pull/154780 
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155099 
Approved by: https://github.com/williamwen42  
						
						
					 
					
						2025-06-04 23:05:36 +00:00 
						 
				 
			
				
					
						
					 
					
						
						
							
						
						a99a01a677 
					 
					
						
						
							
							Revert "[dynamo] Mark a vt unspecialized nn module variable source earlier ( #154780 )"  
						
						... 
						
						
						
						This reverts commit cc96febb979da16b0a0b758020b330a49c72b7e7.
Reverted https://github.com/pytorch/pytorch/pull/154780  on behalf of https://github.com/seemethere  due to This fails internal testing see, https://fburl.com/diff/b0yuxk4w  ([comment](https://github.com/pytorch/pytorch/pull/154780#issuecomment-2940381691 )) 
						
						
					 
					
						2025-06-04 15:03:34 +00:00 
						 
				 
			
				
					
						
					 
					
						
						
							
						
						cc96febb97 
					 
					
						
						
							
							[dynamo] Mark a vt unspecialized nn module variable source earlier ( #154780 )  
						
						... 
						
						
						
						I am working on providing some skip guard helper functions to allow users to reduce guard overhead. This is a refactor to allow that.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154780 
Approved by: https://github.com/StrongerXi , https://github.com/jansel  
						
						
					 
					
						2025-06-03 19:19:47 +00:00 
						 
				 
			
				
					
						
					 
					
						
						
							
						
						a7e496a896 
					 
					
						
						
							
							Revert "[dynamo] Record the pre-graph bytecode using fast record function event ( #154769 )"  
						
						... 
						
						
						
						This reverts commit 409c396a48584de1ab14e1be6957663d548ad89e.
Reverted https://github.com/pytorch/pytorch/pull/154769  on behalf of https://github.com/seemethere  due to This fails internal tests see [fburl.com/diff/67gyp7gp](https://fburl.com/diff/67gyp7gp ) ([comment](https://github.com/pytorch/pytorch/pull/154769#issuecomment-2933629894 )) 
						
						
					 
					
						2025-06-03 06:13:49 +00:00 
						 
				 
			
				
					
						
					 
					
						
						
							
						
						409c396a48 
					 
					
						
						
							
							[dynamo] Record the pre-graph bytecode using fast record function event ( #154769 )  
						
						... 
						
						
						
						
Adds another event in the profiler traces. This can help us find models where pre-graph bytecode is very expensive.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154769 
Approved by: https://github.com/zou3519 , https://github.com/williamwen42 , https://github.com/StrongerXi , https://github.com/jansel  
						
						
					 
					
						2025-06-02 22:33:27 +00:00 
						 
				 
			
				
					
						
					 
					
						
						
							
						
						b6b9311f4f 
					 
					
						
						
							
							[BE][Ez]: Fix typo in dynamo utils  #154639  ( #154748 )  
						
						... 
						
						
						
						Fixes a typo in #154639 
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154748 
Approved by: https://github.com/ngimel  
						
						
					 
					
						2025-05-30 18:39:01 +00:00 
						 
				 
			
				
					
						
					 
					
						
						
							
						
						2120eeb8de 
					 
					
						
						
							
							[BE][Ez]: Improve dynamo utils typing with TypeIs and TypeGuard ( #154639 )  
						
						... 
						
						
						
Adds TypeIs and TypeGuard annotations to some _dynamo utils for better type narrowing
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154639 
Approved by: https://github.com/jansel  
						
						
					 
					
						2025-05-30 18:09:50 +00:00 
						 
				 
			
				
					
						
					 
					
						
						
							
						
						8002d22ce3 
					 
					
						
						
							
							[dynamo] Trace into descriptor with __set__ ( #154176 )  
						
						... 
						
						
						
						As title, this patch basically implements
https://github.com/python/cpython/blob/3.11/Objects/object.c#L1371-L1452 ,
and makes the `__get__` handling more robust.
I ran into this while fixing #133762 .
Differential Revision: [D75488090](https://our.internmc.facebook.com/intern/diff/D75488090 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154176 
Approved by: https://github.com/jansel  
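For context, the CPython logic being implemented is the data-descriptor protocol: a descriptor that defines `__set__` intercepts attribute *writes* on instances, not just reads. A toy example (not from the patch) of the behavior Dynamo now traces into:

```python
# A data descriptor: defines both __get__ and __set__, so instance
# attribute assignment routes through it instead of the instance __dict__.
class Positive:
    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__[self.name]

    def __set__(self, obj, value):
        if value <= 0:
            raise ValueError(f"{self.name} must be positive")
        obj.__dict__[self.name] = value


class Box:
    size = Positive()


b = Box()
b.size = 3       # routed through Positive.__set__
print(b.size)    # 3
```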
						
						
					 
					
						2025-05-30 16:14:37 +00:00 
						 
				 
			
				
					
						
					 
					
						
						
							
						
						bb17f9c98b 
					 
					
						
						
							
							[AOTAutogradCache] Fix CHROMIUM_EVENT_LOG being none ( #154258 )  
						
						... 
						
						
						
It turns out that if you import a name whose value is None at import time in Python, and later update the value, the name you imported stays None:
```
import torch
from torch._dynamo.utils import CHROMIUM_EVENT_LOG
class Foo:
  pass
torch._dynamo.utils.CHROMIUM_EVENT_LOG =  Foo()
print(CHROMIUM_EVENT_LOG) # None
```
This fixes the bug so we get AOTAutogradCache instant events again
Differential Revision: [D75305770](https://our.internmc.facebook.com/intern/diff/D75305770/ )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154258 
Approved by: https://github.com/oulgen  
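The standard fix for this import-binding gotcha is to look the attribute up on the module at use time instead of copying the binding at import time. A self-contained sketch using a toy module as a stand-in for `torch._dynamo.utils`:

```python
import types

# Stand-in for torch._dynamo.utils with a module-level global.
mod = types.ModuleType("mod")
mod.CHROMIUM_EVENT_LOG = None

# Buggy pattern: "from mod import CHROMIUM_EVENT_LOG" copies the
# *current* binding (None) into the local namespace.
snapshot = mod.CHROMIUM_EVENT_LOG


class Foo:
    pass


mod.CHROMIUM_EVENT_LOG = Foo()

print(snapshot)                # still None: the copied binding never updates
print(mod.CHROMIUM_EVENT_LOG)  # attribute lookup on the module sees the Foo
```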
						
						
					 
					
						2025-05-23 21:53:31 +00:00 
						 
				 
			
				
					
						
					 
					
						
						
							
						
						3443627e07 
					 
					
						
						
							
							Revert "[BE]: Enable RUFF TRY400 rule - log.exception ( #153473 )"  
						
						... 
						
						
						
						This reverts commit 4f4ecc583e0f48ad2d062a53bf91c61ab40b4948.
Reverted https://github.com/pytorch/pytorch/pull/153473  on behalf of https://github.com/jeanschmidt  due to seems to have broken internal signals, @albanD may I count on you to help the author merge his PR? D74837988 ([comment](https://github.com/pytorch/pytorch/pull/153473#issuecomment-2886017075 )) 
						
						
					 
					
						2025-05-16 08:29:26 +00:00 
						 
				 
			
				
					
						
					 
					
						
						
							
						
						4f4ecc583e 
					 
					
						
						
							
							[BE]: Enable RUFF TRY400 rule - log.exception ( #153473 )  
						
						... 
						
						
						
Change logging.error to logging.exception to log additional information when relevant. A few logging.error calls inside try/except blocks have slipped in since my last cleanup here, and the rule is now stable, so I am enabling it codebase-wide. I have NOQA'd much of our custom exception stack-trace handling for RPC and distributed calls, and fixed a few errors depending on whether we immediately re-raised the exception or failed to print exception details where they could be useful.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153473 
Approved by: https://github.com/albanD , https://github.com/cyyever  
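The difference the TRY400 rule enforces: inside an `except` block, `Logger.exception` logs at ERROR level *and* appends the current traceback, whereas `Logger.error` alone drops it. A minimal illustration:

```python
import io
import logging

# Capture log output in a buffer so we can inspect it.
buf = io.StringIO()
logging.basicConfig(stream=buf, level=logging.ERROR, force=True)
log = logging.getLogger(__name__)

try:
    1 / 0
except ZeroDivisionError:
    log.exception("division failed")  # preferred: includes the traceback

print("Traceback" in buf.getvalue())  # True
```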
						
						
					 
					
						2025-05-15 13:36:59 +00:00