9d175bc7e6
Fixes for CPython int/float tests ( #155978 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155978
Approved by: https://github.com/zou3519
2025-07-02 15:04:00 +00:00
c202a7329a
Revert "Fixes for CPython int/float tests ( #155978 )"
...
This reverts commit 23491519d288dedb2a54cfad5fef7fcb2ad8eade.
Reverted https://github.com/pytorch/pytorch/pull/155978 on behalf of https://github.com/XuehaiPan due to sys.get_int_max_str_digits is not always available ([comment](https://github.com/pytorch/pytorch/pull/155978#issuecomment-3021990027 ))
2025-07-01 06:16:49 +00:00
23491519d2
Fixes for CPython int/float tests ( #155978 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155978
Approved by: https://github.com/zou3519
2025-06-30 19:42:11 +00:00
da1f337bc4
Revert "Fixes for CPython int/float tests ( #155978 )"
...
This reverts commit fab53dfdf1d89cecd5e82b12cced9b6dd217e87c.
Reverted https://github.com/pytorch/pytorch/pull/155978 on behalf of https://github.com/guilhermeleobas due to failing in trunk ([comment](https://github.com/pytorch/pytorch/pull/155978#issuecomment-3019457531 ))
2025-06-30 14:49:44 +00:00
fab53dfdf1
Fixes for CPython int/float tests ( #155978 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155978
Approved by: https://github.com/zou3519
2025-06-30 14:15:47 +00:00
0decd966af
Revert "Fixes for CPython int/float tests ( #155978 )"
...
This reverts commit 216bd6091ec52865052282eced7e6d5d2a4b4fb4.
Reverted https://github.com/pytorch/pytorch/pull/155978 on behalf of https://github.com/huydhn due to Some tests are still failing in trunk ([comment](https://github.com/pytorch/pytorch/pull/155978#issuecomment-3014185210 ))
2025-06-27 19:39:41 +00:00
216bd6091e
Fixes for CPython int/float tests ( #155978 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155978
Approved by: https://github.com/zou3519
2025-06-27 16:41:00 +00:00
d06a406656
[dynamo] Graph break on torch.Tensor.data assignment with mismatched dtype ( #156623 )
...
Fixes #152162. Discussed with @bdhirsh; we decided this is the easiest
workaround for now.
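A minimal illustration of the behavior (an assumption about the scenario, not the PR's test): assigning `.data` with a mismatched dtype inside a compiled function should now graph break and run in eager rather than being traced with the wrong dtype.
```python
import torch

@torch.compile(backend="eager")
def fn(x):
    x.data = torch.zeros(2, dtype=torch.float64)  # dtype differs from x's float32
    return x + 1

out = fn(torch.ones(2, dtype=torch.float32))
print(out, out.dtype)  # expected to match eager: tensor([1., 1.], dtype=torch.float64)
```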
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156623
Approved by: https://github.com/bdhirsh
2025-06-25 02:03:04 +00:00
1dc1eedd43
Revert "[dynamo] Graph break on torch.Tensor.data
assignment with mismatched dtype ( #156623 )"
...
This reverts commit c1ad4b8e7a16f54c35a3908b56ed7d9f95eef586.
Reverted https://github.com/pytorch/pytorch/pull/156623 on behalf of https://github.com/albanD due to Breaks Dynamo tests in trunk ([comment](https://github.com/pytorch/pytorch/pull/156623#issuecomment-3001806841 ))
2025-06-24 20:44:42 +00:00
c1ad4b8e7a
[dynamo] Graph break on torch.Tensor.data assignment with mismatched dtype ( #156623 )
...
Fixes #152162. Discussed with @bdhirsh; we decided this is the easiest
workaround for now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156623
Approved by: https://github.com/bdhirsh
2025-06-24 19:33:11 +00:00
640f5a7090
[dynamo] Support builtin bool on non-constant VTs ( #155863 )
...
In practice `bool(...)` is either constant folded by Dynamo or used for
branching (so most of its emulation logic lived in
`InstructionTranslator.generic_jump`).
This patch adds a dedicated `bool` handler (only for symbolic
bool/int/float for now), and fixes #136075.
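A minimal illustration (not the PR's test; the dynamic-shape setup is an assumption) of the kind of code this handler targets: calling the builtin `bool()` on a symbolic value outside of a branch.
```python
import torch

@torch.compile(backend="eager")
def fn(x):
    n = x.shape[0]            # becomes a symbolic int once dynamic shapes kick in
    flag = bool(n)            # builtin bool on a symbolic value, not in a jump
    return x + 1 if flag else x - 1

# A second call with a different size makes the shape dynamic on recompile.
print(fn(torch.ones(3)), fn(torch.ones(5)))
```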
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155863
Approved by: https://github.com/williamwen42
2025-06-23 15:53:15 +00:00
1b2146fc6d
[BE][4/16] fix typos in torch/ (torch/_dynamo/) ( #156314 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156314
Approved by: https://github.com/jingsh
ghstack dependencies: #156313
2025-06-23 02:57:19 +00:00
5b427c92a8
Revert "[BE][4/16] fix typos in torch/ (torch/_dynamo/) ( #156314 )"
...
This reverts commit ead741c5fb0036e0fc95b79d4fe1af3a426e1306.
Reverted https://github.com/pytorch/pytorch/pull/156314 on behalf of https://github.com/atalman due to export/test_torchbind.py::TestCompileTorchbind::test_compile_error_on_input_aliasing_contents_backend_aot_eager [GH job link](https://github.com/pytorch/pytorch/actions/runs/15804799771/job/44548489912 ) [HUD commit link](c95f7fa874 ) ([comment](https://github.com/pytorch/pytorch/pull/156313#issuecomment-2994171213 ))
2025-06-22 12:31:57 +00:00
ead741c5fb
[BE][4/16] fix typos in torch/ (torch/_dynamo/) ( #156314 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156314
Approved by: https://github.com/jingsh
ghstack dependencies: #156313
2025-06-22 08:43:18 +00:00
d1947a8707
Migrate from lru_cache to cache ( #155613 )
...
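The commit body is elided above; for context (a general Python note, not code from the PR), `functools.cache` is equivalent to `functools.lru_cache(maxsize=None)`, i.e. an unbounded cache with no LRU eviction bookkeeping.
```python
import functools

@functools.lru_cache(maxsize=None)   # old spelling
def fib_old(n: int) -> int:
    return n if n < 2 else fib_old(n - 1) + fib_old(n - 2)

@functools.cache                     # equivalent new spelling
def fib_new(n: int) -> int:
    return n if n < 2 else fib_new(n - 1) + fib_new(n - 2)

assert fib_old(20) == fib_new(20) == 6765
```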
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155613
Approved by: https://github.com/ezyang
ghstack dependencies: #155612
2025-06-11 19:44:18 +00:00
07eb374e7e
[dynamo] Avoid unnecessary caching source codegen ( #155376 )
...
We only need to cache a source (e.g., `x.y.z`) into a temporary local if
it's used multiple times in the codegen; otherwise we'd just be creating
redundant `DUP` and `STORE_FAST tmp_...` instructions, which might
degrade perf and definitely make the generated bytecode harder to read.
Example:
```python
import torch

@torch.compile(backend="eager")
def fn(x, y):
    return x + y

fn(torch.ones(2), torch.ones(1))
```
Original bytecode:
```verbatim
[0/0] [__bytecode] 3 0 RESUME 0
[0/0] [__bytecode]
[0/0] [__bytecode] 5 2 LOAD_FAST 0 (x)
[0/0] [__bytecode] 4 LOAD_FAST 1 (y)
[0/0] [__bytecode] 6 BINARY_OP 0 (+)
[0/0] [__bytecode] 10 RETURN_VALUE
```
Modified bytecode (before this patch):
```verbatim
[__bytecode] 3 0 RESUME 0
[__bytecode] 2 LOAD_GLOBAL 1 (NULL + __compiled_fn_1_578c8d9a_2a9b_4d15_bac7_267591cdee32)
[__bytecode] 14 LOAD_FAST 0 (x)
[__bytecode] 16 COPY 1
[__bytecode] 18 STORE_FAST 3 (tmp_1)
[__bytecode] 20 LOAD_FAST 1 (y)
[__bytecode] 22 COPY 1
[__bytecode] 24 STORE_FAST 4 (tmp_2)
[__bytecode] 26 PRECALL 2
[__bytecode] 30 CALL 2
[__bytecode] 40 STORE_FAST 2 (graph_out_0)
[__bytecode] 42 LOAD_FAST 2 (graph_out_0)
[__bytecode] 44 LOAD_CONST 1 (0)
[__bytecode] 46 BINARY_SUBSCR
[__bytecode] 56 DELETE_FAST 2 (graph_out_0)
[__bytecode] 58 RETURN_VALUE
```
Modified bytecode (after this patch):
```verbatim
[__bytecode] 3 0 RESUME 0
[__bytecode] 2 LOAD_GLOBAL 1 (NULL + __compiled_fn_1_2c498af2_ce5c_49cb_abba_a0c7489b09ce)
[__bytecode] 14 LOAD_FAST 0 (x)
[__bytecode] 16 LOAD_FAST 1 (y)
[__bytecode] 18 PRECALL 2
[__bytecode] 22 CALL 2
[__bytecode] 32 STORE_FAST 2 (graph_out_0)
[__bytecode] 34 LOAD_FAST 2 (graph_out_0)
[__bytecode] 36 LOAD_CONST 1 (0)
[__bytecode] 38 BINARY_SUBSCR
[__bytecode] 48 DELETE_FAST 2 (graph_out_0)
[__bytecode] 50 RETURN_VALUE
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155376
Approved by: https://github.com/williamwen42
2025-06-10 19:38:15 +00:00
b981fb6744
Add docblock to torch/_dynamo/variables/builtin.py ( #155402 )
...
Add comprehensive module docstring explaining built-in function and type
variable tracking, including handling of Python built-ins, type constructors,
operators, and special constructs during symbolic execution.
Originally generated by Claude but reviewed and edited by me.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155402
Approved by: https://github.com/Skylion007
ghstack dependencies: #155403
2025-06-08 15:24:29 +00:00
e9c31fb86d
[torch.compile] handle a custom __delattr__ method correctly ( #150899 )
...
Fixes #150765
- handle a custom __delattr__ method correctly
Test:
```
import torch

class MyObject:
    def __init__(self, val):
        self.val = val
        # Flag to track deletion attempts instead of using print
        self.deletion_attempted = False

    def __delattr__(self, attr):
        if attr == "val":
            # Set flag instead of printing
            self.deletion_attempted = True
        else:
            super().__delattr__(attr)

@torch.compile(fullgraph=True, backend="eager")
def test(input_tensor):
    instance_a = MyObject(1)
    instance_b = MyObject(2)
    del instance_a.val
    del instance_b.val
    exists_a = hasattr(instance_a, 'val')
    exists_b = hasattr(instance_b, 'val')
    deletion_attempted_a = instance_a.deletion_attempted
    deletion_attempted_b = instance_b.deletion_attempted
    return input_tensor + 1, exists_a, exists_b, deletion_attempted_a, deletion_attempted_b

# Run the test
result = test(torch.ones(1))
print(f"Result tensor: {result[0]}")
print(f"val attribute still exists on instance_a: {result[1]}")
print(f"val attribute still exists on instance_b: {result[2]}")
print(f"Deletion was attempted on instance_a: {result[3]}")
print(f"Deletion was attempted on instance_b: {result[4]}")
```
output:
```
(base) sany@sandishs-Laptop pytorch % python3 test_delattr_fix.py
Result tensor: tensor([2.])
val attribute still exists on instance_a: True
val attribute still exists on instance_b: True
Deletion was attempted on instance_a: True
Deletion was attempted on instance_b: True
```
```
(pytorch-dev) sany@sandishs-Laptop pytorch % python3 -m pytest test/dynamo/test_repros.py::ReproTests::test_delattr_return -v
========================================================= test session starts =========================================================
platform darwin -- Python 3.12.5, pytest-8.3.5, pluggy-1.5.0 -- /Library/Frameworks/Python.framework/Versions/3.12/bin/python3
cachedir: .pytest_cache
rootdir: /Users/sany/git/pytorch
configfile: pytest.ini
plugins: typeguard-4.3.0
collected 1 item
Running 1 items in this shard
test/dynamo/test_repros.py::ReproTests::test_delattr_return PASSED [0.0659s] [100%]
========================================================== 1 passed in 1.71s ==========================================================
(pytorch-dev) sany@sandishs-Laptop pytorch %
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150899
Approved by: https://github.com/jansel , https://github.com/StrongerXi
2025-06-04 17:27:20 +00:00
1258aac1c2
[dynamo] Upcast torch.Size + tuple to be of size torch.Size ( #154830 )
...
Fixes https://github.com/pytorch/pytorch/issues/154432
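A minimal illustration (an assumption about the reported behavior, not the PR's test): concatenating `x.shape` with a plain tuple inside the compiled region should keep the `torch.Size` type, matching eager.
```python
import torch

@torch.compile(backend="eager")
def fn(x):
    new_shape = x.shape + (1,)       # torch.Size + tuple
    return x.reshape(new_shape), type(new_shape) is torch.Size

out, is_size = fn(torch.ones(2, 3))
print(out.shape, is_size)            # expected: torch.Size([2, 3, 1]) True
```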
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154830
Approved by: https://github.com/StrongerXi , https://github.com/Skylion007 , https://github.com/williamwen42
2025-06-02 17:57:23 +00:00
7368eeba5e
[dynamo][guards] Prevent LENGTH guard on nn modules ( #154763 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154763
Approved by: https://github.com/williamwen42
2025-05-31 05:32:31 +00:00
7183f52675
[dynamo] Support namedtuple subclass ( #153982 )
...
Fixes #133762. This involves the following (a sketch follows the list):
1. support tuple subclass constructed inside compile region.
2. handle the "fake" global scope associated with NamedTuple-generated
`__new__`.
3. handle `namedtuple._tuplegetter` more faithfully.
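A minimal sketch of point 1 (the `Point`/`ScaledPoint` names are illustrative, not taken from the PR): a NamedTuple subclass constructed and used inside the compiled region.
```python
import torch
from typing import NamedTuple

class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor

class ScaledPoint(Point):            # namedtuple (tuple) subclass
    def norm(self):
        return (self.x ** 2 + self.y ** 2).sqrt()

@torch.compile(backend="eager")
def fn(a, b):
    p = ScaledPoint(a, b)            # constructed inside the compile region
    return p.norm()

print(fn(torch.ones(2), torch.ones(2)))
```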
Differential Revision: [D75488091](https://our.internmc.facebook.com/intern/diff/D75488091 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153982
Approved by: https://github.com/jansel
ghstack dependencies: #154176
2025-05-30 16:14:37 +00:00
f66a159db5
[Set] Raise TypeError if set is called with the wrong number of arguments ( #152990 )
...
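The commit body is elided above; a minimal illustration (not the PR's test) of the CPython behavior being matched — `set()` accepts at most one iterable argument.
```python
import torch

@torch.compile(backend="eager")
def fn(x):
    try:
        set(1, 2)                    # CPython: TypeError, set expects at most 1 argument
    except TypeError:
        return x + 1
    return x - 1

print(fn(torch.ones(2)))             # expected: tensor([2., 2.])
```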
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152990
Approved by: https://github.com/anijain2305
ghstack dependencies: #150792 , #152987 , #152988 , #152904 , #152901 , #152902 , #152903 , #152905 , #152906 , #152989 , #152907 , #152908
2025-05-16 14:28:32 +00:00
cf7021a0ee
[Set] Handle exception in ConstantVariable operation ( #152987 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152987
Approved by: https://github.com/williamwen42 , https://github.com/anijain2305
ghstack dependencies: #150792
2025-05-16 14:28:32 +00:00
a4459cd4e3
Remove property from python_type function ( #152900 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152900
Approved by: https://github.com/amjames , https://github.com/anijain2305
ghstack dependencies: #153070
2025-05-13 16:26:25 +00:00
ae1e51b6ad
Add infra to run CPython tests under Dynamo ( #150787 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150787
Approved by: https://github.com/zou3519
2025-05-07 04:03:14 +00:00
103fe856e1
Revert "Add infra to run CPython tests under Dynamo ( #150787 )"
...
This reverts commit 7c96dd8f0c9a7e17f598612405f002441c7f07ae.
Reverted https://github.com/pytorch/pytorch/pull/150787 on behalf of https://github.com/huydhn due to Sorry for reverting your change but a failed test is showing up in trunk ([comment](https://github.com/pytorch/pytorch/pull/150787#issuecomment-2852818113 ))
2025-05-06 00:20:02 +00:00
7c96dd8f0c
Add infra to run CPython tests under Dynamo ( #150787 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150787
Approved by: https://github.com/zou3519
2025-05-05 17:20:14 +00:00
1d8cdf373b
[dynamo] Guard serialization for NAME_MATCH ( #152332 )
...
Differential Revision: [D73780430](https://our.internmc.facebook.com/intern/diff/D73780430/ )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152332
Approved by: https://github.com/jansel
ghstack dependencies: #152325 , #152326 , #152327 , #152328 , #152329 , #152330 , #152331
2025-04-29 20:16:00 +00:00
225742838b
Add an additional check to trigger graph break for sparse tensor ( #151897 )
...
Fixes #151522
This PR fixes an issue where Dynamo failed to trigger a graph break for sparse tensors in certain code paths. I added an additional check to handle this case, which resolves the original problem.
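A minimal illustration (an assumption about the failure mode, not the PR's test): a sparse tensor produced inside a compiled function should now trigger a graph break and fall back to eager rather than erroring.
```python
import torch

@torch.compile(backend="eager")
def fn(dense):
    sparse = dense.to_sparse()       # sparse COO tensor appears mid-trace
    return sparse.to_dense() + 1

print(fn(torch.eye(3)))
```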
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151897
Approved by: https://github.com/jansel
2025-04-26 21:02:32 +00:00
1f29190b59
[dynamo] unimplemented -> unimplemented_v2 in variables/builtin.py ( #151145 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151145
Approved by: https://github.com/Skylion007 , https://github.com/StrongerXi , https://github.com/jansel , https://github.com/zou3519
2025-04-16 17:16:05 +00:00
3c46808a14
[dynamo] Graph break fixes while tracing inspect module ( #151168 )
...
Fixes https://github.com/pytorch/pytorch/issues/139374
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151168
Approved by: https://github.com/jansel
ghstack dependencies: #151164
2025-04-14 17:38:20 +00:00
85ada5d6dd
[Dynamo] Allow dynamo to handle 'or' operator between two dicts ( #147305 )
...
Fixes #146538
Pull Request resolved: https://github.com/pytorch/pytorch/pull/147305
Approved by: https://github.com/anijain2305
2025-04-11 04:47:31 +00:00
f3b2fb6c66
Allow trace through unittest ( #146500 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146500
Approved by: https://github.com/anijain2305
2025-04-08 14:55:17 +00:00
1b0a023dde
[Dynamo][Misc] Apply typing hints for codegen ( #150289 )
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150289
Approved by: https://github.com/Skylion007 , https://github.com/cyyever
2025-04-04 14:26:22 +00:00
5d36253a7d
Refactoring: fix the python constant check ( #150608 )
...
As the title states.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150608
Approved by: https://github.com/Skylion007
2025-04-03 17:33:45 +00:00
cbc901fac3
Implement raise ... from ... ( #148766 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148766
Approved by: https://github.com/zou3519
2025-04-03 13:15:31 +00:00
33535b3eee
[dynamo] Support Tensor subclass that has dynamic attributes or calls Parameter.__torch_function__ ( #149482 )
...
This fixes most of https://github.com/huggingface/diffusers/issues/10795,
except for `torch.Tensor._make_subclass`, which will be fixed in a
subsequent patch.
The relevant tensor subclass from the aforementioned issue is defined
here: fbf6b856cc/src/diffusers/quantizers/gguf/utils.py (L398-L435).
There are three things to note about the tensor subclass (a minimal sketch follows the list):
1. it calls `super().__torch_function__`, which is
`torch._C._disabled_torch_function_impl`, so this patch updates
`SuperVariable.call_method` to handle it (we can't do a simpler
polyfill due to some bug with `var_getattr` raising
`NotImplementedError`, which forgot to restore symbolic context).
2. it sets and reads attributes (`quant_type`), and
defines new methods (`as_data`), so this patch adds support for those.
3. it has a `__init__`, which Dynamo needs to trace through in
`TensorSubclassVariable.call_function`.
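A minimal sketch loosely modeled on that class (the `QuantParam` name is illustrative; `quant_type` and `as_data` are the names mentioned above): it defers to `super().__torch_function__`, carries an extra attribute, and defines a new method.
```python
import torch

class QuantParam(torch.nn.Parameter):
    def __new__(cls, data, requires_grad=False):
        return super().__new__(cls, data, requires_grad)

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        # Defers to Parameter's implementation, i.e. torch._C._disabled_torch_function_impl.
        return super().__torch_function__(func, types, args, kwargs or {})

    def as_data(self):
        # New method defined on the subclass.
        return self.detach().clone()

@torch.compile(backend="eager")
def fn(p, x):
    return p.as_data() * x, p.quant_type   # reads the dynamic attribute

p = QuantParam(torch.ones(2))
p.quant_type = "q8"                        # dynamic attribute set on the instance
print(fn(p, torch.ones(2)))
```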
Differential Revision: [D71906140](https://our.internmc.facebook.com/intern/diff/D71906140 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149482
Approved by: https://github.com/jansel , https://github.com/mlazos
2025-04-02 20:56:43 +00:00
03c879d59b
Revert "[dynamo] Support Tensor subclass that has dynamic attributes or calls Parameter.__torch_function__
( #149482 )"
...
This reverts commit 98453c135a7778d12ff881d8b0a717257be9fc38.
Reverted https://github.com/pytorch/pytorch/pull/149482 on behalf of https://github.com/malfet due to Broke trunk, see b03c42109c/1 ([comment](https://github.com/pytorch/pytorch/pull/149482#issuecomment-2773650522 ))
2025-04-02 20:30:33 +00:00
98453c135a
[dynamo] Support Tensor subclass that has dynamic attributes or calls Parameter.__torch_function__ ( #149482 )
...
This fixes most of https://github.com/huggingface/diffusers/issues/10795,
except for `torch.Tensor._make_subclass`, which will be fixed in a
subsequent patch.
The relevant tensor subclass from the aforementioned issue is defined
here: fbf6b856cc/src/diffusers/quantizers/gguf/utils.py (L398-L435).
There are three things to note about the tensor subclass:
1. it calls `super().__torch_function__`, which is
`torch._C._disabled_torch_function_impl`, so this patch updates
`SuperVariable.call_method` to handle it (we can't do a simpler
polyfill due to some bug with `var_getattr` raising
`NotImplementedError`, which forgot to restore symbolic context).
2. it sets and reads attributes (`quant_type`), and
defines new methods (`as_data`), so this patch adds support for those.
3. it has a `__init__`, which Dynamo needs to trace through in
`TensorSubclassVariable.call_function`.
Differential Revision: [D71906140](https://our.internmc.facebook.com/intern/diff/D71906140 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149482
Approved by: https://github.com/jansel , https://github.com/mlazos
2025-04-02 17:05:12 +00:00
1c98dc3664
[dynamo] Fix handling of setattr with some tensor attributes ( #149791 )
...
We weren't handling `setattr(tensor_obj, "real", 42)` correctly, because
the attribute is a `GetSetDescriptorType` that has special setter logic.
See the added test and comments for more details.
This patch makes it so that we graph break in those cases, rather than
resulting in silent incorrectness.
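A minimal illustration (an assumption based on the description above, not the PR's test): after this patch, the `setattr` on `.real` below should graph break and execute eagerly, giving the same result as eager mode.
```python
import torch

@torch.compile(backend="eager")
def fn(x):
    setattr(x, "real", torch.ones_like(x.real))  # .real is a getset descriptor with setter logic
    return x * 2

z = torch.zeros(2, dtype=torch.complex64)
print(fn(z))   # expected: tensor([2.+0.j, 2.+0.j])
```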
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149791
Approved by: https://github.com/mlazos
ghstack dependencies: #149481
2025-03-25 18:57:56 +00:00
44e6464914
Allow setting attribute to NestedUserFunctionVariable ( #146505 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146505
Approved by: https://github.com/zou3519
2025-03-20 19:59:30 +00:00
fb53e9e514
Add __context/cause/suppress_context/traceback__ to Exception ( #146499 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146499
Approved by: https://github.com/zou3519 , https://github.com/anijain2305
ghstack dependencies: #146504
2025-03-11 18:55:45 +00:00
3ce352e389
[BE][PYFMT] migrate PYFMT for torch._dynamo to ruff format ( #144549 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144549
Approved by: https://github.com/jansel
2025-02-28 03:03:53 +00:00
8c761ac7e3
Handle is/is not ( #146496 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146496
Approved by: https://github.com/anijain2305 , https://github.com/zou3519
2025-02-23 01:18:28 +00:00
db4ce78d46
PEP585: More UP006 fixes ( #146392 )
...
This should be the final PR before we can enable RUFF UP006.
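An illustrative before/after of the UP006 rule being enabled (not code from the PR): PEP 585 builtin generics replace the `typing` aliases.
```python
from typing import Dict, List          # old style, flagged by UP006

def old_style(batches: List[Dict[str, int]]) -> List[int]:
    return [len(b) for b in batches]

def new_style(batches: list[dict[str, int]]) -> list[int]:  # PEP 585 builtins
    return [len(b) for b in batches]

assert old_style([{"a": 1}]) == new_style([{"a": 1}]) == [1]
```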
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146392
Approved by: https://github.com/justinchuby , https://github.com/albanD , https://github.com/Skylion007
2025-02-20 06:18:13 +00:00
16e202a38e
[dynamo] improved graph break messages for some common graph break sites [1/N] ( #146525 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146525
Approved by: https://github.com/jansel
2025-02-20 00:08:13 +00:00
ee38a32c55
[Dynamo] support isinstance(...) check for type tuple ( #146984 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146984
Approved by: https://github.com/jansel
2025-02-16 10:41:49 +00:00
9dc702875d
[dynamo][mappingproxy][inspect] Support existing types.MappingProxyType ( #147217 )
...
Fixes https://github.com/pytorch/pytorch/issues/147162
Pull Request resolved: https://github.com/pytorch/pytorch/pull/147217
Approved by: https://github.com/williamwen42
2025-02-15 07:59:33 +00:00
21c2565f35
Document dynamo ( #146736 )
...
Many files in dynamo are currently lacking file/module-level documentation, which makes it hard to know what they do at a glance without digging into the code. This fixes that.
Note: documentation was AI-generated and could be incorrect; please review carefully.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146736
Approved by: https://github.com/jansel , https://github.com/StrongerXi , https://github.com/anijain2305 , https://github.com/zou3519
2025-02-13 00:02:21 +00:00
d6513f3246
[dynamo] Support list subclasses and fix dict subclasses mutation bugs ( #146819 )
...
This PR adds support for list subclasses. Among other things, it:
1) Tracks mutations on internal VTs like `_dict_vt` and `_list_vt` using sources. This helps identify whether there was a mutation in the underlying data structures and whether they need to be reconstructed.
2) Adds a new method, `is_modified`, to `UserDefinedObjectVariable`, which the `side_effect` infra relies on to check for mutations in the underlying VTs (like `_dict_vt`).
3) Ensures the `reconstruction` logic uses the `dict.__getitem__` and `list.__getitem__` methods. This is important because we don't want to call the overridden `__getitem__` methods.
If this PR is hard to review, please let me know. I can break it into several small PRs.
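A minimal sketch (the `LoggingList` name is illustrative, not from the PR): a `list` subclass with an overridden `__getitem__`, mutated inside the compiled region; per point 3, reconstruction should go through `list.__getitem__` rather than the override.
```python
import torch

class LoggingList(list):
    def __getitem__(self, idx):
        # User override; Dynamo's reconstruction should not rely on this.
        return super().__getitem__(idx)

@torch.compile(backend="eager")
def fn(xs, t):
    xs.append(t + 1)          # mutation of the list subclass inside the trace
    return xs[0] * 2

lst = LoggingList([torch.ones(2)])
out = fn(lst, torch.ones(2))
print(out, len(lst))          # expected: tensor([2., 2.]) 2
```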
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146819
Approved by: https://github.com/StrongerXi , https://github.com/jansel
2025-02-12 17:46:02 +00:00