fbe0d20a17
[2/N] More ruff SIM fixes (#165031)
...
This is a follow-up of #164695, applying ruff SIM rules to more files. Most changes simplify `dict.get` calls, since `None` is already the default value.
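For illustration, the shape of a typical SIM fix here (a sketch, not a line from the diff): `dict.get` already returns `None` for a missing key, so the explicit default is redundant.
```
config = {"device": "cuda"}

# Before: explicit None default, flagged by ruff SIM
rank = config.get("rank", None)

# After: dict.get already defaults to None
rank = config.get("rank")

assert rank is None
```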
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165031
Approved by: https://github.com/mlazos
2025-10-14 14:22:54 +00:00
b8be796a57
Revert "[2/N] More ruff SIM fixes ( #165031 )"
...
This reverts commit 38095fbd1323ee4a9541fbcbb9b28bd20f2cd956.
Reverted https://github.com/pytorch/pytorch/pull/165031 on behalf of https://github.com/albanD due to One of the changed lines started to fail on trunk ([comment](https://github.com/pytorch/pytorch/pull/165031#issuecomment-3390190870))
2025-10-10 13:42:14 +00:00
38095fbd13
[2/N] More ruff SIM fixes (#165031)
...
This is a follow-up of #164695, applying ruff SIM rules to more files. Most changes simplify `dict.get` calls, since `None` is already the default value.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165031
Approved by: https://github.com/mlazos
2025-10-10 05:37:46 +00:00
7457d139c5
Add pyrefly suppressions to torch/distributed (7/n) (#165002)
...
Adds suppressions so that pyrefly will typecheck clean: https://github.com/pytorch/pytorch/issues/163283
One more PR after this one.
Test plan:
dmypy restart && python3 scripts/lintrunner.py -a
pyrefly check
step 1: delete lines in the pyrefly.toml file from the project-excludes field
step 2: run pyrefly check
step 3: add suppressions, clean up unused suppressions
before: https://gist.github.com/maggiemoss/4b3bf2037014e116bc00706a16aef199
after:
INFO 0 errors (6,884 ignored)
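For illustration, a sketch of what such a suppression looks like, assuming pyrefly's `# pyrefly: ignore` comment form (the function is hypothetical):
```
from typing import Optional

def get_rank(rank: Optional[int]) -> int:
    # pyrefly: ignore  # assumed suppression-comment syntax
    return rank + 1  # type error when rank is None; suppressed above
```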
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165002
Approved by: https://github.com/oulgen
2025-10-09 04:08:25 +00:00
5d7360bb03
Revert "Enable all SIM rules except disabled ones ( #164645 )"
...
This reverts commit 321e6026925f6b6e8a36e3a8b7c0295cd7541911.
Reverted https://github.com/pytorch/pytorch/pull/164645 on behalf of https://github.com/izaitsevfb due to causes lint failures ([comment](https://github.com/pytorch/pytorch/pull/164645#issuecomment-3369274351))
2025-10-05 19:32:21 +00:00
321e602692
Enable all SIM rules except disabled ones (#164645)
...
`SIM` rules are useful for simplifying boolean expressions and enhance code readability.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164645
Approved by: https://github.com/ezyang
2025-10-05 07:38:25 +00:00
da003d7b95
[3/N] Import Callable from collections.abc in torch/distributed (#164104)
...
This is the result of applying the ruff `UP035` check.
`Callable` is imported from `collections.abc` instead of `typing`.
This PR is a follow-up of #164054.
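A sketch of the UP035 change (names are illustrative):
```
# Before: deprecated import location, flagged by ruff UP035
# from typing import Callable

# After:
from collections.abc import Callable

Hook = Callable[[int], None]

def fire(hook: Hook, rank: int) -> None:
    hook(rank)
```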
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164104
Approved by: https://github.com/Skylion007
2025-09-30 00:28:53 +00:00
4ccc0381de
[BE][5/16] fix typos in torch/ (torch/distributed/) (#156315)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156315
Approved by: https://github.com/Skylion007, https://github.com/albanD
ghstack dependencies: #156313, #156314
2025-06-23 02:57:28 +00:00
145d4cdc11
Revert "[BE][5/16] fix typos in torch/ (torch/distributed/) ( #156315 )"
...
This reverts commit c2f0292bd5b4b3206f5b295e96f81cd6c178eb18.
Reverted https://github.com/pytorch/pytorch/pull/156315 on behalf of https://github.com/atalman due to export/test_torchbind.py::TestCompileTorchbind::test_compile_error_on_input_aliasing_contents_backend_aot_eager [GH job link](https://github.com/pytorch/pytorch/actions/runs/15804799771/job/44548489912) [HUD commit link](c95f7fa874) ([comment](https://github.com/pytorch/pytorch/pull/156313#issuecomment-2994171213))
2025-06-22 12:31:57 +00:00
c2f0292bd5
[BE][5/16] fix typos in torch/ (torch/distributed/) (#156315)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156315
Approved by: https://github.com/Skylion007, https://github.com/albanD
ghstack dependencies: #156313, #156314
2025-06-22 08:43:26 +00:00
008345be9d
Fix #155018 (convert distributed rst to markdown) (#155528)
...
Used the [rst2myst tool](https://rst-to-myst.readthedocs.io/en/latest/)
Fixes #155018
Docs comparison (check the 'new' links once the docs preview builds):
1. distributed.checkpoint ([old](https://docs.pytorch.org/docs/main/distributed.checkpoint.html) vs. [new](https://docs-preview.pytorch.org/pytorch/pytorch/155528/distributed.checkpoint.html))
2. distributed.elastic ([old](https://docs.pytorch.org/docs/main/distributed.elastic.html) vs. [new](https://docs-preview.pytorch.org/pytorch/pytorch/155528/distributed.elastic.html))
3. distributed.fsdp.fully_shard ([old](https://docs.pytorch.org/docs/main/distributed.fsdp.fully_shard.html) vs. [new](https://docs-preview.pytorch.org/pytorch/pytorch/155528/distributed.fsdp.fully_shard.html))
4. distributed.optim ([old](https://docs.pytorch.org/docs/main/distributed.optim.html) vs. [new](https://docs-preview.pytorch.org/pytorch/pytorch/155528/distributed.optim.html))
5. distributed.pipelining ([old](https://docs.pytorch.org/docs/main/distributed.pipelining.html) vs. [new](https://docs-preview.pytorch.org/pytorch/pytorch/155528/distributed.pipelining.html))
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155528
Approved by: https://github.com/wz337, https://github.com/svekars
2025-06-16 20:46:09 +00:00
e95e8eed0a
mypy 1.16.0 (#155821)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155821
Approved by: https://github.com/ezyang, https://github.com/zou3519
2025-06-14 18:18:43 +00:00
bfae151269
[BE][Ez]: Remove unneeded mypy suppressions (#154800)
...
Improvements in typing have made these suppressions unnecessary.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154800
Approved by: https://github.com/cyyever, https://github.com/jansel
2025-06-01 06:10:41 +00:00
f887bfffda
Fix typo (#153561)
...
Fix typo from #153386
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153561
Approved by: https://github.com/albanD
2025-05-14 21:38:51 +00:00
533fc58453
[BE]: Fix typing None override other optimizers (#153386)
...
Follow-up to #153367, fixing other instances of it throughout the codebase.
Also fully types NamedOptimizer, since we were so close.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153386
Approved by: https://github.com/tsunghsienlee, https://github.com/janeyx99, https://github.com/jansel, https://github.com/cyyever
2025-05-14 17:48:47 +00:00
edd640a95a
[BE][Ez]: Use itertools.chain.from_iterable when possible (#148190)
...
This often makes the code more readable and more efficient, and adds support for infinite iterables.
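A sketch of the before/after:
```
import itertools

buckets = [[1, 2], [3], [4, 5]]

# Before: nested comprehension (or sum(buckets, []), which is quadratic)
flat = [x for bucket in buckets for x in bucket]

# After: lazy, linear, and also works when `buckets` is an infinite iterable
flat = list(itertools.chain.from_iterable(buckets))

assert flat == [1, 2, 3, 4, 5]
```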
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148190
Approved by: https://github.com/jansel, https://github.com/malfet
2025-03-06 20:37:06 +00:00
f30776c37a
[BE] Upgrade to mypy 1.14 (#145966)
...
Upgrades mypy to 1.14.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145966
Approved by: https://github.com/Skylion007
2025-03-04 20:58:26 +00:00
98bf2f1170
Use Python 3.9 typing (#148157)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148157
Approved by: https://github.com/janeyx99
2025-03-04 03:09:55 +00:00
995df34b19
[BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144547
Approved by: https://github.com/kwen2501
2025-02-28 07:35:56 +00:00
292af3cc89
[BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408)
...
Apply the ruff rule on implicit string concatenation; this autofixes strings that are all the same type and on the same line. These lines were likely broken up by autoformatters in the past. All fixes are automated using ISC001's autofix.
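A sketch of the ISC001 autofix (the string is illustrative):
```
# Before: two same-type literals implicitly concatenated on one line,
# usually residue of an autoformatter's re-wrapping
msg = "distributed checkpoint " "save complete"

# After the autofix: one literal
msg = "distributed checkpoint save complete"

assert msg == "distributed checkpoint save complete"
```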
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146408
Approved by: https://github.com/justinchuby, https://github.com/janeyx99
2025-02-04 19:07:04 +00:00
00ffeca1b1
PEP585 update - torch/distributed (#145164)
...
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145164
Approved by: https://github.com/bobrenjc93
2025-01-21 04:23:29 +00:00
6374332d33
Revert "PEP585 update - torch/distributed ( #145164 )"
...
This reverts commit 6cb186e279bc179a6bb63f0226e24ab42a07b394.
Reverted https://github.com/pytorch/pytorch/pull/145164 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing an inductor test ([comment](https://github.com/pytorch/pytorch/pull/145164#issuecomment-2602875679))
2025-01-20 16:46:46 +00:00
6cb186e279
PEP585 update - torch/distributed (#145164)
...
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145164
Approved by: https://github.com/bobrenjc93
2025-01-20 00:19:01 +00:00
08be9ec312
Migrate from Tuple -> tuple in torch/distributed (#144258)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144258
Approved by: https://github.com/aorenste
2025-01-10 08:34:54 +00:00
fd65bd755d
[BE] replace incorrect .. note:: invocations (#142868)
...
Something I've noticed is that a lot of the distributed sites don't render on our docs at all, but if they ever do, the notes will render properly now 😛
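For reference, a sketch of the directive form these fixes restore, in a hypothetical docstring: Sphinx reST needs the two-colon `.. note::`, while the single-colon form is parsed as a comment and dropped.
```
def broadcast_object(obj):
    """Broadcast ``obj`` from rank 0 to all ranks (hypothetical example).

    .. note::
        Two colons make this render as a callout box. The single-colon
        form ``.. note:`` is treated as a reST comment and silently
        disappears from the built docs.
    """
```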
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142868
Approved by: https://github.com/albanD
2024-12-11 19:58:18 +00:00
12e95aa4ee
[BE]: Apply PERF401 autofixes from ruff (#140980)
...
* Automatically applies ruff rule PERF401, turning loops into equivalent list comprehensions, which are faster and do not leak their loop variables into the enclosing scope (see the sketch below).
* List comprehensions often have better typing and are 50+% faster than for loops in overhead; they also preserve length information, etc., and are better for the interpreter to optimize.
* Manually went back and made mypy happy after the change.
* Also fixed style lints in files covered by flake8 but not by pyfmt.
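A sketch of what the PERF401 autofix does:
```
nodes = range(8)

# Before: building a list by appending in a loop; `n` also leaks into
# the enclosing scope after the loop ends
ranks = []
for n in nodes:
    if n % 2 == 0:
        ranks.append(n)

# After: an equivalent list comprehension, faster and scope-contained
ranks = [n for n in nodes if n % 2 == 0]

assert ranks == [0, 2, 4, 6]
```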
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140980
Approved by: https://github.com/justinchuby, https://github.com/malfet
2024-11-20 17:52:07 +00:00
3d26c08dda
Fix unintended deprecation warning in torch.distributed.optim (#140889)
...
We have a deprecation warning for the scripted functional optimizers at module level in `torch/distributed/optim/__init__.py`. However, not all optimizers exposed by the module are scripted functional optimizers, causing false deprecation warnings (e.g. https://github.com/pytorch/pytorch/issues/139661).
This PR moves the deprecation warning to the `__init__` functions of the deprecated scripted functional optimizers.
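A minimal sketch of the move, with a hypothetical class standing in for the deprecated optimizers:
```
import warnings

# Before (sketch): a module-level warnings.warn(...) in
# torch/distributed/optim/__init__.py fired on import, even for users of
# the non-deprecated optimizers exposed by the same module.

class _FunctionalSGD:  # hypothetical stand-in for a scripted functional optimizer
    def __init__(self):
        # After: the warning fires only when a deprecated class is built
        warnings.warn(
            "TorchScript functional optimizers are deprecated; use the "
            "regular torch.optim equivalents instead.",
            FutureWarning,
            stacklevel=2,
        )
```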
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140889
Approved by: https://github.com/d4l3k, https://github.com/kwen2501, https://github.com/XilunWu
2024-11-18 02:34:51 +00:00
31715be72a
[BE]: Update mypy to 1.11.2 (#133816)
...
Updates mypy to 1.11.2 to improve type inference.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133816
Approved by: https://github.com/ezyang
2024-09-16 19:44:11 +00:00
3117f2cf67
Revert "[BE]: Update mypy to 1.11.2 ( #133816 )"
...
This reverts commit 55299cfc223fa838aadd8d6d6fa3ed541fa5acd1.
Reverted https://github.com/pytorch/pytorch/pull/133816 on behalf of https://github.com/jeanschmidt due to seems to have broken https://github.com/pytorch/pytorch/actions/runs/10865710499/job/30155699792 on main ([comment](https://github.com/pytorch/pytorch/pull/133816#issuecomment-2352377684))
2024-09-16 09:11:16 +00:00
55299cfc22
[BE]: Update mypy to 1.11.2 ( #133816 )
...
Updates mypy to 1.11.2 to improve type inference.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133816
Approved by: https://github.com/ezyang
2024-09-14 21:40:36 +00:00
609447a626
Revert "[BE] typing for decorators - _jit_internal ( #131573 )"
...
This reverts commit f0f20f7e97716b4b077dca2a1a42930ccf990c1c.
Reverted https://github.com/pytorch/pytorch/pull/131573 on behalf of https://github.com/clee2000 due to breaking lint internally D60265575 ([comment](https://github.com/pytorch/pytorch/pull/131572#issuecomment-2254328359))
2024-07-28 03:29:32 +00:00
f0f20f7e97
[BE] typing for decorators - _jit_internal (#131573)
...
See #131429
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131573
Approved by: https://github.com/oulgen, https://github.com/zou3519
ghstack dependencies: #131568, #131569, #131570, #131571, #131572
2024-07-25 22:24:19 +00:00
5a0068cc69
[BE] mypy: disallow untyped decorators (#131428)
...
Untyped decorators strip the types from the functions they decorate, so even if the underlying function is fully typed, its callers get no benefit from the annotations.
Step 1 - Enable the error and override in all the offending files.
#131429
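For illustration, the pattern that satisfies this check: a decorator typed with `ParamSpec` (Python >= 3.10) so the wrapped signature survives; names are illustrative.
```
import functools
from collections.abc import Callable
from typing import ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def logged(fn: Callable[P, R]) -> Callable[P, R]:
    # Fully typed: callers of the decorated function keep its signature,
    # instead of everything collapsing to Any under an untyped decorator.
    @functools.wraps(fn)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print(f"calling {fn.__name__}")
        return fn(*args, **kwargs)
    return wrapper

@logged
def scale(x: float, factor: float = 2.0) -> float:
    return x * factor
```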
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131428
Approved by: https://github.com/justinchuby, https://github.com/oulgen
2024-07-23 21:50:55 +00:00
ad314a2f05
Pass torch.load(weights_only=) internally to avoid FutureWarning (#130663)
...
Fixes #130658
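A usage sketch of the explicit argument:
```
import torch

torch.save({"step": torch.tensor(0)}, "ckpt.pt")

# Explicit weights_only avoids the FutureWarning emitted while the
# default was transitioning; True restricts unpickling to safe types.
state = torch.load("ckpt.pt", weights_only=True)
```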
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130663
Approved by: https://github.com/malfet, https://github.com/LucasLLC
2024-07-16 01:24:38 +00:00
56935684c3
Use Generic TypeAlias (PEP 585) and Union Type (PEP 604) in .pyi stub files (#129419)
...
------
- [Generic TypeAlias (PEP 585)](https://peps.python.org/pep-0585): e.g. `typing.List[T] -> list[T]`, `typing.Dict[KT, VT] -> dict[KT, VT]`, `typing.Type[T] -> type[T]`.
- [Union Type (PEP 604)](https://peps.python.org/pep-0604): e.g. `Union[X, Y] -> X | Y`, `Optional[X] -> X | None`, `Optional[Union[X, Y]] -> X | Y | None`.
Note that in `.pyi` stub files, we do not need `from __future__ import annotations`. So this PR does not violate issue #117449:
- #117449
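A before/after sketch with a hypothetical stub signature (the after form runs on Python >= 3.10, and is valid in `.pyi` stubs regardless, since stubs are never executed):
```
from torch import Tensor

# Before: def gather(ranks: Optional[List[int]],
#                    out: Dict[str, Tensor]) -> Union[int, None]: ...

# After: builtin generics (PEP 585) and | unions (PEP 604)
def gather(ranks: list[int] | None, out: dict[str, Tensor]) -> int | None: ...
```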
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129419
Approved by: https://github.com/ezyang
ghstack dependencies: #129375, #129376
2024-06-29 09:23:39 +00:00
83caf4960f
Revert "Use Generic TypeAlias (PEP 585) and Union Type (PEP 604) in .pyi
stub files ( #129419 )"
...
This reverts commit e40f50cb87bcd176a380b729af5dda13dbe9c399.
Reverted https://github.com/pytorch/pytorch/pull/129419 on behalf of https://github.com/huydhn due to Sorry for reverting your change but I need to revert to cleanly revert https://github.com/pytorch/pytorch/pull/129374, please do a rebase and reland this ([comment](https://github.com/pytorch/pytorch/pull/129375#issuecomment-2197800541))
2024-06-29 00:44:24 +00:00
e40f50cb87
Use Generic TypeAlias (PEP 585) and Union Type (PEP 604) in .pyi stub files (#129419)
...
------
- [Generic TypeAlias (PEP 585)](https://peps.python.org/pep-0585): e.g. `typing.List[T] -> list[T]`, `typing.Dict[KT, VT] -> dict[KT, VT]`, `typing.Type[T] -> type[T]`.
- [Union Type (PEP 604)](https://peps.python.org/pep-0604): e.g. `Union[X, Y] -> X | Y`, `Optional[X] -> X | None`, `Optional[Union[X, Y]] -> X | Y | None`.
Note that in `.pyi` stub files, we do not need `from __future__ import annotations`. So this PR does not violate issue #117449:
- #117449
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129419
Approved by: https://github.com/ezyang
ghstack dependencies: #129375, #129376
2024-06-28 15:37:57 +00:00
3b798df853
[BE][Easy] enable UFMT for torch/distributed/{fsdp,optim,rpc}/ (#128869)
...
Part of #123062
- #123062
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128869
Approved by: https://github.com/fegin
ghstack dependencies: #128868
2024-06-18 21:49:08 +00:00
7c12cc7ce4
Flip default value for mypy disallow_untyped_defs [6/11] (#127843)
...
See #127836 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127843
Approved by: https://github.com/oulgen
ghstack dependencies: #127842
2024-06-08 18:49:29 +00:00
67ef2683d9
[BE] wrap deprecated function/class with typing_extensions.deprecated (#127689)
...
Use `typing_extensions.deprecated` for the deprecation annotation where possible. Otherwise, add `category=FutureWarning` to `warnings.warn("message")` calls where the category is missing.
Note that only warnings whose messages contain `[Dd]eprecat(ed|ion)` are updated in this PR.
Resolves #126888
- #126888
This PR is split from PR #126898.
- #126898
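A sketch of the two forms described above (the function names are hypothetical):
```
import warnings
from typing_extensions import deprecated

# Preferred: the decorator, which also informs type checkers
@deprecated("`fused_sgd` is deprecated, use torch.optim.SGD instead.",
            category=FutureWarning)
def fused_sgd() -> None: ...

# Fallback where the decorator does not fit: an explicit category
def old_entry_point() -> None:
    warnings.warn("old_entry_point is deprecated.", category=FutureWarning)
```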
------
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127689
Approved by: https://github.com/Skylion007
2024-06-02 12:30:43 +00:00
033e733021
Revert "[BE] wrap deprecated function/class with typing_extensions.deprecated
( #126898 )"
...
This reverts commit 749a132fb0a8325cbad4734a563aa459ca611991.
Reverted https://github.com/pytorch/pytorch/pull/126898 on behalf of https://github.com/fbgheith due to switching typing-extensions=4.3.0 to 4.9.0 causes internal failure ([comment](https://github.com/pytorch/pytorch/pull/126898#issuecomment-2142884456))
2024-05-31 19:47:24 +00:00
749a132fb0
[BE] wrap deprecated function/class with typing_extensions.deprecated (#126898)
...
Use `typing_extensions.deprecated` for the deprecation annotation where possible. Otherwise, add `category=FutureWarning` to `warnings.warn("message")` calls where the category is missing.
Note that only warnings whose messages contain `[Dd]eprecat(ed|ion)` are updated in this PR.
UPDATE: Use `FutureWarning` instead of `DeprecationWarning`.
Resolves #126888
- #126888
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126898
Approved by: https://github.com/albanD
2024-05-29 12:09:27 +00:00
f9d107af66
[optim] add fused_adagrad support for CPU device (#124905)
...
Add fused_adagrad support for CPU.
## Bench result:
32 core/sockets ICX
Test Scripts:
https://gist.github.com/zhuhaozhe/79e842e0a6e25d6d7fa1e4598807272c
https://gist.github.com/zhuhaozhe/b4c6998a509dcea1796dd05b3005c969
```
Tensor Size: 262144, Num Tensor 4, Num Threads: 1
_single_tensor_adagrad time: 0.2500 seconds
_fused_adagrad time: 0.0933 seconds
Tensor Size: 4194304, Num Tensor 32, Num Threads: 32
_single_tensor_adagrad time: 2.8819 seconds
_fused_adagrad time: 1.7591 seconds
```
## Test Plan:
```
python test_optim.py -k test_fused_matches_forloop
python test_optim.py -k test_fused_large_tensor
python test_optim.py -k test_can_load_older_state_dict
python test_optim.py -k test_grad_scaling_autocast_fused_optimizers
python test_torch.py -k test_grad_scaling_autocast_fused
python test_torch.py -k test_params_invalidated_with_grads_invalidated_between_unscale_and_step
```
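A usage sketch, assuming a build that includes this change:
```
import torch

model = torch.nn.Linear(8, 8)  # CPU parameters
# fused=True now dispatches to the fused CPU kernel added here
opt = torch.optim.Adagrad(model.parameters(), lr=0.1, fused=True)

model(torch.randn(4, 8)).sum().backward()
opt.step()
opt.zero_grad()
```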
Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124905
Approved by: https://github.com/jgong5, https://github.com/janeyx99
2024-05-16 01:11:51 +00:00
af9acc4168
Fix public binding to actually traverse modules (#126103)
...
The current call passes `['/actual/path']` to `os.walk` as a string; that string points to no real path, so the traversal is silently empty.
There is an unused function just above that handles this, so I guess that is what was supposed to be called.
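The failure mode is easy to reproduce: `os.walk` swallows the `OSError` from a nonexistent top directory, so a malformed argument silently yields nothing.
```
import os

# A bad path (e.g. a stringified list) produces an empty traversal,
# not an error:
assert list(os.walk("['/actual/path']")) == []

# A real directory yields entries as expected:
assert list(os.walk(os.getcwd())) != []
```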
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126103
Approved by: https://github.com/suo
2024-05-15 19:36:03 +00:00
bd3cbdba2f
Revert "[optim] add fused_adagrad support for CPU device ( #124905 )"
...
This reverts commit 1c3fe8403365db3cc9b75524ae742e3027b745e2.
Reverted https://github.com/pytorch/pytorch/pull/124905 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing distributed multigpu test in trunk 1c3fe84033 ([comment](https://github.com/pytorch/pytorch/pull/124905#issuecomment-2108777063))
2024-05-13 20:53:22 +00:00
1c3fe84033
[optim] add fused_adagrad support for CPU device (#124905)
...
Add fused_adagrad support for CPU.
## Bench result:
32 core/sockets ICX
Test Scripts:
https://gist.github.com/zhuhaozhe/79e842e0a6e25d6d7fa1e4598807272c
https://gist.github.com/zhuhaozhe/b4c6998a509dcea1796dd05b3005c969
```
Tensor Size: 262144, Num Tensor 4, Num Threads: 1
_single_tensor_adagrad time: 0.2500 seconds
_fused_adagrad time: 0.0933 seconds
Tensor Size: 4194304, Num Tensor 32, Num Threads: 32
_single_tensor_adagrad time: 2.8819 seconds
_fused_adagrad time: 1.7591 seconds
```
## Test Plan:
```
python test_optim.py -k test_fused_matches_forloop
python test_optim.py -k test_fused_large_tensor
python test_optim.py -k test_can_load_older_state_dict
python test_optim.py -k test_grad_scaling_autocast_fused_optimizers
python test_torch.py -k test_grad_scaling_autocast_fused
python test_torch.py -k test_params_invalidated_with_grads_invalidated_between_unscale_and_step
```
Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124905
Approved by: https://github.com/jgong5, https://github.com/janeyx99
2024-05-13 01:16:20 +00:00
0f02e0aa39
Disable dynamo on functional optims if capturable=False (#123619)
...
This resolves a bug in eager mode: if an old state dict (saved without the capturable flag) was loaded while the original dict had the capturable flag, state_steps would be on CUDA but we would take the non-capturable path. We now fall back to eager if capturable=False.
Current design doc and discussion: https://docs.google.com/document/d/1DmmbiaSp16CDZtGw1qzXKHFTY_0gqc0xpnBdviXq0vk/edit#heading=h.871u7bvwz7ze
Note on the actual fallback logic: torchscript originally did not handle *args, **kwargs properly; after rectifying that with `functools.wraps`, there was an additional scoping bug that required the single-tensor implementation to be in the global scope at the time the fallback closure was created. I pass the single-tensor function into the `_disable_dynamo_if_unsupported` decorator to work around this bug.
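A much-simplified sketch of the scoping workaround, not the real decorator: the single-tensor fallback is passed in explicitly so it is bound when the closure is created.
```
import functools

def _disable_dynamo_if_unsupported(single_tensor_fn=None):
    # Simplified sketch: the real decorator also disables dynamo; this
    # only shows why the fallback is passed in rather than looked up later.
    def decorator(func):
        @functools.wraps(func)  # preserves *args/**kwargs handling
        def wrapper(*args, capturable=False, **kwargs):
            if not capturable and single_tensor_fn is not None:
                # capturable=False: take the eager single-tensor path
                return single_tensor_fn(*args, capturable=capturable, **kwargs)
            return func(*args, capturable=capturable, **kwargs)
        return wrapper
    return decorator
```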
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123619
Approved by: https://github.com/janeyx99
2024-05-07 22:17:01 +00:00
16771747c2
Add tensor step and capturable support to rprop (#122261)
...
Towards fixing https://github.com/pytorch/pytorch/issues/115679
Fixes Rprop step update while compiling
Also adds capturable support + testing
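A usage sketch, assuming a build with this change and a CUDA device:
```
import torch

params = [torch.zeros(2, device="cuda", requires_grad=True)]
# capturable=True keeps `step` as a device tensor so the update can be
# captured in a CUDA graph / compiled region
opt = torch.optim.Rprop(params, capturable=True)

params[0].grad = torch.ones_like(params[0])
opt.step()
```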
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122261
Approved by: https://github.com/janeyx99
2024-03-28 23:31:18 +00:00
caa57e4fcd
Add tensor step and capturable support to rmsprop (#122264)
...
Towards fixing https://github.com/pytorch/pytorch/issues/115679
Fixes RMSprop step update while compiling
Adds capturable support to RMSprop
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122264
Approved by: https://github.com/janeyx99
2024-03-28 03:39:28 +00:00
365e89a591
Add tensor step to adadelta (#122252)
...
Towards fixing https://github.com/pytorch/pytorch/issues/115679
Fixes Adadelta step update while compiling
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122252
Approved by: https://github.com/janeyx99
2024-03-21 07:28:47 +00:00