frozenleaves/pytorch (mirror of https://github.com/pytorch/pytorch.git)
Files: pytorch/torch/distributed/algorithms
Commit: c82c46ccc78b1ea36a6d3524d99850e65088c48d
Latest commit: c82c46ccc7 by Will Constable, 2024-11-19 01:23:08 +00:00
[C10D] support group_src/dst in broadcast/reduce ops (#140843)
Also add mypy annotations.
Partially addresses RFC 0042 (pytorch/rfcs#71); see more details/motivation in #140460.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140843
Approved by: https://github.com/kwen2501
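Below is a minimal sketch of what the commit title suggests: passing the source/destination of a collective as a rank within the process group (group_src/group_dst) instead of a global rank. The exact keyword names, and the idea that a subgroup is already set up (e.g. via torchrun and dist.new_group), are assumptions drawn from the commit description and are not guaranteed to match the merged API; a torch build containing #140843 would be required.

```python
# Sketch only: assumes torch.distributed exposes group_src/group_dst keywords
# on broadcast/reduce, as described in commit c82c46ccc7 (#140843), and that a
# process group has already been initialized (e.g. via torchrun).
import torch
import torch.distributed as dist


def broadcast_within_subgroup(tensor: torch.Tensor, subgroup: dist.ProcessGroup) -> None:
    # Previously the source had to be given as a *global* rank, typically
    # computed with dist.get_global_rank(subgroup, 0). With group_src, rank 0
    # *of the subgroup* can be named directly.
    dist.broadcast(tensor, group=subgroup, group_src=0)


def reduce_within_subgroup(tensor: torch.Tensor, subgroup: dist.ProcessGroup) -> None:
    # Likewise, group_dst names the reduce destination by its rank inside the group.
    dist.reduce(tensor, op=dist.ReduceOp.SUM, group=subgroup, group_dst=0)
```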
Name | Last commit | Date
_checkpoint | [BE] [Reland] Make nn.Module state_dict load_state_dict pre-hook and state_dict post-hook public (#131690) | 2024-07-26 18:14:07 +00:00
_comm_hooks | [BE]: Update mypy to 1.11.2 (#133816) | 2024-09-16 19:44:11 +00:00
_optimizer_overlap | [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) | 2024-06-18 13:51:53 +00:00
_quantization | [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) | 2024-06-18 13:51:53 +00:00
ddp_comm_hooks | [C10D] support group_src/dst in broadcast/reduce ops (#140843) | 2024-11-19 01:23:08 +00:00
model_averaging | Use device-agnostic runtime API in distributed DDP/FSDP instead of cuda device specific. (#137678) | 2024-11-13 05:32:19 +00:00
__init__.py | [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) | 2024-06-18 13:51:53 +00:00
join.py | [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200) | 2024-08-15 15:50:19 +00:00