Mirror of https://github.com/pytorch/pytorch.git (synced 2025-10-21 05:34:18 +08:00)
Commit history, branch main (13 commits)
596b418391  [BE][PYFMT] migrate PYFMT for {torch,test}/{nn,optim}/** to ruff format (#144548)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/144548
Approved by: https://github.com/ezyang

5a80d2df84  [BE] enable UFMT for torch/nn/utils (#128595)

Part of #123062.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128595
Approved by: https://github.com/Skylion007

27f9d3b0a1  Flip default value for mypy disallow_untyped_defs [8/11] (#127845)

See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127845
Approved by: https://github.com/oulgen
ghstack dependencies: #127842, #127843, #127844

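For context, a minimal sketch of what flipping `disallow_untyped_defs` enforces; the functions are invented for illustration, not code from the PR:

```python
# Illustrative only: with `disallow_untyped_defs = True`, mypy reports an
# error for any function definition that lacks type annotations.
def scale(x):  # error: Function is missing a type annotation
    return x * 2


# A fully annotated definition satisfies the check.
def scale_typed(x: float) -> float:
    return x * 2
```
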
db66f15785  docs: fix docstrings in distributed.py and others (fixes #112604) (#112657)

Fixes #112604. Fixes docstrings by following `pydocstyle` output.

- torch/nn/parallel/distributed.py

Before: 84
```
torch/nn/parallel/distributed.py:1 at module level: D100: Missing docstring in public module
torch/nn/parallel/distributed.py:92 in private function `_cast_buffers`: D200: One-line docstring should fit on one line with quotes (found 3)
torch/nn/parallel/distributed.py:103 in private function `_setup_mixed_precision_params`: D200: One-line docstring should fit on one line with quotes (found 3)
torch/nn/parallel/distributed.py:103 in private function `_setup_mixed_precision_params`: D401: First line should be in imperative mood (perhaps 'Create', not 'Creates')
torch/nn/parallel/distributed.py:143 in private function `_find_tensors`: D200: One-line docstring should fit on one line with quotes (found 3)
torch/nn/parallel/distributed.py:273 in private method `__init__`: D200: One-line docstring should fit on one line with quotes (found 3)
torch/nn/parallel/distributed.py:273 in private method `__init__`: D401: First line should be in imperative mood (perhaps 'Set', not 'Sets')
torch/nn/parallel/distributed.py:287 in private method `main_hook`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:287 in private method `main_hook`: D400: First line should end with a period (not 'd')
torch/nn/parallel/distributed.py:324 in private method `post_hook`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:324 in private method `post_hook`: D400: First line should end with a period (not 'l')
torch/nn/parallel/distributed.py:324 in private method `post_hook`: D401: First line should be in imperative mood (perhaps 'Sync', not 'Syncs')
torch/nn/parallel/distributed.py:332 in public class `DistributedDataParallel`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:332 in public class `DistributedDataParallel`: D400: First line should end with a period (not 'n')
torch/nn/parallel/distributed.py:633 in public method `__init__`: D107: Missing docstring in __init__
torch/nn/parallel/distributed.py:960 in private method `_fire_reducer_autograd_hook`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:960 in private method `_fire_reducer_autograd_hook`: D401: First line should be in imperative mood (perhaps 'Fire', not 'Fires')
torch/nn/parallel/distributed.py:969 in private method `_root_copy_hook`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:969 in private method `_root_copy_hook`: D400: First line should end with a period (not 's')
torch/nn/parallel/distributed.py:1012 in private method `_module_wait_for_copy_hook`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:1012 in private method `_module_wait_for_copy_hook`: D400: First line should end with a period (not 'e')
torch/nn/parallel/distributed.py:1050 in private method `_ddp_init_helper`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:1050 in private method `_ddp_init_helper`: D400: First line should end with a period (not ':')
torch/nn/parallel/distributed.py:1050 in private method `_ddp_init_helper`: D401: First line should be in imperative mood (perhaps 'Initialize', not 'Initialization')
torch/nn/parallel/distributed.py:1146 in public method `__getstate__`: D105: Missing docstring in magic method
torch/nn/parallel/distributed.py:1154 in public method `__setstate__`: D105: Missing docstring in magic method
torch/nn/parallel/distributed.py:1222 in private method `_assign_modules_buffers`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:1222 in private method `_assign_modules_buffers`: D400: First line should end with a period (not 'o')
torch/nn/parallel/distributed.py:1222 in private method `_assign_modules_buffers`: D401: First line should be in imperative mood (perhaps 'Assign', not 'Assigns')
torch/nn/parallel/distributed.py:1277 in private method `_get_parameters`: D200: One-line docstring should fit on one line with quotes (found 3)
torch/nn/parallel/distributed.py:1277 in private method `_get_parameters`: D400: First line should end with a period (not 's')
torch/nn/parallel/distributed.py:1277 in private method `_get_parameters`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/nn/parallel/distributed.py:1312 in public method `no_sync`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:1312 in public method `no_sync`: D400: First line should end with a period (not 'P')
torch/nn/parallel/distributed.py:1312 in public method `no_sync`: D401: First line should be in imperative mood; try rephrasing (found 'A')
torch/nn/parallel/distributed.py:1340 in private method `_get_active_ddp_module`: D200: One-line docstring should fit on one line with quotes (found 3)
torch/nn/parallel/distributed.py:1340 in private method `_get_active_ddp_module`: D403: First word of the first line should be properly capitalized ('Torchdynamo', not 'TorchDynamo')
torch/nn/parallel/distributed.py:1517 in public method `forward`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1527 in public method `scatter`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1530 in public method `to_kwargs`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1539 in public method `gather`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1542 in public method `train`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1617 in public method `join`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:1617 in public method `join`: D400: First line should end with a period (not 'f')
torch/nn/parallel/distributed.py:1617 in public method `join`: D401: First line should be in imperative mood; try rephrasing (found 'A')
torch/nn/parallel/distributed.py:1723 in public method `join_hook`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:1723 in public method `join_hook`: D400: First line should end with a period (not 'y')
torch/nn/parallel/distributed.py:1723 in public method `join_hook`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/nn/parallel/distributed.py:1752 in public method `join_device`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1756 in public method `join_process_group`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1765 in private method `_register_buffer_comm_hook`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:1765 in private method `_register_buffer_comm_hook`: D400: First line should end with a period (not 'e')
torch/nn/parallel/distributed.py:1765 in private method `_register_buffer_comm_hook`: D401: First line should be in imperative mood (perhaps 'Allow', not 'Allows')
torch/nn/parallel/distributed.py:1805 in public method `register_comm_hook`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:1805 in public method `register_comm_hook`: D400: First line should end with a period (not 'a')
torch/nn/parallel/distributed.py:1805 in public method `register_comm_hook`: D401: First line should be in imperative mood (perhaps 'Register', not 'Registers')
torch/nn/parallel/distributed.py:1887 in private method `_register_builtin_comm_hook`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:1887 in private method `_register_builtin_comm_hook`: D400: First line should end with a period (not 'P')
torch/nn/parallel/distributed.py:1887 in private method `_register_builtin_comm_hook`: D401: First line should be in imperative mood (perhaps 'Register', not 'Registers')
torch/nn/parallel/distributed.py:1914 in private method `_register_fused_optim`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:1914 in private method `_register_fused_optim`: D400: First line should end with a period (not 'a')
torch/nn/parallel/distributed.py:1914 in private method `_register_fused_optim`: D401: First line should be in imperative mood (perhaps 'Register', not 'Registers')
torch/nn/parallel/distributed.py:2005 in public method `will_sync_module_buffers`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:2060 in private method `_default_broadcast_coalesced`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:2060 in private method `_default_broadcast_coalesced`: D400: First line should end with a period (not 'e')
torch/nn/parallel/distributed.py:2128 in private method `_get_data_parallel_params`: D200: One-line docstring should fit on one line with quotes (found 3)
torch/nn/parallel/distributed.py:2128 in private method `_get_data_parallel_params`: D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
torch/nn/parallel/distributed.py:2141 in private method `_set_params_and_buffers_to_ignore_for_model`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:2141 in private method `_set_params_and_buffers_to_ignore_for_model`: D400: First line should end with a period (not 'r')
torch/nn/parallel/distributed.py:2141 in private method `_set_params_and_buffers_to_ignore_for_model`: D401: First line should be in imperative mood (perhaps 'Set', not 'Sets')
torch/nn/parallel/distributed.py:2170 in private method `_get_ddp_logging_data`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:2170 in private method `_get_ddp_logging_data`: D400: First line should end with a period (not 's')
torch/nn/parallel/distributed.py:2170 in private method `_get_ddp_logging_data`: D401: First line should be in imperative mood; try rephrasing (found 'This')
torch/nn/parallel/distributed.py:2184 in private method `_set_ddp_runtime_logging_sample_rate`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:2184 in private method `_set_ddp_runtime_logging_sample_rate`: D400: First line should end with a period (not 'g')
torch/nn/parallel/distributed.py:2184 in private method `_set_ddp_runtime_logging_sample_rate`: D401: First line should be in imperative mood; try rephrasing (found 'This')
torch/nn/parallel/distributed.py:2202 in private method `_set_static_graph`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:2202 in private method `_set_static_graph`: D400: First line should end with a period (not 'l')
torch/nn/parallel/distributed.py:2202 in private method `_set_static_graph`: D401: First line should be in imperative mood; try rephrasing (found 'It')
torch/nn/parallel/distributed.py:2227 in private method `_remove_autograd_hooks`: D200: One-line docstring should fit on one line with quotes (found 3)
torch/nn/parallel/distributed.py:2227 in private method `_remove_autograd_hooks`: D401: First line should be in imperative mood (perhaps 'Remove', not 'Removes')
torch/nn/parallel/distributed.py:2233 in private method `_check_reducer_finalized`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/parallel/distributed.py:2233 in private method `_check_reducer_finalized`: D400: First line should end with a period (not 'd')
torch/nn/parallel/distributed.py:2233 in private method `_check_reducer_finalized`: D401: First line should be in imperative mood (perhaps 'Check', not 'Checks')
84
```

After: 12
```
torch/nn/parallel/distributed.py:1 at module level: D100: Missing docstring in public module
torch/nn/parallel/distributed.py:618 in public method `__init__`: D107: Missing docstring in __init__
torch/nn/parallel/distributed.py:1133 in public method `__getstate__`: D105: Missing docstring in magic method
torch/nn/parallel/distributed.py:1141 in public method `__setstate__`: D105: Missing docstring in magic method
torch/nn/parallel/distributed.py:1503 in public method `forward`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1513 in public method `scatter`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1516 in public method `to_kwargs`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1525 in public method `gather`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1528 in public method `train`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1734 in public method `join_device`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1738 in public method `join_process_group`: D102: Missing docstring in public method
torch/nn/parallel/distributed.py:1986 in public method `will_sync_module_buffers`: D102: Missing docstring in public method
12
```

- torch/nn/utils/_named_member_accessor.py

Before: 23
```
torch/nn/utils/_named_member_accessor.py:12 in public function `set_tensor`: D103: Missing docstring in public function
torch/nn/utils/_named_member_accessor.py:29 in public function `swap_tensor`: D103: Missing docstring in public function
torch/nn/utils/_named_member_accessor.py:85 in public function `swap_submodule`: D103: Missing docstring in public function
torch/nn/utils/_named_member_accessor.py:109 in public class `NamedMemberAccessor`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:109 in public class `NamedMemberAccessor`: D400: First line should end with a period (not 's')
torch/nn/utils/_named_member_accessor.py:115 in public method `__init__`: D107: Missing docstring in __init__
torch/nn/utils/_named_member_accessor.py:122 in public method `get_submodule`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:155 in public method `swap_submodule`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:164 in public method `get_tensor`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:185 in public method `set_tensor`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:194 in public method `del_tensor`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:211 in public method `swap_tensor`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:224 in public method `get_tensors`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:233 in public method `set_tensors`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:249 in public method `set_tensors_dict`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:261 in public method `del_tensors`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:276 in public method `swap_tensors`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:296 in public method `swap_tensors_dict`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_named_member_accessor.py:325 in public method `check_keys`: D200: One-line docstring should fit on one line with quotes (found 3)
torch/nn/utils/_named_member_accessor.py:340 in public method `named_parameters`: D200: One-line docstring should fit on one line with quotes (found 3)
torch/nn/utils/_named_member_accessor.py:349 in public method `named_buffers`: D200: One-line docstring should fit on one line with quotes (found 3)
torch/nn/utils/_named_member_accessor.py:358 in public method `named_tensors`: D200: One-line docstring should fit on one line with quotes (found 3)
torch/nn/utils/_named_member_accessor.py:368 in public method `named_modules`: D200: One-line docstring should fit on one line with quotes (found 3)
23
```

After: 4
```
torch/nn/utils/_named_member_accessor.py:12 in public function `set_tensor`: D103: Missing docstring in public function
torch/nn/utils/_named_member_accessor.py:29 in public function `swap_tensor`: D103: Missing docstring in public function
torch/nn/utils/_named_member_accessor.py:85 in public function `swap_submodule`: D103: Missing docstring in public function
torch/nn/utils/_named_member_accessor.py:116 in public method `__init__`: D107: Missing docstring in __init__
4
```

- torch/nn/utils/_per_sample_grad.py

Before: 3
```
torch/nn/utils/_per_sample_grad.py:12 in public function `call_for_per_sample_grads`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/_per_sample_grad.py:12 in public function `call_for_per_sample_grads`: D400: First line should end with a period (not ')')
torch/nn/utils/_per_sample_grad.py:12 in public function `call_for_per_sample_grads`: D402: First line should not be the function's "signature"
3
```

After: 0
```
0
```

- torch/nn/utils/init.py

Before: 3
```
torch/nn/utils/init.py:1 at module level: D100: Missing docstring in public module
torch/nn/utils/init.py:6 in public function `skip_init`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/init.py:6 in public function `skip_init`: D400: First line should end with a period (not 'g')
3
```

After: 1
```
torch/nn/utils/init.py:1 at module level: D100: Missing docstring in public module
1
```

- torch/nn/utils/memory_format.py

Before: 4
```
torch/nn/utils/memory_format.py:1 at module level: D100: Missing docstring in public module
torch/nn/utils/memory_format.py:5 in public function `convert_conv2d_weight_memory_format`: D202: No blank lines allowed after function docstring (found 1)
torch/nn/utils/memory_format.py:5 in public function `convert_conv2d_weight_memory_format`: D205: 1 blank line required between summary line and description (found 0)
torch/nn/utils/memory_format.py:5 in public function `convert_conv2d_weight_memory_format`: D400: First line should end with a period (not '`')
4
```

After: 1
```
torch/nn/utils/memory_format.py:1 at module level: D100: Missing docstring in public module
1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112657
Approved by: https://github.com/fduwjj

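For context, a minimal before/after sketch of the kind of fix these findings call for; the docstring text is invented, not the actual diff:

```python
# Before: triggers D205 (no blank line between summary and description)
# and D401 (first word not in the imperative mood).
def join_hook(self):
    """Returns a hook that
    supports joining processes."""


# After: imperative one-line summary, a blank line, then the description.
def join_hook(self):
    """Return a join hook.

    The hook supports training with uneven inputs across processes.
    """
```
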
66c32d099a  Use pytree.arg_tree_leaves everywhere (#112394)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112394
Approved by: https://github.com/lezcano
ghstack dependencies: #112391, #112392, #112393

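For context, a small sketch of the `torch.utils._pytree` helper this stack standardizes on; the example values are invented:

```python
import torch
import torch.utils._pytree as pytree

args = (torch.randn(2), [torch.randn(3), 4])
kwargs = {"mask": torch.ones(2)}

# Flatten positional and keyword arguments into a flat list of leaves in
# one call, instead of wrapping them in an (args, kwargs) tuple first.
leaves = pytree.arg_tree_leaves(*args, **kwargs)
assert len(leaves) == 4  # two tensors from args, the int 4, and the mask
```
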
bbd5b935e4  Use pytree.tree_leaves everywhere (#112324)

This changes all the instances I could find of `tree_flatten(...)[0]` or `x, _ = tree_flatten` to use `tree_leaves`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112324
Approved by: https://github.com/lezcano
ghstack dependencies: #112327, #112323

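For context, the rewrite pattern this commit applies, sketched on an invented nested structure:

```python
import torch
import torch.utils._pytree as pytree

nested = {"a": torch.randn(2), "b": [torch.randn(3), torch.randn(4)]}

# Old pattern: flatten, then throw the TreeSpec away.
leaves, _ = pytree.tree_flatten(nested)

# New pattern: ask for just the leaves.
leaves = pytree.tree_leaves(nested)
```
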
20d01d2dc9  [expanded weights] add RNN support via decomp (#91807)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91807
Approved by: https://github.com/albanD

5d01277fea  Deprecate torch.nn.utils.stateless.functional_call (#92280)

This PR:
- Updates the docs to say it is deprecated
- Raises a UserWarning
- Changes most of the callsites inside PyTorch to use torch.func.functional_call, except for the test_stateless tests

The motivation is that we can now align behind a single functional_call API in PyTorch.

Test Plan: existing tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92280
Approved by: https://github.com/albanD

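For context, a minimal sketch of the replacement call using the public `torch.func.functional_call` API; the module and input are invented:

```python
import torch
from torch.func import functional_call

model = torch.nn.Linear(4, 2)
x = torch.randn(3, 4)

# Run the module with an explicit parameter/buffer dict, replacing the
# deprecated torch.nn.utils.stateless.functional_call.
params_and_buffers = {
    **dict(model.named_parameters()),
    **dict(model.named_buffers()),
}
out = functional_call(model, params_and_buffers, (x,))
```
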
ad782ff7df  Enable xdoctest runner in CI for real this time (#83816)

Builds on #83317 and enables running the doctests. Just need to figure out what is causing the failures.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83816
Approved by: https://github.com/ezyang, https://github.com/malfet

4618371da5  Integrate xdoctest - Rebased (#82797)

This is a new version of #15648 based on the latest master branch. Unlike the previous PR, where I fixed a lot of the doctests in addition to integrating xdoctest, I'm reducing the scope here: I'm simply going to integrate xdoctest and mark all of the failing tests as "SKIP". This will let xdoctest run on the dashboards, provide some value, and still let the dashboards pass. I'll leave fixing the doctests themselves to another PR.

In my initial commit, I do the bare minimum to get something running with failing dashboards. The few tests that I marked as skip are causing segfaults. Running xdoctest results in 293 failed, 201 passed tests. The next commits will disable those tests. (Unfortunately I don't have a tool that will insert the `# xdoctest: +SKIP` directive over every failing test, so I'm going to do this mostly manually.)

Fixes https://github.com/pytorch/pytorch/issues/71105 @ezyang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82797
Approved by: https://github.com/ezyang

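For context, the skip directive mentioned above looks like this inside a docstring; the function is invented for illustration:

```python
import torch


def pairwise_norm(x, y):
    """Compute the norm of the difference between two batches of vectors.

    Example:
        >>> # xdoctest: +SKIP
        >>> pairwise_norm(torch.randn(2, 3), torch.randn(2, 3)).shape
        torch.Size([2])
    """
    return (x - y).norm(dim=-1)
```
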
3d74fd4870  [Expanded Weights] add ability to not specify batch size (#80944)

Opacus has been asking for the ability to not specify a batch size. Previously a user had to do `call_for_per_sample_grads(module, batch_size)(*args, **kwargs)`. They rightfully pointed out that in most cases, when you're passing a single argument to a module's forward function, it seems repetitive to specify the batch size. The argument for requiring it was that when a user passes more than one argument, we might not know what the batch size is if the inputs don't match.

So this lets a user not specify a batch size (or pass it as None), meaning that `call_for_per_sample_grads(linear_module)(torch.randn(5, 4))` now works and has a batch size of 5. If there are multiple tensor arguments with different batch sizes, we fail, even if one of the inputs wouldn't have been used by the module, because we can't tell which batch size we should be using.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80944
Approved by: https://github.com/zou3519

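For context, a sketch of the relaxed call described above, assuming the private `torch.nn.utils._per_sample_grad` import path; the `.grad_sample` shape shown is the expected Expanded Weights behavior, not output reproduced from the PR:

```python
import torch
from torch.nn.utils._per_sample_grad import call_for_per_sample_grads

linear = torch.nn.Linear(4, 3)
x = torch.randn(5, 4)

# No explicit batch_size: it is inferred from the single tensor input.
out = call_for_per_sample_grads(linear)(x)
out.sum().backward()

# Expanded Weights store per-sample grads with a leading batch dimension.
print(linear.weight.grad_sample.shape)  # expected: torch.Size([5, 3, 4])
```
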
799bc645d9  [Expanded Weights] fix loss reduction (#80892)

Two changes in here:
(1) Changes `call_for_per_sample_grads` to be curried. Old call: `call_for_per_sample_grads(module, batch_size, args, kwargs)`. New call: `call_for_per_sample_grads(module, batch_size, loss_reduction=loss_reduction)(args, kwargs)`.
(2) Adds the ability to specify a loss reduction, to match what is done in Opacus. Opacus has a more complete explanation, but essentially they want the per-sample gradient behavior to match what happens in a for loop over single examples. A mean reduction at the end throws this off, since in a batch it scales all the grad_outputs by 1/batch_size, so we offset that by scaling all the grad_samples by batch_size when loss_reduction is mean.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80892
Approved by: https://github.com/zou3519

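For context, a sketch of the curried form with the new flag, under the same import-path assumption as above; the exact argument order follows the call shown in the commit message:

```python
import torch
from torch.nn.utils._per_sample_grad import call_for_per_sample_grads

linear = torch.nn.Linear(4, 3)
x = torch.randn(5, 4)

# Curried API: configuration first, then the forward arguments.
out = call_for_per_sample_grads(linear, 5, loss_reduction="mean")(x)

# With a mean-reduced loss, each grad_output carries a 1/batch_size
# factor; the mechanism rescales grad_samples by batch_size so each
# per-sample grad matches a single-example backward pass.
out.mean().backward()
```
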
53faf78143  expanded weights without fast rules (#70140)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70140

[Design Doc for Expanded Weights](https://gist.github.com/samdow/fa0a164fec7963f93ff45284989cfc55) gives an overview of the design for Expanded Weights.

Introduces the ExpandedWeights mechanism and user-facing API without any custom-implemented faster rules:
- User-facing API is in `_stateless.py` (with documentation)
- Testing is in test_expanded_weights
- The rest is the implementation of the erroring fallback plus the mechanism for registering faster per-sample-grad rules. Only linear is implemented here; the rest are implemented in #70141.

Test Plan: Imported from OSS

Reviewed By: mikaylagawarecki

Differential Revision: D34350950

Pulled By: samdow

fbshipit-source-id: 69c664b0bc3dff6951358d79d7e5d94882f7aef2
(cherry picked from commit ae1620d3b6507b27c3bc08ecfb2b1418aa8ce7d7)