Commit Graph

37 Commits

086dec3235 Pyrefly suppressions 6/n (#164877)
Adds suppressions so that pyrefly will typecheck clean: https://github.com/pytorch/pytorch/issues/163283

Almost there!

Test plan:
dmypy restart && python3 scripts/lintrunner.py -a
pyrefly check

step 1: delete lines in the pyrefly.toml file from the project-excludes field
step 2: run pyrefly check
step 3: add suppressions, clean up unused suppressions (see the sketch below)
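For illustration, the suppressions from step 3 are inline comments of roughly this shape (a sketch assuming pyrefly's `# pyrefly: ignore` comment syntax and a hypothetical error code; not a hunk from this PR):

```python
# Hypothetical function pyrefly cannot verify:
def tensor_device(obj) -> str:
    # pyrefly: ignore  # missing-attribute
    return obj.device.type
```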
before: https://gist.github.com/maggiemoss/4b3bf2037014e116bc00706a16aef199

after:

INFO 0 errors (5,064 ignored)

Only four directories left to enable

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164877
Approved by: https://github.com/oulgen
2025-10-08 02:30:57 +00:00
671a9d175b Add warning for module full backward hook when no input requires gradient (#155339)
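A minimal sketch of the case the warning targets, as I read the title: a full backward hook on a module none of whose inputs require gradients (illustrative, not code from the PR):

```python
import torch
import torch.nn as nn

def hook(module, grad_input, grad_output):
    print("full backward hook fired; grad_input:", grad_input)

m = nn.Linear(2, 2)
m.register_full_backward_hook(hook)

# The input does not require grad, so grad_input carries nothing useful;
# this is the situation the new warning flags.
x = torch.randn(1, 2)  # requires_grad=False
m(x).sum().backward()
```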
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155339
Approved by: https://github.com/Skylion007
2025-06-10 04:42:06 +00:00
2f9d378f7b PEP585 update - torch/utils (#145201)
See #145101 for details.
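For context, PEP 585 replaces the `typing` container aliases with the builtin generics; the mechanical shape of the update (illustrative, not a hunk from this PR):

```python
# Before:
from typing import Dict, List

def group(xs: List[int]) -> Dict[int, List[int]]:
    return {x % 2: [x] for x in xs}

# After (builtin generics, Python 3.9+):
def group(xs: list[int]) -> dict[int, list[int]]:
    return {x % 2: [x] for x in xs}
```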

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145201
Approved by: https://github.com/bobrenjc93
2025-01-21 21:04:10 +00:00
12e95aa4ee [BE]: Apply PERF401 autofixes from ruff (#140980)
* Automatically applies ruff rule PERF401, which turns loops into equivalent list comprehensions; these are faster and do not leak the loop variable into the enclosing scope (see the sketch below).
* List comprehensions not only often have better typing, but are also 50+% faster than for loops in terms of interpreter overhead. They also preserve length information and are easier for the interpreter to optimize.
* Manually went back and made mypy happy after the change.
* Also fixed style lints in files covered by flake8 but not by pyfmt.
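The transformation PERF401 applies, in miniature (illustrative, not a diff from this PR):

```python
# Before: loop-and-append; `x` leaks into the enclosing scope
squares = []
for x in range(10):
    if x % 2 == 0:
        squares.append(x * x)

# After: the equivalent list comprehension; less interpreter overhead,
# and the loop variable stays local to the comprehension
squares = [x * x for x in range(10) if x % 2 == 0]
```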

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140980
Approved by: https://github.com/justinchuby, https://github.com/malfet
2024-11-20 17:52:07 +00:00
57536286e2 Flip default value for mypy disallow_untyped_defs [10/11] (#127847)
See #127836 for details.
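Roughly, with `disallow_untyped_defs = true` mypy starts rejecting unannotated definitions like the first one below (illustrative):

```python
# Error under disallow_untyped_defs: function is missing type annotations
def scale(x, factor):
    return x * factor

# OK: fully annotated
def scale_annotated(x: float, factor: float) -> float:
    return x * factor
```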

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127847
Approved by: https://github.com/oulgen
ghstack dependencies: #127842, #127843, #127844, #127845, #127846
2024-06-08 18:50:06 +00:00
9cd4bcb2c4 [FSDP] mark pre_backward_hook unserializable (#125464)
Saw a warning like this:

```
/opt/conda/lib/python3.10/site-packages/torch/utils/hooks.py:86: UserWarning: backward hook functools.partial(<function _pre_backward_hook at 0x7f9a3940fac0>, FullyShardedDataParallel(

....

), <torch.distributed.fsdp.flat_param.FlatParamHandle object at 0x7f25202a9720>) on tensor will not be serialized.  If this is expected, you can decorate the function with @torch.utils.hooks.unserializable_hook to suppress this warning
```
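The fix is the decorator the warning itself suggests; a minimal sketch (the real hook is FSDP's `_pre_backward_hook`, simplified here):

```python
import torch.utils.hooks

@torch.utils.hooks.unserializable_hook
def _pre_backward_hook(module, grad):
    # Deliberately unserializable: the real hook closes over runtime
    # state (the FSDP instance) that should not be pickled with tensors.
    return grad
```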

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125464
Approved by: https://github.com/ezyang
2024-05-06 20:20:31 +00:00
f10c3f4184 Fix module pre bw hooks when input doesn't req grad but gradients are changed by the user (#116454)
As per title.

FYI @vkuzo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116454
Approved by: https://github.com/mikaylagawarecki
2023-12-28 18:32:50 +00:00
7f9fafed53 Resolve docstring errors in throughput_benchmark.py, weak.py, _traceback.py, file_baton.py, _contextlib.py, _device.py, cpp_backtrace.py, bundled_inputs.py, run_cpu.py, hooks.py, mobile_optimizer.py, _freeze.py, __init__.py, mkldnn.py, dlpack.py (#113311)
Fixes #112633

Fixed errors relating to pydocstyle in the following files. The remaining errors are not covered in this issue. `torch/utils/dlpack.py` was not modified, as its errors relate to the function signature on the first line of the docstring, which must be maintained as-is for proper Sphinx interpretation.

```python
def from_dlpack(ext_tensor: Any) -> 'torch.Tensor':
    """from_dlpack(ext_tensor) -> Tensor
         .....
    """
```
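Most of the individual fixes have this shape: give each public function a one-line docstring in the imperative mood that ends with a period (illustrative, not a hunk from this PR):

```python
# Fails D401 (not imperative mood) and D400 (no trailing period):
def format_stack():
    """Formats the stack"""

# Passes pydocstyle:
def format_stack():
    """Format the current stack into a list of strings."""
```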

pydocstyle torch/utils/_contextlib.py --count
before: 4
after: 0

pydocstyle torch/backends/mps/__init__.py --count
before: 8
after: 1

**remaining errors**
```
torch/backends/mps/__init__.py:1 at module level:
        D104: Missing docstring in public package
```

pydocstyle torch/backends/xeon/run_cpu.py --count
before: 13
after: 1

**remaining errors**
```
torch/backends/xeon/run_cpu.py:864 in public function `main`:
        D103: Missing docstring in public function
```

pydocstyle torch/backends/cpu/__init__.py --count
before: 2
after: 1

**remaining errors**
```
torch/backends/cpu/__init__.py:1 at module level:
        D104: Missing docstring in public package
```

pydocstyle torch/utils/cpp_backtrace.py --count
before: 4
after: 1

**remaining errors**
```
torch/utils/cpp_backtrace.py:1 at module level:
        D100: Missing docstring in public module
```

pydocstyle torch/utils/bundled_inputs.py --count
before: 8
after: 1

**remaining errors**
```
torch/utils/bundled_inputs.py:1 at module level:
        D100: Missing docstring in public module
```

pydocstyle torch/utils/file_baton.py --count
before: 8
after: 1

**remaining errors**
```
torch/utils/file_baton.py:1 at module level:
        D100: Missing docstring in public module
```

pydocstyle torch/utils/mobile_optimizer.py --count
before: 6
after: 1

**remaining errors**
```
torch/utils/mobile_optimizer.py:8 in public class `LintCode`:
        D101: Missing docstring in public class
```

pydocstyle torch/backends/opt_einsum/__init__.py --count
before: 7
after: 5

**remaining errors**
```
torch/backends/opt_einsum/__init__.py:1 at module level:
        D104: Missing docstring in public package
torch/backends/opt_einsum/__init__.py:67 in public function `set_flags`:
        D103: Missing docstring in public function
torch/backends/opt_einsum/__init__.py:77 in public function `flags`:
        D103: Missing docstring in public function
torch/backends/opt_einsum/__init__.py:93 in public class `OptEinsumModule`:
        D101: Missing docstring in public class
torch/backends/opt_einsum/__init__.py:94 in public method `__init__`:
        D107: Missing docstring in __init__
```

pydocstyle torch/utils/_device.py --count
before: 9
after: 6

**remaining errors**
```
torch/utils/_device.py:58 in public class `DeviceContext`:
        D101: Missing docstring in public class
torch/utils/_device.py:59 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/_device.py:62 in public method `__enter__`:
        D105: Missing docstring in magic method
torch/utils/_device.py:68 in public method `__exit__`:
        D105: Missing docstring in magic method
torch/utils/_device.py:73 in public method `__torch_function__`:
        D105: Missing docstring in magic method
torch/utils/_device.py:80 in public function `device_decorator`:
        D103: Missing docstring in public function

```

pydocstyle torch/utils/_freeze.py --count
before: 15
after: 7

**remaining errors**
```
torch/utils/_freeze.py:77 in public function `indent_msg`:
        D103: Missing docstring in public function
torch/utils/_freeze.py:89 in public class `FrozenModule`:
        D101: Missing docstring in public class
torch/utils/_freeze.py:100 in public class `Freezer`:
        D101: Missing docstring in public class
torch/utils/_freeze.py:101 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/_freeze.py:106 in public method `msg`:
        D102: Missing docstring in public method
torch/utils/_freeze.py:185 in public method `get_module_qualname`:
        D102: Missing docstring in public method
torch/utils/_freeze.py:206 in public method `compile_string`:
        D102: Missing docstring in public method

```

pydocstyle torch/utils/throughput_benchmark.py --count
before: 25
after: 8

**remaining errors**
```
torch/utils/throughput_benchmark.py:1 at module level:
        D100: Missing docstring in public module
torch/utils/throughput_benchmark.py:27 in public class `ExecutionStats`:
        D101: Missing docstring in public class
torch/utils/throughput_benchmark.py:28 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/throughput_benchmark.py:33 in public method `latency_avg_ms`:
        D102: Missing docstring in public method
torch/utils/throughput_benchmark.py:37 in public method `num_iters`:
        D102: Missing docstring in public method
torch/utils/throughput_benchmark.py:46 in public method `total_time_seconds`:
        D102: Missing docstring in public method
torch/utils/throughput_benchmark.py:50 in public method `__str__`:
        D105: Missing docstring in magic method
torch/utils/throughput_benchmark.py:94 in public method `__init__`:
        D107: Missing docstring in __init__

```

pydocstyle torch/utils/hooks.py --count
before: 14
after: 11

**remaining errors**
```
torch/utils/hooks.py:1 at module level:
        D100: Missing docstring in public module
torch/utils/hooks.py:23 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/hooks.py:34 in public method `remove`:
        D102: Missing docstring in public method
torch/utils/hooks.py:44 in public method `__getstate__`:
        D105: Missing docstring in magic method
torch/utils/hooks.py:50 in public method `__setstate__`:
        D105: Missing docstring in magic method
torch/utils/hooks.py:64 in public method `__enter__`:
        D105: Missing docstring in magic method
torch/utils/hooks.py:67 in public method `__exit__`:
        D105: Missing docstring in magic method
torch/utils/hooks.py:82 in public function `warn_if_has_hooks`:
        D103: Missing docstring in public function
torch/utils/hooks.py:103 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/hooks.py:188 in public method `setup_input_hook`:
        D102: Missing docstring in public method
torch/utils/hooks.py:197 in public method `setup_output_hook`:
        D102: Missing docstring in public method
```

pydocstyle torch/utils/_traceback.py --count
before: 19
after: 14

**remaining errors**
```
torch/utils/_traceback.py:47 in public function `report_compile_source_on_error`:
        D103: Missing docstring in public function
torch/utils/_traceback.py:160 in public class `CapturedTraceback`:
        D101: Missing docstring in public class
torch/utils/_traceback.py:163 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/_traceback.py:167 in public method `cleanup`:
        D102: Missing docstring in public method
torch/utils/_traceback.py:170 in public method `summary`:
        D102: Missing docstring in public method
torch/utils/_traceback.py:182 in public method `__getstate__`:
        D105: Missing docstring in magic method
torch/utils/_traceback.py:190 in public method `extract`:
        D205: 1 blank line required between summary line and description (found 0)
torch/utils/_traceback.py:190 in public method `extract`:
        D400: First line should end with a period (not 't')
torch/utils/_traceback.py:213 in public method `format`:
        D205: 1 blank line required between summary line and description (found 0)
torch/utils/_traceback.py:213 in public method `format`:
        D400: First line should end with a period (not 'f')
torch/utils/_traceback.py:213 in public method `format`:
        D401: First line should be in imperative mood (perhaps 'Format', not 'Formats')
torch/utils/_traceback.py:224 in public method `format_all`:
        D200: One-line docstring should fit on one line with quotes (found 3)
torch/utils/_traceback.py:247 in private function `_extract_symbolized_tb`:
        D205: 1 blank line required between summary line and description (found 0)
torch/utils/_traceback.py:247 in private function `_extract_symbolized_tb`:
        D400: First line should end with a period (not 'f')
```

pydocstyle torch/utils/mkldnn.py --count
before: 28
after: 26

**remaining errors**
```
torch/utils/mkldnn.py:1 at module level:
        D100: Missing docstring in public module
torch/utils/mkldnn.py:4 in public class `MkldnnLinear`:
        D101: Missing docstring in public class
torch/utils/mkldnn.py:5 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/mkldnn.py:19 in public method `__getstate__`:
        D105: Missing docstring in magic method
torch/utils/mkldnn.py:23 in public method `__setstate__`:
        D105: Missing docstring in magic method
torch/utils/mkldnn.py:29 in public method `forward`:
        D102: Missing docstring in public method
torch/utils/mkldnn.py:75 in public class `MkldnnConv1d`:
        D101: Missing docstring in public class
torch/utils/mkldnn.py:76 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/mkldnn.py:82 in public method `__setstate__`:
        D105: Missing docstring in magic method
torch/utils/mkldnn.py:88 in public class `MkldnnConv2d`:
        D101: Missing docstring in public class
torch/utils/mkldnn.py:89 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/mkldnn.py:100 in public method `__setstate__`:
        D105: Missing docstring in magic method
torch/utils/mkldnn.py:110 in public class `MkldnnConv3d`:
        D101: Missing docstring in public class
torch/utils/mkldnn.py:111 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/mkldnn.py:122 in public method `__setstate__`:
        D105: Missing docstring in magic method
torch/utils/mkldnn.py:133 in public class `MkldnnBatchNorm`:
        D101: Missing docstring in public class
torch/utils/mkldnn.py:136 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/mkldnn.py:155 in public method `__getstate__`:
        D105: Missing docstring in magic method
torch/utils/mkldnn.py:163 in public method `__setstate__`:
        D105: Missing docstring in magic method
torch/utils/mkldnn.py:171 in public method `forward`:
        D102: Missing docstring in public method
torch/utils/mkldnn.py:184 in public class `MkldnnPrelu`:
        D101: Missing docstring in public class
torch/utils/mkldnn.py:185 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/mkldnn.py:190 in public method `__getstate__`:
        D105: Missing docstring in magic method
torch/utils/mkldnn.py:194 in public method `__setstate__`:
        D105: Missing docstring in magic method
torch/utils/mkldnn.py:199 in public method `forward`:
        D102: Missing docstring in public method
torch/utils/mkldnn.py:205 in public function `to_mkldnn`:
        D103: Missing docstring in public function
```

pydocstyle torch/utils/weak.py --count
before: 32
after: 30

**remaining errors**
```
torch/utils/weak.py:1 at module level:
        D100: Missing docstring in public module
torch/utils/weak.py:42 in public class `WeakIdRef`:
        D101: Missing docstring in public class
torch/utils/weak.py:45 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/weak.py:54 in public method `__call__`:
        D102: Missing docstring in public method
torch/utils/weak.py:61 in public method `__hash__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:64 in public method `__eq__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:84 in public class `WeakIdKeyDictionary`:
        D101: Missing docstring in public class
torch/utils/weak.py:87 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/weak.py:131 in public method `__delitem__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:135 in public method `__getitem__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:138 in public method `__len__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:145 in public method `__repr__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:148 in public method `__setitem__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:151 in public method `copy`:
        D102: Missing docstring in public method
torch/utils/weak.py:162 in public method `__deepcopy__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:172 in public method `get`:
        D102: Missing docstring in public method
torch/utils/weak.py:175 in public method `__contains__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:182 in public method `items`:
        D102: Missing docstring in public method
torch/utils/weak.py:189 in public method `keys`:
        D102: Missing docstring in public method
torch/utils/weak.py:198 in public method `values`:
        D102: Missing docstring in public method
torch/utils/weak.py:216 in public method `popitem`:
        D102: Missing docstring in public method
torch/utils/weak.py:224 in public method `pop`:
        D102: Missing docstring in public method
torch/utils/weak.py:228 in public method `setdefault`:
        D102: Missing docstring in public method
torch/utils/weak.py:231 in public method `update`:
        D102: Missing docstring in public method
torch/utils/weak.py:241 in public method `__ior__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:245 in public method `__or__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:252 in public method `__ror__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:262 in public method `__eq__`:
        D105: Missing docstring in magic method
torch/utils/weak.py:276 in public method `__init__`:
        D107: Missing docstring in __init__
torch/utils/weak.py:280 in public method `__call__`:
        D102: Missing docstring in public method

```

@mikaylagawarecki @jbschlosser @svekars
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113311
Approved by: https://github.com/ezyang
2023-11-15 17:40:04 +00:00
2f51b9223c Make sure namedtuple are preserved when adding backward hooks on Module (#112433)
As per title.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112433
Approved by: https://github.com/mikaylagawarecki
2023-10-31 18:40:35 +00:00
660e8060ad [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, it seems there are no instances of it in our codebase, so I'm enabling it so that it stays that way. :)
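For reference, the pattern RUF017 flags, plus linear alternatives (illustrative):

```python
import functools
import operator

lists = [[1, 2], [3], [4, 5, 6]]

# Flagged by RUF017: quadratic, since every `+` copies the accumulator
flat = sum(lists, [])

# Linear alternatives
flat = functools.reduce(operator.iadd, lists, [])
flat = [x for sub in lists for x in sub]
```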

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-22 23:16:38 +00:00
d59a6864fb Revert "[BE]: Update ruff to 0.285 (#107519)"
This reverts commit 88ab3e43228b7440a33bf534cde493446a31538c.

Reverted https://github.com/pytorch/pytorch/pull/107519 on behalf of https://github.com/ZainRizvi due to Sorry, but this PR breaks internal tests. @ezyang, can you please help them get unblocked? It seems like one of the strings was prob accidentally modified ([comment](https://github.com/pytorch/pytorch/pull/107519#issuecomment-1688833480))
2023-08-22 19:53:32 +00:00
88ab3e4322 [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, it seems there are no instances of it in our codebase, so I'm enabling it so that it stays that way. :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-20 01:36:18 +00:00
1ad435772b Added option to always call nn.Module global/non-global forward hooks (#104278)
Fix #103997
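A minimal sketch of the new option, assuming the `always_call` keyword on `register_forward_hook` (illustrative, not code from the PR):

```python
import torch
import torch.nn as nn

class Boom(nn.Module):
    def forward(self, x):
        raise RuntimeError("forward failed")

def cleanup(module, args, output):
    # With always_call=True this runs even though forward raised,
    # e.g. to undo state set up by a forward pre-hook.
    print("hook ran; output:", output)

m = Boom()
m.register_forward_hook(cleanup, always_call=True)

try:
    m(torch.randn(1))
except RuntimeError:
    pass  # the cleanup hook above still ran
```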

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104278
Approved by: https://github.com/albanD
2023-07-10 18:58:07 +00:00
ee1c539ecf Fix module backward pre-hooks to actually update gradient (#97983)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97983
Approved by: https://github.com/albanD
2023-03-30 20:33:44 +00:00
2f6a371ae9 Revert "Optimize nn.Module __call__ fast path for dynamo (#95931)" (#96242)
Reverting due to concerns over silent unsoundness (skipped hooks) if users have directly added hooks to the hook dicts without using official torch APIs.

This reverts commit 26045336ca323fd27cff2a7340fe896117d5fb6e.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96242
Approved by: https://github.com/albanD
2023-03-10 01:05:01 +00:00
26045336ca Optimize nn.Module __call__ fast path for dynamo (#95931)
This PR optimizes the guard overhead introduced by dynamo tracing of module forward hooks.

It can and maybe should be followed by a wider change proposed by @voznesenskym to optimize specialized nnmodules by 'observing' any user mutations and directly invalidating the root guard, obviating the need to install other nnmodule guards.  (But this observer change seems more involved...)

Idea: maintain a flag and keep it up to date whenever adding or removing hooks; use the flag rather than dict checks to enter the call fast path (sketched below).
  - need to extend RemovableHandle to keep a ref to the nn.Module so it can update the flag on removal.
  - also need to handle the flag in ScriptModule, which still uses the Python call impl when called from Python.
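A minimal sketch of the flag idea, with hypothetical names (the real change lives in `nn.Module.__call__` and `RemovableHandle`):

```python
import itertools

class Handle:
    """Removal handle that keeps the module's flag in sync (hypothetical)."""
    def __init__(self, module, key):
        self.module, self.key = module, key

    def remove(self):
        self.module._hooks.pop(self.key, None)
        self.module._has_hooks = bool(self.module._hooks)

class Module:
    def __init__(self):
        self._hooks = {}
        self._has_hooks = False  # one flag instead of several dict checks
        self._ids = itertools.count()

    def register_hook(self, fn):
        key = next(self._ids)
        self._hooks[key] = fn
        self._has_hooks = True
        return Handle(self, key)

    def forward(self, x):
        return x

    def __call__(self, x):
        if not self._has_hooks:          # fast path: one attribute test
            return self.forward(x)
        for fn in self._hooks.values():  # slow path only when hooks exist
            fn(self, x)
        return self.forward(x)
```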

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95931
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
2023-03-04 15:09:40 +00:00
8fce9a09cd [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
Apply parts of pyupgrade to torch (starting with the safest changes).
This PR only does two things: removes the need to inherit from object and removes unused future imports.
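The two mechanical changes, side by side (illustrative):

```python
# Before (Python 2 compatible):
from __future__ import absolute_import, division, print_function

class Loader(object):
    pass

# After pyupgrade (Python 3 only): no future imports, no object base
class Loader:
    pass
```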

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94308
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-07 21:10:56 +00:00
f5d18574a3 Allow Module forward-pre and forward hooks to take kwargs (#89389)
closes #35643

This PR is mostly borrowed from #82042. Thanks @Padarn for implementing
the first version and debugging the errors.

Based on the discussion in #82042, this PR adds a with_kwargs
argument to the register_forward_pre_hook and register_forward_hook
methods. When the arg is set to true, the provided hook must accept
keyword arguments. Under the hood, this PR adds a
`_forward_pre_hooks_with_kwargs` and a `_forward_hook_with_kwargs`
set to keep track of which hooks accept kwargs.
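A minimal usage sketch of the new argument, with hook signatures as described above:

```python
import torch
import torch.nn as nn

def pre_hook(module, args, kwargs):
    # May return a (new_args, new_kwargs) tuple, or None to keep them.
    return args, kwargs

def fwd_hook(module, args, kwargs, output):
    return output  # or a modified output

m = nn.Linear(2, 2)
m.register_forward_pre_hook(pre_hook, with_kwargs=True)
m.register_forward_hook(fwd_hook, with_kwargs=True)
out = m(torch.randn(1, 2))
```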

Differential Revision: [D41431111](https://our.internmc.facebook.com/intern/diff/D41431111)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89389
Approved by: https://github.com/soulitzer
2022-11-23 02:43:32 +00:00
6b521bbf35 Prevent module full_backward_hook from erroring in double backward (#88357)
Also clarifies documentation to say "execute if and only if gradients wrt outputs are computed" (previously, "execute every time gradients wrt inputs are computed")

See https://docs.google.com/document/d/1tFZKYdsSzRBJ7Di7SWt8X8fSg-E3eiUPwomMF10UyhM/edit for more details regarding the question: 'should module full_backward_hooks be called every time the gradients wrt module inputs are called, or should module full_backward_hooks only be called when the "backward for the module" have been computed?'

Fixes https://github.com/pytorch/pytorch/issues/88312

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88357
Approved by: https://github.com/albanD
2022-11-16 19:27:30 +00:00
54ee95c8ec [nn] module: full_backward_pre_hook (#86700)
Fixes https://github.com/pytorch/pytorch/issues/42824

* [x] Test
* [x] Doc
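A minimal usage sketch of the new hook, which runs before gradients w.r.t. the module's inputs are computed and may rewrite grad_output:

```python
import torch
import torch.nn as nn

def bw_pre_hook(module, grad_output):
    # May return a new grad_output tuple, or None to leave it unchanged.
    return tuple(g * 0.5 for g in grad_output)

m = nn.Linear(2, 2)
m.register_full_backward_pre_hook(bw_pre_hook)

x = torch.randn(1, 2, requires_grad=True)
m(x).sum().backward()
```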
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86700
Approved by: https://github.com/soulitzer
2022-10-13 17:36:39 +00:00
0183c1e336 Add __all__ to torch.utils submodules (#85331)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85331
Approved by: https://github.com/albanD
2022-09-27 14:45:26 +00:00
1cafb1027f Fix leak when create_graph and full backward hook registered (#82788)
Fixes #82528
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82788
Approved by: https://github.com/albanD
2022-08-05 15:35:36 +00:00
b71f01f70d Fix full backward hook when grad is disabled (#65335)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/59901. See discussion in the issue.

cc albanD soulitzer

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65335

Reviewed By: malfet

Differential Revision: D31055865

Pulled By: albanD

fbshipit-source-id: 53605df62bc73c99d8908248087ab400b81ac495
2021-09-20 13:31:19 -07:00
8b12c8e8b3 Fixes: register_full_backward_hook crash if first argument doesn't require a gradient (#57944) (#57945)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/57944

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57945

Reviewed By: mruberry

Differential Revision: D28351929

Pulled By: albanD

fbshipit-source-id: d0db898e6bf13d1877cd81892a5a65c7854c8102
2021-05-11 15:07:35 -07:00
22b151a3ba Make sure full backward hook fire when no input requires grad (#56693)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56380

BC-breaking note:
This changes the behavior of full backward hooks, as they will now fire properly even if no input to the Module requires gradients.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56693

Reviewed By: ezyang

Differential Revision: D27947030

Pulled By: albanD

fbshipit-source-id: e8353d769ba5a2c1b6bdf3b64e2d61308cf624a2
2021-04-23 08:46:49 -07:00
f6df18f6ca Clean up future imports for Python 2 (#53349)
Summary:
See https://github.com/pytorch/pytorch/issues/42919

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53349

Reviewed By: malfet

Differential Revision: D27039089

Pulled By: bugra

fbshipit-source-id: 8063dc184248604506a8dbb1bcb73da8ec85bb18
2021-03-14 15:56:13 -07:00
ccd646696b Fix Module backward hooks for all Tensor inputs/outputs (#46163)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/598

This is BC-breaking, as we now explicitly don't call the hook when there are no Tensors at the top level of the output.
This feature was not working anyway, as the returned grad_input/grad_output were wrong (they did not respect the output structure, and the inputs were wrong for multi-Node Modules).

This is also BC-breaking, as we now report the correct gradients for `nn.Module`s that contain multiple autograd `Node`s, whereas we used to return bad results before.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46163

Reviewed By: ailzhang, mruberry

Differential Revision: D24894180

Pulled By: albanD

fbshipit-source-id: e1b5d193d2818eb2f51e2a2722c7405c8bd13c2b
2020-12-18 09:04:36 -08:00
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
da32bf4cc6 Move type annotations for remaining torch.utils stub files inline (#43406)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/43406

Reviewed By: mruberry

Differential Revision: D23319736

Pulled By: malfet

fbshipit-source-id: e25fbb49f27aa4893590b022441303d6d98263a9
2020-08-31 18:44:09 -07:00
d770fbc1d2 Some modifications to improve readability (#31352)
Summary:
In long format strings, I think it is good to use named placeholders.

When creating a dict, a literal is more readable and faster than the dict constructor.
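The dict point, in miniature (illustrative):

```python
# Literal: reads as data, with no name lookup or call
config = {"lr": 0.1, "momentum": 0.9}

# Constructor: must look up and call `dict`, so it is slightly slower
config = dict(lr=0.1, momentum=0.9)
```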

I always appreciate your efforts in creating the world's best frameworks.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31352

Differential Revision: D19191967

Pulled By: ngimel

fbshipit-source-id: 21f063b163b67de8cf9761a4db5991f74318e991
2020-01-02 12:48:34 -08:00
c47f680086 arc lint torch/utils (#13141)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13141

This is an example diff to show what lint rules are being applied.

Reviewed By: mingzhe09088

Differential Revision: D10858478

fbshipit-source-id: cbeb013f10f755b0095478adf79366e7cf7836ff
2018-10-25 14:59:03 -07:00
3bfa7258b3 Don't serialize hooks (#11705)
Summary:
Fixes #11683.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11705

Differential Revision: D9833057

Pulled By: ezyang

fbshipit-source-id: 18af9bcd77b088326738d567100fbe4a4c869dd6
2018-10-16 20:11:03 -07:00
ea563c1df1 Make weight norm pickleable (#2066) 2017-07-12 17:21:22 -04:00
6336300880 Fix bug where adding a hook could replace an existing hook.
We were keying hooks by RemovableHandle id. However, we don't hold onto
handles, and the ids of dead objects can be reused. This replaces id(handle)
with a global counter.
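The hazard in miniature (a CPython detail; id reuse is not guaranteed, just common):

```python
class Handle:
    pass

h = Handle()
stale = id(h)
del h                  # the handle dies and its id may be recycled...
h2 = Handle()
# ...so a dict keyed on id(handle) can silently collide with the old entry:
print(id(h2) == stale)  # frequently True in CPython
```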
2017-03-06 12:47:53 -08:00
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
69d8331195 Use functools.partial 2017-01-13 23:10:45 +01:00
7e4ddcfe8a Remove names from register_hook calls (#446)
The register hook calls now return an object that can be used to remove
the hook. For example,

   >>> h = module.register_forward_hook(callback)
   >>> h.remove()  # removes hook

Or as a context manager:

   >>> with module.register_forward_hook(callback):
   ...     pass

This makes it easier for libraries to use hooks without worrying about
name collisions.
2017-01-13 15:57:03 -05:00