Commit Graph

126 Commits

8817e5ac80 Render Example: and not Example:: in docs (#153978)
Everything here is a grep, except the changes in tools/autograd/load_derivatives.py, which I corrected manually.

The correct notation is:
```
Example::

    >>> ...
```

It is common and wrong to have:
```
Example::
    >>> ...
```

In the wrong example, we get these pesky double colons:
![image](https://github.com/user-attachments/assets/20ffd349-68bb-4552-966c-e23923350476)
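For concreteness, a hypothetical docstring using the correct notation (the function and values here are made up for illustration):
```python
import torch

def scaled_sin(x):
    """Return ``sin(x)`` scaled by two.

    Example::

        >>> scaled_sin(torch.zeros(3))
        tensor([0., 0., 0.])
    """
    # The blank line after "Example::" is what lets Sphinx render it as
    # "Example:" followed by a literal block, instead of a literal "Example::".
    return 2 * torch.sin(x)
```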

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153978
Approved by: https://github.com/soulitzer, https://github.com/malfet
2025-05-21 01:03:26 +00:00
9d00f2b375 [autograd][docs] Add more details on why save_for_backward is important in extending autograd note (#153005)
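A minimal sketch of the pattern the note covers (the class here is my own illustration, not from the PR): saving tensors with `save_for_backward` instead of stashing them directly on `ctx` lets autograd apply its saved-tensor bookkeeping.
```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Prefer save_for_backward over `ctx.x = x`: saved tensors go through
        # autograd's bookkeeping (saved-tensor hooks, in-place error checks).
        ctx.save_for_backward(x)
        return x ** 2

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out

x = torch.randn(4, requires_grad=True)
Square.apply(x).sum().backward()
```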
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153005
Approved by: https://github.com/albanD
2025-05-09 16:36:57 +00:00
387c993c3b [ca] remove private API: _compiled_autograd_should_lift (#146720)
Since the functional autograd + compiled autograd migration, we no longer trace into nodes, and everything is lifted. We can't support this flag, which tried to inline nodes make_fx-style during CA's initial pass. There is no remaining internal usage.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146720
Approved by: https://github.com/zou3519
2025-02-10 04:29:57 +00:00
ea141d8134 functional compiled autograd (#144707)
This PR squashes together the following commits:

https://github.com/pytorch/pytorch/pull/144115
https://github.com/pytorch/pytorch/pull/143417
https://github.com/pytorch/pytorch/pull/143405
https://github.com/pytorch/pytorch/pull/143387
https://github.com/pytorch/pytorch/pull/143304
https://github.com/pytorch/pytorch/pull/143296

This is a refactor of compiled autograd to use "functional autograd". The end goal is to get compiled autograd's initial capture to stop specializing on Tensor metadata, thereby allowing compiled autograd to better handle Tensor subclasses.

For more information, please read the commit messages for each PR.
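For reference, compiled autograd is typically driven through an internal context manager; a minimal sketch along the lines of PyTorch's own tests from this era (the `torch._dynamo.compiled_autograd` path is internal and subject to change):

```python
import torch

def compiler_fn(gm):
    # Compile the autograd graph that compiled autograd captures.
    return torch.compile(gm, backend="eager")

x = torch.randn(4, requires_grad=True)
loss = torch.sin(x).sum()

with torch._dynamo.compiled_autograd.enable(compiler_fn):
    loss.backward()
```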

Pull Request resolved: https://github.com/pytorch/pytorch/pull/144707
Approved by: https://github.com/bdhirsh, https://github.com/xmfan, https://github.com/jansel
2025-01-27 05:20:56 +00:00
16c4f8c395 Revert "[compiled autograd] Always proxy autograd.Function nodes; handle AOT backwards (#143405)"
This reverts commit ec820fe57c2d6a2847569a107856e7fcff87dc5c.

Reverted https://github.com/pytorch/pytorch/pull/143405 on behalf of https://github.com/izaitsevfb due to breaking internal tests T213390054 ([comment](https://github.com/pytorch/pytorch/pull/143296#issuecomment-2611224926))
2025-01-23 23:34:13 +00:00
ec820fe57c [compiled autograd] Always proxy autograd.Function nodes; handle AOT backwards (#143405)
We will always proxy autograd.Function nodes in compiled autograd's
initial graph capture (previously there was an option to either proxy
the node or trace into the autograd.Function).

We have some requirements for the AOTBackward. Compiled Autograd runs
accumulate grad reordering passes on the AOTBackward graph directly
after the initial graph capture, so we can't just proxy a single node for it.

Instead, we:
- proxy the AOTBackward prologue function into the CA graph
- copy-paste the AOTBackward graph into the CA graph
- trace directly through the epilogue (the traced nodes go into the CA
  graph).

Tracing through the epilogue is safe (assuming no Tensor subclasses)
because the only thing the epilogue does is drop some outputs. The
Tensor subclass situation was already broken so this doesn't regress
anything but this PR sets it up to be fixed (in a followup, where we
will proxy "make_subclass" calls into the graph from the epilogue).

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143405
Approved by: https://github.com/jansel, https://github.com/xmfan
ghstack dependencies: #143296, #143304, #143387
2025-01-22 21:50:56 +00:00
d87aad6877 [5/N] Apply Ruff fixes and pyupgrade to Python 3.9 (#144205)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144205
Approved by: https://github.com/albanD
2025-01-15 04:00:47 +00:00
dc23f1944a Remove unused Python variables in torch/[_-a]* (#133492)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133492
Approved by: https://github.com/albanD
2024-12-12 17:39:14 +00:00
5c97ac9721 Revert "Remove unused Python variables in torch/[_-a]* (#133492)"
This reverts commit fda975a7b3071a20dab8fc2c4e453479e1bb7cf2.

Reverted https://github.com/pytorch/pytorch/pull/133492 on behalf of https://github.com/clee2000 due to Sorry, I need to revert this in order to revert something else.  The only thing you need to do is rebase and remerge ([comment](https://github.com/pytorch/pytorch/pull/133492#issuecomment-2536635516))
2024-12-11 17:29:12 +00:00
fda975a7b3 Remove unused Python variables in torch/[_-a]* (#133492)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133492
Approved by: https://github.com/albanD
2024-12-10 21:48:44 +00:00
3e90c00a87 Missing space in torch.autograd.Function deprecation warning (#141562)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141562
Approved by: https://github.com/colesbury
2024-11-27 01:31:26 +00:00
f3fce597e9 [BE][Easy][17/19] enforce style for empty lines in import segments in torch/[a-c]*/ and torch/[e-n]*/ (#129769)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129769
Approved by: https://github.com/ezyang
2024-08-04 10:24:09 +00:00
62bcdc0ac9 Flip default value for mypy disallow_untyped_defs [4/11] (#127841)
See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127841
Approved by: https://github.com/oulgen
2024-06-08 18:36:48 +00:00
67ef2683d9 [BE] wrap deprecated function/class with typing_extensions.deprecated (#127689)
Use `typing_extensions.deprecated` for deprecation annotation if possible. Otherwise, add `category=FutureWarning` to `warnings.warn("message")` if the category is missing.

Note that only warnings whose messages contain `[Dd]eprecat(ed|ion)` are updated in this PR.
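
A sketch of the two cases described above (function names are made up):
```python
import warnings
from typing_extensions import deprecated

@deprecated("old_mean is deprecated, use statistics.fmean instead",
            category=FutureWarning)
def old_mean(xs):
    return sum(xs) / len(xs)

# Fallback for a plain warnings.warn call: make the category explicit.
def legacy_mean(xs):
    warnings.warn("legacy_mean is deprecated", category=FutureWarning,
                  stacklevel=2)
    return sum(xs) / len(xs)
```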

Resolves #126888

This PR is split from #126898.

------

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127689
Approved by: https://github.com/Skylion007
2024-06-02 12:30:43 +00:00
033e733021 Revert "[BE] wrap deprecated function/class with typing_extensions.deprecated (#126898)"
This reverts commit 749a132fb0a8325cbad4734a563aa459ca611991.

Reverted https://github.com/pytorch/pytorch/pull/126898 on behalf of https://github.com/fbgheith due to switching typing-extensions=4.3.0 to 4.9.0 causes internal failure ([comment](https://github.com/pytorch/pytorch/pull/126898#issuecomment-2142884456))
2024-05-31 19:47:24 +00:00
749a132fb0 [BE] wrap deprecated function/class with typing_extensions.deprecated (#126898)
Use `typing_extensions.deprecated` for deprecation annotation if possible. Otherwise, add `category=FutureWarning` to `warnings.warn("message")` if the category is missing.

Note that only warnings whose messages contain `[Dd]eprecat(ed|ion)` are updated in this PR.

UPDATE: Use `FutureWarning` instead of `DeprecationWarning`.

Resolves #126888

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126898
Approved by: https://github.com/albanD
2024-05-29 12:09:27 +00:00
ce503c1b40 Dynamo x autograd.Function supports setup_context (#124802)
Fixes part of #118397
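
A minimal sketch of the pattern this enables under torch.compile (class and function names are mine):
```python
import torch

class MyExp(torch.autograd.Function):
    @staticmethod
    def forward(x):
        # ctx-less forward; state is saved in setup_context instead.
        return torch.exp(x)

    @staticmethod
    def setup_context(ctx, inputs, output):
        ctx.save_for_backward(output)

    @staticmethod
    def backward(ctx, grad_out):
        (y,) = ctx.saved_tensors
        return grad_out * y

@torch.compile
def f(x):
    return MyExp.apply(x)

out = f(torch.randn(3, requires_grad=True))
```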

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124802
Approved by: https://github.com/zou3519
2024-04-27 04:57:13 +00:00
70ad64e8a6 update docs for separate context and forward functions (#121955)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121955
Approved by: https://github.com/soulitzer
2024-04-15 22:31:12 +00:00
3d2d7ba19d Delete torch.autograd.function.traceable APIs (#122817)
We deprecated them in 2.3 with plans to delete in 2.4. Very few OSS
repos use this flag at all and it also does nothing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122817
Approved by: https://github.com/albanD
2024-03-28 18:24:15 +00:00
e26280ad8b Fix typing for autograd.Function with ctx-less forward (#122167)
Previously, typing an autograd.Function like the following would lead to
a mypy error, because mypy expects the first argument of `forward` to be
named `ctx`.

This PR fixes that by deleting the `ctx` arg.

```py
import torch

class MySin(torch.autograd.Function):
    @staticmethod
    def forward(x: torch.Tensor) -> torch.Tensor:
        return x.sin()

    @staticmethod
    def setup_context(*args, **kwargs):
        pass

    @staticmethod
    def backward(ctx, grad):
        if grad.stride(0) > 1:
            return grad.sin()
        return grad.cos()
```
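
For context, a hypothetical call site (assuming the class above):
```python
x = torch.randn(8, requires_grad=True)
y = MySin.apply(x)
y.sum().backward()
```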

Test Plan:
- tested locally (I don't know how to put up a test in CI for this).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122167
Approved by: https://github.com/soulitzer
2024-03-19 16:15:23 +00:00
115c9c6d6b Remove __getattribute__ on autograd.Function (#122033)
Improves `benchmarks/dynamo/microbenchmarks/overheads.py` from 38.7us to
34.3us.

See #122029
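
A rough overhead probe in the spirit of overheads.py (a sketch of mine; numbers are machine-dependent):
```python
import timeit
import torch

class Identity(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad):
        return grad

x = torch.randn(2)
# Time the per-call dispatch overhead of autograd.Function.apply.
print(timeit.timeit(lambda: Identity.apply(x), number=100_000))
```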
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122033
Approved by: https://github.com/zou3519, https://github.com/soulitzer
ghstack dependencies: #122032
2024-03-18 18:08:06 +00:00
8a5a377190 Move doc links to point to main (#121823)
The previous links were pointing to an outdated branch

Command: `find . -type f -exec sed -i "s:docs/master:docs/main:g" {} +`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121823
Approved by: https://github.com/albanD, https://github.com/malfet
2024-03-15 19:49:37 +00:00
b52e0bf131 Deprecate torch.autograd.function.traceable, is_traceable (#121413)
- There are no usages of this internally.
- There are very few usages of this in OSS (most of these are forks of old
repositories).
- This flag doesn't do anything.

We're deprecating it to prevent confusion. I will delete it immediately
after the branch cut.

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121413
Approved by: https://github.com/albanD, https://github.com/soulitzer
2024-03-08 18:41:07 +00:00
623632a401 More informative stacklevel for autograd function warning (#120512)
Internal xref:
https://fb.workplace.com/groups/1405155842844877/posts/8064897663537295

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120512
Approved by: https://github.com/albanD
2024-02-23 21:48:55 +00:00
a40be5f4dc Autograd doc cleanup (#118500)
I don't think we'll realistically go through deprecation for these now, since there are a couple of uses of each online. So document them appropriately.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118500
Approved by: https://github.com/soulitzer
2024-01-29 21:51:33 +00:00
9eb842cbd6 Compiled autograd: Lift autograd functions' backward and provide default key for custom autograd functions (#115573)
This PR adds support for torch.autograd.Function subclasses in compiled autograd. We do this by:
- Creating a uid for all torch.autograd.Function via its metaclass. This uid is used in the compiled autograd key, which is a subset of the cache key to the compiled graph
- "Lifting" the backward/saved_tensors, having them as input arguments in the compiled graph
  - Creating proxies to track the backward's inputs and outputs. Since the backward's outputs (grads) have to match the forward's inputs, we pass the node's `input_info` (forward's input sizes) to build the proxies tracking the backward's outputs.
  - Use a `FakeContext` class as a replacement for the autograd node's context object (`BackwardCFunction`) during tracing; it only supports passing saved_tensors from the forward to the backward
  - Index each backward, to support multiple torch.autograd.Functions in the same graph
  - Special case for `CompiledFunctionBackward`: lifting CompiledFunction would fail 4 tests and require some skipfiles changes that I'd rather do in a separate PR

Example graph: test_custom_fn_saved_multiple_tensors (eager fw + compiled autograd)
```python
import torch

class MyFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, y):
        ctx.save_for_backward(x, y)
        return torch.sin(x), torch.sin(y)

    @staticmethod
    def backward(ctx, gO_x, gO_y):
        (x, y) = ctx.saved_tensors
        return gO_x * torch.cos(x), gO_y * torch.cos(y)
```
The backward is lifted via `getitem_5` and `call_backward`:
```python
# Compiled autograd graph
 ===== Compiled autograd graph =====
 <eval_with_key>.0 class CompiledAutograd(torch.nn.Module):
    def forward(self, inputs, sizes, hooks):
        # No stacktrace found for following nodes
        getitem: "f32[]" = inputs[0]
        getitem_1: "f32[10]" = inputs[1]
        getitem_2: "f32[10]" = inputs[2]
        getitem_3: "f32[10]" = inputs[3]
        getitem_4: "f32[10]" = inputs[4];  inputs = None
        expand: "f32[10]" = torch.ops.aten.expand.default(getitem, [10]);  getitem = None
        mul: "f32[10]" = torch.ops.aten.mul.Tensor(expand, getitem_2);  getitem_2 = None
        mul_1: "f32[10]" = torch.ops.aten.mul.Tensor(expand, getitem_1);  expand = getitem_1 = None
        getitem_5 = hooks[0];  hooks = None
        call_backward = torch__dynamo_external_utils_call_backward(getitem_5, (getitem_3, getitem_4), mul_1, mul);  getitem_5 = mul_1 = mul = None
        getitem_6: "f32[10]" = call_backward[0]
        getitem_7: "f32[10]" = call_backward[1];  call_backward = None
        accumulate_grad_ = torch.ops.inductor.accumulate_grad_.default(getitem_4, getitem_7);  getitem_4 = getitem_7 = None
        accumulate_grad__1 = torch.ops.inductor.accumulate_grad_.default(getitem_3, getitem_6);  getitem_3 = getitem_6 = None
        return []
```

which is then inlined by Dynamo:
```python
# Dynamo graph
 ===== __compiled_fn_0 =====
 <eval_with_key>.1 class GraphModule(torch.nn.Module):
    def forward(self, L_inputs_0_ : torch.Tensor, L_inputs_1_ : torch.Tensor, L_inputs_2_ : torch.Tensor, L_inputs_3_ : torch.Tensor, L_inputs_4_ : torch.Tensor):
        getitem = L_inputs_0_
        getitem_1 = L_inputs_1_
        getitem_2 = L_inputs_2_
        x = L_inputs_3_
        y = L_inputs_4_

        # File: <eval_with_key>.0:10, code: expand = torch.ops.aten.expand.default(getitem, [10]);  getitem = None
        expand = torch.ops.aten.expand.default(getitem, [10]);  getitem = None

        # File: <eval_with_key>.0:11, code: mul = torch.ops.aten.mul.Tensor(expand, getitem_2);  getitem_2 = None
        mul = torch.ops.aten.mul.Tensor(expand, getitem_2);  getitem_2 = None

        # File: <eval_with_key>.0:12, code: mul_1 = torch.ops.aten.mul.Tensor(expand, getitem_1);  expand = getitem_1 = None
        mul_1 = torch.ops.aten.mul.Tensor(expand, getitem_1);  expand = getitem_1 = None

        # File: /data/users/xmfan/core/pytorch/test/inductor/test_compiled_autograd.py:412, code: return gO_x * torch.cos(x), gO_y * torch.cos(y)
        cos = torch.cos(x)
        getitem_6 = mul_1 * cos;  mul_1 = cos = None
        cos_1 = torch.cos(y)
        getitem_7 = mul * cos_1;  mul = cos_1 = None

        # File: <eval_with_key>.0:17, code: accumulate_grad_ = torch.ops.inductor.accumulate_grad_.default(getitem_4, getitem_7);  getitem_4 = getitem_7 = None
        accumulate_grad__default = torch.ops.inductor.accumulate_grad_.default(y, getitem_7);  y = getitem_7 = None

        # File: <eval_with_key>.0:18, code: accumulate_grad__1 = torch.ops.inductor.accumulate_grad_.default(getitem_3, getitem_6);  getitem_3 = getitem_6 = None
        accumulate_grad__default_1 = torch.ops.inductor.accumulate_grad_.default(x, getitem_6);  x = getitem_6 = None
        return ()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115573
Approved by: https://github.com/jansel
2024-01-10 18:01:28 +00:00
f5ce4d8baf Fixed docstring errors in gradcheck.py, forward_ad.py, profiler_util.py, profiler_legacy.py, functional.py, grad_mode.py, function.py (#113266)
Fixes #112594

docstring updated.

Here are the pydocstyle outputs, with the error counts before and after.
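
As a made-up example of what these checks want, a docstring that satisfies D205, D400, and D401 looks like:
```python
def unpack_pair(pair):
    """Split a pair into its two components.

    The summary line above is imperative (D401) and ends with a
    period (D400); the blank line separating it from this description
    satisfies D205.
    """
    first, second = pair
    return first, second
```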

1) torch/autograd/forward_ad.py

Before:

```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:23 in public function `enter_dual_level`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:23 in public function `enter_dual_level`:
        D401: First line should be in imperative mood; try rephrasing (found 'Function')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:42 in public function `exit_dual_level`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:42 in public function `exit_dual_level`:
        D401: First line should be in imperative mood; try rephrasing (found 'Function')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:62 in public function `make_dual`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:62 in public function `make_dual`:
        D400: First line should end with a period (not 'a')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:128 in public class `UnpackedDualTensor`:
        D204: 1 blank line required after class docstring (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:128 in public class `UnpackedDualTensor`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:128 in public class `UnpackedDualTensor`:
        D209: Multi-line docstring closing quotes should be on a separate line
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:134 in public function `unpack_dual`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:165 in public class `dual_level`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:165 in public class `dual_level`:
        D400: First line should end with a period (not 't')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:199 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:202 in public method `__exit__`:
        D105: Missing docstring in magic method
15
```

After:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:205 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/forward_ad.py:208 in public method `__exit__`:
        D105: Missing docstring in magic method
3
```

2) torch/autograd/functional.py

Before:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:262 in public function `vjp`:
        D202: No blank lines allowed after function docstring (found 1)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:262 in public function `vjp`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:262 in public function `vjp`:
        D400: First line should end with a period (not 'e')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:262 in public function `vjp`:
        D401: First line should be in imperative mood; try rephrasing (found 'Function')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:359 in public function `jvp`:
        D202: No blank lines allowed after function docstring (found 1)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:359 in public function `jvp`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:359 in public function `jvp`:
        D400: First line should end with a period (not 'f')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:359 in public function `jvp`:
        D401: First line should be in imperative mood; try rephrasing (found 'Function')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:584 in public function `jacobian`:
        D401: First line should be in imperative mood; try rephrasing (found 'Function')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:841 in public function `hessian`:
        D202: No blank lines allowed after function docstring (found 1)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:841 in public function `hessian`:
        D401: First line should be in imperative mood; try rephrasing (found 'Function')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:973 in public function `vhp`:
        D202: No blank lines allowed after function docstring (found 1)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:973 in public function `vhp`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:973 in public function `vhp`:
        D400: First line should end with a period (not 'e')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:973 in public function `vhp`:
        D401: First line should be in imperative mood; try rephrasing (found 'Function')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:1076 in public function `hvp`:
        D202: No blank lines allowed after function docstring (found 1)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:1076 in public function `hvp`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:1076 in public function `hvp`:
        D400: First line should end with a period (not 'r')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:1076 in public function `hvp`:
        D401: First line should be in imperative mood; try rephrasing (found 'Function')
20
```
After:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/functional.py:1 at module level:
        D100: Missing docstring in public module
1
```
3) torch/autograd/profiler_legacy.py

Before:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:27 in public class `profile`:
        D400: First line should end with a period (not 'd')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:29 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:62 in public method `config`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:74 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:86 in public method `__exit__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:103 in public method `__repr__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:108 in public method `__str__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:117 in public method `table`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:141 in public method `export_chrome_trace`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:148 in public method `export_stacks`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:154 in public method `key_averages`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:161 in public method `total_average`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:170 in public method `self_cpu_time_total`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:170 in public method `self_cpu_time_total`:
        D400: First line should end with a period (not 'f')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:180 in private nested function `_get_record_key`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:180 in private nested function `_get_record_key`:
        D400: First line should end with a period (not 'd')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:180 in private nested function `_get_record_key`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
18
```
After:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:29 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:62 in public method `config`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:74 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:86 in public method `__exit__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:103 in public method `__repr__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:108 in public method `__str__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:117 in public method `table`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:141 in public method `export_chrome_trace`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:148 in public method `export_stacks`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:154 in public method `key_averages`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_legacy.py:161 in public method `total_average`:
        D102: Missing docstring in public method
12
```

4) torch/autograd/gradcheck.py

Before:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:27 in public class `GradcheckError`:
        D204: 1 blank line required after class docstring (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:27 in public class `GradcheckError`:
        D400: First line should end with a period (not '`')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:258 in private function `_get_numerical_jacobian`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:258 in private function `_get_numerical_jacobian`:
        D400: First line should end with a period (not 'f')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:258 in private function `_get_numerical_jacobian`:
        D401: First line should be in imperative mood (perhaps 'Compute', not 'Computes')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:308 in public function `get_numerical_jacobian`:
        D401: First line should be in imperative mood; try rephrasing (found 'Deprecated')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:459 in public function `get_numerical_jacobian_wrt_specific_input`:
        D103: Missing docstring in public function
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:488 in private function `_get_analytical_jacobian_forward_ad`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:488 in private function `_get_analytical_jacobian_forward_ad`:
        D400: First line should end with a period (not 't')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:488 in private function `_get_analytical_jacobian_forward_ad`:
        D401: First line should be in imperative mood (perhaps 'Compute', not 'Computes')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:816 in public function `get_analytical_jacobian`:
        D103: Missing docstring in public function
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:1944 in public function `gradcheck`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:1944 in public function `gradcheck`:
        D400: First line should end with a period (not 'l')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:2133 in public function `gradgradcheck`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:2133 in public function `gradgradcheck`:
        D400: First line should end with a period (not 's')
16
```
After:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:463 in public function `get_numerical_jacobian_wrt_specific_input`:
        D103: Missing docstring in public function
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/gradcheck.py:820 in public function `get_analytical_jacobian`:
        D103: Missing docstring in public function
3
```
5) torch/autograd/function.py

Before:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:27 in public class `FunctionCtx`:
        D101: Missing docstring in public class
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:29 in public method `save_for_backward`:
        D401: First line should be in imperative mood (perhaps 'Save', not 'Saves')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:88 in public method `save_for_forward`:
        D401: First line should be in imperative mood (perhaps 'Save', not 'Saves')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:141 in public method `mark_dirty`:
        D401: First line should be in imperative mood (perhaps 'Mark', not 'Marks')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:177 in public method `mark_shared_storage`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:185 in public method `mark_non_differentiable`:
        D401: First line should be in imperative mood (perhaps 'Mark', not 'Marks')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:217 in public method `set_materialize_grads`:
        D401: First line should be in imperative mood (perhaps 'Set', not 'Sets')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:276 in public class `BackwardCFunction`:
        D101: Missing docstring in public class
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:277 in public method `apply`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:291 in public method `apply_jvp`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:308 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:322 in private method `forward`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:322 in private method `forward`:
        D400: First line should end with a period (not 's')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:322 in private method `forward`:
        D401: First line should be in imperative mood; try rephrasing (found 'This')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:384 in private method `backward`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:384 in private method `backward`:
        D400: First line should end with a period (not 'e')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:384 in private method `backward`:
        D401: First line should be in imperative mood (perhaps 'Define', not 'Defines')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:416 in private method `jvp`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:416 in private method `jvp`:
        D400: First line should end with a period (not 'e')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:416 in private method `jvp`:
        D401: First line should be in imperative mood (perhaps 'Define', not 'Defines')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:439 in public class `Function`:
        D400: First line should end with a period (not '`')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:472 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:482 in public method `__call__`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:505 in public method `vmap`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:505 in public method `vmap`:
        D400: First line should end with a period (not 'h')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:505 in public method `vmap`:
        D401: First line should be in imperative mood (perhaps 'Define', not 'Defines')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:536 in public method `apply`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:564 in public function `once_differentiable`:
        D103: Missing docstring in public function
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:612 in public function `traceable`:
        D401: First line should be in imperative mood (perhaps 'Mark', not 'Marks')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:626 in public class `InplaceFunction`:
        D101: Missing docstring in public class
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:627 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:741 in public class `NestedIOFunction`:
        D101: Missing docstring in public class
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:761 in public method `backward`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:768 in public method `forward`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:775 in public method `save_for_backward`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:780 in public method `saved_tensors`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:784 in public method `mark_dirty`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:787 in public method `mark_non_differentiable`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:790 in public method `forward_extended`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:793 in public method `backward_extended`:
        D102: Missing docstring in public method
41
```
After:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:27 in public class `FunctionCtx`:
        D101: Missing docstring in public class
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:177 in public method `mark_shared_storage`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:276 in public class `BackwardCFunction`:
        D101: Missing docstring in public class
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:277 in public method `apply`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:291 in public method `apply_jvp`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:308 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:471 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:481 in public method `__call__`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:536 in public method `apply`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:564 in public function `once_differentiable`:
        D103: Missing docstring in public function
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:626 in public class `InplaceFunction`:
        D101: Missing docstring in public class
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:627 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:741 in public class `NestedIOFunction`:
        D101: Missing docstring in public class
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:761 in public method `backward`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:768 in public method `forward`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:775 in public method `save_for_backward`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:780 in public method `saved_tensors`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:784 in public method `mark_dirty`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:787 in public method `mark_non_differentiable`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:790 in public method `forward_extended`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/function.py:793 in public method `backward_extended`:
        D102: Missing docstring in public method
22
```
6) torch/autograd/profiler_util.py

Before:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:26 in public class `EventList`:
        D400: First line should end with a period (not ')')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:28 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:46 in public method `__str__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:70 in private method `_populate_cpu_children`:
        D202: No blank lines allowed after function docstring (found 1)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:70 in private method `_populate_cpu_children`:
        D205: 1 blank line required between summary line and description (found 0)
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:70 in private method `_populate_cpu_children`:
        D401: First line should be in imperative mood (perhaps 'Populate', not 'Populates')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:166 in public method `self_cpu_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:179 in public method `table`:
        D401: First line should be in imperative mood (perhaps 'Print', not 'Prints')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:210 in public method `export_chrome_trace`:
        D401: First line should be in imperative mood (perhaps 'Export', not 'Exports')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:266 in public method `supported_export_stacks_metrics`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:273 in public method `export_stacks`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:354 in private function `_format_time`:
        D400: First line should end with a period (not 't')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:354 in private function `_format_time`:
        D401: First line should be in imperative mood (perhaps 'Define', not 'Defines')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:365 in private function `_format_time_share`:
        D400: First line should end with a period (not 't')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:365 in private function `_format_time_share`:
        D401: First line should be in imperative mood (perhaps 'Define', not 'Defines')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:373 in private function `_format_memory`:
        D400: First line should end with a period (not 'g')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:373 in private function `_format_memory`:
        D401: First line should be in imperative mood (perhaps 'Return', not 'Returns')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:408 in public method `cpu_time`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:412 in public method `cuda_time`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:416 in public method `privateuse1_time`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:420 in public class `Interval`:
        D101: Missing docstring in public class
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:421 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:425 in public method `elapsed_us`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:435 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:488 in public method `append_kernel`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:504 in public method `set_cpu_parent`:
        D400: First line should end with a period (not 't')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:518 in public method `self_cpu_memory_usage`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:526 in public method `self_cuda_memory_usage`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:534 in public method `self_privateuse1_memory_usage`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:542 in public method `self_cpu_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:550 in public method `cuda_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:567 in public method `self_cuda_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:579 in public method `cpu_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:586 in public method `self_privateuse1_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:598 in public method `privateuse1_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:615 in public method `key`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:618 in public method `__repr__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:659 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:687 in public method `add`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:726 in public method `__iadd__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:729 in public method `__repr__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:763 in public class `StringTable`:
        D101: Missing docstring in public class
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:764 in public method `__missing__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:773 in public class `MemRecordsAcc`:
        D400: First line should end with a period (not 'l')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:775 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:783 in public method `in_interval`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:846 in private function `_build_table`:
        D401: First line should be in imperative mood (perhaps 'Print', not 'Prints')
48
```
After:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:28 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:46 in public method `__str__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:166 in public method `self_cpu_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:266 in public method `supported_export_stacks_metrics`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:273 in public method `export_stacks`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:408 in public method `cpu_time`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:412 in public method `cuda_time`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:416 in public method `privateuse1_time`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:420 in public class `Interval`:
        D101: Missing docstring in public class
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:421 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:425 in public method `elapsed_us`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:435 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:488 in public method `append_kernel`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:518 in public method `self_cpu_memory_usage`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:526 in public method `self_cuda_memory_usage`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:534 in public method `self_privateuse1_memory_usage`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:542 in public method `self_cpu_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:550 in public method `cuda_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:567 in public method `self_cuda_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:579 in public method `cpu_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:586 in public method `self_privateuse1_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:598 in public method `privateuse1_time_total`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:615 in public method `key`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:618 in public method `__repr__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:659 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:687 in public method `add`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:726 in public method `__iadd__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:729 in public method `__repr__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:763 in public class `StringTable`:
        D101: Missing docstring in public class
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:764 in public method `__missing__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:775 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/profiler_util.py:783 in public method `in_interval`:
        D102: Missing docstring in public method
33
```
7) torch/autograd/grad_mode.py

Before:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:73 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:78 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:82 in public method `__exit__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:133 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:137 in public method `__exit__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:182 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:187 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:190 in public method `__exit__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:193 in public method `clone`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:198 in public class `inference_mode`:
        D400: First line should end with a period (not 'e')
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:250 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:257 in public method `__new__`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:262 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:266 in public method `__exit__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:269 in public method `clone`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:301 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:306 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:309 in public method `__exit__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:312 in public method `clone`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:354 in private class `_unsafe_preserve_version_counter`:
        D400: First line should end with a period (not '!')
21
```
After:
```
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:1 at module level:
        D100: Missing docstring in public module
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:73 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:78 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:82 in public method `__exit__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:133 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:137 in public method `__exit__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:182 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:187 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:190 in public method `__exit__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:193 in public method `clone`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:250 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:257 in public method `__new__`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:262 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:266 in public method `__exit__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:269 in public method `clone`:
        D102: Missing docstring in public method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:301 in public method `__init__`:
        D107: Missing docstring in __init__
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:306 in public method `__enter__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:309 in public method `__exit__`:
        D105: Missing docstring in magic method
/home/ubuntu/Desktop/Docathon/pytorch/torch/autograd/grad_mode.py:312 in public method `clone`:
        D102: Missing docstring in public method
19
```

@svekars @kit1980 @subramen

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113266
Approved by: https://github.com/aaronenyeshi, https://github.com/soulitzer, https://github.com/kit1980
2023-11-14 23:39:43 +00:00
70f2adaec3 Setup_context does not contain default values of forward() (#108561)
Fixes #108529

As the title states.
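
A minimal sketch of the post-fix behavior (the class and argument names here are illustrative, not from the PR): when the caller omits an argument that has a default, `setup_context` should still see that default in `inputs`.

```python
import torch

class Times(torch.autograd.Function):
    @staticmethod
    def forward(x, scale=3.0):  # `scale` has a default value
        return x * scale

    @staticmethod
    def setup_context(ctx, inputs, output):
        # With the fix, `inputs` is (x, 3.0) even when the caller relied
        # on the default; previously the default was missing.
        _, scale = inputs
        ctx.scale = scale

    @staticmethod
    def backward(ctx, grad):
        return grad * ctx.scale, None  # no grad for `scale`

x = torch.randn(3, requires_grad=True)
Times.apply(x).sum().backward()  # `scale` not passed explicitly
```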
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108561
Approved by: https://github.com/soulitzer
2023-09-19 16:23:52 +00:00
f2639a2c37 Back out "Dynamo support for autograd.Function w/ once_differentiable (#108686)" (#109199)
Summary:
Original commit changeset: e11cddf1fecc

Original Phabricator Diff: D49064185

Test Plan:
Comparing PT1 and PT2 performance on the IG Feed Model with this diff backed out: N4274204

Comparing the PT1 and PT2 performance on IG Feed with this diff committed: N4271093

Reviewed By: zou3519

Differential Revision: D49230047

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109199
Approved by: https://github.com/zou3519, https://github.com/xw285cornell
2023-09-13 15:43:20 +00:00
ef2bbe1ae1 Dynamo support for autograd.Function w/ once_differentiable (#108686)
Fixes #106893

There are two main changes:
- Before this PR, the function returned by once_differentiable was
included in skipfiles (because its `.co_filename` is
torch/autograd/function.py). This PR adds a mechanism to tell Dynamo
to inline a function even if it is included in skipfiles.
- A bugfix: when we are introspecting the backward, we need to turn the
grad mode off. This is to accurately model the eager-mode semantics:
In eager-mode PyTorch, if second-order gradients were not requested, then
the grad mode is off. torch.compile does not work with higher-order
gradients and just assumes we do first-order gradients, so this is OK.
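
For illustration, a minimal `once_differentiable` autograd.Function of the kind these changes let `torch.compile` handle (the class and function names are mine, not from the PR):

```python
import torch
from torch.autograd.function import once_differentiable

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    @once_differentiable  # backward itself is not differentiable
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out

@torch.compile
def f(x):
    return Square.apply(x).sum()

x = torch.randn(4, requires_grad=True)
f(x).backward()
```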

Test Plan:
- new test

Differential Revision: [D49064185](https://our.internmc.facebook.com/intern/diff/D49064185)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108686
Approved by: https://github.com/voznesenskym
2023-09-08 16:10:32 +00:00
9178deedff removing some redundant str splits (#106089)
drop some redundant string splits, no factual changes, just cleaning the codebase

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106089
Approved by: https://github.com/albanD, https://github.com/malfet
2023-09-01 00:22:58 +00:00
3bf922a6ce Apply UFMT to low traffic torch modules (#106249)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106249
Approved by: https://github.com/Skylion007
2023-07-29 23:37:30 +00:00
c902b84e0b Compiled autograd (#103822)
This branch:
1) converts the autograd tape into an FX graph
2) caches that conversion using a "shadow" graph
3) compiles and runs the generated FX graph instead of the normal autograd

What works currently:
1) Caching, capture, and initial integration
2) Backwards hooks
3) Inlining AotAutograd generated subgraphs
4) torch.compiling the generated FX graph
5) Auto-detecting dynamic shapes based on changes

Future work
1) Larger scale testing
2) Boxed calling convention, so memory can be freed incrementally
3) Support hooks on SavedTensor
4) Additional testing by running eager autograd tests under compiled_autograd.enable()
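
A usage sketch of the last item, assuming the `torch._dynamo.compiled_autograd.enable` context manager this branch adds (the model and shapes are illustrative):

```python
import torch

model = torch.nn.Linear(8, 8)
loss = model(torch.randn(2, 8)).sum()

# The autograd tape for this backward is captured as an FX graph,
# compiled with the given compiler, and run in place of normal autograd.
with torch._dynamo.compiled_autograd.enable(torch.compile):
    loss.backward()
```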

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103822
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-07-24 21:12:05 +00:00
79c5e33349 [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105436
Approved by: https://github.com/malfet, https://github.com/albanD
2023-07-21 07:38:46 +00:00
e7681b53e3 Fix typing for setup_context in autograd (#101464)
The original only matches a tuple of length 1, but it's intended to match any length.

Also, it now aligns with the docstring at L320
d5cba0618a/torch/autograd/function.py (L320)
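
Hypothetical signatures illustrating the fix: `Tuple[Any]` matches only a 1-tuple, while `Tuple[Any, ...]` matches a tuple of any length.

```python
from typing import Any, Tuple

def setup_context_old(ctx: Any, inputs: Tuple[Any], output: Any) -> Any: ...       # 1-tuple only
def setup_context_new(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> Any: ...  # any length
```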
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101464
Approved by: https://github.com/soulitzer, https://github.com/kit1980
2023-05-16 18:41:35 +00:00
bafa2c4724 Change 'w.r.t.' to 'wrt' in function docstrings to fix doc rendering (#100028)
Fixes #72428 according to decision reached in comments.

I've left other instances of `w.r.t.` intact (e.g. in parameter/return descriptions, in comments, etc.) because there were many, and I didn't want to go out of scope. That being said, I'm happy to change those as well if we'd prefer the consistency!

I've also fixed a typo that I came across while grepping for instances.

Will update with screenshots once docs are built.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100028
Approved by: https://github.com/albanD
2023-04-25 23:53:26 +00:00
b005ec62b9 [BE] Remove dependency on six and future (#94709)
Remove the Python 2 and 3 compatibility library [six](https://pypi.org/project/six) and [future](https://pypi.org/project/future) and `torch._six`. We only support Python 3.8+ now. It's time to retire them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94709
Approved by: https://github.com/malfet, https://github.com/Skylion007
2023-02-14 09:14:14 +00:00
5b1cedacde [BE] [2/3] Rewrite super() calls in functorch and torch (#94588)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Some cases that change the semantics should be kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94588
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-10 21:16:33 +00:00
8fce9a09cd [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
Apply parts of pyupgrade to torch (starting with the safest changes).
This PR only does two things: removes the need to inherit from object and removes unused future imports.
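
A representative (illustrative) before/after of the two changes:

```diff
-from __future__ import absolute_import, division, print_function
-
-class Foo(object):
+class Foo:
     def bar(self):
         ...
```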

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94308
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-07 21:10:56 +00:00
98b78aa11c [autograd.Function] setup_context always appears on the Function (#92312)
Previously, we used the existence of setup_context to decide whether
forward should take a ctx object or not.

To be consistent with all the other staticmethods (which always exist on
the autograd.Function), this PR changes it so that whether the user
overrides setup_context determines whether forward should take a
ctx object.

Fixes https://github.com/pytorch/pytorch/issues/91451
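
A sketch of the two resulting styles (class names are illustrative): if setup_context is not overridden, forward takes a ctx; if it is overridden, forward must not.

```python
import torch

class DoubleWithCtx(torch.autograd.Function):
    # setup_context not overridden -> forward takes ctx.
    @staticmethod
    def forward(ctx, x):
        return 2 * x

    @staticmethod
    def backward(ctx, grad):
        return 2 * grad

class DoubleSeparate(torch.autograd.Function):
    # setup_context overridden -> forward must not take ctx.
    @staticmethod
    def forward(x):
        return 2 * x

    @staticmethod
    def setup_context(ctx, inputs, output):
        pass  # nothing to save for this op

    @staticmethod
    def backward(ctx, grad):
        return 2 * grad
```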

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92312
Approved by: https://github.com/albanD, https://github.com/soulitzer
2023-01-18 02:55:42 +00:00
81cc9bba5e [autograd.Function] Kill the extension feature flag (#92026)
This PR removes the autograd.Function extension feature flag. This was
previously used for development of the functorch <> autograd.Function
interaction.

It's been in master for long enough with the feature flag defaulting to
True, so it's time to remove it.

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92026
Approved by: https://github.com/soulitzer
2023-01-17 13:36:42 +00:00
2f9166ef89 [autograd.Function] Cleanup asymmetry in generate_vmap_rule and vmap (#91787)
This PR:
- changes generate_vmap_rule to either be True or False. Previously it
  could be True, False, or not set. This simplifies the implementation a
  bit.
- changes the vmap staticmethod to always be on the autograd.Function
  rather than sometimes defined.
  This is how the other staticmethods (forward, backward, jvp) are
  implemented and allows us to document it.

There are 4 possible states for the autograd.Function with respect to the
above:
- generate_vmap_rule is True, vmap staticmethod overridden. This raises
  an error when used with vmap.
- generate_vmap_rule is False, vmap staticmethod overridden. This is
  valid.
- generate_vmap_rule is True, vmap staticmethod not overridden. This is
  valid.
- generate_vmap_rule is False, vmap staticmethod not overridden. This
  raises an error when used with vmap.
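
A sketch of the third (valid) state above, using the current torch.func.vmap spelling and an illustrative elementwise op:

```python
import torch

class MySin(torch.autograd.Function):
    generate_vmap_rule = True  # vmap staticmethod deliberately not overridden

    @staticmethod
    def forward(x):
        return x.sin()

    @staticmethod
    def setup_context(ctx, inputs, output):
        ctx.save_for_backward(inputs[0])

    @staticmethod
    def backward(ctx, grad):
        (x,) = ctx.saved_tensors
        return grad * x.cos()

out = torch.func.vmap(MySin.apply)(torch.randn(3, 4))
```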

Future:
- setup_context needs the same treatment, but that's a bit tricker to
  implement.

Test Plan:
- new unittest
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91787
Approved by: https://github.com/soulitzer
2023-01-17 13:36:34 +00:00
264f5ed516 [autograd.Function] Add docs on the functorch interaction (#91452)
This PR:
- Updates autograd.Function.forward docs to reflect how you either
  define a forward with ctx or a separate forward and setup_context
- Updates the "Extending Autograd" docs to suggest the usage of
  autograd.Function with separate forward and setup_context. This should
  be the default because there is a low barrier to go from this to
  an autograd.Function that is fully supported by functorch transforms.
- Adds a new "Extending torch.func with autograd.Function" doc that
  explains how to use autograd.Function with torch.func. It also
  explains how to use generate_vmap_rule and how to manually write a
  vmap staticmethod.

While writing this, I noticed that the implementations of the
setup_context staticmethod, generate_vmap_rule, and the vmap staticmethod
are a bit inconsistent with the other methods/attributes on autograd.Function:
- https://github.com/pytorch/pytorch/issues/91451
- I'm happy to fix those if we think it is a problem, either in this PR
  or a followup (this PR is getting long, I want some initial docs
  out that I can point early adopters at, and fixing the problems in the
  future isn't really BC-breaking).

Test Plan:
- view docs preview
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91452
Approved by: https://github.com/soulitzer
2023-01-04 00:28:19 +00:00
ad782ff7df Enable xdoctest runner in CI for real this time (#83816)
Builds on #83317 and enables running the doctests. Just need to figure out what is causing the failures.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83816
Approved by: https://github.com/ezyang, https://github.com/malfet
2022-12-29 05:32:42 +00:00
b66862ba87 [autograd Function] Don't materialize forward grad for non-differentiable types (#91183)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91183
Approved by: https://github.com/zou3519
2022-12-21 05:05:44 +00:00
2f37804cae [generate_vmap_rule] Add generate_vmap_rule to autograd.Function (#90966)
Design document:
https://docs.google.com/document/d/1bIQkWXy3J35_20c_a5kchikabBW5M8_uRAhl0BIMwU4/edit

This PR adds a `generate_vmap_rule` option (default False) to autograd.Function.
By setting it to True, a user promises to us that their autograd.Function's
{forward, backward, jvp}, if defined, only use PyTorch operations, in addition to the other
limitations of autograd.Function+functorch (such as the user not
capturing any Tensors being transformed over from outside of the
autograd.Function).

Concretely, the approach is:
- we update `custom_function_call` to accept an additional
`generate_vmap_rule` argument.
- The vmap rule for `custom_function_call` and `generate_vmap_rule=True`
is: we construct a vmapped version of the autograd.Function and dispatch
on it.
- The vmapped version of the autograd.Function can be thought of like
the following: if we have an autograd.Function Foo, then
VmappedFoo.apply(in_dims, ...) has the same semantics as
vmap(Foo.apply, in_dims)(...)
- VmappedFoo's forward, setup_context, and backward staticmethods are
vmapped versions of Foo's staticmethods.
- See the design doc for more motivation and explanation
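
For contrast, a sketch of the hand-written vmap staticmethod (its `(info, in_dims, *args) -> (output, out_dims)` signature is from the torch.func extension docs; the op is illustrative) that setting `generate_vmap_rule = True` lets you omit:

```python
import torch

class Scale(torch.autograd.Function):
    @staticmethod
    def forward(x):
        return 2 * x

    @staticmethod
    def setup_context(ctx, inputs, output):
        pass

    @staticmethod
    def backward(ctx, grad):
        return 2 * grad

    # Hand-written alternative to generate_vmap_rule = True.
    @staticmethod
    def vmap(info, in_dims, x):
        # Elementwise op: the batch dimension passes straight through.
        return 2 * x, in_dims[0]

out = torch.func.vmap(Scale.apply)(torch.randn(5, 3))
```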

Test Plan:
- This PR introduces additional autograd.Function with the suffix "GenVmap" to
autograd_function_db.
- There are also some minor UX tests

Future:
- jvp support
- likely more testing to come, but please let me know if you have
cases that you want me to test here.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90966
Approved by: https://github.com/soulitzer
2022-12-21 00:34:44 +00:00
da42eab48b Fix circular import in torch/autograd/function.py (#90415)
It turns out it is possible to break cycles by not importing a module
directly at the top level:
- there's a problem that torch.jit imports torch._ops and torch._ops
imports torch.jit
- there's another problem that torch.autograd.function imports
custom_function_call but torch._functorch.autograd_function imports
torch.autograd.function

The "better" way to handle all of this is to do some large refactoring so
that torch._functorch.autograd_function imports some file that has
_SingleLevelAutogradFunction and then have torch.autograd.function
depend on torch._functorch.autograd_function... (and ditto for torch.jit
vs torch._ops), but I'm scared to move code around too much for BC
reasons and the fix in this PR works well.
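
A generic sketch of the pattern being alluded to (module names `a`/`b` are hypothetical):

```python
# b.py
import a  # fine: a's top level does not import b

def g():
    return 0

# a.py
def f():
    from b import g  # deferred to call time, so no import-time cycle
    return g() + 1
```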

Test Plan:
- import torch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90415
Approved by: https://github.com/albanD, https://github.com/soulitzer
2022-12-14 16:20:57 +00:00
7342251281 functorch.grad support for autograd.Function (#89860)
Happy to split this PR more if it helps.

This PR adds functorch.grad support for autograd.Function. There's a lot
going on; here is the high level picture and there are more details as
comments in the code.

Mechanism (PyOperator)
- Somehow, autograd.Function needs to dispatch with functorch. This is
necessary because every layer of functorch needs to see the
autograd.Function; grad layers need to preserve the backward pass.
- The mechanism for this is via PyOperator. If functorch transforms are
active, then we wrap the autograd.Function in a `custom_function_call`
PyOperator where we are able to define various rules for functorch
transforms.
- `custom_function_call` has a rule for the functorch grad transform.

autograd.Function changes
- I needed to make some changes to autograd.Function to make this work.
- First, this PR splits autograd.Function into a _SingleLevelFunction
(that works with a single level of functorch transform) and
autograd.Function (which works with multiple levels). This is necessary
because functorch's grad rule needs some way of specifying a backward
pass for that level only.
- This PR changes autograd.Function's apply to either call
`custom_function_call` (if functorch is active) or super().apply (if
functorch isn't active).
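
A usage sketch with today's torch.func.grad spelling (at the time this sat behind a feature flag; the Function is illustrative):

```python
import torch

class Cube(torch.autograd.Function):
    @staticmethod
    def forward(x):
        return x ** 3

    @staticmethod
    def setup_context(ctx, inputs, output):
        ctx.save_for_backward(inputs[0])

    @staticmethod
    def backward(ctx, grad):
        (x,) = ctx.saved_tensors
        return grad * 3 * x ** 2

print(torch.func.grad(Cube.apply)(torch.tensor(2.0)))  # tensor(12.)
```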

Testing
- Most of this PR is just testing. It creates an autograd.Function
OpInfo database that then gets passed to the functorch grad-based tests
(grad, vjp, vjpvjp).
- Since functorch transform tests are autogenerated from OpInfo tests,
this is the easiest way to test various autograd.Function with
functorch.

Future
- jvp and vmap support coming next
- better error message (functorch only supports autograd.Functions that
have the optional setup_context staticmethod)
- documentation to come when we remove the feature flag

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89860
Approved by: https://github.com/soulitzer
2022-12-08 19:31:04 +00:00
eb314f9b1a Add setup_context staticmethod to autograd.Function (#89859)
Adds a setup_context staticmethod to autograd.Function.
If it exists, then the user splits the ctx-specific logic from the
forward() and puts it in the setup_context staticmethod.

Docs will come later when we remove the feature flag.

Test Plan:
- some light tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89859
Approved by: https://github.com/soulitzer
2022-12-08 19:31:04 +00:00
2b20a3d3ef Simplify by using yield from (#90160)
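An illustrative instance of the simplification:

```diff
 def values(groups):
     for group in groups:
-        for v in group:
-            yield v
+        yield from group
```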
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90160
Approved by: https://github.com/albanD, https://github.com/soulitzer
2022-12-05 20:48:05 +00:00