22 Commits

Author SHA1 Message Date
d25e6e623f Fix unused Python variables in test/[a-d]* (#134665)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134665
Approved by: https://github.com/albanD
2024-12-13 22:13:12 +00:00
cb71bcc542 Replace clone.detach with detach.clone (#140264)
Fixes #64532

As stated in the issue, replace `clone.detach` with `detach.clone`
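A minimal illustration of why the ordering matters: detaching first means the subsequent clone runs outside autograd tracking, so no graph node is recorded for it. The result is the same either way.

```python
import torch

x = torch.ones(3, requires_grad=True)

# Old pattern: the clone op runs under autograd tracking and is
# recorded in the graph before the result is detached.
y_old = x.clone().detach()

# Preferred pattern: detach first, so the clone is never tracked.
y_new = x.detach().clone()

assert not y_old.requires_grad and not y_new.requires_grad
assert torch.equal(y_old, y_new)
```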

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140264
Approved by: https://github.com/soulitzer
2024-11-13 07:01:02 +00:00
548c460bf1 [BE][Easy][7/19] enforce style for empty lines in import segments in test/[a-c]*/ and test/[q-z]*/ (#129758)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129758
Approved by: https://github.com/ezyang
2024-07-31 10:54:03 +00:00
f71e368969 UFMT formatting on test/autograd test/ao test/cpp test/backends (#123369)
Partially addresses #123062

Ran lintrunner on
- test/_test_bazel.py
- test/ao
- test/autograd test/backends test/benchmark_utils test/conftest.py test/bottleneck_test test/cpp

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123369
Approved by: https://github.com/huydhn
2024-04-05 18:51:38 +00:00
77d979f748 Autograd attaches logging hooks only in debug level (#116522)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116522
Approved by: https://github.com/albanD
2024-01-02 20:06:18 +00:00
cfb3cd11c1 Add basic autograd TORCH_LOGS support (#115438)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115438
Approved by: https://github.com/albanD
2023-12-20 15:23:44 +00:00
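Together, the two logging commits above wire autograd into the TORCH_LOGS machinery. The assumed command-line usage is (the script name is a placeholder):

```shell
# "autograd" enables INFO-level autograd logs; prefixing "+" requests
# DEBUG level, which is when the per-node logging hooks are attached.
TORCH_LOGS="+autograd" python your_training_script.py
```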
deb800ee81 Fix typo under test directory (#111304)
This PR fixes typos in comments under the `test` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111304
Approved by: https://github.com/Skylion007
2023-10-16 23:06:06 +00:00
b9ad7bc533 Don't run test/autograd/test_fallback.py in parallel (#106866)
Fixes https://github.com/pytorch/pytorch/issues/106754

This PR:
- moves test/autograd/test_fallback.py to test_autograd_fallback.py and
removes it from test_autograd.py (necessary for the next step)
- adds test_autograd_fallback.py to the parallel test blocklist.
- aside from some changes lintrunner insisted on, this is a pure file move.

The problem is that we set a global option (the autograd fallback mode)
during these tests which may cause the tests to interfere with each
other.

Test Plan:
- python test/run_test.py -i test_autograd_fallback

NOTE to diff train oncall:
- You'll also need to modify the test/autograd/test_fallback.py TARGET in
caffe2/test/TARGETS since we renamed the file.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106866
Approved by: https://github.com/soulitzer
2023-08-10 00:26:23 +00:00
da4f3fdca1 Fix bug in basicAutogradNotImplementedFallback (#105660)
In some situations we were registering a hook on a Tensor that does not
require grad, which immediately raises an error. This PR fixes that by
skipping the hook registration if the Tensor in question does not
require grad.
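The failure mode and the guard can be reproduced with public API (a minimal sketch, not the fallback's actual C++ code):

```python
import torch

t = torch.ones(3)  # requires_grad=False by default
hook_error = None
try:
    t.register_hook(lambda grad: grad)  # raises immediately
except RuntimeError as e:
    hook_error = e

# The fix amounts to guarding the registration:
if t.requires_grad:
    t.register_hook(lambda grad: grad)  # skipped for this tensor
```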

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105660
Approved by: https://github.com/soulitzer
2023-07-25 18:41:22 +00:00
04da0c76a0 Improve basicAutogradNotImplementedFallback + new tests (#105591)
This PR:
- removes some reference count bumps (to potentially improve overhead)
- adds some tests for undefined gradients
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105591
Approved by: https://github.com/soulitzer
2023-07-21 14:37:21 +00:00
f03a8f0589 [reland] Deprecate registering autograd kernels at not an autograd key (#105078)
Summary:
Context
-------
This PR adds a new fallback to the Autograd dispatch keys.

If you would prefer the old behavior:
- A quick (unsupported) way to get the previous behavior is to call
`torch._C._set_autograd_fallback("nothing")`
- Register "torch::CppFunction::makeFallthrough()" to your Autograd key,
like in https://gist.github.com/zou3519/d09a5f4b1afe2430af09fea67c6ff2c8

It is possible that this PR regresses performance of overhead-bound
models. If this is the case, please reach out (and apply one of the
temporary fixes in the previous section).

Description for reviewers
-------------------------
In order to deprecate registering autograd kernels at keys other than
the autograd keys, we add a fallback to the Autograd dispatch keys. This fallback
raises a warning if the user attempts to backprop through the operator
and is also configurable to either warn or not warn.

The goal of this PR is to
- preserve as much BC as possible
- raise a warning that whatever the user is doing is potentially wrong.
- be as performant as possible

There are roughly two cases:
- if the post-autograd kernels return a Tensor that requires grad, then
we install an autograd hook that raises a warning. We are preserving BC
in that it is possible that the user has a torch::autograd::Function
registered to their CPU key.
- if the post-autograd kernels return Tensors that do not require grad,
then we make them require_grad and install a WarnNotImplemented grad fn
that warns in the backward pass. This is mildly BC-breaking (see next
section).

Test Plan:
- bunch of new tests

BC-Breaking Note
----------------
This PR adds a new fallback to the Autograd dispatch keys. It affects
custom operators that do not have a kernel registered to the Autograd
keys (e.g. AutogradCPU and AutogradCUDA).

If the previous behavior was that the custom operator would return
Tensors that do not require grad if the inputs do require grad, then
this PR changes it so that all floating-point and complex returns do
require grad. See the "Context" section above for how to get the old
behavior.

Differential Revision: D47408353

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105078
Approved by: https://github.com/soulitzer
2023-07-14 15:03:07 +00:00
24aa8b9b9a Revert "Deprecate registering autograd kernels at not an autograd key (#104481)"
This reverts commit ed13ab666419ae5dd3adbdb048c8f96f62b14b3d.

Reverted https://github.com/pytorch/pytorch/pull/104481 on behalf of https://github.com/atalman due to failed in periodic tests ([comment](https://github.com/pytorch/pytorch/pull/104481#issuecomment-1631552846))
2023-07-11 21:48:22 +00:00
ed13ab6664 Deprecate registering autograd kernels at not an autograd key (#104481)
Context
-------
This PR adds a new fallback to the Autograd dispatch keys.

If you would prefer the old behavior:
- A quick (unsupported) way to get the previous behavior is to call
`torch._C._set_autograd_fallback("nothing")`
- Register "torch::CppFunction::makeFallthrough()" to your Autograd key,
like in https://gist.github.com/zou3519/d09a5f4b1afe2430af09fea67c6ff2c8

It is possible that this PR regresses performance of overhead-bound
models. If this is the case, please reach out (and apply one of the
temporary fixes in the previous section).

Description for reviewers
-------------------------
In order to deprecate registering autograd kernels at keys other than
the autograd keys, we add a fallback to the Autograd dispatch keys. This fallback
raises a warning if the user attempts to backprop through the operator
and is also configurable to either warn or not warn.

The goal of this PR is to
- preserve as much BC as possible
- raise a warning that whatever the user is doing is potentially wrong.
- be as performant as possible

There are roughly two cases:
- if the post-autograd kernels return a Tensor that requires grad, then
we install an autograd hook that raises a warning. We are preserving BC
in that it is possible that the user has a torch::autograd::Function
registered to their CPU key.
- if the post-autograd kernels return Tensors that do not require grad,
then we make them require_grad and install a WarnNotImplemented grad fn
that warns in the backward pass. This is mildly BC-breaking (see next
section).

Test Plan:
- bunch of new tests

BC-Breaking Note
----------------
This PR adds a new fallback to the Autograd dispatch keys. It affects
custom operators that do not have a kernel registered to the Autograd
keys (e.g. AutogradCPU and AutogradCUDA).

If the previous behavior was that the custom operator would return
Tensors that do not require grad if the inputs do require grad, then
this PR changes it so that all floating-point and complex returns do
require grad. See the "Context" section above for how to get the old
behavior.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104481
Approved by: https://github.com/soulitzer
2023-07-11 16:48:39 +00:00
3b966a6ce3 [autograd] disable backward/grad for complex scalar output (#92753)
Fixes https://github.com/pytorch/pytorch/issues/92750

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92753
Approved by: https://github.com/ezyang
2023-02-23 11:38:27 +00:00
81722f6630 Fix autograd.functional tests to not fail with logging tensor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76057

Approved by: https://github.com/albanD
2022-04-20 20:32:40 +00:00
0a1bc5f501 Miscellaneous __torch_function__ fixes
I figured these out by unconditionally turning on a no-op torch function
mode on the test suite and then fixing errors as they showed up.  Here's
what I found:

- _parse_to failed internal assert when __torch_function__'ed because it
  claims its name is "to" to the argument parser; added a name override
  so we know how to find the correct name

- Infix operator magic methods on Tensor did not uniformly handle
  __torch_function__ or convert TypeError to NotImplemented.  Now, we
  always do the __torch_function__ handling in
  _wrap_type_error_to_not_implemented, and your implementation of
  __torch_function__ gets its TypeErrors converted to NotImplemented
  (for better or for worse; see
  https://github.com/pytorch/pytorch/issues/75462)

- A few cases where code was incorrectly testing if a Tensor was
  Tensor-like in the wrong way, now use is_tensor_like (in grad
  and in distributions).  Also update docs for has_torch_function to
  push people to use is_tensor_like.

- is_grads_batched was dropped from grad in handle_torch_function, now
  fixed

- Report that you have a torch function even if torch function is
  disabled if a mode is enabled.  This makes it possible for a mode
  to return NotImplemented, pass to a subclass which does some
  processing and then pass back to the mode even after the subclass
  disables __torch_function__ (so the tensors are treated "as if"
  they are regular Tensors).  This brings the C++ handling behavior
  in line with the Python behavior.

- Make the Python implementation of overloaded types computation match
  the C++ version: when torch function is disabled, there are no
  overloaded types (because they all report they are not overloaded).
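The grad/distributions fix above relies on `torch.overrides.is_tensor_like`, which accepts anything whose type defines `__torch_function__`, not just exact Tensors. A minimal check (the `TensorLike` class is illustrative):

```python
import torch
from torch.overrides import is_tensor_like

class TensorLike:
    # Any type defining __torch_function__ counts as Tensor-like.
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        return NotImplemented

assert is_tensor_like(torch.zeros(2))
assert is_tensor_like(TensorLike())
assert not is_tensor_like(3.14)
```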

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75484

Approved by: https://github.com/zou3519
2022-04-11 16:52:16 +00:00
3f140c5b32 Parametrize remaining tests in TestAutogradFunctional to use logging_tensor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73970

Approved by: https://github.com/albanD
2022-03-15 14:54:01 +00:00
1c152f800b Parametrize some TestAutogradFunctional tests to use logging_tensor (#73854)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73854

Remaining tests can be parametrized as well once this PR is finalized

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D34703584

Pulled By: soulitzer

fbshipit-source-id: 83d90748c87cc8a18a7bb034e6402dae3d8c23a1
(cherry picked from commit 427aace4800ea37b847353a76df546aadc66642e)
2022-03-08 23:45:34 +00:00
a149a4dda5 Clean up some tests to use common_utils.parametrize (#73853)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73853

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D34703585

Pulled By: soulitzer

fbshipit-source-id: 20a1ba5df7448c78dce866bbebfeabf3cdc960e0
(cherry picked from commit dd0e2af93d1db89c587ee513c8fb3e25aca7c566)
2022-03-08 23:45:34 +00:00
15df909d34 Move autograd functional tests to separate file (#73852)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73852

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D34703586

Pulled By: soulitzer

fbshipit-source-id: 58e8b17ab3dc41ce7bf15bb32ea0653d90f44791
(cherry picked from commit 526ab20fd6026144171bf3b02a5381da57ca9f91)
2022-03-08 23:45:34 +00:00
299a6a65b2 [skip ci] Set test owners for autograd tests (#66834)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

cc ezyang albanD zou3519 gqchen pearu nikitaved soulitzer Lezcano Varal7

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66834

Reviewed By: albanD

Differential Revision: D31761778

Pulled By: janeyx99

fbshipit-source-id: 355edfb1b940154e84fbba6f7b096605e75ae459
2021-10-19 08:35:02 -07:00
cdb46f4c6e extract TestAutogradComplex into its own test file (#63400)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63400

This is the first step to break up test_autograd.py for #63205.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D30541499

Pulled By: dagitses

fbshipit-source-id: 8d9d32007938b9eade0e88f95a6a3190e7e2ef01
2021-09-02 04:34:35 -07:00