Commit Graph

43 Commits

Author SHA1 Message Date
c09617f98f Revert "Revert "Remove python key when setting functional tensor metadata (#81401)"" (#81456)
This reverts commit f2bb25a758b358c0534aee5e103c2022afc2ffd6.

For the gory story see https://github.com/pytorch/pytorch/issues/73537
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81456
Approved by: https://github.com/Chillee
2022-07-15 03:53:40 +00:00
cce2f0d0e4 Disable test_functionalization.py under torchdynamo (#81458)
Tracked at https://github.com/pytorch/pytorch/issues/81457

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81458
Approved by: https://github.com/anijain2305
2022-07-14 16:56:56 +00:00
f2bb25a758 Revert "Remove python key when setting functional tensor metadata (#81401)"
This reverts commit b0199c06f604dcfaf59bd59ecee9f638ef0e5c3f.

Reverted https://github.com/pytorch/pytorch/pull/81401 on behalf of https://github.com/clee2000 due to broke trunk win force_on_cpu tests https://github.com/pytorch/pytorch/runs/7329017706?check_suite_focus=true
2022-07-13 21:55:47 +00:00
b0199c06f6 Remove python key when setting functional tensor metadata (#81401)
Fixes https://github.com/pytorch/pytorch/issues/81365

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81401
Approved by: https://github.com/bdhirsh
2022-07-13 19:57:40 +00:00
f2dcb11bac basic SymInt test for functionalization (#80418)
`expand` is one of a handful of ops with SymInt support today, so this PR gives a basic test that shows functionalization properly mapping `expand.SymInt` -> `expand_copy.SymInt`. I added the logic to handle this properly in https://github.com/pytorch/pytorch/pull/80251, but didn't add a test for it. (see the [code](https://github.com/pytorch/pytorch/pull/80251/files#diff-da7d91d9e59774e3ee8d120a0f97e52058b73125fd7edd55b5c2e71d4ce5629dR330))
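
Roughly what the mapping looks like in practice (a minimal sketch, assuming today's `torch.func.functionalize`; in mid-2022 this wrapper lived in functorch):
```
import torch
from torch.fx.experimental.proxy_tensor import make_fx
from torch.func import functionalize  # previously functorch.experimental.functionalize

def f(x):
    # expand is a view op; functionalization rewrites it to expand_copy
    return x.expand(4, 2) + 1

gm = make_fx(functionalize(f, remove="mutations_and_views"))(torch.ones(1, 2))
print(gm.code)  # the traced graph calls aten.expand_copy, not aten.expand
```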

I want to add a more comprehensive test that shows something more end-to-end (using `PySymInt`s to avoid baking in shapes, running functionalization, and fx-tracing the output to show that functionalization ran properly), but I think it's currently blocked on some other work.

At least today, `FakeSymbolicTensor` doesn't play well with `make_fx` (though @Chillee raised the question: should it?).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80418
Approved by: https://github.com/ezyang, https://github.com/albanD
2022-07-12 01:46:16 +00:00
f84b30f790 fix functionalization regression introduced by ProxyTorchDispatchMode, migrate testing to make_fx (#80416)
`ProxyTorchDispatchMode` was added recently as part of `make_fx`, and it was silently causing the meta tensor calls used inside of functionalization to get baked into the graph. The regression wasn't caught because the functionalization tests in core don't use `make_fx`, and the tests in functorch aren't as comprehensive.

Now that `make_fx` is in core, I also ported the functionalization test infra over to use it, which would have caught the regression. This also makes the tests cleaner, since mode-based tracing lets us pick up factory functions in the trace output.
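
As a rough illustration of that last point (a minimal sketch, assuming the current `make_fx` import path):
```
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def f(x):
    # torch.ones takes no tensor inputs, so tracing that only follows
    # tensor arguments would miss it; mode-based tracing records it anyway.
    return x + torch.ones(2)

gm = make_fx(f)(torch.ones(2))
print(gm.code)  # an aten.ones call shows up as a node in the traced graph
```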
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80416
Approved by: https://github.com/ezyang, https://github.com/albanD
2022-07-12 01:46:16 +00:00
1d90d6ee60 Setup for running PyTorch tests with TorchDynamo and skips for known failing tests (#80106)
@ezyang I am going to keep adding more skips in this PR for now. And once we have the CI running, I will replace them with the appropriate decorators.

cc @mlazos, we should add those tests to test_ops.py in this PR as well

cc @jansel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80106
Approved by: https://github.com/ezyang, https://github.com/jansel
2022-07-07 18:57:33 +00:00
960758b0b7 fix overload ambiguity with functional ops; fix _foreach op grouping (#80556)
This should fix the last issue that @anijain2305 hit when running ResNet with TorchDynamo <> functionalization.

Today if you try to call an `OpOverloadPacket` from python with some arguments, we will use the types of those arguments to perform overload resolution. With some functional variants of ops, this can be ambiguous.

Today this affects just one op: `_fused_moving_avg_obs_fq_helper`, although it would potentially affect e.g. `native_batch_norm` in the future.

Example:
```
# There are technically two overloads:
# torch.ops.aten._fused_moving_avg_obs_fq_helper.default (returns 2 outputs, mutates 4 of its inputs in place)
# torch.ops.aten._fused_moving_avg_obs_fq_helper.functional (returns 6 outputs, mutates none of its inputs)

# We pick the wrong one - there's no way to know from the call site alone
# that we should pick the functional one.
outs = torch.ops.aten._fused_moving_avg_obs_fq_helper(a, a, a, a, a, a, a, 1.0, 0, 1, 0)
# raises an error - indexing the 6th output fails because the resolved
# overload only returns 2
return outs[5]
```
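
As a usage note (a hedged sketch continuing the example above): calling the specific overload object directly bypasses the packet-level overload resolution entirely:
```
# Calling the .functional overload explicitly skips the ambiguous
# packet-level resolution, so we get the 6-output, non-mutating variant.
outs = torch.ops.aten._fused_moving_avg_obs_fq_helper.functional(a, a, a, a, a, a, a, 1.0, 0, 1, 0)
```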

Specifically, functionalization will bake `_fused_moving_avg_obs_fq_helper.functional` into the graph, but when AOTAutograd tries to compile with TorchScript, it needs to remove the overload name (TorchScript can't parse overload names directly, so we have to strip the name and let it infer the right overload at runtime) - and at that point it picks the wrong one.

The situation is pretty similar to inplace: `ops.aten.add` and `ops.aten.add_` represent two different `OverloadPacket` objects; they can't be overloads of the same op, because their schemas would be ambiguous (the alias annotations are different, but that isn't enough to disambiguate).

In this PR, I try to fix the situation in a pretty similar way to how we handle `inplace` in the data model: `inplace` ops get their own base operator name, but they are represented as a flag inside of `BaseOperatorName` in the data model.

Two other important changes that I made as part of this PR:

(1) Originally, there were ~100 different `*_functional` operators: e.g. we had operators named `resize.functional` and `zero.functional`. The `_functional` bit isn't actually necessary in most cases: it's only needed for operators that **also** have a `SchemaKind.mutable` variant, and today `_fused_moving_avg_obs_fq_helper` is the only op that fits that description. So I removed the unnecessary notion of "functional" from those other ops, and added a bunch of assertions to enforce this restriction.

I think that makes more sense in the long run, because it eliminates an unnecessary difference in the model. E.g. we don't have `add_.Tensor` and `add.Tensor_functional`. We just have `add_.Tensor` and `add.Tensor`.

(2) I noticed that we actually still weren't pairing up a bunch of `_foreach` operators correctly, because their input arguments were named differently (`self` vs. `tensors`). Since they're private APIs, I went ahead and changed the argument names directly so they get matched up. Before this PR, we were generating separate `_foreach_add` and `_foreach_add.functional` variants in a bunch of cases that really did the same thing (but happened to use a different name for the first argument).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80556
Approved by: https://github.com/ezyang, https://github.com/albanD
2022-07-06 12:45:11 +00:00
adf8060600 add a new alias key for functional to view op decompositions
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79615

Approved by: https://github.com/zou3519
2022-06-15 23:18:09 +00:00
d2200e38f7 Revert "fix _unsafe_view schema to work with functionalization"
This reverts commit 46234df5f12e62b891be4ef4574bfa5380c0ad21.

Reverted https://github.com/pytorch/pytorch/pull/79148 on behalf of https://github.com/janeyx99 due to Broke 11.3 tests on trunk and on PR, see 46234df5f1
2022-06-10 13:09:00 +00:00
46234df5f1 fix _unsafe_view schema to work with functionalization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79148

Approved by: https://github.com/albanD
2022-06-10 01:45:04 +00:00
92229adf0c add special handling for resize_() in functionalization pass
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77714

Approved by: https://github.com/ezyang
2022-05-26 16:15:44 +00:00
e9c54ae1c2 functionalization: remove some unnecessary view_copies in inplace views
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77713

Approved by: https://github.com/ezyang
2022-05-26 16:15:44 +00:00
7ff091fc4e move Functionalize dispatch key closer to backends
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77132

Approved by: https://github.com/ezyang, https://github.com/zou3519
2022-05-26 16:15:43 +00:00
5cc258ec9e make block_diag composite compliant
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77716

Approved by: https://github.com/zou3519
2022-05-26 16:15:42 +00:00
07e4533403 reland of as_strided support for functionalization; introduce as_strided_scatter
This reverts commit a95f1edd8549b6a249ffa448df073ac4c8b81382.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78199

Approved by: https://github.com/ezyang
2022-05-24 22:40:44 +00:00
a95f1edd85 Revert "as_strided support for functionalization; introduce as_strided_scatter"
This reverts commit 3a921f2d267430f292a111e8bcd40c76022cfd47.

Reverted https://github.com/pytorch/pytorch/pull/77128 on behalf of https://github.com/suo due to This broke rocm tests on master 3a921f2d26. rocm tests are no longer run on PRs, you should add a `ciflow/trunk` label if you want to run them
2022-05-24 20:19:12 +00:00
2eea5eff62 functionalization: fix bug with multiple views of same base
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77129

Approved by: https://github.com/ezyang
2022-05-24 19:56:43 +00:00
3a921f2d26 as_strided support for functionalization; introduce as_strided_scatter
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77128

Approved by: https://github.com/ezyang
2022-05-24 18:20:31 +00:00
7ddc1425ff functionalization fix for inplace comparison ops
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77125

Approved by: https://github.com/ezyang
2022-05-24 18:20:31 +00:00
22d566acda functionalization fix for inplace_view ops
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77126

Approved by: https://github.com/ezyang
2022-05-24 18:20:30 +00:00
0161e9eb00 [test] attempt to functionalize ops with mutable positional-only args
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76320

Approved by: https://github.com/ezyang
2022-05-19 18:50:34 +00:00
b5bc954a71 Fix optional dtype/layout/memory_format pycall; fix memory format
Double-header bug fix:

- As reported by jansel, dtypes are still showing up as integers
  when the schema is an optional dtype.  This is simple enough to
  fix and I added a test for it.  But while I was at it...

- I noticed that the THPMemoryFormat_new idiom with an "unused" name
  doesn't actually work: the repr of the returned memory format
  object is wrong, and this shows up when we try to log the args/kwargs
  (a sketch of the symptom follows below). So I fixed memory format
  to do it properly along with everything else.
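
A minimal sketch of the symptom, assuming today's `TorchDispatchMode` API (the original report used a logging tensor subclass):
```
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class LogDtypeKwargs(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        for k in ("dtype", "memory_format"):
            if kwargs.get(k) is not None:
                # Before the fix, these optional kwargs could surface here
                # as raw integers instead of torch.dtype / torch.memory_format.
                print(func, k, type(kwargs[k]), kwargs[k])
        return func(*args, **kwargs)

with LogDtypeKwargs():
    torch.ones(2).to(dtype=torch.float64)
```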

Fixes https://github.com/pytorch/pytorch/issues/77135

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77543

Approved by: https://github.com/albanD, https://github.com/jansel
2022-05-16 16:46:08 +00:00
47dd092bae add a new at::lift operator, fix torch.tensor for functionalization
This reverts commit 85bd65a880ddd7a1f4b1ea4288423d75d45a56b3.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77285

Approved by: https://github.com/albanD, https://github.com/ezyang
2022-05-12 13:31:19 +00:00
85bd65a880 Revert "[test] try to fix torch.tensor for functionalization"
This reverts commit 9edee09ed6518a75a164d80554698ff59b60e449.

Reverted https://github.com/pytorch/pytorch/pull/76319 on behalf of https://github.com/janeyx99
2022-05-11 18:48:42 +00:00
9edee09ed6 [test] try to fix torch.tensor for functionalization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76319

Approved by: https://github.com/ezyang
2022-05-11 17:27:34 +00:00
f2eed9400d Register PrimTorch refs as decompositions.
For the most part, PrimTorch refs have the same signature as their
ATen equivalents.  I modify most PrimTorch refs to register themselves
as decompositions, using the prim name they wrap to find the aten name
(except for a few cases where the prim/aten names mismatch).  There are
some exclusions, falling into one of two categories:

- The torch equivalent was already implemented as a CompositeImplicitAutograd
  decomposition in C++

- The ref doesn't support enough features (e.g., the real deal has more
  kwargs / overloads than are currently implemented)

PrimTorch refs are written as a single function that supports all
overloads, and this style is convenient for cases where we have a bundle
of overloads for what morally is a single overload with a Union type
on an argument (which we ought to have supported in
native_functions.yaml, but don't); to support registering a single decomp
for all the overloads, we modify register_decomposition to register
to ALL overloads if you pass it an overload packet. This is technically
BC-breaking, but no tests started failing because of it.
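
A minimal sketch of the registration behavior, assuming the current `torch._decomp.register_decomposition` location (the exact module path has moved around over time):
```
import torch
from torch._decomp import register_decomposition

aten = torch.ops.aten

# Passing the OverloadPacket (aten.sigmoid) rather than a specific overload
# such as aten.sigmoid.default registers this decomp for ALL of the
# packet's overloads.
@register_decomposition(aten.sigmoid)
def sigmoid_ref(x):
    return 1 / (1 + torch.exp(-x))
```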

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76835

Approved by: https://github.com/Chillee, https://github.com/mruberry
2022-05-06 20:11:45 +00:00
40d96f0afd Revert "functionalization: add support for zero_()"
This reverts commit 7d44b3675bafdfbd59e6c81734ca3febd771dd7b.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76375

Approved by: https://github.com/datumbox, https://github.com/albanD
2022-04-26 19:27:27 +00:00
640ce6bc9b functionalization bugfix: using owning type when unwrapping tensors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76125

Approved by: https://github.com/ezyang
2022-04-25 22:00:19 +00:00
ea5209c9fd functionalization: add native fill() op
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76084

Approved by: https://github.com/ezyang
2022-04-25 21:34:16 +00:00
7d44b3675b functionalization: add support for zero_()
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75913

Approved by: https://github.com/albanD
2022-04-25 21:31:48 +00:00
81722f6630 Fix autograd.functional tests to not fail with logging tensor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76057

Approved by: https://github.com/albanD
2022-04-20 20:32:40 +00:00
cd0591dff3 Change default TLS behavior in dispatch to favor is-a style
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75827

Approved by: https://github.com/ezyang
2022-04-20 17:32:29 +00:00
7ca03dcdfc avoid some unnecessary view_copy calls
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75819

Approved by: https://github.com/ezyang
2022-04-18 20:38:55 +00:00
204df13d42 teach ivalue about List[Optional[Tensor]], fix fallbacks
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75716

Approved by: https://github.com/ezyang
2022-04-18 20:05:26 +00:00
4c7b4b5770 fix out= op handling for functionalization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75818

Approved by: https://github.com/ezyang
2022-04-18 20:05:21 +00:00
cb17973a2b split out functionalization codegen to use view_copy operators
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75302

Approved by: https://github.com/ezyang
2022-04-18 20:05:21 +00:00
9429dbb434 make functionalization work better with subclasses
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73441

Approved by: https://github.com/ezyang, https://github.com/albanD
2022-04-04 15:33:27 +00:00
b40dbdc49f Fix test ownership lint (#71554)
Summary:
I noticed after creating https://github.com/pytorch/pytorch/issues/71553 that the test ownership lint was not working properly.

This fixes my egregious mistake and fixes the broken lints.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71554

Reviewed By: malfet

Differential Revision: D33690732

Pulled By: janeyx99

fbshipit-source-id: ba4dfbcd98038e4afd63e326832ae40935d2501e
(cherry picked from commit 1bbc3d343ac143f10b3d4052496812fccfd9e853)
2022-01-21 18:24:42 +00:00
0de7a618a3 functionalization: update is_aliased() logic (#68881)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68881

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D32647614

Pulled By: bdhirsh

fbshipit-source-id: 6bec50d3e54419d1707d0b6c0c6729bcc1ced1f0
2021-12-02 09:19:12 -08:00
53bfb00ee1 [bugfix] TensorList args in functionalization pass (#68395)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68395

At the time that I wrote the pass, I thought that `c10::TensorList` and `c10::List<Tensor>` were the same thing. But it looks like a `TensorList` is actually an `ArrayRef<Tensor>`. This led to a nasty bug when I tried to add conditional functionalization to `block_diag`, where in the boxed kernel, I would:

(1) unwrap the first `IValue` by calling `.toTensorList()` (this actually returns a `List<Tensor>`, not a `TensorList`).
(2) call `TensorList to_functional_tensor(List<Tensor>)` to get out a `TensorList` with the functionalized tensors
(3) wrap that back into an `IValue` and put it on the stack.

Somewhere in that sequence of operations, something bad happens and we segfault. Fixing up the signature of `to_functional_tensor` to be `List<Tensor> to_functional_tensor(List<Tensor>)` fixes the bug. I have a feeling that there's a latent TensorList-related bug in the boxing/unboxing logic that made this worse, but I'm okay with sticking to my narrow fix for now.

Additionally tested by running `pytest test/test_ops.py test/test_vmap.py -v -k block_diag` on top of this PR: https://github.com/pytorch/functorch/pull/235

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D32448258

Pulled By: bdhirsh

fbshipit-source-id: 3b2b6c7cd5e4c29533d0502f24272d826bfe03c1
2021-11-17 15:50:30 -08:00
5b05983497 [bugfix] fix two edge cases in functionalization (#68269)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68269

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D32396357

Pulled By: bdhirsh

fbshipit-source-id: 1d374b693f3f526d027104cbdc08b8bbe9d38307
2021-11-15 11:58:18 -08:00
7c90bd77ec Test functionalization pass in python (#66101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66101

Updated description:

This PR tests the functionalization pass in python in two ways. For each of the test programs that I have in `test_functionalization.py`, it:
- runs the program with and without functionalization, and asserts the outputs and (potentially mutated) inputs are equal in both cases
- runs the program with `LoggingTensor`, and uses expecttests on the resulting graph. I manually confirm that the graphs look reasonable and only contain functional ops.

Mechanically, the changes include:
- factoring out `LoggingTensor` into a testing util so it can be re-used in multiple tests
- adding some private Python APIs in the `torch` namespace as hooks that I can use during testing

In the original version of this PR, I also added some fixes to the `_make_subclass()` function in python: allowing you to pass in strides and storage_offset. I kept them in mainly because the changes were already there.
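
The comparison half of that pattern looks roughly like this (a sketch built on the private hooks this PR adds - `torch._to_functional_tensor`, `torch._from_functional_tensor`, `torch._sync`, `torch._enable_functionalization` - assuming today's signatures and a single tensor output):
```
import torch

def run_functionalized(f, *args):
    # Wrap inputs in functional tensors and run f under functionalization.
    wrapped = [torch._to_functional_tensor(a.clone()) for a in args]
    torch._enable_functionalization(reapply_views=True)
    try:
        out = f(*wrapped)
    finally:
        torch._disable_functionalization()
    # Propagate pending mutations back into the wrappers, then unwrap
    # everything so the caller can compare against eager mode.
    for w in wrapped:
        torch._sync(w)
    torch._sync(out)
    return (torch._from_functional_tensor(out),
            [torch._from_functional_tensor(w) for w in wrapped])

def check_same_as_eager(f, *args):
    out_f, inputs_f = run_functionalized(f, *args)
    eager_args = [a.clone() for a in args]
    out_e = f(*eager_args)
    assert torch.allclose(out_f, out_e)
    for i_f, i_e in zip(inputs_f, eager_args):
        assert torch.allclose(i_f, i_e)  # mutated inputs must match too
```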

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D31942095

Pulled By: bdhirsh

fbshipit-source-id: 90ff4c88d461089704922e779571eee09c21d707
2021-11-09 14:34:05 -08:00