Commit Graph

f0560f7b3b [opcheck] Stop doing test_aot_dispatch_static by default (#124495)
Motivations:
- this is pretty redundant with test_aot_dispatch_dynamic.
- The user story for opcheck is that a user should use opcheck to see
  if their operator was "registered correctly". If a user's custom op
  only supports dynamic shapes, then it's a bit awkward for
  one of the tests (e.g. `test_aot_dispatch_static`) to fail.
- We've already stopped running test_aot_dispatch_static in all of
  our opcheck tests.
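
For context, a hedged sketch of how opcheck is typically invoked (using the public torch.library.opcheck name from later releases; the op and inputs are hypothetical):

```python
import torch

# assumes "mylib::square" was registered elsewhere via torch.library
op = torch.ops.mylib.square.default
x = torch.randn(3, requires_grad=True)

# after this change, test_aot_dispatch_static is no longer in the default
# set of utils; test_aot_dispatch_dynamic subsumes it for most operators
torch.library.opcheck(op, (x,), test_utils=(
    "test_schema",
    "test_autograd_registration",
    "test_faketensor",
    "test_aot_dispatch_dynamic",
))
```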

Test Plan:
- wait for CI
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124495
Approved by: https://github.com/williamwen42
ghstack dependencies: #124180, #124200, #124299, #124134, #124199, #124403, #124414
2024-04-19 21:57:22 +00:00
25c65d6642 Change register_autograd to reflect ordering of setup_context and backward (#124403)
old: `register_autograd(setup_context, backward, /)`
new: `register_autograd(backward, /, *, setup_context=None)`
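A minimal sketch of the new convention (the op name and functions are hypothetical); setup_context can now be omitted entirely when the backward needs nothing saved:

```python
import torch

@torch.library.custom_op("mylib::double", mutates_args=())
def double(x: torch.Tensor) -> torch.Tensor:
    return 2 * x

def backward(ctx, grad):
    return 2 * grad

# backward comes first; setup_context is keyword-only and optional
double.register_autograd(backward)
```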

Motivations:
- We introduce these APIs as "give us a backward and use setup_context
  to save things for backward".
- setup_context isn't always necessary.

Test Plan:
- tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124403
Approved by: https://github.com/albanD
ghstack dependencies: #124180, #124200, #124299, #124134, #124199
2024-04-19 17:56:30 +00:00
a8e17b2d4d Move schema inference to torch._library (#124199)
After this PR, we can delete torch._custom_op/torch._custom_ops (except
that there are external libraries depending on them).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124199
Approved by: https://github.com/albanD
ghstack dependencies: #124180, #124200, #124299, #124134
2024-04-19 17:56:30 +00:00
bad8d25881 Add torch.library.register_kernel (#124299)
This mirrors the .register_kernel method on the object produced by the
custom_op decorator.
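
A sketch of the free-function form (the op name is hypothetical):

```python
import torch

@torch.library.custom_op("mylib::add_one", mutates_args=())
def add_one(x: torch.Tensor) -> torch.Tensor:
    return x + 1

# register a kernel for a specific device type, mirroring the
# .register_kernel method on the custom_op object
@torch.library.register_kernel("mylib::add_one", "cpu")
def _(x: torch.Tensor) -> torch.Tensor:
    return x.add(1)
```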

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124299
Approved by: https://github.com/albanD
ghstack dependencies: #124180, #124200
2024-04-19 13:54:21 +00:00
3918dfedc5 [custom_op] Rename register_impl to register_kernel (#124200)
Motivation:
- The API is used for registering an implementation for a specific
  device type.
- "impl" is ambiguous and can be confused with Library.impl.

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124200
Approved by: https://github.com/albanD
ghstack dependencies: #124180
2024-04-19 13:54:21 +00:00
22a2f676c3 [custom_op] add ability to provide manual schema (#124180)
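A hedged sketch of providing the schema manually instead of relying on inference (the schema kwarg spelling and op name follow the eventual public API and are assumptions here):

```python
import torch

# the schema is spelled out explicitly rather than inferred from type hints
@torch.library.custom_op(
    "mylib::scale", mutates_args=(), schema="(Tensor x, float factor) -> Tensor"
)
def scale(x, factor):
    return x * factor
```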
Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124180
Approved by: https://github.com/albanD
2024-04-19 13:54:13 +00:00
645173a0b5 Add torch.library.register_autograd (#124071)
Allows registering autograd for all custom op entry points:
- the new-style custom op API (custom_op)
- the old-style torch.library APIs
- C++ operator registration
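
A sketch against an old-style torch.library op (namespace hypothetical; this uses the final argument ordering later adopted in #124403):

```python
import torch

lib = torch.library.Library("mylib", "FRAGMENT")
lib.define("square(Tensor x) -> Tensor")
lib.impl("square", lambda x: x * x, "CompositeExplicitAutograd")

def setup_context(ctx, inputs, output):
    (x,) = inputs
    ctx.save_for_backward(x)

def backward(ctx, grad):
    (x,) = ctx.saved_tensors
    return grad * 2 * x

torch.library.register_autograd("mylib::square", backward, setup_context=setup_context)
```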

Test Plan:
- tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124071
Approved by: https://github.com/albanD
ghstack dependencies: #123937, #124064, #124065, #124066
2024-04-18 12:47:59 +00:00
8135c4b921 torch.library.register_fake now accepts more types (#124066)
We allow it to accept:
- a string with the op name
- an OpOverload
- a new-style custom op

If any of these are referring to a new-style custom op (created with the
custom_op decorator), then we dispatch to CustomOpDef.register_fake.
Otherwise, we do what we previously did.
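
A sketch of the three accepted forms (the op is hypothetical):

```python
import torch

# 1) a string with the op name
@torch.library.register_fake("mylib::square")
def _(x):
    return torch.empty_like(x)

# 2) an OpOverload works the same way:
#    torch.library.register_fake(torch.ops.mylib.square.default)(fake_impl)
# 3) so does the object returned by @torch.library.custom_op, which
#    dispatches to CustomOpDef.register_fake
```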

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124066
Approved by: https://github.com/albanD
ghstack dependencies: #123937, #124064, #124065
2024-04-18 12:47:55 +00:00
27daa110c8 Back out "Refresh OpOverloadPacket if a new OpOverload gets added (#123578)" (#124324)
Summary:
Original commit changeset: 528276bc8a92

Original Phabricator Diff: D56057952

Differential Revision: D56271240

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124324
Approved by: https://github.com/davidberard98
2024-04-18 03:33:54 +00:00
93e249969b [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261)
Remove useless parentheses in `raise` statements if the exception type is raised with no argument.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124261
Approved by: https://github.com/albanD
2024-04-17 19:29:34 +00:00
47dbfecd37 Rename impl_abstract to register_fake, part 1/2 (#123937)
This PR:
- adds a new torch.library.register_fake and deprecates
  torch.library.impl_abstract. The motivation is that we have a lot of
  confusion around the naming, so we are going to align the naming with
  the actual subsystem (FakeTensor).
- renames `m.impl_abstract_pystub("fbgemm_gpu.sparse_ops")` to
  `m.has_python_registration("fbgemm_gpu.sparse_ops")`. No deprecation
  here yet; I need to test how this works with static initialization.
- Renames a bunch of internals to match (e.g. abstractimplpystub ->
  pystub)

I'm scared to rename the Python-side internal APIs (e.g.
torch._library.abstract_impl) because of torch.package concerns. I'll do
that in its own isolated PR next just in case it causes problems.

DEPRECATION NOTE: torch.library.impl_abstract was renamed to
torch.library.register_fake. Please use register_fake. We'll delete
impl_abstract in a future version of PyTorch.
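
The migration is mechanical; a sketch (op name hypothetical):

```python
import torch

# before (now deprecated):
#   @torch.library.impl_abstract("mylib::op")
# after:
@torch.library.register_fake("mylib::op")
def _(x):
    return torch.empty_like(x)
```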

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123937
Approved by: https://github.com/albanD
2024-04-17 12:46:01 +00:00
a03711d24d [custom_ops] Support TensorList inputs/outputs (#123615)
We add a `supports_tensorlist` decorator that gives an autograd.Function
the ability to handle TensorLists.

Test Plan:
- custom_op_db tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123615
Approved by: https://github.com/albanD
2024-04-15 23:32:43 +00:00
1b4419dc4d Refresh OpOverloadPacket if a new OpOverload gets added (#123578)
If a user accesses an OpOverloadPacket, then creates a new OpOverload,
then uses the OpOverloadPacket, the new OpOverload never gets hit. This
is because OpOverloadPacket caches OpOverloads when it is constructed.

This PR fixes the problem by "refreshing" the OpOverloadPacket if a new
OpOverload gets constructed and the OpOverloadPacket exists.
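
A sketch of the failure mode being fixed (the namespace is hypothetical):

```python
import torch

lib = torch.library.Library("mylib", "FRAGMENT")
lib.define("f(Tensor x) -> Tensor")

packet = torch.ops.mylib.f  # the OpOverloadPacket caches its overloads here

lib.define("f.out(Tensor x, *, Tensor(a!) out) -> Tensor(a!)")

# before this fix, the cached packet never saw the new overload;
# now the packet is refreshed when the overload is registered
packet.out  # resolves to the new OpOverload
```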

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123578
Approved by: https://github.com/albanD
ghstack dependencies: #123453
2024-04-11 13:18:06 +00:00
8a5e7a01b5 [custom_op] Schema inference now includes default values (#123453)
If the function has default values, we should be able to do schema
inference and put the default values into the schema.
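
A sketch of the effect via the custom_op decorator (op hypothetical):

```python
import torch

# the default lands in the inferred schema, roughly
# "mylib::scale_by(Tensor x, float scale=2.) -> Tensor"
@torch.library.custom_op("mylib::scale_by", mutates_args=())
def scale_by(x: torch.Tensor, scale: float = 2.0) -> torch.Tensor:
    return x * scale
```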

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123453
Approved by: https://github.com/albanD
2024-04-11 13:18:02 +00:00
cd6c58baea [custom_ops] mutated_args -> mutates_args (#123437)
This seemed better, since when you're constructing a custom op you need
to provide "the args that the custom op mutates".

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123437
Approved by: https://github.com/albanD
ghstack dependencies: #123108, #123109, #123110, #123129
2024-04-05 22:03:51 +00:00
81e7a7c955 Add mutated_args field to custom_op (#123129)
If provided, we:
- autogenerate an ADInplaceOrView implementation
- assume that no mutated inputs are returned as outputs. There are
  already aliasing runtime checks that check this.

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123129
Approved by: https://github.com/albanD
ghstack dependencies: #123108, #123109, #123110
2024-04-05 22:03:51 +00:00
9e8d2b6de2 Add register_autograd to register backward formulas for custom ops (#123110)
The user provides a `setup_context` and a `backward_function`. These
get put into a torch.autograd.Function that gets registered as the
custom op's autograd implementation.

Test Plan:
- we update custom ops in the custom_op_db to use the new
  register_autograd API.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123110
Approved by: https://github.com/albanD
ghstack dependencies: #123108, #123109
2024-04-05 22:03:47 +00:00
d8e1c1087d Add is_tensorlist_like_type helper (#123109)
Checks if the type of an argument in a schema is some form of
TensorList.

Test Plan:
- new test
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123109
Approved by: https://github.com/albanD
ghstack dependencies: #123108
2024-04-05 22:03:42 +00:00
067851dd0d Expand is_functional_schema to work with torch._C._FunctionSchema (#123108)
Previously it worked with torchgen.model.FunctionSchema. This PR extends
it to work with torch._C._FunctionSchema by making
torchgen.model.FunctionSchema look more like torch._C._FunctionSchema.

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123108
Approved by: https://github.com/albanD
2024-04-05 22:03:39 +00:00
cbde0f048b [dynamo, 3.12] enable tests disabled due to missing dynamo 3.12 support (#123300)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123300
Approved by: https://github.com/jansel, https://github.com/malfet, https://github.com/zou3519
2024-04-05 20:13:17 +00:00
8f20cf1c71 Update the functionalization error message (#123261)
Previously, it suggested that a user add a manual functionalization
kernel. However, since we have auto_functionalize now, the user's first
course of action should be to modify their op into the form that
auto_functionalize accepts (this is possible in the majority of custom
ops).
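
Roughly, the accepted form is an op that declares its mutations and returns nothing (a sketch; op hypothetical):

```python
import torch

# mutates x in place, returns None, and returns no aliases of its inputs:
# this is the shape of op that auto_functionalize can handle
@torch.library.custom_op("mylib::relu_", mutates_args=("x",))
def relu_(x: torch.Tensor) -> None:
    x.relu_()
```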

Test Plan:
- new test
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123261
Approved by: https://github.com/williamwen42
2024-04-04 16:20:42 +00:00
44c0c0fc0f Add torch.library.custom_op (#122344)
This is the entrypoint for defining an opaque/blackbox (i.e., PyTorch will
never peek into it) custom op. In this PR, you can specify backend impls
and the abstract impl for this op.

NB: most of this PR is docstrings; please don't be intimidated by the
line count.

There are a number of interesting features:
- we infer the schema from type hints. In a followup I add the ability
  to manually specify a schema.
- name inference. The user needs to manually specify an op name for now.
  In a followup we add the ability to automatically infer a name (this
  is a little tricky).
- custom_op registrations can override each other. This makes them
  more pleasant to work with in environments like colab.
- we require that the outputs of the custom_op do not alias any inputs
  or each other. We enforce this via a runtime check, but can relax this
  into an opcheck test if it really matters in the future.
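
A hedged end-to-end sketch (op name hypothetical; the fake registration uses the register_fake name this API later settles on):

```python
import numpy as np
import torch

# opaque to PyTorch: the body round-trips through NumPy, so PyTorch never
# peeks inside; the schema is inferred from the type hints
@torch.library.custom_op("mylib::numpy_sin", mutates_args=())
def numpy_sin(x: torch.Tensor) -> torch.Tensor:
    return torch.from_numpy(np.sin(x.numpy(force=True)))

@numpy_sin.register_fake
def _(x):
    return torch.empty_like(x)

out = numpy_sin(torch.randn(3))
```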

Test Plan:
- new tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122344
Approved by: https://github.com/ezyang, https://github.com/albanD
2024-04-03 18:36:17 +00:00
621fdc9db8 infer_schema can add alias annotations when passed a list of mutated args (#122343)
Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122343
Approved by: https://github.com/ezyang
ghstack dependencies: #122319, #122320
2024-03-21 21:39:07 +00:00
639d6201b4 Expand the types infer_schema can infer (#122320)
This PR allows it to infer:
- None return as ()
- List[Tensor] as Tensor[]
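
A sketch using those hints (in later releases the helper is exposed publicly as torch.library.infer_schema):

```python
import torch
from typing import List

def print_shapes(xs: List[torch.Tensor]) -> None:
    for x in xs:
        print(x.shape)

# List[Tensor] is inferred as "Tensor[]" and the None return as "()"
print(torch.library.infer_schema(print_shapes, mutates_args=()))
# "(Tensor[] xs) -> ()"
```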

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122320
Approved by: https://github.com/ezyang, https://github.com/soulitzer
ghstack dependencies: #122319
2024-03-21 21:39:07 +00:00
0dd78f1828 Add standalone tests for infer_schema (#122319)
We're gonna reuse this helper in the new python custom ops API. Given a
function with type annotations, `infer_schema(fun)` returns an inferred
schema.
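
A minimal sketch (again assuming the public torch.library.infer_schema spelling from later releases):

```python
import torch

def mysin(x: torch.Tensor) -> torch.Tensor: ...

print(torch.library.infer_schema(mysin, mutates_args=()))
# "(Tensor x) -> Tensor"
```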

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122319
Approved by: https://github.com/ezyang, https://github.com/soulitzer
2024-03-21 21:39:04 +00:00
8b1b61bc70 [compiled autograd] support custom ops backed by c++ autograd::Function (#120681)
- Adds support for custom ops backed by c++ custom autograd functions, e.g. fbgemm
- Include files more granularly to avoid namespace pollution and circular imports

limitations:
- Requires the user to audit their code and opt in their custom autograd::Function via autograd::Function::is_traceable, plus possibly an additional compiled_args + apply_with_saved implementation. This was the only way I could think of to guarantee soundness.
- Will throw if we can't hash the saved_data, i.e. for any type other than list and dict not implemented in at::IValue::hash b0cfa96e82/aten/src/ATen/core/ivalue.cpp (L364)
- Can technically silently fail if both the typeid hash and the typeid string name of the custom autograd::Function collide at the same time, and an identical autograd graph containing a different custom autograd::Function with an identical implementation is called. This case seems extremely unlikely, and the only alternative to hash collisions I can think of is compiling with reflection.
- Tensors not saved via save_variables are not lifted and are specialized on the TensorImpl*'s hash (treated as a memory address). If needed, we can lift them.

Differential Revision: [D54818488](https://our.internmc.facebook.com/intern/diff/D54818488)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120681
Approved by: https://github.com/jansel
2024-03-13 21:13:21 +00:00
b2f09c1859 Revert "[compiled autograd] support custom ops backed by c++ autograd::Function (#120681)"
This reverts commit d27509c384c9847cd2ac1f5d63ec143704b50591.

Reverted https://github.com/pytorch/pytorch/pull/120681 on behalf of https://github.com/xmfan due to breaking internal builds, see D54707287 ([comment](https://github.com/pytorch/pytorch/pull/120681#issuecomment-1989542344))
2024-03-11 22:18:36 +00:00
d27509c384 [compiled autograd] support custom ops backed by c++ autograd::Function (#120681)
- Adds support for custom ops backed by c++ custom autograd functions, e.g. fbgemm
- Include files more granularly to avoid namespace pollution and circular imports

limitations:
- Requires the user to audit their code and opt in their custom autograd::Function via autograd::Function::is_traceable, plus possibly an additional compiled_args + apply_with_saved implementation. This was the only way I could think of to guarantee soundness.
- Will throw if we can't hash the saved_data, i.e. for any type other than list and dict not implemented in at::IValue::hash b0cfa96e82/aten/src/ATen/core/ivalue.cpp (L364)
- Can technically silently fail if both the typeid hash and the typeid string name of the custom autograd::Function collide at the same time, and an identical autograd graph containing a different custom autograd::Function with an identical implementation is called. This case seems extremely unlikely, and the only alternative to hash collisions I can think of is compiling with reflection.
- Tensors not saved via save_variables are not lifted and are specialized on the TensorImpl*'s hash (treated as a memory address). If needed, we can lift them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120681
Approved by: https://github.com/jansel
2024-03-08 20:43:29 +00:00
2b1661c7a0 Revert "[compiled autograd] support custom ops backed by c++ autograd::Function (#120681)"
This reverts commit 05c256849b464deee16ccd70152fd54071c6c79c.

Reverted https://github.com/pytorch/pytorch/pull/120681 on behalf of https://github.com/izaitsevfb due to breaking internal builds, see D54617701 ([comment](https://github.com/pytorch/pytorch/pull/120681#issuecomment-1984214079))
2024-03-07 18:53:51 +00:00
05c256849b [compiled autograd] support custom ops backed by c++ autograd::Function (#120681)
- Adds support for custom ops backed by c++ custom autograd functions, e.g. fbgemm
- Include files more granularly to avoid namespace pollution and circular imports

limitations:
- Requires the user to audit their code and opt in their custom autograd::Function via autograd::Function::is_traceable, plus possibly an additional compiled_args + apply_with_saved implementation. This was the only way I could think of to guarantee soundness.
- Will throw if we can't hash the saved_data, i.e. for any type other than list and dict not implemented in at::IValue::hash b0cfa96e82/aten/src/ATen/core/ivalue.cpp (L364)
- Can technically silently fail if both the typeid hash and the typeid string name of the custom autograd::Function collide at the same time, and an identical autograd graph containing a different custom autograd::Function with an identical implementation is called. This case seems extremely unlikely, and the only alternative to hash collisions I can think of is compiling with reflection.
- Tensors not saved via save_variables are not lifted and are specialized on the TensorImpl*'s hash (treated as a memory address). If needed, we can lift them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120681
Approved by: https://github.com/jansel
2024-03-06 18:01:56 +00:00
b3a9d677a3 [ez] Add super() calls in test_custom_ops (#121239)
Some disable issues are getting spammed.
Check that test_impl_invalid_devices gets skipped by the disable issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121239
Approved by: https://github.com/zou3519
2024-03-05 21:16:06 +00:00
d08ce51881 [compiled autograd] refactor eager test loading and run custom ops tests (#120679)
TestCustomOp's tests use helper attributes and functions from a util parent class. To support arbitrary test classes, we need to refactor the current approach. Instead of allowlisting certain methods, we copy the whole class and only overwrite the "test_.*" methods.

Compiled autograd fails on ~10/90 of the newly added tests. test_autograd_function_backed_op is the example we discussed in the PT-2D meeting about requiring c++ autograd::Function support. I'm addressing this in #120732

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120679
Approved by: https://github.com/jansel, https://github.com/zou3519
2024-03-01 22:48:17 +00:00
244b124bb8 Add linux cpu test for 3.12 (#117853)
This is continuation of work: https://github.com/pytorch/pytorch/pull/113987

Co-authored-by: albanD <desmaison.alban@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117853
Approved by: https://github.com/albanD
2024-02-14 20:52:23 +00:00
bd9db6a9c7 Update to TorchFix 0.4.0 (#119424)
`torch.library.Library` updated to `torch.library._scoped_library` in files with many tests where it seems obvious to do so; otherwise `noqa: TOR901` was added - see https://github.com/pytorch/pytorch/pull/118318 for more context.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119424
Approved by: https://github.com/zou3519
2024-02-12 23:30:12 +00:00
0249c4a785 Add config toggle suggestions for data-dependent/dynamic output shape (#114337)
Fixes https://github.com/pytorch/pytorch/issues/114220

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114337
Approved by: https://github.com/aakhundov
2024-01-05 14:01:01 +00:00
16373bbc1f fix error message in pytorch (#115349)
Fixes https://dev-discuss.pytorch.org/t/typo-in-error-message/1709 .

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115349
Approved by: https://github.com/Skylion007
2023-12-07 19:27:29 +00:00
b694f88ef6 Grandfather in built-in TorchScript ops to being pt2_compliant (#113061)
I'm seeing ops like torch.ops.aten.mul.complex being used with
torch.compile (though this seems strange to me), but we should
grandfather these in.

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113061
Approved by: https://github.com/ezyang
ghstack dependencies: #113050
2023-11-09 02:35:33 +00:00
d98182e34e Revert "Grandfather in built-in TorchScript ops to being pt2_compliant (#113061)"
This reverts commit 493b52b3d9395bde3c0dc072885a15e71f786c78.

Reverted https://github.com/pytorch/pytorch/pull/113061 on behalf of https://github.com/PaliC due to breaking internal tests - contacted author with errors ([comment](https://github.com/pytorch/pytorch/pull/113061#issuecomment-1802528592))
2023-11-08 19:36:41 +00:00
d1c092ae1b Update impl_abstract_pystub to be less boilerplatey (#113182)
Summary:

We've made the following changes:
- The new way to use the API is `m.impl_abstract_pystub(module, context)`.
  Every subsequent m.def of an op inside the TORCH_LIBRARY block gives
  the op the `impl_abstract_pystub`.
- Added a mechanism to determine if an operator was defined in Python or C++.
  Library.define in Python appends the op to a global set, which is analogous
  to what we do for tracking Library.impl.
- If someone does `torch.library.impl_abstract` in Python for an operator, then
  we require that it has an `impl_abstract_pystub` specified and we also check
  that the module in the `impl_abstract_pystub` is the same as the module where
  the call to `torch.library.impl_abstract` exists.
- Unfortunately we can't check the "context" (which is the buck target on
  buck-based systems) because buck sits above us.

bypass-github-export-checks

Test Plan: - existing tests

Differential Revision: D51080493

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113182
Approved by: https://github.com/ezyang
2023-11-08 00:39:00 +00:00
bc3e2e03cd Revert "Update impl_abstract_pystub to be less boilerplatey (#112851)"
This reverts commit 6ae4e3a8d249a96d9a8bbfba389d0509783e11e1.

Reverted https://github.com/pytorch/pytorch/pull/112851 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/112851#issuecomment-1799539354))
2023-11-07 18:53:13 +00:00
6ae4e3a8d2 Update impl_abstract_pystub to be less boilerplatey (#112851)
Summary:
We've made the following changes:
- The new way to use the API is `m.impl_abstract_pystub(module, context)`.
  Every subsequent m.def of an op inside the TORCH_LIBRARY block gives
  the op the `impl_abstract_pystub`.
- Added a mechanism to determine if an operator was defined in Python or C++.
  Library.define in Python appends the op to a global set, which is analogous
  to what we do for tracking Library.impl.
- If someone does `torch.library.impl_abstract` in Python for an operator, then
  we require that it has an `impl_abstract_pystub` specified and we also check
  that the module in the `impl_abstract_pystub` is the same as the module where
  the call to `torch.library.impl_abstract` exists.
- Unfortunately we can't check the "context" (which is the buck target on
  buck-based systems) because buck sits above us.

Test Plan: - existing tests

Differential Revision: D50972148

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112851
Approved by: https://github.com/ezyang
2023-11-07 16:07:42 +00:00
493b52b3d9 Grandfather in built-in TorchScript ops to being pt2_compliant (#113061)
I'm seeing ops like torch.ops.aten.mul.complex being used with
torch.compile (though this seems strange to me), but we should
grandfather these in.

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113061
Approved by: https://github.com/ezyang
ghstack dependencies: #113049, #113050
2023-11-07 12:55:16 +00:00
d94d72b397 Revert "Grandfather in built-in TorchScript ops to being pt2_compliant (#113061)"
This reverts commit 1d4d5e4319a5ddacdb4e0d1ac944bbb63921fdb1.

Reverted https://github.com/pytorch/pytorch/pull/113061 on behalf of https://github.com/clee2000 due to something in the stack broke distributed and inductor, pretty sure its the c10 one.  Not sure why so many things were flaky on this PR ([comment](https://github.com/pytorch/pytorch/pull/113061#issuecomment-1797251293))
2023-11-07 02:28:14 +00:00
1d4d5e4319 Grandfather in built-in TorchScript ops to being pt2_compliant (#113061)
I'm seeing ops like torch.ops.aten.mul.complex being used with
torch.compile (though this seems strange to me), but we should
grandfather these in.

Test Plan:
- new tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113061
Approved by: https://github.com/ezyang
ghstack dependencies: #113036, #113049, #113050
2023-11-06 23:43:31 +00:00
71dca16610 Grandfather autogen'ed ops as pt2_compliant (#113036)
Summary:
I missed this when I grandfathered torchgen'ed aten ops as pt2_compliant.

Test Plan:
New test.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113036
Approved by: https://github.com/williamwen42
2023-11-06 23:43:17 +00:00
542fa4a2e7 Revert "Revert "Use OpOverload instead of OpOverloadPacket for size/s… (#113058)
Revert "Revert "Use OpOverload instead of OpOverloadPacket for size/stride/etc slots (#112119)""

This reverts commit a1d1b73a7c2cf6b9a2edb4170ec268dfd90956bd.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113058
Approved by: https://github.com/izaitsevfb
2023-11-06 19:38:49 +00:00
a1d1b73a7c Revert "Use OpOverload instead of OpOverloadPacket for size/stride/etc slots (#112119)"
This reverts commit 2337d8d0625f230f9a0469c5806e282fa4b964e9.

Reverted https://github.com/pytorch/pytorch/pull/112119 on behalf of https://github.com/PaliC due to still breaking trt tests :( refer to diff ([comment](https://github.com/pytorch/pytorch/pull/112119#issuecomment-1795496395))
2023-11-06 17:01:50 +00:00
185515368b Add generated opcheck test for if the pt2_compliant_tag is incorrectly applied (#112759)
Summary:
If there are xfails in the failures_dict and the operator has the
pt2_compliant_tag, then we raise an error. These generated tests are separate
from those in the failures dict because we don't actually need any sample
inputs to check this.

Test Plan: - New tests

Differential Revision: D50936201

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112759
Approved by: https://github.com/ezyang
2023-11-06 13:45:35 +00:00
2337d8d062 Use OpOverload instead of OpOverloadPacket for size/stride/etc slots (#112119)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112119
Approved by: https://github.com/yanboliang
2023-11-03 13:54:41 +00:00
25e17f3522 Revert "Use OpOverload instead of OpOverloadPacket for size/stride/etc slots (#112119)"
This reverts commit dd24e92949ad13960dc91fac93c3be5a43579201.

Reverted https://github.com/pytorch/pytorch/pull/112119 on behalf of https://github.com/ZainRizvi due to Breaking internal tests. See D50912326 ([comment](https://github.com/pytorch/pytorch/pull/112119#issuecomment-1791072363))
2023-11-02 16:32:25 +00:00