Commit Graph

1849 Commits

Author SHA1 Message Date
089203f8bc Updates floor_divide to perform floor division (#78411)
Fixes https://github.com/pytorch/pytorch/issues/43874

This PR changes floor_divide to perform floor division instead of truncation division.

This is a BC-breaking change, but it's a "bug fix," and we've already warned users for several releases that this behavior would change.
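
For illustration, a small sketch of the behavioral difference on a negative operand, using the `rounding_mode` argument of `torch.div` (which already exposes both behaviors):

```python
import torch

a, b = torch.tensor(-7.0), torch.tensor(2.0)

torch.div(a, b, rounding_mode="trunc")   # tensor(-3.) -- old floor_divide behavior (truncation)
torch.div(a, b, rounding_mode="floor")   # tensor(-4.) -- new floor_divide behavior
torch.floor_divide(a, b)                 # tensor(-4.) after this change
```
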
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78411
Approved by: https://github.com/ngimel
2022-05-29 21:28:45 +00:00
1a41cd8f97 Conv BN folding data type issue when conv has no bias (#78241)
PR https://github.com/pytorch/pytorch/pull/77042 fixed the new folding conv-bn data type issue, but missed the case when the original conv has no bias input.
In this PR:

- Fix the new folded conv-bn bias data type issue: when the conv has no bias but its weight has a lower-precision data type, the newly generated bias should have the same data type as the conv's weight.
- Move the Autocast JIT Trace UT from `test_jit.py` to `test_jit_autocast.py`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78241
Approved by: https://github.com/davidberard98
2022-05-26 18:42:17 +00:00
25a6aabe71 Expose permute inputs (#77391)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77391
Approved by: https://github.com/eellison
2022-05-13 22:18:51 +00:00
f6eb811786 Add RefineTypes JIT pass for Tuple (#76919)
Consider the following JIT graph, where the types of `%a` and `%b` are out of sync with the tuple `%c`.
Before:
```
graph(%a : Float(123), %b : Float(4, 5, 6)):
    %c : (Tensor, Tensor) = prim::TupleConstruct(%a, %b)
    return (%c)
```
After:
```
graph(%a : Float(123), %b : Float(4, 5, 6)):
    %c : (Float(123), Float(4, 5, 6)) = prim::TupleConstruct(%a, %b)
    return (%c)
```
This PR adds a pass `RefineTypes(...)` to update all such instances with the correct type. This is also available via Python by using `torch._C._jit_pass_refine_types(...)`.
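
A minimal usage sketch, assuming the pass takes a graph and mutates it in place (the exact Python signature isn't shown here):

```python
import torch

ir = """
graph(%a : Float(123), %b : Float(4, 5, 6)):
  %c : (Tensor, Tensor) = prim::TupleConstruct(%a, %b)
  return (%c)
"""
g = torch._C.parse_ir(ir)
torch._C._jit_pass_refine_types(g)  # assumed: mutates the graph in place
print(g)  # the tuple's element types should now be (Float(123), Float(4, 5, 6))
```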

A unit test has been added for unnamed tuples, but no test exists for `NamedTuple` (though it was tested manually) since it isn't supported by the parser:
```
RuntimeError:
unknown type specifier:

        graph(%a : Float(123), %b : Float(4, 5, 6)):
          %c : NamedTuple(Tensor : Tuple, Tensor : Tuple) = prim::TupleConstruct(%a, %b)
               ~~~~~~~~~~ <--- HERE
          return (%c)
```

cc: @ke1337 @antoniojkim @wconstab @eellison
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76919
Approved by: https://github.com/eellison
2022-05-12 00:48:39 +00:00
1467e0dd5d Revert "Deprecate torch.lu"
This reverts commit a5bbfd94fb91c078416a99b95eb7b45d3ea81b6f.

Reverted https://github.com/pytorch/pytorch/pull/73804 on behalf of https://github.com/malfet
2022-05-09 19:06:44 +00:00
bb8baea932 [primTorch] flatten, squeeze, unsqueeze... (#77043)
This PR ...

Makes the following testing changes:

- Updates stride testing in test_python_reference_consistency to only check strides of dimensions with length > 1
- Creates reference inputs for reshape
- Creates reference inputs for chunk
- Extends the sample inputs for unsqueeze
- Extends the sample inputs for stack -- test_conj_view and test_neg_view are now xfailed
  - https://github.com/pytorch/pytorch/issues/77046

Makes the following architecture changes:
- Adds the refs.special (sub)module
- Adds the refs.nn.functional (sub)module

Adds the following prims:
- expand_dims
- view_of
- rev
- clone

Adds the following references:
  -  flatten
  - squeeze
  - unsqueeze
  - special.i0e
  - special.i1e
  - logical_or
  - logical_and
  - isclose
  - flip
  - stack
  - nn.functional.elu
  - chunk
  - clone
  - narrow
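
A short usage sketch of a few of the new references (assuming the private torch._refs modules keep these names; availability varies by version):

```python
import torch
import torch._refs as refs
import torch._refs.nn.functional as refs_nn
import torch._refs.special as refs_special

a = torch.arange(6.0).reshape(2, 3)

refs.flatten(a)        # mirrors torch.flatten
refs.chunk(a, 2)       # mirrors torch.chunk
refs_nn.elu(a)         # mirrors torch.nn.functional.elu
refs_special.i0e(a)    # mirrors torch.special.i0e
```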

Identifies the following bugs in PyTorch today:
- https://github.com/pytorch/pytorch/issues/77054
- https://github.com/pytorch/pytorch/issues/77055

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77043
Approved by: https://github.com/ngimel
2022-05-09 11:24:55 +00:00
f2eed9400d Register PrimTorch refs as decompositions.
For the most part, PrimTorch refs have the same signature as their
ATen equivalents.  I modify most PrimTorch refs to register themselves
as decompositions, using the prim name they wrap to find the aten name
(except for a few cases where the prim/aten names mismatch).  There are
some exclusions, falling into one of two categories:

- The torch equivalent was already implemented as a CompositeImplicitAutograd
  decomposition in C++

- The ref doesn't support enough features (e.g., the real deal has more
  kwargs / overloads than are currently implemented)

PrimTorch refs are written as a single function that supports all
overloads, and this style is convenient for cases where we have a bundle
of overloads for what morally is a single overload with a Union type
on an argument (which we ought to have supported in
native_functions.yaml but don't); to support registering a single decomp
for all the overloads, we modify register_decomposition to register
to ALL overloads if you pass it an overload packet.  This is technically
BC breaking but no tests started failing because of it.
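
A hedged sketch of the packet behavior described above, assuming the register_decomposition helper in torch._decomp and using a private registry so the global table is untouched:

```python
import torch
from torch._decomp import register_decomposition

my_registry = {}

# Passing the OverloadPacket (torch.ops.aten.square) rather than a specific
# overload registers the single Python function for every overload in the packet.
@register_decomposition(torch.ops.aten.square, registry=my_registry)
def square_ref(x):
    return x * x

print(list(my_registry.keys()))  # e.g. both aten.square.default and aten.square.out
```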

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76835

Approved by: https://github.com/Chillee, https://github.com/mruberry
2022-05-06 20:11:45 +00:00
a5bbfd94fb Deprecate torch.lu
**BC-breaking note**:

This PR deprecates `torch.lu` in favor of `torch.linalg.lu_factor`.
An upgrade guide is added to the documentation for `torch.lu`.

Note this PR DOES NOT remove `torch.lu`.
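
A minimal migration sketch:

```python
import torch

A = torch.randn(3, 3)

LU_old, pivots_old = torch.lu(A)        # deprecated spelling (still works, now warns)
LU, pivots = torch.linalg.lu_factor(A)  # recommended replacement
```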

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73804

Approved by: https://github.com/IvanYashchuk, https://github.com/mruberry
2022-05-05 19:17:11 +00:00
aca5594818 Turn on memory efficient format for jit pickle files.
Summary:
This enables the previous change made in D35196883 (b34b192d6b).
The previous change has been landed for 2 weeks to make sure that the format change introduced here will be handled in code.

Test Plan: existing tests

Differential Revision: D36074453

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76688
Approved by: https://github.com/gmagogsfm
2022-05-03 18:42:30 +00:00
b182c22e15 [PyTorch] Exercise MHA fast path in JIT
Tests previously did not exercise this; now they do.

Differential Revision: [D35945821](https://our.internmc.facebook.com/intern/diff/D35945821/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76416

Approved by: https://github.com/ezyang
2022-05-02 16:39:45 +00:00
cb37e7a080 Remove F.pad python implementation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73433

Approved by: https://github.com/albanD, https://github.com/jbschlosser
2022-04-23 00:13:20 +00:00
a71fabab33 Revert "Dnt CSE across context managers"
This reverts commit 0981b01af623adc4a4fb0eb0e76bd8b887508996.

Reverted https://github.com/pytorch/pytorch/pull/76075 on behalf of https://github.com/seemethere
2022-04-22 20:44:57 +00:00
0981b01af6 Dnt CSE across context managers
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76075

Approved by: https://github.com/davidberard98
2022-04-22 02:23:31 +00:00
ee955b8bb9 Cannibalize noarch CI job into crossref CI job
crossref is a new strategy for performing tests when you want
to run a normal PyTorch API call, separately run some variation of
the API call (e.g., same thing but all the arguments are meta tensors)
and then cross-reference the results to see that they are consistent.
Any logic you add to CrossRefMode will get run on *every* PyTorch API
call that is called in the course of PyTorch's test suite.  This can
be a good choice for correctness testing if OpInfo testing is not
exhaustive enough.

For now, the crossref test doesn't do anything except verify that
we can validly push a mode onto the torch function mode stack for all
functions.
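
A minimal sketch of the idea, assuming the TorchFunctionMode API in torch.overrides (the real CrossRefMode lives in the test suite and does more):

```python
import torch
from torch.overrides import TorchFunctionMode

class CrossRefMode(TorchFunctionMode):
    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        result = func(*args, **kwargs)
        # Cross-referencing logic would go here, e.g. re-running func on
        # meta-tensor copies of args and comparing shapes/dtypes.
        return result

with CrossRefMode():
    torch.add(torch.ones(2), torch.ones(2))  # every torch call passes through the mode
```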

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75988

Approved by: https://github.com/seemethere
2022-04-20 11:56:25 +00:00
0c671c15ec [JIT] Remove CSE Hoisting
This has led to a couple of bugs, and I don't think the additional complexity was worth keeping in the codebase.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75756
Approved by: https://github.com/davidberard98
2022-04-19 20:59:25 +00:00
de949a0e59 Various OpInfo architecture improvements
This PR makes the following improvements:

- moves the custom skip list for test_normalize_operator_exhaustive in test_fx_experimental to use the typical OpInfo skip architecture. The skips were updated to xfails, and that identified some operators which were no longer failing the test
- redundant tests with OpInfo-based testing in test_jit.py were removed
- test_dtypes was improved so its error messages are clear and it makes test_nondifferentiable redundant; the latter test has been removed
- OpInfo.supports_complex_autograd() is removed in favor of a more accurate and general test for whether the particular dtype is in the backward dtypes of the operator
- gradchecks have been improved to verify that an operator doesn't support grad if it claims not to
- gradchecks have been improved to test the gradient of all input tensors that require gradient
- the concept of "default test dtypes" has been removed
- excessive and mostly redundant out testing for elementwise unary operators has been removed
- metadata for whether an op supports nuanced "safe casting" to out behavior has been removed from OpInfos
- numerous skips have been converted to xfails
- numerous OpInfos have had their metadata fixed based on the new checks
- jit-specific utilities in common_methods_invocations.py have been moved to jit_programming_utils.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75951
Approved by: https://github.com/ngimel
2022-04-18 21:55:32 +00:00
ad07b7c338 fix to map an undefined tensor back to a tensor list
Taken from https://github.com/pytorch/pytorch/pull/60516

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75262

Approved by: https://github.com/Krovatkin
2022-04-07 20:07:27 +00:00
b72b5b2833 Add support for nested var names in parser (#75124)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75124

These occur with freezing. cc Krovatkin

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D35373998

Pulled By: eellison

fbshipit-source-id: c043d728900f833b8d027ff75b088f9d3eb389e0
(cherry picked from commit 89dc6185d0abbe9921bae817097ed7a55b658416)
2022-04-06 18:00:53 +00:00
43b56b3814 Add Parsing of tensor constants (#75119)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75119

Add support for parsing Tensor constants like Double(4, 4) ... by initializing random tensors. This makes saving IR and then parsing it lossy, so the option is off by default, but it is useful in cases like reproducing fusions with tensor constants post-freezing.

cc Krovatkin

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D35373999

Pulled By: eellison

fbshipit-source-id: a5c8d9f93f23a7442258fc745ed6b6def330dca8
(cherry picked from commit 32dd6567522973563bd452bf486ed27b02e4e35c)
2022-04-06 18:00:53 +00:00
5177f95d21 Introducing SymInt to Pytorch (for tracing size arithmetic) (master rebase) (#74861)
Summary:
This PR introduces the `SymInt` type to PyTorch, which will be used by LTC and AOTAutograd for tracing size arithmetic, and adds tests.
`SymInt` is a C++ union structure [int64_t, SymbolicIntNode*] that wraps an int64_t field, where the value of the field can either be an index into a list of `shared_ptr<SymbolicIntNode>` or a real int.
This PR doesn't add any support for actually tracing symbolic ints, i.e. `data_` for now can only contain real ints.

```
Goal 1: just to show we can add a type to PyTorch core. (wraps int) LANDEABLE
Finalize the naming - symint
Want the name to be short
Does invoke “size” - NO
SInt/SymInt/SymbolicInt
SInt could mean signed int
sym_int or symint or SymInt (originally it was “int”; capitalized implies object semantics, whereas lowercase implies value semantics)
JIT schema - symint
C++ - symint
```

See more details here: https://docs.google.com/document/d/1iiLNwR5ohAsw_ymfnOpDsyF6L9RTUaHMpD8d843f63f2aYLw-jxEw

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74861

Reviewed By: qihqi, ngimel

Differential Revision: D35226230

Pulled By: Krovatkin

fbshipit-source-id: 34acf342bd50fcaa4d8d5dd49c2fd6a98823a5b3
(cherry picked from commit 218643f63ef181cabb92d13a6e837eb64f2dda3c)
2022-03-31 21:59:59 +00:00
fa1a41ca71 Revert "Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353)"
This reverts commit 5547741960a01fbd3a97d1ddd5ae9b43d8f1169c.

Reverted https://github.com/pytorch/pytorch/pull/74889 on behalf of https://github.com/malfet
2022-03-31 04:17:33 -07:00
5547741960 Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74353

Repatched `d00de0d43598522b8f6ab2de553b6aaf6768faa5` by Nora Belrose (norabelrose), with the following changes:
* Register a fake source for the generated methods in linecache so that inspect.get_source will succeed.
* This patching is only triggered if the given dataclass was previously passed to torch.jit.script. Effectively, we make this feature opt-in.

## Original Summary:
Fixes #72901.

Since we can't get access to the source code for synthesized magic methods on dataclasses, we have to synthesize our own versions. torch/jit/_dataclass_impls.py has the code that does this.

What's supported

- Synthesized __init__, __eq__, and the comparison magic methods when order=True is set on the dataclass decorator
- Default values for fields
- __post_init__, including using InitVar fields inside of __post_init__, on Python 3.8+
- Overriding __eq__ or any of the comparison magic methods to provide your own implementation

What's not supported

- Default factory initializers for fields
- Frozen dataclasses
- InitVar on Python 3.7
- __repr__ and __hash__ (these are actually implemented, but the TorchScript interpreter won't call them)
- Using the != operator on dataclasses inside TorchScript; this is because TorchScript requires that you implement __ne__ to use this operator, whereas in regular Python the != operator will resolve to the negation of whatever is returned by __eq__ if there's no __ne__. Dataclasses don't actually synthesize an __ne__ method for this reason. I've been toying with different ways to fix this, but != is not working in this PR at the moment.
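
A hedged usage sketch of the relanded behavior (the dataclass is scripted first to opt in, per the note above):

```python
import torch
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

torch.jit.script(Point)  # opt-in: the dataclass must be passed to torch.jit.script first

@torch.jit.script
def shift(p: Point, dx: float) -> Point:
    return Point(p.x + dx, p.y)  # uses the synthesized __init__

q = shift(Point(1.0, 2.0), 0.5)
print(q.x, q.y)  # 1.5 2.0
```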

Test Plan:
unittest

Also run previously failed test:
```
buck test mode/dev-nosan //fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests -- --exact 'fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests - test_mixmatch_multiclass (fblearner.flow.projects.fluent2.definition.transformers.contrib.faim.test.faim_mixmatch_test.TestFaimTransformerMixMatch)'
```
passes

Differential Revision: D35206262

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74889
Approved by: https://github.com/zhxchen17
2022-03-31 00:20:48 +00:00
aacdf291e0 [JIT] Make aot autograd decompositions usable in JIT, add script for serializing the decompositions (#73938)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73938

This is a first step in porting all of the decompositions defined in [functorch](https://github.com/pytorch/functorch/blob/main/functorch/_src/decompositions.py#L349) and making them usable in core and in JIT, as well as in C++.

The decompositions are defined in Python, scripted and inlined, and then serialized as C++ code which TorchScript can parse. The workflow is: edit the Python decomposition file, then run [tools/codegen/decompositions/gen_jit_decompositions.py](https://github.com/pytorch/pytorch/pull/73938/files#diff-6adef2116be233c3524e3b583e373ab0ffc9169beb6c1f6d96b5d0385e75afa1).

Decompositions are mapped to their corresponding aten schemas via the schema in their python def. This allows multiple decompositions for an overloaded op like `aten.var` (shown here in the example).

This is just a first PR; I'm sure there will be many follow-ups, such as:
- making these runnable in C++ with simple executor
- porting over more decompositions from AOT Autograd
- Using opinfos / more robust testing
- Categorizing decompositions
- Hooking in decompositions at various points of JIT execution

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D34938126

Pulled By: eellison

fbshipit-source-id: 9559a7cb731982e3a726f2f95af498b84fb09c13
(cherry picked from commit a4e0e748791e378e7e12a9dd0b63fb3c62dc1890)
2022-03-29 18:38:52 +00:00
6694fdaccd Clean up profiling mode and profiling executor strategy (#73875)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73875

Previously we had a few settings:
- getExecutor - which toggled between Profiling Executor and Legacy
- getGraphOptimize - if true, overrides PE/Legacy to run with simple executor (no optimizations)
and then...
- getProfilingMode - which would set PE to 0 specializations.

The last mode is redundant with getGraphOptimize; we should just remove it and use getGraphOptimize in these cases. It would lead to potentially invalid combinations of logic - what does it mean if getProfilingMode is true but getExecutor is set to false? This would lead to a bug in specialize_autograd_zero in this case, see: https://github.com/pytorch/pytorch/blob/master/torch%2Fcsrc%2Fjit%2Fpasses%2Fspecialize_autogradzero.cpp#L93.
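
For reference, a hedged sketch of the Python-side switches involved (private bindings whose names are assumed from existing usage; availability varies by version):

```python
import torch

torch._C._jit_set_profiling_executor(True)    # profiling executor vs. legacy executor
torch._C._set_graph_executor_optimize(False)  # run the simple executor (no optimizations)
```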

The tests here are failing but get fixed with the PR above it, so I'll squash for landing.

Test Plan: Imported from OSS

Reviewed By: cpuhrsch

Differential Revision: D34938130

Pulled By: eellison

fbshipit-source-id: 1a9c0ae7f6d1cfddc2ed3499a5af611053ae5e1b
(cherry picked from commit cf69ce3d155ba7d334022c42fb2cee54bb088c23)
2022-03-29 18:38:51 +00:00
8e12d2bf25 fixes torch.jit.script lp_pool bug. (#73287)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/60258

I used the solution proposed in https://github.com/pytorch/pytorch/issues/61275.  His solution failed unit tests and there was no progress after 08/07/2021. I'm willing to fix problems if they arise during CI.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73287

Reviewed By: navahgar, zou3519

Differential Revision: D35057812

Pulled By: eellison

fbshipit-source-id: 8e82e9f73b9536979aecf476c5c65336cdffc93a
(cherry picked from commit e85e912a4edec1111623c5cbbba4171fe3bc5b1d)
2022-03-28 23:16:07 +00:00
3b3bdfd51c Revert D34808842: Reland "[pytorch][PR] Support dataclasses in TorchScript"
Test Plan: revert-hammer

Differential Revision:
D34808842 (b57cc9c752)

Original commit changeset: 02f807cff1ea

Original Phabricator Diff: D34808842 (b57cc9c752)

fbshipit-source-id: bd7c47493b598677e77634d06d7dc3e3a457b92d
(cherry picked from commit e1853d73b3ad2494457626fbb34c65169ae8cc31)
2022-03-25 17:17:30 +00:00
b57cc9c752 Reland "[pytorch][PR] Support dataclasses in TorchScript" (#74353)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74353

Repatched `d00de0d43598522b8f6ab2de553b6aaf6768faa5` by Nora Belrose (norabelrose), with the following changes:
* Register a fake source for the generated methods in linecache so that inspect.get_source will succeed.
* This patching is only triggered if the given dataclass was previously passed to torch.jit.script. Effectively, we make this feature opt-in.

## Original Summary:
Fixes #72901.

Since we can't get access to the source code for synthesized magic methods on dataclasses, we have to synthesize our own versions. torch/jit/_dataclass_impls.py has the code that does this.

What's supported

- Synthesized __init__, __eq__, and the comparison magic methods when order=True is set on the dataclass decorator
- Default values for fields
- __post_init__, including using InitVar fields inside of __post_init__, on Python 3.8+
- Overriding __eq__ or any of the comparison magic methods to provide your own implementation

What's not supported

- Default factory initializers for fields
- Frozen dataclasses
- InitVar on Python 3.7
- __repr__ and __hash__ (these are actually implemented, but the TorchScript interpreter won't call them)
- Using the != operator on dataclasses inside TorchScript; this is because TorchScript requires that you implement __ne__ to use this operator, whereas in regular Python the != operator will resolve to the negation of whatever is returned by __eq__ if there's no __ne__. Dataclasses don't actually synthesize an __ne__ method for this reason. I've been toying with different ways to fix this, but != is not working in this PR at the moment.

Test Plan:
unittest

Also run previously failed test:
```
buck test mode/dev-nosan //fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests -- --exact 'fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests - test_mixmatch_multiclass (fblearner.flow.projects.fluent2.definition.transformers.contrib.faim.test.faim_mixmatch_test.TestFaimTransformerMixMatch)'
```
passes

Reviewed By: zhxchen17

Differential Revision: D34808842

fbshipit-source-id: 02f807cff1ea99e606333960225c71a239743a4b
(cherry picked from commit ec885a2bc04f9e5f65838fa5704d9a05815ebd37)
2022-03-25 06:41:07 +00:00
75d6cbe605 [4/5]Testing jit module in flatbuffer in Python. (#74387)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74387

Make temporary python bindings for flatbuffer to test ScriptModule save / load.

(Note: this ignores all push blocking failures!)

Test Plan: unittest

Reviewed By: iseeyuan

Differential Revision: D34968080

fbshipit-source-id: d23b16abda6e4b7ecf6b1198ed6e00908a3db903
(cherry picked from commit 5cbbc390c5f54146a1c469106ab4a6286c754325)
2022-03-24 23:29:47 +00:00
15c98700ed Add CPU slow test job (#73748)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73748

This adds CPU-only slow test jobs, which previously would never run.

Includes fixes/skips for slow tests which fail (they need to be skipped now because they used to never run)

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D34628803

Pulled By: davidberard98

fbshipit-source-id: c090ab7bf7bda9e24ec5cdefa6fd35c6310dbac0
(cherry picked from commit 06f7a94a57cc7023e9c5442be8298d20cd011144)
2022-03-23 21:17:27 +00:00
fde282fc23 supporting complex with requires_grad in autodiff (#74339)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/65480

autodiff should propagate requires_grad for complex tensors as well as float tensors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74339

Reviewed By: anjali411

Differential Revision: D34967622

Pulled By: eellison

fbshipit-source-id: 89d23469294c0191f3a5d1c8e1df3d34acc94056
(cherry picked from commit 712f8bdf03b072ab6f4ab90a64ccaad11d64c862)
2022-03-21 21:32:24 +00:00
fdd12a9f4c Support tensor.__getitem__() in TorchScript compilation (#73952)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73952

Reviewed By: tugsbayasgalan

Differential Revision: D34743346

Pulled By: gmagogsfm

fbshipit-source-id: 2273c289c2224166cb1eed10a138d4ac7043ed83
(cherry picked from commit 37aefb9a95e0df4586bb623a1aaa974fbe799687)
2022-03-11 01:45:18 +00:00
63932edcc7 Back out "[pytorch][PR] Support dataclasses in TorchScript"
Summary:
Original commit changeset: f5a792555c88

Original Phabricator Diff: D34398107 (d00de0d435)

Backing out as this broke fluent2 tests

Test Plan: sandcastle

Reviewed By: qihqi

Differential Revision: D34597363

fbshipit-source-id: 26bbe64b981aeb53b901cda61557614d9f28700e
(cherry picked from commit f17adfed8125ef84efaf2c8923c11a751eb7fb98)
2022-03-03 14:30:54 +00:00
dc81ba1f9f parse TernaryIf as right associative, fix #68221 (#68416)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/68221

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68416

Reviewed By: gchanan

Differential Revision: D32819402

Pulled By: eellison

fbshipit-source-id: c32d9fcf49e24cc0df877b794dfcb8df7c7a6d78
(cherry picked from commit 8a5a1000859bb4bdbf84730b4b137a3ec171151f)
2022-03-01 23:28:14 +00:00
d00de0d435 Support dataclasses in TorchScript (#73066)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/72901.

Since we can't get access to the source code for synthesized magic methods on dataclasses, we have to synthesize our own versions. `torch/jit/_dataclass_impls.py` has the code that does this.

What's supported
- Synthesized `__init__`, `__eq__`, and the comparison magic methods when `order=True` is set on the dataclass decorator
- Default values for fields
- `__post_init__`, including using `InitVar` fields inside of `__post_init__`, on Python 3.8+
- Overriding `__eq__` or any of the comparison magic methods to provide your own implementation

What's not supported
- Default factory initializers for fields
- Frozen dataclasses
- `InitVar` on Python 3.7
- `__repr__` and `__hash__` (these are actually implemented, but the TorchScript interpreter won't call them)
- Using the `!=` operator on dataclasses inside TorchScript; this is because TorchScript requires that you implement `__ne__` to use this operator, whereas in regular Python the `!=` operator will resolve to the negation of whatever is returned by `__eq__` if there's no `__ne__`. Dataclasses don't actually synthesize an `__ne__` method for this reason. I've been toying with different ways to fix this but `!=` is not working in this PR at the moment.

qihqi

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73066

Reviewed By: mrshenli

Differential Revision: D34398107

Pulled By: qihqi

fbshipit-source-id: f5a792555c88f3631f97837a96687e4890660a32
(cherry picked from commit ea7f077dc49a4ee75ca0d1409aedd85228952881)
2022-02-28 19:34:20 +00:00
0973c5a1cc align signature of make_tensor with other creation ops (#72702)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72702

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D34457729

Pulled By: mruberry

fbshipit-source-id: 83d580c4201eef946dc9cf4b9e28a3d36be55609
(cherry picked from commit aa4cf20fbeb4b795595729b8ac2e6ba7707d8283)
2022-02-25 06:30:31 +00:00
8bc28e9c9c [JIT] Add more python ir utilities (#69871)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69871

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D33515232

Pulled By: eellison

fbshipit-source-id: d48da7b398a3f1a8862789484a4035d874196763
(cherry picked from commit e5976b8b7a4995be25a93601bbae5c52d6d3fca8)
2022-02-25 01:07:05 +00:00
e59403fe2a Make TS recognize input arg name (#73253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73253

This PR allows TorchScript schema matching to match the `input` arg with `self` for aten operators. This is because operators in their functional form have `input` as a parameter instead of `self`.

fixes: https://github.com/pytorch/pytorch/issues/71994
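
A hedged sketch of the kind of call this enables inside TorchScript (the documented functional keyword is `input`, while the aten schema names the argument `self`):

```python
import torch

@torch.jit.script
def total(x: torch.Tensor) -> torch.Tensor:
    return torch.sum(input=x)  # previously failed schema matching in TorchScript

print(total(torch.ones(3)))  # tensor(3.)
```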

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D34427556

Pulled By: tugsbayasgalan

fbshipit-source-id: 96c2340d605c59634bf6e37db1db6025d93a933a
(cherry picked from commit 45a593d73bc5e6308dd80a4a29afed8e318a0a1c)
2022-02-24 20:38:15 +00:00
3bd1507ff2 Revert D33994011: Make debug_pkl smaller by only emitting unique traces.
Test Plan: revert-hammer

Differential Revision:
D33994011 (3d37f5b052)

Original commit changeset: 8e6224c6e942

Original Phabricator Diff: D33994011 (3d37f5b052)

fbshipit-source-id: 885e739efa1081382e1fcf9c6cccba92c57e9f7a
(cherry picked from commit a6d98c85a736c2eb321a6f38005dd0f5dc43eb87)
2022-02-24 16:38:55 +00:00
3d37f5b052 Make debug_pkl smaller by only emitting unique traces. (#72596)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72596

The debug_pkl file inside of PyTorch's .pt file consists of a list of SourceRanges. Each SourceRange points to a Source, which is a stack trace, filename, and start/end numbers. Those are emitted in the debug_pkl file as strings.

Since many SourceRanges share the same Source, the trace strings can be deduped.

The newer format saves the set of unique traces in a tuple, and each SourceRange then saves the offset of its trace w.r.t. its position in that tuple (i.e. manually applying dictionary compression).

The above helps with smaller file size. On loading, if we copied each trace into a Source as a string, the runtime memory would still blow up.
To mitigate this, we use SourceView directly instead of Source, which takes a reference to the string inside the Deserializer and turns it into a string_view. This is safe because the Deserializer is held by the Unpickler via shared_ptr, and the Unpickler is also held via shared_ptr by another Source object. That Source object will be alive during model construction.
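
A toy illustration of the dedup scheme (not the actual serializer code):

```python
# Store each unique trace once; every SourceRange records the offset of its
# trace within that tuple (manual dictionary compression).
traces = ["trace_a", "trace_a", "trace_b", "trace_a"]

unique_traces = tuple(dict.fromkeys(traces))        # ("trace_a", "trace_b")
offsets = [unique_traces.index(t) for t in traces]  # [0, 0, 1, 0]
```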

Test Plan:
unit test

Took the original file (312271638_930.predictor.disagg.local); loaded it with `torch.jit.load`, then saved it again with `torch.jit.save`. Unzipped both and compared the contents:
```
[qihan@devvm5585.vll0 ~]$ du archive -h
4.0K    archive/xl_model_weights
3.7M    archive/extra
8.0K    archive/code/__torch__/caffe2/torch/fb/model_transform/splitting
8.0K    archive/code/__torch__/caffe2/torch/fb/model_transform
8.0K    archive/code/__torch__/caffe2/torch/fb
8.0K    archive/code/__torch__/caffe2/torch
8.0K    archive/code/__torch__/caffe2
20M     archive/code/__torch__/torch/fx/graph_module
20M     archive/code/__torch__/torch/fx
8.0K    archive/code/__torch__/torch/classes
20M     archive/code/__torch__/torch
20M     archive/code/__torch__
20M     archive/code
2.7M    archive/constants
35M     archive
[qihan@devvm5585.vll0 ~]$ du resaved -h
4.0K    resaved/extra
8.0K    resaved/code/__torch__/caffe2/torch/fb/model_transform/splitting
8.0K    resaved/code/__torch__/caffe2/torch/fb/model_transform
8.0K    resaved/code/__torch__/caffe2/torch/fb
8.0K    resaved/code/__torch__/caffe2/torch
8.0K    resaved/code/__torch__/caffe2
1.3M    resaved/code/__torch__/torch/fx/graph_module
1.3M    resaved/code/__torch__/torch/fx
8.0K    resaved/code/__torch__/torch/classes
1.4M    resaved/code/__torch__/torch
1.4M    resaved/code/__torch__
1.4M    resaved/code
2.7M    resaved/constants
13M     resaved
[qihan@devvm5585.vll0 ~]$
```

Reviewed By: JasonHanwen

Differential Revision: D33994011

fbshipit-source-id: 8e6224c6e942e91c3403f686c8f0937d1002ed41
(cherry picked from commit a7014dd4029308c95007f362a57c31796d686647)
2022-02-24 09:31:16 +00:00
763ad1bf25 (2/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: frontend change (#72899)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72899

Reland D33282878 (911d527b87). This is the frontend change.
ghstack-source-id: 149204031

Test Plan: Refer to D33282878 (911d527b87). Also check CI

Reviewed By: gmagogsfm

Differential Revision: D34252127

fbshipit-source-id: 27b17ddd4d05d904eb91fd9ee094d9121f00e388
(cherry picked from commit 1d276baca308110ac40111ccd622400b3bbdc864)
2022-02-16 03:45:15 +00:00
7db4a48d92 Revert D33342569: (2/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: frontend change
Test Plan: revert-hammer

Differential Revision:
D33342569 (856157fcee)

Original commit changeset: 57984ac67ae2

Original Phabricator Diff: D33342569 (856157fcee)

fbshipit-source-id: 4c12235a1776a3652e7f91e93b626705759d5176
(cherry picked from commit 4cbd7d8bab76fcf050e376c8528dba36541a779f)
2022-02-15 18:45:44 +00:00
856157fcee (2/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: frontend change (#70471)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70471

Reland D33282878 (911d527b87). This is the frontend change.
ghstack-source-id: 149114933

Test Plan: Refer to D33282878 (911d527b87). Also check CI

Reviewed By: gmagogsfm

Differential Revision: D33342569

fbshipit-source-id: 57984ac67ae2c56c38f72d3b1fb69105901fb472
(cherry picked from commit b47cc935ee1fd7aa63aa453a323a637bc2c22f3c)
2022-02-15 07:21:19 +00:00
dc5cda0cca Update min python version to 3.7 in setup.py and mypy configs (#71494)
Summary:
As Python-3.6 have reached EOL

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71494

Reviewed By: atalman

Differential Revision: D33667509

Pulled By: malfet

fbshipit-source-id: ab1f03085cfb9161df77ba5ce373b81f5e7ef3ae
(cherry picked from commit 60343166d97b1eb1649b29a78ad390d39926b642)
2022-01-20 00:03:57 +00:00
dabcbb2726 Testing for Default Inference for Device Type (#69052)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69052

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D33555888

Pulled By: Gamrix

fbshipit-source-id: dbd43ebfc1bea4b17a96bdd378ea730ccf5944b2
2022-01-13 13:59:12 -08:00
97e8dcba5e Fix mis-specified device arg name (#69645)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69645

As noted in the code comment:
the existing device operator is registered with input name `a`, which prevents torch.device(type="cuda") from working. A shim layer is added here.

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D33515231

Pulled By: eellison

fbshipit-source-id: c04af8158a9568a20cd5fbbbd573f6efab98fd60
2022-01-11 22:11:24 -08:00
80659b71a5 Hoisting common expressions out of If blocks [retry] (#65645)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65645

This is a retry of PR: https://github.com/pytorch/pytorch/pull/59492

Latest Changes: Added more tests, added the getOrCreateDB pattern, updated logic to remove unnecessary checks,
and addressed all comments.

Adding code to find common expressions from the two subblocks of an if
operation and hoist them before the if block.
This also allows Dead Code Elimination to
then eliminate some if blocks.
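
A small sketch of code that benefits (`x + x` is computed in both branches, so it can be hoisted above the `if`, after which DCE may remove the conditional):

```python
import torch

@torch.jit.script
def f(x: torch.Tensor, flag: bool) -> torch.Tensor:
    if flag:
        y = x + x
    else:
        y = x + x
    return y

# f.graph shows the pre-optimization IR; the hoisting pass runs later,
# inside the executor's optimization pipeline.
print(f.graph)
```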

Test Plan: python test_jit.py TestIfHoisting

Reviewed By: eellison

Differential Revision: D33302065

Pulled By: Gamrix

fbshipit-source-id: a5a184a480cf07354359aaca344c6e27b687a3c2
2022-01-10 13:28:17 -08:00
70b18b9511 Fix comment indentation issue (#70227)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/70227

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D33251107

Pulled By: tugsbayasgalan

fbshipit-source-id: 293ffe5dde38480ea13963a2d7e1eb99dc594d22
2022-01-06 19:14:39 -08:00
7b8f73dd32 No-batch-dim support for ConvNd (#70506)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/70506
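
A brief usage sketch of what this enables (unbatched input for conv modules):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)
out = conv(torch.randn(3, 16, 16))  # (C, H, W) input, no leading batch dim
print(out.shape)                    # torch.Size([8, 14, 14])
```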

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D33355034

Pulled By: jbschlosser

fbshipit-source-id: 5a42645299b1d82cee7d461826acca1c5b35a71c
2022-01-06 16:53:50 -08:00
b0fdca8855 Bump version number to 7 and compile old operators with old schema (#68358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68358

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D33433730

Pulled By: tugsbayasgalan

fbshipit-source-id: 202c58365bae13195d3545cefcb0da9162b02151
2022-01-05 23:57:22 -08:00
0ece9a49d7 Revert D33198155: Bump version number to 7 and compile old operators with old schema
Test Plan: revert-hammer

Differential Revision:
D33198155 (d35fc409ad)

Original commit changeset: 38a1185f9ecb

Original Phabricator Diff: D33198155 (d35fc409ad)

fbshipit-source-id: 411aaeb4e047aad9202db50d4d0f2ff35bc51f9d
2022-01-04 13:44:59 -08:00