Commit Graph

146 Commits

Author SHA1 Message Date
1e2d82b8e4 [BE] Merge isinstance calls together (#94419)
Simplifies and speeds up isinstance calls by checking for multiple types at the same time.
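
As a minimal illustration of the change (hypothetical snippet, not taken from the diff):

```python
def is_number(x):
    # Before: two separate checks
    #   return isinstance(x, int) or isinstance(x, float)
    # After: a single isinstance call with a tuple of types
    return isinstance(x, (int, float))
```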

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94419
Approved by: https://github.com/ezyang
2023-02-09 00:47:26 +00:00
8fce9a09cd [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
Apply parts of pyupgrade to torch (starting with the safest changes).
This PR does only two things: removes explicit inheritance from `object` and removes unused `__future__` imports.
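
A hedged before/after sketch of the two changes (illustrative class and import, not from the diff):

```python
# Before
from __future__ import print_function  # unused on Python 3, removed

class Module(object):  # explicit object base class, removed
    pass

# After
class Module:
    pass
```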

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94308
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-07 21:10:56 +00:00
6a2838eec5 [jit] jit._drop fun modifier to allow in jit class non-jit decl funs (#93012)
`@torch.jit.unused` and `@torch.jit.ignore` do not allow keeping a member function with a non-scriptable declaration (e.g. return type) in a torch scripted class.

This adds the FunctionModifier `_DROP` to fully skip such functions during scripting while keeping them in the code of the scripted class.

E.g. it can be used for:

```
@torch.jit._drop
def __fx_create_arg__(self, tracer: torch.fx.Tracer) -> torch.fx.node.Argument:
    # torch.fx classes are not scriptable
    return tracer.create_node(
        "call_function",
        CFX,
        args=(tracer.create_arg(self.features),),
        kwargs={},
    )

def __iter__(self) -> Iterator[torch.Tensor]:
    return iter(self.a)
```

Testing:
Added a test case in `test/jit/test_types.py` with non-scriptable type annotations (fx.* classes) that fails before the fix and passes after it.

```
python test/test_jit.py
```

Differential Revision: [D42774830](https://our.internmc.facebook.com/intern/diff/D42774830)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93012
Approved by: https://github.com/davidberard98
2023-02-01 09:02:05 +00:00
2fc73622f8 [jit] Support Awaitable type (#90863)
We want to make TorchRec sharded models TorchScriptable.

TorchRec sharded models use the generic types Awaitable[W] and LazyAwaitable[W] (https://github.com/pytorch/torchrec/blob/main/torchrec/distributed/types.py#L212).
In a sharded model those types are used in place of the contained type W, holding an initialization function that produces an object of type W.

When the first attribute of W is requested, `LazyAwaitable[W]` calls its initialization function (on the same stack), caches the result inside, and from then on works transparently as an object of W. So we can think of it as delayed object initialization.

To support this behavior in TorchScript, we propose a new TorchScript type: `Await`.
In eager mode it works the same as `LazyAwaitable[W]` in TorchRec, being dynamically typed: it acts as type `W` while it is `Await[W]`.

Within TorchScript it is `Await[W]` and can only be explicitly converted to W, using the special function `torch.jit._awaitable_wait(aw)`.
Creation of an `Await[W]` is done via another special function, `torch.jit._awaitable(func, *args)`.

The semantics are close to `torch.jit.Future`, fork, and wait, and use the same jit mechanics (inline fork Closures), with the difference that the function is not started in parallel on fork. It is only stored as a lambda inside an IValue and is called on the same thread when `torch.jit._awaitable_wait` is called.

For example (more examples are in this PR's `test/jit/test_await.py`):
```
      def delayed(z: int) -> int:
          return z * 3

      @torch.jit.script
      def fn(x: Tensor):
          aw: Await[int] = torch.jit._awaitable(delayed, 99)
          a = torch.eye(2)
          b = torch.jit._awaitable_wait(aw)
          return a + b + x
```

Function semantics:

`_awaitable(func: Callable[..., W], *args, **kwargs) -> Await[W]`

Creates an Await object that owns args and kwargs. Once `_awaitable_wait` is called, it executes func and owns the result. Subsequent `_awaitable_wait` calls return that cached result from the first call.

`_awaitable_wait(Await[W]) -> W`

Returns the cached result if this is not the first `_awaitable_wait` call on this Await object, otherwise calls the stored function and returns its result.

`_awaitable_nowait(W) -> Await[W]`

Creates a trivial Await[W] wrapper around the specified object, to be type compliant for the corner cases.

Differential Revision: [D42502706](https://our.internmc.facebook.com/intern/diff/D42502706)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90863
Approved by: https://github.com/davidberard98
2023-01-30 17:38:59 +00:00
5976f0bdfe Set min supported Python version to 3.8 (#93155)
Also, grep for `if sys.version_info .cond. (3, 8)` and replace them with the appropriate action.
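
For illustration, one shape such a guard might take and its replacement (hypothetical example, not from the diff):

```python
import sys

# Before: branch on the now-impossible Python < 3.8 case
if sys.version_info >= (3, 8):
    from typing import Final
else:
    from typing_extensions import Final

# After: Python 3.8 is the minimum, so the guard is dropped
from typing import Final
```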

This is the last in a series of PRs that moved CI/CD away from testing PyTorch behavior against Python-3.7.

Fixes https://github.com/pytorch/pytorch/issues/80513

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93155
Approved by: https://github.com/huydhn
2023-01-29 18:28:46 +00:00
4f1f14e38b [JIT] Skip builtins while enumerating class methods (#91805)
This is needed to support `enum.Enum`-derived classes in Python-3.11,
which adds `_new_member_` to the classdict, see:
15c44789bb/Lib/enum.py (L529)

The following snippet illustrates the problem with the previous iteration of
the code on 3.11:
```python
from enum import Enum
import inspect

class Color(Enum):
    RED = 1
    GREEN = 2

def print_routines(cls):
    print(cls.__name__)
    for name in cls.__dict__:
        fn = getattr(cls, name)
        if inspect.isroutine(fn):
            print(name, fn, f"has_globals: {hasattr(fn, '__globals__')}")

print_routines(Color)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91805
Approved by: https://github.com/albanD, https://github.com/suo
2023-01-06 21:45:09 +00:00
12cb26509a Apply ufmt to torch internal (#81643)
This is a big bang PR, merge conflicts are probably expected and will be addressed at merge.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81643
Approved by: https://github.com/ezyang
2022-07-22 02:19:50 +00:00
75fdebde62 [JIT] Fix annotation extraction for named tuple (#81506)
Followup similar to https://github.com/pytorch/pytorch/pull/81334

In Python-3.10, type annotations from the base class are not inherited by the
child class, which leads to the following error:
```
% python -c "import torch;torch.jit.script(torch.nn.utils.rnn.pack_padded_sequence)"
...
PackedSequence(Tensor data, Tensor batch_sizes, Tensor sorted_indices, Tensor unsorted_indices) -> ():
Expected a value of type 'Tensor (inferred)' for argument 'sorted_indices' but instead found type 'Optional[Tensor]'.
Inferred 'sorted_indices' to be of type 'Tensor' because it was not annotated with an explicit type.
:
  File "/Users/nshulga/git/pytorch-worktree/torch/nn/utils/rnn.py", line 197
    data, batch_sizes, sorted_indices, unsorted_indices = _packed_sequence_init_args(
        data, batch_sizes, sorted_indices, unsorted_indices)
    return PackedSequence(data, batch_sizes, sorted_indices, unsorted_indices)
           ~~~~~~~~~~~~~~ <--- HERE
```

This stems from the fact that `torch.nn.utils.rnn.PackedSequence.__annotations__` returns an empty dict for python-3.10, as seen below:
```
% conda run -n py_39 python3 -c "import sys;import torch;print(torch.nn.utils.rnn.PackedSequence.__annotations__, sys.version_info[:2])"
{'data': <class 'torch.Tensor'>, 'batch_sizes': <class 'torch.Tensor'>, 'sorted_indices': typing.Optional[torch.Tensor], 'unsorted_indices': typing.Optional[torch.Tensor]} (3, 9)

 % conda run -n py_310 python3 -c "import sys;import torch;print(torch.nn.utils.rnn.PackedSequence.__annotations__, sys.version_info[:2])"
{} (3, 10)
```

Fix by checking the annotations of the parent class when the class itself does not have any.
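
A rough sketch of the fallback idea (names invented for illustration; the actual patch lives in the JIT annotation-extraction code):

```python
def get_namedtuple_annotations(cls):
    # On Python 3.10 the derived class's __annotations__ can be empty,
    # so fall back to the parent class's annotations.
    annotations = getattr(cls, "__annotations__", {})
    if not annotations:
        for base in cls.__bases__:
            annotations = getattr(base, "__annotations__", {})
            if annotations:
                break
    return annotations
```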

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81506
Approved by: https://github.com/suo
2022-07-15 02:47:15 +00:00
13dff3b2c2 Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353) (#74353) (#76771)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74353

Repatched `d00de0d43598522b8f6ab2de553b6aaf6768faa5` by Nora Belrose (norabelrose), with the following changes:
* Register a fake source for the generated methods in linecache so that inspect.getsource will succeed.
* This patching is only triggered if the given dataclass is passed to torch.jit.script, as previously. Effectively we make this feature opt-in.

## Original Summary:
Fixes https://github.com/pytorch/pytorch/issues/72901.

Since we can't get access to the source code for synthesized magic methods on dataclasses, we have to synthesize our own versions. torch/jit/_dataclass_impls.py has the code that does this.

What's supported
- Synthesized `__init__`, `__eq__`, and the comparison magic methods when `order=True` is set on the dataclass decorator
- Default values for fields
- `__post_init__`, including using `InitVar` fields inside of `__post_init__`, on Python 3.8+
- Overriding `__eq__` or any of the comparison magic methods to provide your own implementation

What's not supported
- Default factory initializers for fields
- Frozen dataclasses
- `InitVar` on Python 3.7
- `__repr__` and `__hash__` (these are actually implemented, but the TorchScript interpreter won't call them)
- Using the `!=` operator on dataclasses inside TorchScript; this is because TorchScript requires that you implement `__ne__` to use this operator, whereas in regular Python the `!=` operator will resolve to the negation of whatever is returned by `__eq__` if there's no `__ne__`. Dataclasses don't actually synthesize an `__ne__` method for this reason. I've been toying with different ways to fix this but `!=` is not working in this PR at the moment.
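
A minimal sketch of the supported path described above (names invented for illustration; assumes the opt-in of scripting the dataclass itself first, per the note at the top):

```python
from dataclasses import dataclass
import torch

@dataclass
class Box:
    value: torch.Tensor
    scale: float = 1.0  # default values for fields are supported

    def scaled(self) -> torch.Tensor:
        return self.value * self.scale

torch.jit.script(Box)  # opt in by scripting the dataclass directly

@torch.jit.script
def use_box(x: torch.Tensor) -> torch.Tensor:
    b = Box(x, 2.0)  # uses the synthesized __init__
    return b.scaled()
```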

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74889

Test Plan:
unittest

Also run previously failed test:
```
buck test mode/dev-nosan //fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests -- --exact 'fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests - test_mixmatch_multiclass (fblearner.flow.projects.fluent2.definition.transformers.contrib.faim.test.faim_mixmatch_test.TestFaimTransformerMixMatch)'
```
passes

Reviewed By: zhxchen17

Differential Revision: D35206262

Pulled By: qihqi

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76771
Approved by: https://github.com/seemethere
2022-06-07 21:44:55 +00:00
fa1a41ca71 Revert "Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353)"
This reverts commit 5547741960a01fbd3a97d1ddd5ae9b43d8f1169c.

Reverted https://github.com/pytorch/pytorch/pull/74889 on behalf of https://github.com/malfet
2022-03-31 04:17:33 -07:00
5547741960 Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74353

Repatched `d00de0d43598522b8f6ab2de553b6aaf6768faa5` by Nora Belrose (norabelrose), with the following changes:
* Register a fake source for the generated methods in linecache so that inspect.getsource will succeed.
* This patching is only triggered if the given dataclass is passed to torch.jit.script, as previously. Effectively we make this feature opt-in.

## Original Summary:
Fixes #72901.

Since we can't get access to the source code for synthesized magic methods on dataclasses, we have to synthesize our own versions. torch/jit/_dataclass_impls.py has the code that does this.

What's supported
- Synthesized `__init__`, `__eq__`, and the comparison magic methods when `order=True` is set on the dataclass decorator
- Default values for fields
- `__post_init__`, including using `InitVar` fields inside of `__post_init__`, on Python 3.8+
- Overriding `__eq__` or any of the comparison magic methods to provide your own implementation

What's not supported
- Default factory initializers for fields
- Frozen dataclasses
- `InitVar` on Python 3.7
- `__repr__` and `__hash__` (these are actually implemented, but the TorchScript interpreter won't call them)
- Using the `!=` operator on dataclasses inside TorchScript; this is because TorchScript requires that you implement `__ne__` to use this operator, whereas in regular Python the `!=` operator will resolve to the negation of whatever is returned by `__eq__` if there's no `__ne__`. Dataclasses don't actually synthesize an `__ne__` method for this reason. I've been toying with different ways to fix this but `!=` is not working in this PR at the moment.

Test Plan:
unittest

Also run previously failed test:
```
buck test mode/dev-nosan //fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests -- --exact 'fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests - test_mixmatch_multiclass (fblearner.flow.projects.fluent2.definition.transformers.contrib.faim.test.faim_mixmatch_test.TestFaimTransformerMixMatch)'
```
passes

Differential Revision: D35206262

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74889
Approved by: https://github.com/zhxchen17
2022-03-31 00:20:48 +00:00
3b3bdfd51c Revert D34808842: Reland "[pytorch][PR] Support dataclasses in TorchScript"
Test Plan: revert-hammer

Differential Revision:
D34808842 (b57cc9c752)

Original commit changeset: 02f807cff1ea

Original Phabricator Diff: D34808842 (b57cc9c752)

fbshipit-source-id: bd7c47493b598677e77634d06d7dc3e3a457b92d
(cherry picked from commit e1853d73b3ad2494457626fbb34c65169ae8cc31)
2022-03-25 17:17:30 +00:00
b57cc9c752 Reland "[pytorch][PR] Support dataclasses in TorchScript" (#74353)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74353

Repatched `d00de0d43598522b8f6ab2de553b6aaf6768faa5` by Nora Belrose (norabelrose), with the following changes:
* Register a fake source for the generated methods in linecache so that inspect.getsource will succeed.
* This patching is only triggered if the given dataclass is passed to torch.jit.script, as previously. Effectively we make this feature opt-in.

## Original Summary:
Fixes #72901.

Since we can't get access to the source code for synthesized magic methods on dataclasses, we have to synthesize our own versions. torch/jit/_dataclass_impls.py has the code that does this.

What's supported
- Synthesized `__init__`, `__eq__`, and the comparison magic methods when `order=True` is set on the dataclass decorator
- Default values for fields
- `__post_init__`, including using `InitVar` fields inside of `__post_init__`, on Python 3.8+
- Overriding `__eq__` or any of the comparison magic methods to provide your own implementation

What's not supported
- Default factory initializers for fields
- Frozen dataclasses
- `InitVar` on Python 3.7
- `__repr__` and `__hash__` (these are actually implemented, but the TorchScript interpreter won't call them)
- Using the `!=` operator on dataclasses inside TorchScript; this is because TorchScript requires that you implement `__ne__` to use this operator, whereas in regular Python the `!=` operator will resolve to the negation of whatever is returned by `__eq__` if there's no `__ne__`. Dataclasses don't actually synthesize an `__ne__` method for this reason. I've been toying with different ways to fix this but `!=` is not working in this PR at the moment.

Test Plan:
unittest

Also run previously failed test:
```
buck test mode/dev-nosan //fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests -- --exact 'fblearner/flow/projects/fluent2/definition/transformers/contrib/faim/test:tests - test_mixmatch_multiclass (fblearner.flow.projects.fluent2.definition.transformers.contrib.faim.test.faim_mixmatch_test.TestFaimTransformerMixMatch)'
```
passes

Reviewed By: zhxchen17

Differential Revision: D34808842

fbshipit-source-id: 02f807cff1ea99e606333960225c71a239743a4b
(cherry picked from commit ec885a2bc04f9e5f65838fa5704d9a05815ebd37)
2022-03-25 06:41:07 +00:00
2d3c220c8d Support RRefs that contain threading.Thread
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74462
Approved by: https://github.com/mrshenli
2022-03-21 19:43:27 +00:00
63932edcc7 Back out "[pytorch][PR] Support dataclasses in TorchScript"
Summary:
Original commit changeset: f5a792555c88

Original Phabricator Diff: D34398107 (d00de0d435)

Backing out as this broke fluent2 tests

Test Plan: sandcastle

Reviewed By: qihqi

Differential Revision: D34597363

fbshipit-source-id: 26bbe64b981aeb53b901cda61557614d9f28700e
(cherry picked from commit f17adfed8125ef84efaf2c8923c11a751eb7fb98)
2022-03-03 14:30:54 +00:00
d00de0d435 Support dataclasses in TorchScript (#73066)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/72901.

Since we can't get access to the source code for synthesized magic methods on dataclasses, we have to synthesize our own versions. `torch/jit/_dataclass_impls.py` has the code that does this.

What's supported
- Synthesized `__init__`, `__eq__`, and the comparison magic methods when `order=True` is set on the dataclass decorator
- Default values for fields
- `__post_init__`, including using `InitVar` fields inside of `__post_init__`, on Python 3.8+
- Overriding `__eq__` or any of the comparison magic methods to provide your own implementation

What's not supported
- Default factory initializers for fields
- Frozen dataclasses
- `InitVar` on Python 3.7
- `__repr__` and `__hash__` (these are actually implemented, but the TorchScript interpreter won't call them)
- Using the `!=` operator on dataclasses inside TorchScript; this is because TorchScript requires that you implement `__ne__` to use this operator, whereas in regular Python the `!=` operator will resolve to the negation of whatever is returned by `__eq__` if there's no `__ne__`. Dataclasses don't actually synthesize an `__ne__` method for this reason. I've been toying with different ways to fix this but `!=` is not working in this PR at the moment.

qihqi

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73066

Reviewed By: mrshenli

Differential Revision: D34398107

Pulled By: qihqi

fbshipit-source-id: f5a792555c88f3631f97837a96687e4890660a32
(cherry picked from commit ea7f077dc49a4ee75ca0d1409aedd85228952881)
2022-02-28 19:34:20 +00:00
763ad1bf25 (2/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: frontend change (#72899)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72899

Reland D33282878 (911d527b87). This is the frontend change.
ghstack-source-id: 149204031

Test Plan: Refer to D33282878 (911d527b87). Also check CI

Reviewed By: gmagogsfm

Differential Revision: D34252127

fbshipit-source-id: 27b17ddd4d05d904eb91fd9ee094d9121f00e388
(cherry picked from commit 1d276baca308110ac40111ccd622400b3bbdc864)
2022-02-16 03:45:15 +00:00
7db4a48d92 Revert D33342569: (2/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: frontend change
Test Plan: revert-hammer

Differential Revision:
D33342569 (856157fcee)

Original commit changeset: 57984ac67ae2

Original Phabricator Diff: D33342569 (856157fcee)

fbshipit-source-id: 4c12235a1776a3652e7f91e93b626705759d5176
(cherry picked from commit 4cbd7d8bab76fcf050e376c8528dba36541a779f)
2022-02-15 18:45:44 +00:00
856157fcee (2/2) Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions: frontend change (#70471)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70471

Reland D33282878 (911d527b87). This is the frontend change.
ghstack-source-id: 149114933

Test Plan: Refer to D33282878 (911d527b87). Also check CI

Reviewed By: gmagogsfm

Differential Revision: D33342569

fbshipit-source-id: 57984ac67ae2c56c38f72d3b1fb69105901fb472
(cherry picked from commit b47cc935ee1fd7aa63aa453a323a637bc2c22f3c)
2022-02-15 07:21:19 +00:00
8bf3179f6e #71946 Remove Python 3.6 references (#72211)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/71946

This commit removes some bits of code that were hard coded for Python 3.6 support from the `.circleci` and `torch` folders. It should only be merged if https://github.com/pytorch/pytorch/issues/66462 is complete.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72211

Reviewed By: dagitses, seemethere

Differential Revision: D33982604

Pulled By: musebc

fbshipit-source-id: 8f453bf9909df615addd59538adb369c65484044
(cherry picked from commit 944a9970fe68a40999b5c8af731e632c28fd15c5)
2022-02-08 03:46:20 +00:00
bf610f08b0 Back out "Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions"
Summary: as title

Test Plan:
```
buck run mode/opt-split-dwarf -c=python.package_style=inplace //ai_infra/distributed_ai/pyper_test_framework/templates:pyper_release_v2 -- --model inline_cvr_post_imp_deterministic_shrunk_pyper_release_v2 --cluster TSCTestCluster --hpc_identity oncall_pyper_oncall --stage prod_offline_training --test_module training_platform
...
############## Start inline_cvr_post_imp_model Test Results Analysis ##############
I1226 22:03:56.789000 3346280 test_driver.py:139  UNKNOWN     ] Test finished in 808.2743511786684 seconds.
+-------------------------+---------+------------------------+-----------------+
| Test Case               | Status  | Message                | Model Entity ID |
+-------------------------+---------+------------------------+-----------------+
| SmallWorld_release_test | Success | finished successfully. | 987987491       |
+-------------------------+---------+------------------------+-----------------+
I1226 22:03:56.790000 3346280 test_driver.py:143  UNKNOWN     ] test_run_id: 3d085f61-28d1-411d-bd27-940ea2554b23 use this id to find your run in scuba pyper_test_framework
I1226 22:03:56.792000 3346280 test_driver.py:160  UNKNOWN     ] Calling cleanup
I1226 22:03:56.792000 3346280 training_platform_test_launcher.py:385  UNKNOWN     ] Stopping launched jobs 1
I1226 22:03:59.563122 3346280 ClientSingletonManager.cpp:100] Shutting down Manifold ClientSingletonManager
```

Reviewed By: seemethere

Differential Revision: D33325936

fbshipit-source-id: 64414bf7061ad77e8ac12eb8abafee4043e0fa1e
2021-12-27 09:11:46 -08:00
911d527b87 Make TorchScript Preserve Fully Qualified Class Name for Python Exceptions (#70339)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70339

When a python program is translated to TorchScript, the python exception type is dropped. This makes users' lives hard when they need to categorize errors based on more than just the exception message.

Here we make the change so that when we raise a python exception, we record the fully qualified class name of the exception. Later on, when the TorchScript is interpreted, a special exception CustomJITException is thrown, and users can get the python class name from CustomJITException::getPythonClassName.

Note that this diff does not customize the mapping from C++ exceptions to Python exceptions. It's left to users to do whatever mapping they want.

Code under scripts/shunting is just my own experimental code. I can split it out if requested.
ghstack-source-id: 146221879

Test Plan: buck test mode/opt //caffe2/test:jit

Reviewed By: gmagogsfm

Differential Revision: D33282878

fbshipit-source-id: 910f67a764519f1053a48589d1a34df69001525d
2021-12-24 00:25:40 -08:00
45b2f41c3e [package] fix torchscript classes in package (#68028)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68028

Today, we demangle a typename before passing it to the TorchScript
compiler. This breaks compilation of torch classes in cases where we are
attempting to script the same class name from inside a package and out,
since we will return the same qualified name for both.

Differential Revision:
D32261907

Test Plan: Imported from OSS

Reviewed By: saketh-are

Pulled By: suo

fbshipit-source-id: 921bc03ad385d94b9279fbc6f3b7dcd0ddbe5bc7
2021-11-16 10:01:40 -08:00
6831d8e379 Support Union in TorchScript (#64234)
Summary:
This PR was created to replace the https://github.com/pytorch/pytorch/pull/53180 PR stack, which has all the review discussions. The replacement was needed due to a messy Sandcastle issue.
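
A small sketch of the feature (example made up for illustration):

```python
from typing import Union
import torch

@torch.jit.script
def describe(x: Union[int, torch.Tensor]) -> str:
    # TorchScript refines the Union through isinstance checks
    if isinstance(x, int):
        return "got an int"
    return "got a tensor"

print(describe(3), describe(torch.ones(2)))
```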

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64234

Reviewed By: gmagogsfm

Differential Revision: D30656444

Pulled By: ansley

fbshipit-source-id: 77536c8bcc88162e2c72636026ca3c16891d669a
2021-09-03 06:12:24 -07:00
01c35115d8 Fix bug in check_empty_containers (#63492)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63492

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D30402749

Pulled By: ansley

fbshipit-source-id: 7de533355fe91ca4f45b2bafc3bfb205a028c1ed
2021-08-25 09:05:08 -07:00
988ef190e3 Show warning in eager mode for empty containers (#62978)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54873

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62978

Reviewed By: navahgar

Differential Revision: D30278343

Pulled By: ansley

fbshipit-source-id: ebb19f7b8a10720f2612b99a2668d1ebbc1f2d16
2021-08-12 16:11:27 -07:00
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
4c630773e8 [jit] warn if _check_overload_body fails to find source
Summary:
Under certain conditions (particularly if a module is frozen, like with
PyInstaller or torch::deploy), we will not have source code available for
functions. `import torch` should still work in this case, but this check is
currently causing it to raise an exception.

Since this is an initial check (if an overload is actually exercised there will
be a hard failure), raise a warning and move on.

Test Plan: unit tests

Reviewed By: eellison

Differential Revision: D30214271

fbshipit-source-id: eb021503e416268e8585e0708d6271c1e7b91e95
2021-08-10 09:28:50 -07:00
e62189ad69 [jit] Better checking for overload function declarations. (#59956)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59956

Issue #50175. Basically two things need to be checked and are currently lacking:
1. Overload declarations should always have a single `pass` statement as the body.
2. There should always be an implementation provided for the declaration, which doesn't
   have the torch.jit._overload decorator. So in this case we need to check
   whether we are actually compiling a function body with the decorator ahead (see the sketch below).
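
A hedged sketch of the shape being checked, based on the existing `torch.jit._overload` pattern (illustrative names; the real coverage is in test_function_overloads):

```python
import torch

# Declarations: decorated, with a bare `pass` body (point 1 above)
@torch.jit._overload
def double(x: int) -> int:
    pass

@torch.jit._overload
def double(x: str) -> str:
    pass

# Implementation: the undecorated definition (point 2 above)
def double(x):
    return x + x

@torch.jit.script
def use(a: int, b: str):
    return double(a), double(b)
```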

Test Plan:
python test/test_jit.py TestScript.test_function_overloads

Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D29106555

fbshipit-source-id: 2d9d7df2fb51ab6db0e1b726f9644e4cfbf733d6
2021-08-05 14:21:48 -07:00
b454275f47 Support eager mode use of torch.jit.isinstance with multiple types (#60465)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/60095
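
A small sketch of the eager-mode call with multiple types (usage assumed from the linked issue, not copied from the tests):

```python
from typing import Dict, List
import torch

x = [1, 2, 3]
# A tuple of container types now works in eager mode, mirroring builtin isinstance
if torch.jit.isinstance(x, (List[int], Dict[str, str])):
    print("matched one of the listed container types")
```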

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60465

Reviewed By: soulitzer

Differential Revision: D30093110

Pulled By: ansley

fbshipit-source-id: ee9c654bdb031e9eff4837f9f1d489c81e47cc06
2021-08-04 12:45:24 -07:00
10f372601d Support RRefs that contain torch.cuda.Event (#61354)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61354

Test Plan: Imported from OSS

Reviewed By: iramazanli

Differential Revision: D29617155

Pulled By: pbelevich

fbshipit-source-id: 6e56b3fd0a0f93ecec048b58c90f2a47b4cba688
2021-07-08 15:33:08 -07:00
0fbc471d10 Support default values on NamedTuple fields (#54682)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54682

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D27327241

Pulled By: ansley

fbshipit-source-id: 76546f1770d50ebc3435bba3b74540e3c6be8a1c
2021-06-26 15:18:21 -07:00
bdb964f89f Support RRefs that contain threading.Locks (#57943)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57943

This is a common scenario (our own tutorials propose it), hence we should ensure it works.

A more generic solution is desirable, but this should fix the immediate concern.
ghstack-source-id: 132289683

Test Plan: Added a test

Reviewed By: mrshenli

Differential Revision: D28316076

fbshipit-source-id: 64e9766189f40474298876227ea247ce5b699d97
2021-06-24 06:36:09 -07:00
7589d9c58b Enable rcb lookup for typing (#60413)
Summary:
-----------

For FX traced models, types from the typing module are not available during the lookup for the function to be traced, so resolving the type results in a None type object. By enabling lookup for the `typing` module in `_jit_internal.py`, we can mitigate this issue for FX tracing and scripting.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60413

Test Plan:
--------
with-proxy python test/test_jit.py -k TestPDT.test_fx_tracing_with_typing

Reviewed By: bhosmer

Differential Revision: D29314531

Pulled By: nikithamalgifb

fbshipit-source-id: 1aa651430b1074c7e6fa74ba02bbcc4e1b00b01b
2021-06-22 18:53:19 -07:00
b0ac9bfb2b Add warning about should_drop for JIT coverage plug-in (#57961)
Summary:
This adds a comment above `should_drop` to prevent someone from inadvertently breaking JIT coverage by renaming the function without updating the correct references.

The current JIT plug-in uses `should_drop` to figure out which code is going to be JIT'd. If the function is named differently, the plug-in would also need to be updated.

Question: I understand this may not be the cleanest solution. Would a cleaner solution be to create a dummy function that would simply exist for the JIT plug-in? I did not immediately do that as that may be adding unnecessary code complexity in torch.jit.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57961

Reviewed By: samestep

Differential Revision: D28933587

Pulled By: janeyx99

fbshipit-source-id: 260aaf7b11f07de84a81d6c3554c4a5ce479d623
2021-06-07 12:48:01 -07:00
5268b5a29a Add parsing logic for Tuple[()] annotation (#58340)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58340

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D28459502

Pulled By: ansley

fbshipit-source-id: 4bb188448d66269b42b068858b895debac86e9ee
2021-05-25 12:12:43 -07:00
6b6a27e430 [jit] Add Python API for ScriptProfile (#57398)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57398

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D28133577

fbshipit-source-id: dcb8338159a24b00b5c495ecec66a3303d9b4aba
2021-05-25 11:09:18 -07:00
88ff651e90 torch.jit.ignore as a context manager (#55172)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55172

Description:
This is part 1 of a series of PRs for supporting torch.jit.ignore as a context manager. The following features are implemented in this PR:

- A unique name for the registered function under the torch.jit.frontend module. The unique name is generated based on the file name and line number of the context manager.
- Forcing the user to explicitly annotate the inputs and outputs.
- No side effects are considered.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D27895283

Pulled By: tugsbayasgalan

fbshipit-source-id: 5d36d9aa5d457055a6bb1676f264647a745ec36a
2021-05-14 01:53:50 -07:00
a688b29750 Support custom Python classes in CUDAFuture (#56516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56516

One problem with CUDAFuture's extraction of DataPtrs from IValues is that it only supported Python objects that could be converted to "regular" IValues (e.g., lists/dicts/tuples of ints/strings/tensors/...). One notable exception is custom Python classes, which are in fact a very common data type transferred over RPC. The only solution we found for those is to use the Python pickler to extract the tensors contained in them.

We can't insert a Python dependency directly into CUDAFuture, so instead I'm proposing to use the same indirection technique used to support `getSubValues` on Python objects: define some methods on the abstract class `PyObjectHolder` (which can be used by CUDAFuture) but only implement them in the concrete subclass `ConcretePyObjectHolder` (which is only built when Python support is enabled).

I am a bit worried about the performance toll of this (pickling isn't exactly known to be cheap) but I think we should start by providing a functionally complete API. We already have ideas on how to make this faster if needed, for example by having users provide a custom DataPtr extractor tailored to their class via a decorator. (Or just use TorchScript).
ghstack-source-id: 127295014

Test Plan: Added a test later in the stack

Reviewed By: mrshenli

Differential Revision: D27887189

fbshipit-source-id: 9d27e4e62390b836e5bb4f06f401cc002f0cf95b
2021-04-24 07:06:28 -07:00
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.
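
For context, the difference the lint enforces (illustrative snippet, not from the PR):

```python
import os

# Unqualified ignore: silences every mypy error on the line (now rejected by the lint)
home_a: int = os.environ.get("HOME")  # type: ignore

# Qualified ignore: silences only the named error code
home_b: int = os.environ.get("HOME")  # type: ignore[assignment]
```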

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
49e5e284ea Additional annotations in fbcode/caffe2/torch/_jit_internal.py (#55855)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55855

Test Plan: Sandcastle

Reviewed By: ezyang

Differential Revision: D27715202

fbshipit-source-id: 99d59345a1915030f12441de91a6b7d4250a1f43
2021-04-15 09:47:17 -07:00
6866c033d5 [JIT] Add recursive scripting for class type module attributes (#55124)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55124

**Summary**
This commit modifies type inference (used by the module scripting code)
so that it tries to script the type of any class instances that it
encounters. This enables recursive, automatic scripting of class type
module attributes.
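
A minimal sketch of what this enables (names invented for illustration):

```python
import torch

class Scale:  # plain Python class, not explicitly scripted
    def __init__(self, factor: float):
        self.factor = factor

    def apply(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.factor

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = Scale(2.0)  # class-typed module attribute

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale.apply(x)

# Scripting the module now recursively scripts the Scale class as well
scripted = torch.jit.script(MyModule())
```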

**Test Plan**
This commit adds a test case for this to `TestClassType`.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D23971883

Pulled By: SplitInfinity

fbshipit-source-id: 7a5a2e7c12ee68cbdeb0a07e6aaf98734a79cb06
2021-04-02 12:16:21 -07:00
8a170fbacd [package] fix mangling issues with TorchScript (#54915)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54915

TorchScript and torch.package have different mangling schemes. To avoid
them interfering with each other, we should undo the torch.package
mangling before processing anything with TorchScript (since TS
independently makes sure that no names collide).

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D27410472

Pulled By: suo

fbshipit-source-id: d1cc013c532d9abb7fb9615122bc465ded4785bb
2021-03-31 00:58:05 -07:00
b9fdf72174 Fix doc (#53996)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53996

Fixes issue: #52479

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D27051056

Pulled By: nikithamalgifb

fbshipit-source-id: ff5d2fc3599571346e2323fa893c1e238097a164
2021-03-15 15:44:30 -07:00
705fa7e964 [Usability] Capture argument names for traced functions and modules (#51775)
Summary:
Previously `torch.jit.trace` relied on AutoGrad hooks to infer the names of tensors in the computation, including those of function/method arguments. This often doesn't work out because:

- These names often do not exist
- The tracer uses the argument name of the first tensor operation on each tensor as the inferred argument name, and these tensor operations have programmatically-generated names like `argument_1`

This PR extracts argument names directly from Python functions and passes them down to the tracer, which then assigns them to the correct graph inputs. This way, we always have the correct argument names captured in the IR.

This is useful both for debugging and for supporting the use of `InterfaceType` to represent traced modules.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51775

Reviewed By: izdeby

Differential Revision: D26273105

Pulled By: gmagogsfm

fbshipit-source-id: 934a385041137dc3731bb6fa8657b11532fed9e5
2021-02-10 18:28:08 -08:00
58eb23378f Clean up usage of torch._six partially (#49785)
Summary:
See https://github.com/pytorch/pytorch/issues/42919

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49785

Reviewed By: mruberry

Differential Revision: D25963833

Pulled By: bugra

fbshipit-source-id: 11c90d6b8d3f206c9d0a4d8621b773beb10c6ba2
2021-02-08 13:58:34 -08:00
3f23ad5bce [Bug] fix for module_has_exports (#50680)
Summary:
The attributes in `dir(mod)` may not all be valid; calling `getattr` on an invalid one throws an error.
Use `hasattr` to test whether an attribute is valid before accessing it.

Here is an example:
```python
class A:
    def __init__(self, x):
        if x:
            self._attr = 1

    @property
    def val(self):
        return getattr(self, '_attr')

a = A(False)
print('val' in dir(a))
print(hasattr(a, 'val'))

b = A(True)
print('val' in dir(b))
print(hasattr(b, 'val'))
```

And the outputs:
```
True
False
True
True
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50680

Reviewed By: malfet

Differential Revision: D26103975

Pulled By: eellison

fbshipit-source-id: 67a799afe7d726153c91654d483937c5e198ba94
2021-01-27 16:03:24 -08:00
00adc7b07f Fix more JIT tests under Python-3.9 (#51182)
Summary:
Mostly replaces `global Foo` with `make_global(Foo)`.
The only real fix is generating the Subscript annotation, which is a follow-up to https://github.com/pytorch/pytorch/pull/48676

Fixes https://github.com/pytorch/pytorch/issues/49617

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51182

Reviewed By: gmagogsfm

Differential Revision: D26095244

Pulled By: malfet

fbshipit-source-id: 0e043d9a2cf43fff71dfbb341f708cd7af87c39a
2021-01-27 10:57:03 -08:00
a949d7b1c8 Workaround Python3.9 limitations in test_jit_py3 (#51088)
Summary:
In Python-3.9 and above `inspect.getsource` of a local class does not work if it was marked as default, see https://bugs.python.org/issue42666 https://github.com/pytorch/pytorch/issues/49617
Work around this by defining a `make_global` function that programmatically accomplishes the same thing.

Partially addresses issue raised in https://github.com/pytorch/pytorch/issues/49617

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51088

Reviewed By: gmagogsfm

Differential Revision: D26069189

Pulled By: malfet

fbshipit-source-id: 7cf14b88ae5d2b95d2b0fd852717a9202b86356e
2021-01-26 12:49:35 -08:00