666 Commits

fb0f285638 [lint] upgrade mypy to latest version
Fixes https://github.com/pytorch/pytorch/issues/75927.

Had to fix some bugs and add some ignores.
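As an illustration, a newly added suppression typically looks like the sketch below (a generic example, not code from this PR):

```python
# Generic sketch of an error-code-specific suppression; not code from this PR.
import json
from typing import Dict

def load_config(path: str) -> Dict[str, object]:
    with open(path) as f:
        # json.load returns Any; strict mypy flags returning Any from a typed
        # function, so the line carries a targeted ignore instead of a bare one.
        return json.load(f)  # type: ignore[no-any-return]
```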

To check if clean:
```
lintrunner --paths-cmd='git grep -Il .' --take MYPY,MYPYSTRICT
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76753

Approved by: https://github.com/malfet
2022-05-03 20:51:34 +00:00
db21e22b4b [EASY] Quick Fix for broken shape function autogen.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76703

Approved by: https://github.com/eellison
2022-05-03 17:34:05 +00:00
f8a4780eb2 [LT] Move MakeNode into ir_builder.h
Summary: Move MakeNode into ir_builder.h to avoid circular header
reference later when introducing a trie cache for IR node lookup.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76482

Approved by: https://github.com/wconstab
2022-05-03 14:53:19 +00:00
d0cb31d5bc Make lazy tensor ptr class customizable (#76476)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/76476

Test Plan: Imported from OSS

Reviewed By: Krovatkin, bdhirsh

Differential Revision: D35980433

Pulled By: wconstab

fbshipit-source-id: 1d4d00a494bf8aea86278b007f7f353cd7a822f8
(cherry picked from commit a78655bef23b5fa8487ced13443ca0bfdec65e5c)
2022-04-28 03:21:56 +00:00
4cae57080a Make lazy tensor creation and value strings customizable (#76472)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76472

- lets the XLA backend customize codegenned functions during its
  migration to LTC (sketched below)
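
A rough Python sketch of that pattern, with hypothetical attribute names rather than the actual torchgen ones: the codegen class keeps the strings it emits as overridable members, so an out-of-tree backend can swap them without forking the rest of the generator.

```python
# Hypothetical sketch only; attribute and helper names are not torchgen's real ones.
from dataclasses import dataclass

@dataclass
class LazyIrCodegen:
    # Overridable strings used in the generated C++.
    lazy_tensor_ptr: str = "torch::lazy::LazyTensorPtr"
    create_tensor_fn: str = "torch::lazy::CreateAtenFromLtcTensor"

    def emit_return(self, result_expr: str) -> str:
        # Emits the return statement for a generated native function.
        return f"return {self.create_tensor_fn}({result_expr});"

# An external backend such as XLA would override just the strings:
xla_codegen = LazyIrCodegen(
    lazy_tensor_ptr="torch_xla::XLATensorPtr",
    create_tensor_fn="bridge::AtenFromXlaTensor",
)
print(xla_codegen.emit_return("std::move(result)"))
```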

Test Plan: Imported from OSS

Reviewed By: Krovatkin, bdhirsh

Differential Revision: D35980435

Pulled By: wconstab

fbshipit-source-id: 6138aef20862fccec40d715ffbb5a40a0a7d0401
(cherry picked from commit bad23f4b7ef73ffc2ef4a893364512907e9c4555)
2022-04-28 03:21:56 +00:00
cfc90cf3eb Fix GenLazyIR.node_base_ctor_call (#76471)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76471

Make node_base_ctor_call produce the entire base node constructor call.

Previously it was only producing the beginning of the call, which was unintended.

Addresses part of https://github.com/pytorch/xla/issues/3472
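
A simplified Python illustration of the behavioral change (hypothetical helpers, not the real GenLazyIR methods):

```python
# Simplified, hypothetical illustration of the fix described above; these helpers
# are not the actual GenLazyIR code.
from typing import List

def node_base_ctor_call_before(base_class: str) -> str:
    # Old behavior: only the opening of the base-class ctor call was emitted,
    # leaving the generated C++ incomplete.
    return f"{base_class}("

def node_base_ctor_call_after(base_class: str, args: List[str]) -> str:
    # Fixed behavior: the entire call expression is emitted.
    return f"{base_class}({', '.join(args)})"

print(node_base_ctor_call_after("TsNode", ["std::move(shapes)", "/* num_outputs */ 1"]))
```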

Test Plan: Imported from OSS

Reviewed By: qihqi, ngimel

Differential Revision: D35980436

Pulled By: wconstab

fbshipit-source-id: a443cf593ac7c35b2b65e72b82907e88e1e71c7a
(cherry picked from commit 360ad6d82a7e8303b8a60e61b177dabf0131ea8b)
2022-04-28 03:21:56 +00:00
b204ad863f Revert "Revert "Allow specifying tags for aten operators in native_functions.yaml""
This reverts commit ea44645c9a682a4e212e64b94a86383c3388ed6b.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76456

Approved by: https://github.com/osalpekar
2022-04-28 02:04:57 +00:00
40d96f0afd Revert "functionalization: add support for zero_()"
This reverts commit 7d44b3675bafdfbd59e6c81734ca3febd771dd7b.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76375

Approved by: https://github.com/datumbox, https://github.com/albanD
2022-04-26 19:27:27 +00:00
c2ae0b01c0 Reapply black for torchgen, this time with lint to fix!
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76359

Approved by: https://github.com/suo
2022-04-26 04:03:38 +00:00
bb60cac25a E2E SymInt example narrow_copy
This **roughly** corresponds to Goal 3.2 in https://docs.google.com/document/d/1iiLNwR5ohAsw_ymfnOpDsyF6L9RTUaHMpD8YLw-jxEw/edit#

Namely, it adds the following:

* SymbolicIntNode interface
* LazySymbolicIntNode implementation
* Lazy `narrow_copy` implementation
* Needed support for SymInt in codegen
* Test (below)

```cpp
TEST(LazyDynamicOpsTest, NarrowCopy) {
  auto x = torch::rand({5, 10, 10}).to(kLazy);
  const size_t Y_DIM = 3;
  const size_t X_DIM_INDEX = 2;
  auto y = torch::rand({Y_DIM}).to(kLazy);
  auto ly = torch::lazy::TryGetLtcTensor(y);
  auto dim_node = MakeNode<SizeNode>(ly->GetIrValue(), 0);
  auto lmn = new torch::lazy::SymbolicIntNode(dim_node);
  auto z = x.narrow_copy(X_DIM_INDEX, 0, lmn->toSymInt());
  AllClose(z.cpu(), x.cpu().narrow_copy(X_DIM_INDEX, 0, Y_DIM));
}
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75759
Approved by: https://github.com/wconstab
2022-04-26 02:40:27 +00:00
640ce6bc9b functionalization bugfix: using owning type when unwrapping tensors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76125

Approved by: https://github.com/ezyang
2022-04-25 22:00:19 +00:00
74e93f727a remove _is_foreach_op codegen special cases, clean up mutable return type checks
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76190

Approved by: https://github.com/ezyang
2022-04-25 21:34:17 +00:00
5da76acd1d functionalization: add a copy() native function
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76083

Approved by: https://github.com/albanD
2022-04-25 21:31:48 +00:00
7d44b3675b functionalization: add support for zero_()
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75913

Approved by: https://github.com/albanD
2022-04-25 21:31:48 +00:00
f954c0a774 [Pytorch][4/4 Static dispatch] Support multiple backends with multiple kernels (#76059)
Summary:
- Supports multiple backends with multiple kernels in static dispatch
- Refactors static dispatch generators
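
A hedged Python sketch of what a generator along these lines might emit; the names and generated C++ here are illustrative, not torchgen's actual output.

```python
# Illustrative sketch of a static-dispatch generator that handles several
# backends; emitted C++ and helper names are hypothetical, not torchgen's.
from typing import List

def emit_static_dispatch(op: str, backends: List[str]) -> str:
    branches = [
        # One branch per statically linked backend kernel.
        f"if (dk == c10::DispatchKey::{b}) return at::{b.lower()}::{op}(args...);"
        for b in backends
    ]
    branches.append(f'TORCH_CHECK(false, "{op}: no static kernel for this key");')
    return "\n".join(branches)

print(emit_static_dispatch("add_Tensor", ["CPU", "QuantizedCPU", "CompositeExplicitAutograd"]))
```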

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76059
ghstack-source-id: 154735166

Test Plan:
```
(pytorch)  ~/fbsource
└─ $ buck build --config pt.enable_lightweight_dispatch=1 --config pt.static_dispatch_backend="CPU;QuantizedCPU;CompositeExplicitAutograd" //xplat/caffe2/fb/lite_predictor:lite_predictor_flatbuffer
```

Reviewed By: bdhirsh

Differential Revision: D35727473

fbshipit-source-id: 986ba3390c6e585fcf8477b6d069720ee1fbc90b
(cherry picked from commit 6473990c208a78879985e4cdfb50960f5727ad5e)
2022-04-25 21:18:08 +00:00
36420b5e8c Rename tools/codegen to torchgen (#76275)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76275

In preparation for addressing
https://github.com/pytorch/pytorch/issues/73212

Diff was generated with:

```
git mv tools/codegen torchgen
git grep -l 'tools.codegen' | xargs sed -i 's/tools.codegen/torchgen/g'
sed -i "s/\${TOOLS_PATH}\/codegen/\${TORCH_ROOT}\/torchgen/g" caffe2/CMakeLists.txt
```

and manual edits to:

* tools/test/test_gen_backend_stubs.py
* torchgen/build.bzl
* torchgen/gen_backend_stubs.py

aka this diff:

```
 diff --git a/tools/test/test_gen_backend_stubs.py b/tools/test/test_gen_backend_stubs.py
index 3dc26c6d2d..104054575e 100644
 --- a/tools/test/test_gen_backend_stubs.py
+++ b/tools/test/test_gen_backend_stubs.py
@@ -9,7 +9,7 @@ from torchgen.gen_backend_stubs import run
 from torchgen.gen import _GLOBAL_PARSE_NATIVE_YAML_CACHE  # noqa: F401

 path = os.path.dirname(os.path.realpath(__file__))
-gen_backend_stubs_path = os.path.join(path, '../torchgen/gen_backend_stubs.py')
+gen_backend_stubs_path = os.path.join(path, '../../torchgen/gen_backend_stubs.py')

 # gen_backend_stubs.py is an integration point that is called directly by external backends.
 # The tests here are to confirm that badly formed inputs result in reasonable error messages.
 diff --git a/torchgen/build.bzl b/torchgen/build.bzl
index ed04e35a43..d00078a3cf 100644
 --- a/torchgen/build.bzl
+++ b/torchgen/build.bzl
@@ -1,6 +1,6 @@
 def define_targets(rules):
     rules.py_library(
-        name = "codegen",
+        name = "torchgen",
         srcs = rules.glob(["**/*.py"]),
         deps = [
             rules.requirement("PyYAML"),
@@ -11,6 +11,6 @@ def define_targets(rules):

     rules.py_binary(
         name = "gen",
-        srcs = [":codegen"],
+        srcs = [":torchgen"],
         visibility = ["//visibility:public"],
     )
 diff --git a/torchgen/gen_backend_stubs.py b/torchgen/gen_backend_stubs.py
index c1a672a655..beee7a15e0 100644
 --- a/torchgen/gen_backend_stubs.py
+++ b/torchgen/gen_backend_stubs.py
@@ -474,7 +474,7 @@ def run(
 ) -> None:

     # Assumes that this file lives at PYTORCH_ROOT/torchgen/gen_backend_stubs.py
-    pytorch_root = pathlib.Path(__file__).parent.parent.parent.absolute()
+    pytorch_root = pathlib.Path(__file__).parent.parent.absolute()
     template_dir = os.path.join(pytorch_root, "aten/src/ATen/templates")

     def make_file_manager(install_dir: str) -> FileManager:
```
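
From a caller's perspective the rename only changes the import prefix; a minimal sketch of the before/after import:

```python
# Import path implied by the rename; the pre-rename path is shown for comparison.
# from tools.codegen.model import NativeFunction   # before this commit
from torchgen.model import NativeFunction          # after this commit
```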

run_all_fbandroid_tests

Test Plan: sandcastle

Reviewed By: albanD, ngimel

Differential Revision: D35770317

fbshipit-source-id: 153ac4a7fef15b1e750812a90bfafdbc8f1ebcdf
(cherry picked from commit c6d485d1d4648fa1c8a4c14c5bf3d8e899b9b4dd)
2022-04-25 01:38:06 +00:00