Commit Graph

15 Commits

Author SHA1 Message Date
634659e262 Update mypy to 1.4.1 (#91983)
Mostly fixes for PEP-484 violations (i.e., when a default argument is set to None but its type is not annotated as `Optional`; see the sketch below).
Plus a few real fixes:
  - Add missing `_get_upgraders_entry_map` to `torch/_C/__init__.pyi`
  - Add missing return statement to `torch._export.deserialize_graph`
  - Fix error message in `torch.ao.ns.fx.weight_utils.get_lstm_mod_weights`
TODO (in followup PR):
  - Fix erroneous `isinstance` check in `torch/ao/quantization/_pt2e/qat_utils.py`
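A minimal sketch of the implicit-Optional pattern described above (hypothetical function names, not code from this PR):

```python
from typing import Optional

# Illustrative only: under PEP 484, a parameter that defaults to None must be
# annotated as Optional.

def resize_implicit(size: int = None):  # flagged by mypy (implicit Optional)
    ...

def resize_explicit(size: Optional[int] = None) -> None:  # accepted
    ...
```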
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91983
Approved by: https://github.com/kit1980, https://github.com/ZainRizvi, https://github.com/huydhn, https://github.com/thiagocrepaldi, https://github.com/aaronenyeshi
2023-07-13 16:30:36 +00:00
47dca20d80 [BE] Enable flake8-comprehension rule C417 (#97880)
Enables the flake8-comprehensions rule C417. Ruff autogenerated these fixes across the codebase.
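A minimal sketch of the kind of rewrite C417 asks for (illustrative, not code from this PR):

```python
# C417 flags unnecessary `map` calls with a lambda and suggests a comprehension.
nums = [1, 2, 3]

# Before: flagged by C417
squares = list(map(lambda x: x * x, nums))

# After: the equivalent comprehension
squares = [x * x for x in nums]
print(squares)  # [1, 4, 9]
```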

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97880
Approved by: https://github.com/ezyang, https://github.com/kit1980, https://github.com/albanD
2023-03-30 14:34:24 +00:00
60a68477a6 Bump black version to 23.1.0 (#96578)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96578
Approved by: https://github.com/ezyang
2023-03-15 06:27:59 +00:00
a229b4526f [BE] Prefer dash over underscore in command-line options (#94505)
Prefer dashes over underscores in command-line options. Add `--command-arg-name` variants to the argument parsers; the old underscore spellings (`--command_arg_name`) are kept for backward compatibility.

Both dashes and underscores are used in the PyTorch codebase. Some argument parsers only accept dashes or only accept underscores in their arguments. For example, the `torchrun` utility for distributed training only accepts underscore arguments (e.g., `--master_port`). Dashes are more common in other command-line tools, and they appear to be the default choice in the Python standard library:

`argparse.BooleanOptionalAction`: 4a9dff0e5a/Lib/argparse.py (L893-L895)

```python
class BooleanOptionalAction(Action):
    def __init__(...):
        ...
        for option_string in option_strings:
            _option_strings.append(option_string)
            if option_string.startswith('--'):
                option_string = '--no-' + option_string[2:]
                _option_strings.append(option_string)
```

It adds `--no-argname`, not `--no_argname`. Also, typing `_` requires holding the Shift key, whereas `-` does not.
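A minimal sketch of keeping both spellings with `argparse` (hypothetical option name, not code from this PR):

```python
import argparse

parser = argparse.ArgumentParser()
# Register both spellings so the preferred dashed form and the legacy
# underscore form keep working.
parser.add_argument(
    "--command-arg-name",
    "--command_arg_name",  # legacy alias kept for backward compatibility
    dest="command_arg_name",
    default=None,
)

args = parser.parse_args(["--command-arg-name", "value"])
print(args.command_arg_name)  # value
```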

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94505
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-09 20:16:49 +00:00
347b036350 Apply ufmt linter to all py files under tools (#81285)
With ufmt in place (https://github.com/pytorch/pytorch/pull/81157), we can now use it to gradually format all files. I'm breaking this down into multiple smaller batches to avoid too many merge conflicts later on.

This batch (as copied from the current BLACK linter config):
* `tools/**/*.py`

Upcoming batches:
* `torchgen/**/*.py`
* `torch/package/**/*.py`
* `torch/onnx/**/*.py`
* `torch/_refs/**/*.py`
* `torch/_prims/**/*.py`
* `torch/_meta_registrations.py`
* `torch/_decomp/**/*.py`
* `test/onnx/**/*.py`

Once they are all formatted, the BLACK linter will be removed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81285
Approved by: https://github.com/suo
2022-07-13 07:59:22 +00:00
36420b5e8c Rename tools/codegen to torchgen (#76275)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76275

In preparation for addressing
https://github.com/pytorch/pytorch/issues/73212

Diff was generated with:

```
git mv tools/codegen torchgen
git grep -l 'tools.codegen' | xargs sed -i 's/tools.codegen/torchgen/g'
sed -i "s/\${TOOLS_PATH}\/codegen/\${TORCH_ROOT}\/torchgen/g" caffe2/CMakeLists.txt
```

and manual edits to:

* tools/test/test_gen_backend_stubs.py
* torchgen/build.bzl
* torchgen/gen_backend_stubs.py

aka this diff:

```
 diff --git a/tools/test/test_gen_backend_stubs.py b/tools/test/test_gen_backend_stubs.py
index 3dc26c6d2d..104054575e 100644
 --- a/tools/test/test_gen_backend_stubs.py
+++ b/tools/test/test_gen_backend_stubs.py
@@ -9,7 +9,7 @@ from torchgen.gen_backend_stubs import run
 from torchgen.gen import _GLOBAL_PARSE_NATIVE_YAML_CACHE  # noqa: F401

 path = os.path.dirname(os.path.realpath(__file__))
-gen_backend_stubs_path = os.path.join(path, '../torchgen/gen_backend_stubs.py')
+gen_backend_stubs_path = os.path.join(path, '../../torchgen/gen_backend_stubs.py')

 # gen_backend_stubs.py is an integration point that is called directly by external backends.
 # The tests here are to confirm that badly formed inputs result in reasonable error messages.
 diff --git a/torchgen/build.bzl b/torchgen/build.bzl
index ed04e35a43..d00078a3cf 100644
 --- a/torchgen/build.bzl
+++ b/torchgen/build.bzl
@@ -1,6 +1,6 @@
 def define_targets(rules):
     rules.py_library(
-        name = "codegen",
+        name = "torchgen",
         srcs = rules.glob(["**/*.py"]),
         deps = [
             rules.requirement("PyYAML"),
@@ -11,6 +11,6 @@ def define_targets(rules):

     rules.py_binary(
         name = "gen",
-        srcs = [":codegen"],
+        srcs = [":torchgen"],
         visibility = ["//visibility:public"],
     )
 diff --git a/torchgen/gen_backend_stubs.py b/torchgen/gen_backend_stubs.py
index c1a672a655..beee7a15e0 100644
 --- a/torchgen/gen_backend_stubs.py
+++ b/torchgen/gen_backend_stubs.py
@@ -474,7 +474,7 @@ def run(
 ) -> None:

     # Assumes that this file lives at PYTORCH_ROOT/torchgen/gen_backend_stubs.py
-    pytorch_root = pathlib.Path(__file__).parent.parent.parent.absolute()
+    pytorch_root = pathlib.Path(__file__).parent.parent.absolute()
     template_dir = os.path.join(pytorch_root, "aten/src/ATen/templates")

     def make_file_manager(install_dir: str) -> FileManager:
```

run_all_fbandroid_tests

Test Plan: sandcastle

Reviewed By: albanD, ngimel

Differential Revision: D35770317

fbshipit-source-id: 153ac4a7fef15b1e750812a90bfafdbc8f1ebcdf
(cherry picked from commit c6d485d1d4648fa1c8a4c14c5bf3d8e899b9b4dd)
2022-04-25 01:38:06 +00:00
a11c1bbdd0 Run Black on all of tools/
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76089

Approved by: https://github.com/albanD
2022-04-20 17:29:41 +00:00
55e3b23abe [Pytorch Edge] Generic Build Features for Selective Build (#67817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67817

Implementation of build features as a usable feature. Includes tracing support and selectivity support. Follow-up of Dhruv's prototype in D30076214.

The general idea is to allow selectivity over arbitrary sections of the codebase through two APIs:
`BUILD_FEATURE_REQUIRED(NAME)` and
`BUILD_FEATURE_AVAILABLE(NAME)`.

References
PyTorch Edge Team Workplace group post link: https://fb.workplace.com/groups/pytorch.edge.team/posts/905584476662959/
Quip talking about some early ideas related to build features: https://fb.quip.com/iur3ApU9q29v
Google Doc about most recent discussion and details: https://docs.google.com/document/d/1533zuN_9pwpQBa4RhtstUjT5B7guowblqJz35QYWPE0/edit

Will remove the copy-kernel example afterwards; it's just here as an example.
ghstack-source-id: 142850218

Test Plan: CI; traced a dummy model and played around with its unit test after removing the traced value from the YAML.

Reviewed By: dhruvbird

Differential Revision: D32151856

fbshipit-source-id: 33764c1f6902a025e53807b784792a83c8385984
2021-11-09 15:37:21 -08:00
8f63cfda14 [LiteInterpreter] Specify Loader to yaml.load (#67694)
Summary:
The `Loader` argument became mandatory in PyYAML 6, but it has been accepted since PyYAML 3.

Unblocks migration to a newer runtime.
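A minimal usage sketch of passing an explicit `Loader` (illustrative, not code from this PR):

```python
import yaml

# Since PyYAML 6 the Loader argument to yaml.load is mandatory; passing it
# explicitly also works on PyYAML 3-5.
config = yaml.load("ops: [aten::add, aten::mul]", Loader=yaml.SafeLoader)
print(config)  # {'ops': ['aten::add', 'aten::mul']}
```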

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67694

Reviewed By: seemethere

Differential Revision: D32106043

Pulled By: malfet

fbshipit-source-id: 35246b97a974b168c066396ea31987b267534c7f
2021-11-02 12:52:57 -07:00
6c22b96082 [Pytorch Edge] Extend Tracer to Custom Classes (#67004)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67004

New version because the other one was impossible to rebase

Trace custom classes

Test Plan: CI.

Reviewed By: dhruvbird

Differential Revision: D31818978

fbshipit-source-id: daa22ccb153e32685bcca43a303ba9e21042d052
2021-10-26 11:38:06 -07:00
640a615150 [easy] [PyTorch Edge] Remove double pragma once directive in the generated code (#65620)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65620

This was bothering me for a while.

ghstack-source-id: 138914860

Test Plan: Sandcastle

Reviewed By: beback4u

Differential Revision: D31162648

fbshipit-source-id: 72c47ea34d40c772bb53da721fcb36365b5dbaf3
2021-09-24 13:14:37 -07:00
737d920b21 Strictly type everything in .github and tools (#59117)
Summary:
This PR greatly simplifies `mypy-strict.ini` by strictly typing everything in `.github` and `tools`, rather than picking and choosing only specific files in those two dirs. It also removes `warn_unused_ignores` from `mypy-strict.ini`, for reasons described in https://github.com/pytorch/pytorch/pull/56402#issuecomment-822743795: basically, that setting makes life more difficult depending on what libraries you have installed locally vs in CI (e.g. `ruamel`).
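A minimal sketch of why `warn_unused_ignores` can be awkward (hypothetical snippet, not code from this PR; assumes `ruamel.yaml` may or may not be installed):

```python
# With `warn_unused_ignores = True`, the ignore below is reported as unused on a
# machine where ruamel's type information is available, yet it is still needed
# on machines (e.g. CI) where it is not -- hence the setting was dropped.
import ruamel.yaml  # type: ignore
```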

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59117

Test Plan:
```
flake8
mypy --config mypy-strict.ini
```

Reviewed By: malfet

Differential Revision: D28765386

Pulled By: samestep

fbshipit-source-id: 3e744e301c7a464f8a2a2428fcdbad534e231f2e
2021-06-07 14:49:36 -07:00
4753100a3b Un-ignore F403 in .flake8 (#55838)
Summary:
Generally wildcard imports are bad for the reasons described here: https://www.flake8rules.com/rules/F403.html

This PR replaces wildcard imports with an explicit list of imported items where possible, and adds a `# noqa: F403` comment in the other cases (mostly re-exports in `__init__.py` files).

This is a prerequisite for https://github.com/pytorch/pytorch/issues/55816, because currently [`tools/codegen/dest/register_dispatch_key.py` simply fails if you sort its imports](https://github.com/pytorch/pytorch/actions/runs/742505908).
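A minimal sketch of the two kinds of changes (illustrative, using a standard-library module rather than code from this PR):

```python
# Before: flagged by F403 (the wildcard hides which names are actually used)
# from os.path import *

# After: an explicit list of imported names
from os.path import dirname, join

print(join(dirname(__file__), "example.py"))

# Re-exports in __init__.py files keep the wildcard, silenced explicitly:
# from .submodule import *  # noqa: F403
```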

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55838

Test Plan: CI. You can also run `flake8` locally.

Reviewed By: jbschlosser

Differential Revision: D27724232

Pulled By: samestep

fbshipit-source-id: 269fb09cb4168f8a51fd65bfaacc6cda7fb87c34
2021-04-13 09:24:07 -07:00
b7b481bd07 [PyTorch] Enable template build at aten operator level (#53801)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53801

## Summary

Enable a partial explicit ATen-level source list for the lite interpreter. More ATen-level sources will be added.

1. Use `gen_selected_mobile_ops_header.py` to generate `selected_mobile_ops.h`. Currently it only includes the selected operators; all dtypes are included.
2. Add a custom target that includes only `selected_mobile_ops.h`, and add it as a `torch_cpu` dependency when `BUILD_LITE_INTERPRETER` is enabled.

As a note, the current input YAML file is slightly different from the one used internally. Aligning the two YAML files is the next step.

**Android**
x86:
`SELECTED_OP_LIST=/Users/chenlai/Documents/pytorch/experiemnt/deeplabv3_scripted.yaml BUILD_LITE_INTERPRETER=1 ./scripts/build_pytorch_android.sh x86`

libpytorch_jni_lite.so -- 3.4 MB

armeabi-v7a
`SELECTED_OP_LIST=/Users/chenlai/Documents/pytorch/experiemnt/deeplabv3_scripted.yaml BUILD_LITE_INTERPRETER=1 ./scripts/build_pytorch_android.sh armeabi-v7a`
libpytorch_jni_lite.so -- 2.5 MB

**iOS:**
```
(base) chenlai@chenlai-mp install % du -sh *
 15M	include
 57M	lib
2.8M	share
```

```
(base) chenlai@chenlai-mp lib % ls -lh
total 117296
-rw-r--r--  1 chenlai  staff   3.2M Mar 15 22:03 libXNNPACK.a
-rw-r--r--  1 chenlai  staff   913K Mar 15 22:03 libc10.a
-rw-r--r--  1 chenlai  staff   4.6K Mar 15 22:03 libclog.a
-rw-r--r--  1 chenlai  staff    42K Mar 15 22:03 libcpuinfo.a
-rw-r--r--  1 chenlai  staff   1.5M Mar 15 22:03 libeigen_blas.a
-rw-r--r--  1 chenlai  staff    44K Mar 15 22:03 libpthreadpool.a
-rw-r--r--  1 chenlai  staff   166K Mar 15 22:03 libpytorch_qnnpack.a
-rw-r--r--  1 chenlai  staff   384B Mar 15 22:03 libtorch.a
-rw-r--r--  1 chenlai  staff    51M Mar 15 22:03 libtorch_cpu.a
```

### **Master (Baseline):**

**Android**
x86:
`SELECTED_OP_LIST=/Users/chenlai/Documents/pytorch/experiemnt/deeplabv3_scripted.yaml BUILD_LITE_INTERPRETER=1 ./scripts/build_pytorch_android.sh x86`

libpytorch_jni_lite.so -- 3.8 MB

armeabi-v7a
`SELECTED_OP_LIST=/Users/chenlai/Documents/pytorch/experiemnt/deeplabv3_scripted.yaml BUILD_LITE_INTERPRETER=1 ./scripts/build_pytorch_android.sh armeabi-v7a`
libpytorch_jni_lite.so -- 2.8 MB

**iOS:**
```
(base) chenlai@chenlai-mp install % du -sh *
 15M	include
 58M	lib
2.8M	share
```

```
(base) chenlai@chenlai-mp lib % ls -lh
total 119600
-rw-r--r--  1 chenlai  staff   3.2M Mar  4 23:16 libXNNPACK.a
-rw-r--r--  1 chenlai  staff   910K Mar  4 23:16 libc10.a
-rw-r--r--  1 chenlai  staff   4.6K Mar  4 23:16 libclog.a
-rw-r--r--  1 chenlai  staff    42K Mar  4 23:16 libcpuinfo.a
-rw-r--r--  1 chenlai  staff   1.5M Mar  4 23:16 libeigen_blas.a
-rw-r--r--  1 chenlai  staff    44K Mar  4 23:16 libpthreadpool.a
-rw-r--r--  1 chenlai  staff   166K Mar  4 23:16 libpytorch_qnnpack.a
-rw-r--r--  1 chenlai  staff   384B Mar  4 23:16 libtorch.a
-rw-r--r--  1 chenlai  staff    52M Mar  4 23:16 libtorch_cpu.a
```

Test Plan: Imported from OSS

Reviewed By: dhruvbird

Differential Revision: D27074814

Pulled By: cccclai

fbshipit-source-id: 762b5ad5b87b6a262444392fd089249c4837ba18
2021-03-25 23:57:48 -07:00
1772e26f63 [PyTorch] Move selected_mobile_ops.h codegen function to tools (#53786)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53786

To generate `selected_mobile_ops.h` in OSS, move the header-file codegen functions to `tools/lite_interpreter/gen_selected_mobile_ops_header.py`, so OSS can reuse these functions.
ghstack-source-id: 123754437

Test Plan:
```
buck test //xplat/caffe2:supported_mobile_models_test
```

```
buck run //xplat/caffe2:gen_oplist -- --model_file_list_path @/data/users/chenlai/data/pytorch/oplist_folder/file_list_path.macro  --allow_include_all_overloads --output_dir /home/chenlai/local/data/pytorch/oplist_folder
```

`file_list_path.macro` content is:
```
chenlai@devvm2090:~/fbsource(45a9b7888)$ cat /data/users/chenlai/data/pytorch/oplist_folder/file_list_path.macro
/data/users/chenlai/fbsource/buck-out/gen/aab7ed39/xplat/caffe2/supported_mobile_models_test_op_list/model_operators.yaml
```

In output folder `/home/chenlai/local/data/pytorch/oplist_folder`, these files are generated:
```
selected_mobile_ops.h  selected_operators.yaml  SupportedMobileModelsRegistration.cpp
```

The generated files are the same as before.

{P282056731}

{P282055046}

Reviewed By: dhruvbird, iseeyuan

Differential Revision: D26907868

fbshipit-source-id: 9ba786f9c5674a72cad237ae7baadbe4642c51d5
2021-03-12 00:13:03 -08:00