47 Commits

Author SHA1 Message Date
cyy
df458be4e5 [4/N] Apply py39 ruff and pyupgrade fixes (#143257)
`torch/fx/passes/annotate_getitem_nodes.py` was changed to support the new type hinting annotations.
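As an illustration, this is the kind of rewrite a py39-target ruff/pyupgrade pass applies (a hedged sketch; the function and names here are hypothetical, not from the changed file):

```python
# Hypothetical before/after for the PEP 585 changes a py39 target enables:
# typing.List / typing.Dict aliases become builtin generics.

# Before (pre-3.9 style):
#   from typing import Dict, List
#   def bucket(items: List[str]) -> Dict[str, List[str]]: ...

# After (3.9+ style):
def bucket(items: list[str]) -> dict[str, list[str]]:
    out: dict[str, list[str]] = {}
    for item in items:
        out.setdefault(item[0], []).append(item)
    return out

print(bucket(["ant", "ape", "bee"]))  # {'a': ['ant', 'ape'], 'b': ['bee']}
```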

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143257
Approved by: https://github.com/justinchuby, https://github.com/albanD
2025-01-04 10:47:51 +00:00
221350e3a4 Add None return type to init -- tests (#132352)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132352
Approved by: https://github.com/ezyang
ghstack dependencies: #132335, #132351
2024-08-01 15:44:51 +00:00
8979412442 Enable ufmt format on test files (#126845)
Fixes some of the files listed in #123062

Run lintrunner on files:

test/test_nnapi.py,
test/test_numba_integration.py,
test/test_numpy_interop.py,
test/test_openmp.py,
test/test_optim.py

```bash
$ lintrunner -a --take UFMT --all-files
ok No lint issues.
Successfully applied all patches.
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126845
Approved by: https://github.com/ezyang
2024-05-28 01:42:07 +00:00
c5fafe9f48 [BE]: TRY002 - Ban raising vanilla exceptions (#124570)
Adds a ruff lint rule to ban raising raw exceptions. Most of these should, at the very least, be runtime errors, value errors, type errors, or some other more specific error. There are hundreds of instances of these bad exception types already in the codebase, so I have noqa'd most of them. Hopefully this error code will get committers to rethink what exception type they should raise when they submit a PR.

I also encourage people to gradually go and fix all the existing noqas that have been added, so they can be removed over time and our exception typing can be improved.
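A minimal sketch of what the rule flags (illustrative names, not code from the PR):

```python
# TRY002 flags raising the vanilla Exception class; a more specific
# built-in (or custom) exception type carries more meaning for callers.

def set_rate_bad(rate: float) -> float:
    if rate < 0:
        raise Exception("rate must be non-negative")  # flagged by TRY002
    return rate

def set_rate_good(rate: float) -> float:
    if rate < 0:
        raise ValueError("rate must be non-negative")  # specific type: OK
    return rate

try:
    set_rate_good(-1.0)
except ValueError as e:
    print(f"caught: {e}")  # caught: rate must be non-negative
```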

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124570
Approved by: https://github.com/ezyang
2024-04-21 22:26:40 +00:00
73e1455327 [BE] Enable ruff's UP rules and autoformat test/ (#105434)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105434
Approved by: https://github.com/albanD
2023-07-19 20:36:06 +00:00
380ccfd442 Revert "Added round_with_scale_factor arg to ATen (#97868)"
This reverts commit aa99c5b4eda345f792687c490e72c8575110977a.

Reverted https://github.com/pytorch/pytorch/pull/97868 on behalf of https://github.com/osalpekar due to Caused breakages in the glow compiler - see [D45374622](https://www.internalfb.com/diff/D45374622) for more details
2023-04-28 20:47:00 +00:00
aa99c5b4ed Added round_with_scale_factor arg to ATen (#97868)
Addresses #62396 following the strategy described in https://github.com/pytorch/pytorch/pull/64983#issuecomment-1026177629.

The output size is fixed to match OpenCV, scikit-image, and SciPy when a scale factor is specified. This is done on the ATen side only, due to JIT forward compatibility.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97868
Approved by: https://github.com/lezcano, https://github.com/mikaylagawarecki
2023-04-26 18:48:37 +00:00
046e88a291 [BE] [3/3] Rewrite super() calls in test (#94592)
Rewrites calls to the Python built-in `super()`. Only non-semantic changes are applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```
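For the common case, the rewrite replaces the legacy two-argument form with the zero-argument form (a sketch; the class names are illustrative):

```python
class Base:
    def __init__(self):
        self.ready = True

# Before the rewrite (Python 2 compatible form):
class OldStyle(Base):
    def __init__(self):
        super(OldStyle, self).__init__()

# After the rewrite (zero-argument form, identical behavior in Python 3):
class NewStyle(Base):
    def __init__(self):
        super().__init__()

print(OldStyle().ready, NewStyle().ready)  # True True
```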

Some cases that change the semantics should be kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94592
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-12 22:20:53 +00:00
zaf
c92e5ac95b [quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: In order to avoid the cluttering of the `torch.nn` namespace
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`
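The backward-compatibility side of such a move can be sketched with the standard module-aliasing pattern (generic names; this is not the actual PyTorch shim code):

```python
import sys
import types

# Register the "new" module under its canonical (post-migration) name.
new_mod = types.ModuleType("ao_pkg.quantized")
new_mod.Linear = type("Linear", (), {})
sys.modules["ao_pkg.quantized"] = new_mod

# Alias the legacy name to the very same module object, so existing
# imports keep resolving to the migrated code.
sys.modules["legacy_pkg.quantized"] = new_mod

assert sys.modules["legacy_pkg.quantized"].Linear is new_mod.Linear
print("legacy path resolves to migrated module")
```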

The majority of the files are simply moved to the new location.
However, specific files need to be double checked:

- Documentation @vkuzo
  - docs/source/conf.py
  - docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
  - torch/csrc/jit/passes/hoist_conv_packed_params.cpp
  - torch/csrc/jit/passes/quantization/helper.h
  - torch/csrc/jit/serialization/import_source.cpp

Differential Revision: [D38926012](https://our.internmc.facebook.com/intern/diff/D38926012/)

Differential Revision: [D38926012](https://our.internmc.facebook.com/intern/diff/D38926012)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-25 16:50:33 +00:00
6a9c02339d Revert "[quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)"
This reverts commit 432f037498e3f470f1f6d2a5cc7c6ae8eb4fc870.

Reverted https://github.com/pytorch/pytorch/pull/78713 on behalf of https://github.com/janeyx99 due to Reverting for breaking (trunk-only) ios build
2022-08-22 07:32:37 +00:00
zaf
432f037498 [quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: In order to avoid the cluttering of the `torch.nn` namespace
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are simply moved to the new location.
However, specific files need to be double checked:

- Documentation @vkuzo
  - docs/source/conf.py
  - docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
  - torch/csrc/jit/passes/hoist_conv_packed_params.cpp
  - torch/csrc/jit/passes/quantization/helper.h
  - torch/csrc/jit/serialization/import_source.cpp

Differential Revision: [D36860145](https://our.internmc.facebook.com/intern/diff/D36860145/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-22 01:38:55 +00:00
cafdc52cdc Skip TestNNAPI tests if QNNPACK is not supported (#82882)
### Description
As the test case unconditionally uses QNNPACK, skip the test if it isn't supported, as other similar tests do.

Fixes https://github.com/pytorch/pytorch/issues/82810

### Testing
Ran `test_nnapi.py` on PPC

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82882
Approved by: https://github.com/albanD
2022-08-05 16:19:06 +00:00
a70297e7cb NNAPI: quant logistic fix (#70847)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70847

NNAPI needs a fixed zero point and scale for sigmoid (logistic)
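Per the NNAPI operand documentation, a quantized LOGISTIC output must use scale 1/256 and zero point 0. A sketch of the corresponding check (the function name is illustrative, not the converter's actual code):

```python
# NNAPI fixes the output quantization of LOGISTIC: for quint8 outputs,
# scale must be 1/256 and zero_point 0, so sigmoid's [0, 1) range maps
# exactly onto the 256 quantization steps.
REQUIRED_SCALE = 1.0 / 256.0
REQUIRED_ZERO_POINT = 0

def check_logistic_qparams(scale: float, zero_point: int) -> None:
    # Illustrative check, mirroring what the converter fix enforces.
    if scale != REQUIRED_SCALE or zero_point != REQUIRED_ZERO_POINT:
        raise ValueError("logistic output must use scale=1/256, zero_point=0")

check_logistic_qparams(1 / 256, 0)  # OK
print("qparams valid")
```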
ghstack-source-id: 146555935

Test Plan: LIBNEURALNETWORKS_PATH="/path/to/libneuralnetworks.so" pytest test/test_nnapi.py

Reviewed By: dreiss

Differential Revision: D33237918

fbshipit-source-id: 05ef3a81bf1589ad44b599a19bce4066531c432b
2022-01-07 13:36:33 -08:00
a4a6d056e6 Add ownership to more edge tests (#67859)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/66232

This should be the last immediate task. I anticipate test ownership will change over time, but this is the last big thing needed to close it out.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67859

Reviewed By: soulitzer

Differential Revision: D32210534

Pulled By: janeyx99

fbshipit-source-id: 7fd835d87d9d35d49ec49de1fcfa29b085133e99
2021-11-05 11:01:16 -07:00
6259601c8a Set test owners for tests with unknown owners (#67552)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67552

Reviewed By: jbschlosser

Differential Revision: D32028248

Pulled By: janeyx99

fbshipit-source-id: a006f7026288b7126dba58b31cac28e10ce0fed6
2021-10-29 12:42:01 -07:00
227e37dd39 pytorch quantization ao migration phase 2: caffe2/test (#65832)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65832

Renames `torch.quantization` to `torch.ao.quantization` in `caffe2/test`
folder.

```
find caffe2/test/ -type f -name "*.py" -print0 | xargs -0 sed -i "s/torch\.quantization/torch.ao.quantization/g"
HG: manually revert the files testing this migration
hg revert caffe2/test/quantization/ao_migration/common.py
hg revert caffe2/test/quantization/ao_migration/test_ao_migration.py
```

Test Plan: CI

Reviewed By: z-a-f

Differential Revision: D31275754

fbshipit-source-id: 4ed54a74525634feb0f47a26d071102e19c30049
2021-10-01 06:26:30 -07:00
1de8976e85 Add quantized::convtranspose2d (#63914)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63914

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D30531889

fbshipit-source-id: a65e389da2722efbc62e3fe1edf503732326350d
2021-09-24 17:07:29 -07:00
ab5eb56983 add qmul (#63913)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63913

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D30531890

fbshipit-source-id: 29d88cc61bd1e328cc7ae7a91a2f8d4819803c8d
2021-09-24 17:06:17 -07:00
130549d61b Fix typo in NNAPI tests (#63797)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63797

nnapi memory format test has a typo

Test Plan:
pytest test/test_nnapi.py::TestNNAPI

Imported from OSS

Reviewed By: Amyh11325

Differential Revision: D30495473

fbshipit-source-id: 8edad7c01a080847a64a2797e077ec4d6077552a
2021-08-23 16:34:24 -07:00
2d58f3f56d NNAPI: Support const values in binary ops
Summary:
The NNAPI converter previously failed when a binary op combined a const value with a tensor.
Code suggestions from dreiss

Test Plan:
pytest test/test_nnapi.py::TestNNAPI::test_pointwise_binary

Imported from OSS

Reviewed By: anshuljain1

Differential Revision: D28893881

fbshipit-source-id: 59240373fb03c6fdafa4cb2fa4d8408dd20092f6
2021-08-20 21:10:26 -07:00
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
8e71f48f0a Handle simple NNAPI flatten NHWC case (#61796)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61796

We can easily handle NNAPI conversion for NHWC inputs
that have a single channel, or whose H and W are both 1.

Test Plan:
pytest test/test_nnapi.py::TestNNAPI::test_flatten

Imported from OSS

Reviewed By: saketh-are

Differential Revision: D29827735

fbshipit-source-id: 65dee4b42fceef1b032bf5dd1c4cc6e020d01e14
2021-07-26 10:59:04 -07:00
046272f3e5 [6/N] Nnapi Backend Delegate: Comprehensive OSS Tests (#61782)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61782

This PR depends on https://github.com/pytorch/pytorch/pull/61787

### Summary:
Added more comprehensive tests for Android NNAPI delegate.
Previously, there was only one basic test for lowering a PReLU module with the NNAPI delegate. Now, more tests are inherited from `test_nnapi.py`, the file for testing NNAPI conversion and execution without the delegate.

**test_backend_nnapi.py**
Test file for Android NNAPI delegate.
- `TestNnapiBackend` class inherits tests from `test_nnapi.py` and overrides the model conversion to use the delegate API.
- Includes an extra test for passing input arguments as Tensors and Tensor Lists.
- Has extra setup for loading the NNAPI delegate library and changing the default dtype from float64 to float32 (dtype is typically float32 by default, but not in delegate backend unit tests)

**test_nnapi.py**
Test file for Android NNAPI without the delegate.
- Some code was refactored to allow override of only the NNAPI conversion call.
- An extra function was added to allow the NNAPI delegate unit test to turn off the model execution step. Once the NNAPI delegate's execution implementation is complete, this may no longer be necessary.

### Test Plan:
I ran `python test/test_jit.py TestNnapiBackend` and `python test/test_nnapi.py` to run both test files.

Test Plan: Imported from OSS

Reviewed By: raziel, iseeyuan

Differential Revision: D29772005

fbshipit-source-id: 5d14067a4f6081835699b87a2ece5bd6bed00c6b
2021-07-23 17:04:07 -07:00
b42cc19c88 Fix broken assertion error test in NNAPI convertor (#61586)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61586

Error message was changed

Test Plan:
pytest test/test_nnapi.py

Imported from OSS

Reviewed By: iseeyuan

Differential Revision: D29682319

fbshipit-source-id: 52a96d79633ee9aae1de2056c7583311edc92353
2021-07-13 11:46:32 -07:00
ae65f63971 Make nnapi flatten converter accept flex inputs (#61024)
Summary:
As title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61024

Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_flatten

Reviewed By: anshuljain1

Differential Revision: D29480748

fbshipit-source-id: c334b09600a64d3e552cec843d6da3de28e7d27c
2021-07-09 15:27:02 -07:00
76c0f223d3 Make nnapi cat converter accept flex inputs
Summary: As title

Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_cat

Reviewed By: anshuljain1

Differential Revision: D29480747

fbshipit-source-id: 161803054ff1a4c2c750fc30a5f0fc6d8a24b2c9
2021-07-09 14:27:53 -07:00
9e81d3d869 Make NNAPI linear converter accept flex inputs (#61022)
Summary:
As title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61022

Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_linear

Reviewed By: anshuljain1

Differential Revision: D29480749

fbshipit-source-id: 35975861740298c9e16f866c939e7ee3c2151710
2021-07-09 14:27:51 -07:00
9e533a62f6 Make conv2d nnapi converter accept flexible batch (#61021)
Summary:
Same as title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61021

Test Plan: pytest test/test_nnapi.py::TestNNAPI

Reviewed By: anshuljain1

Differential Revision: D29480746

fbshipit-source-id: 7217c8f3a811db8c3c373f3e7ca31caf9502ef22
2021-07-09 10:28:10 -07:00
8bd3e52e00 Add conv2d transpose NNAPI converter (#59529)
Summary:
* Conv2d transpose support
* Quantize WIP

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59529

Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_conv2d_transpose

Reviewed By: anshuljain1

Differential Revision: D28926335

fbshipit-source-id: 8f90182f96cee0a13c4f38331d421e1e8ac618de
2021-07-09 09:29:20 -07:00
7b6ddb6793 [nnapi] add log_softmax (#61378)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61378

Test Plan: Imported from OSS

Reviewed By: axitkhurana

Differential Revision: D29597355

Pulled By: IvanKobzarev

fbshipit-source-id: 55124749f8eeffa2b2713f7cffd5ccf965561de1
2021-07-07 18:28:39 -07:00
cf285d8eea Add aten::slice NNAPI converter (#59364)
Summary:
Add support for aten::slice op in the NNAPI model converter

* If start = 0; end = max -> identity
* Flexible shapes can be passed through
* Flexible shapes can't be sliced over
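The identity case above can be sketched as a simple predicate (an illustrative helper, not the converter's actual code):

```python
import sys

def is_identity_slice(start, end, step, dim_size):
    # A slice covering the whole dimension is a no-op and can be elided.
    return step == 1 and start == 0 and (end is None or end >= dim_size)

print(is_identity_slice(0, sys.maxsize, 1, 224))  # True
print(is_identity_slice(1, sys.maxsize, 1, 224))  # False
```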

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59364

Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_slice

Reviewed By: anshuljain1

Differential Revision: D28881039

fbshipit-source-id: 3c1c630ff27b5bba6eda403d87570c61d43ae90e
2021-07-07 12:40:47 -07:00
d26372794a Add aten::detach NNAPI converter (#58543)
Summary:
* Add support for aten::detach op in the NNAPI model converter as a no-op
* Also add flexible op support for add_pointwise_simple_unary_op

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58543

Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_detatch

Reviewed By: anshuljain1

Differential Revision: D28531942

fbshipit-source-id: 4387dbbbadd8ce6b690841f3a903e68a380b849d
2021-07-07 12:40:46 -07:00
0be228dd5f Add aten::flatten NNAPI converter (#60885)
Summary:
Add support for the aten::flatten op in the NNAPI model converter. Load-time
variable size support is not included, as shapes are passed as inputs to the NNAPI op.

Runtime variable size support is to be added soon.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60885

Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_flatten

Reviewed By: anshuljain1

Differential Revision: D29451725

fbshipit-source-id: 8902745f7758c8cc88ad4b4ce02b8301ff894bd4
2021-07-07 12:40:44 -07:00
b297f65b66 Add aten::div NNAPI converter (#58541)
Summary:
Add support for aten::div op in the NNAPI model converter. Add variable
size input test as well.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58541

Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_div

Reviewed By: anshuljain1

Differential Revision: D28531943

fbshipit-source-id: e96342146f6de216f7b88443618edfc54963747c
2021-07-07 12:40:42 -07:00
eab18a9a40 Add aten::to NNAPI converter (#58540)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58540

Add support for aten::to op in the NNAPI model converter for simple
cases like to("cpu"), to("gpu")

Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_to

Reviewed By: anshuljain1

Differential Revision: D28531941

fbshipit-source-id: 0c934f7aceaff2669307c3426efe32046d8c44f3
2021-07-07 12:40:41 -07:00
14d604a13e Add aten::softmax NNAPI converter (#58539)
Summary:
Add support for aten::softmax op in the NNAPI model converter with
flexible size

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58539

Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_softmax

Reviewed By: anshuljain1

Differential Revision: D28531946

fbshipit-source-id: 8633f3e3f7f52795f9866ff16ad0867ea36a19e8
2021-07-07 12:39:31 -07:00
369802a504 Add aten::avgpool2d NNAPI converter (#58538)
Summary:
Add support for aten::avgpool2d op in the NNAPI model converter with
variable size support

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58538

Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_avgpool2d

Reviewed By: anshuljain1

Differential Revision: D28531944

fbshipit-source-id: 43ff8c9389365698c282f204042b49c7ec84d824
2021-07-01 14:07:14 -07:00
c4bb6a5781 NNAPI: flex size support for upsample_nearest2d op (#57563)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57563

Add flexible size support for upsample_nearest2d op in nnapi model conversion

Test Plan:
pytest test/test_nnapi.py

Imported from OSS

Reviewed By: dreiss

Differential Revision: D28200847

fbshipit-source-id: 901fe3f6e68e4c16ece730f3ffa68dc88c6ed6c3
2021-05-05 13:54:43 -07:00
4c609a9782 NNAPI: Add qadd flexible size support (#57562)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57562

Add flexible size support for qadd op in nnapi model conversion

Test Plan:
pytest test/test_nnapi.py

Imported from OSS

Reviewed By: dreiss

Differential Revision: D28200849

fbshipit-source-id: d5b2ea8e9eb8ae405ff2c960f7549cef60bc0991
2021-05-05 13:54:41 -07:00
28cd04ea64 NNAPI: add flexible size support for conv2d (#57561)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57561

Add flexible size support for conv2d op in nnapi model conversion

Test Plan:
pytest test/test_nnapi.py

Imported from OSS

Reviewed By: dreiss

Differential Revision: D28200848

fbshipit-source-id: d94ccf48a3d8453aa8e96c7cac02948c4cd870cc
2021-05-05 13:53:33 -07:00
e3900d2ba5 Add lint for unqualified noqa (#56272)
Summary:
As this diff shows, currently there are a couple hundred instances of raw `noqa` in the codebase, which just ignore all errors on a given line. That isn't great, so this PR changes all existing instances of that antipattern to qualify the `noqa` with respect to a specific error code, and adds a lint to prevent more of this from happening in the future.

Interestingly, some of the examples the `noqa` lint catches are genuine attempts to qualify the `noqa` with a specific error code, such as these two:
```
test/jit/test_misc.py:27:            print(f"{hello + ' ' + test}, I'm a {test}") # noqa E999
test/jit/test_misc.py:28:            print(f"format blank") # noqa F541
```
However, those are still wrong because they are [missing a colon](https://flake8.pycqa.org/en/3.9.1/user/violations.html#in-line-ignoring-errors), which actually causes the error code to be completely ignored:

- If you change them to anything else, the warnings will still be suppressed.
- If you add the necessary colons then it is revealed that `E261` was also being suppressed, unintentionally:
  ```
  test/jit/test_misc.py:27:57: E261 at least two spaces before inline comment
  test/jit/test_misc.py:28:35: E261 at least two spaces before inline comment
  ```

I did try using [flake8-noqa](https://pypi.org/project/flake8-noqa/) instead of a custom `git grep` lint, but it didn't seem to work. This PR is definitely missing some of the functionality that flake8-noqa is supposed to provide, though, so if someone can figure out how to use it, we should do that instead.
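The colon-sensitivity described above can be demonstrated with a small grep-style check (a simplified sketch, not the exact lint pattern the PR adds):

```python
import re

# flake8 only scopes suppression when the comment is "# noqa: CODE";
# "# noqa" or "# noqa E999" (missing colon) suppress *every* code.
BARE_NOQA = re.compile(r"#\s*noqa(?!\s*:)", re.IGNORECASE)

lines = [
    "x = f()  # noqa",         # bare: all codes suppressed
    "y = g()  # noqa E501",    # missing colon: still all codes suppressed
    "z = h()  # noqa: E501",   # qualified: only E501 suppressed
]
flagged = [ln for ln in lines if BARE_NOQA.search(ln)]
print(len(flagged))  # 2
```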

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56272

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI run (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2365189927

Reviewed By: janeyx99

Differential Revision: D27830127

Pulled By: samestep

fbshipit-source-id: d6dcf4f945ebd18cd76c46a07f3b408296864fcb
2021-04-19 13:16:18 -07:00
da7a27b847 [NNAPI] Initial flexible size support (#54701)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54701

We need NNAPI models to support inputs (and, by extension, intermediate
values and outputs) whose shape is only determined at load time.  For
example, a vision model's input shape might depend on the aspect
ratio of the device camera.  While NNAPI has full support for variable
shapes (by setting components of the operand shape to 0), the guidance
we have received is that vendor-provided drivers for real hardware are
not able to support this efficiently.  Therefore, we take a hybrid
approach where shapes are calculated at model load time to
semi-dynamically construct our NNAPI model.  While this doesn't let us
have truly dynamic input shapes, it does allow us to ensure that the
vendor driver only sees fixed shapes, so we get maximum performance.

In this initial commit, only PReLU supports dynamic shapes.  Additional
operators will be converted in separate diffs.

- In order to convert a flexible-shape model, the user supplies inputs
  with shapes containing dimensions of size 0 for the flexible
  dimensions.
- During conversion, we generate code to compute the shapes of all
  intermediates and outputs as a function of the input shapes.
- We no longer run the input model to produce the output templates.
  Instead, we generate code to return properly-sized templates, given
  the input shapes.
- All of this generated code goes into a "ShapeComputeModule" that is
  used by the NnapiModule during initialization.
- The ShapeComputeModule mutates the serialized model to fill in the
  computed sizes for each operand.  This requires us to change the dtype
  for the serialized model to int32, but this should be fine because
  everything in it is already 4-byte aligned.
- NnapiInitWrapper no longer exists.  Instead, initialization is
  performed on the first run, based on the real arguments.  We plan to
  provide an API for doing eager initialization.
- Unit test updated to allow separate arguments to be given for trace,
  conversion, and inference.  A flexible-shape test case was added for
  PReLU.
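The shape-resolution convention above (a dimension of size 0 marks a flexible dim, filled in from the real inputs at load time) can be sketched as:

```python
def resolve_shape(template, actual):
    # template: conversion-time shape, with 0 marking flexible dims
    # actual: the concrete input shape observed at load / first run
    return tuple(a if t == 0 else t for t, a in zip(template, actual))

# Illustrative: a vision input whose H and W are only known at load time.
print(resolve_shape((1, 3, 0, 0), (1, 3, 224, 224)))  # (1, 3, 224, 224)
```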

Test Plan: Unit test

Reviewed By: axitkhurana

Differential Revision: D27536796

Pulled By: dreiss

fbshipit-source-id: 105585f247987b1e6ec6946a6fe44401237cb0a0
2021-04-06 13:49:43 -07:00
1f1d26137b [NNAPI] Use code generation to better support list input/output (#54697)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54697

Previously, models being converted to NNAPI were expected to take inputs
as separate arguments, but the generated NNAPI model could only take
multiple inputs as a list.  Now the generated model always takes inputs
(single or multiple) as separate tensor arguments.

Previously, models being converted to NNAPI were expected to return
outputs as a single tensor or tuple of tensors, but the generated NNAPI
model would return multiple outputs as a list. Now the generated model
returns a tuple as well (or single tensor).

Internally, we decided what output format to use (single tensor or tuple)
based on the conversion process, rather than by running the model.

Test Plan: Unit test

Reviewed By: axitkhurana

Differential Revision: D27536790

Pulled By: dreiss

fbshipit-source-id: c0f93c85d450757e568985947cc2f32043795859
2021-04-06 13:49:33 -07:00
476c597ae6 [NNAPI] Handle binary ops combining NHWC+NCHW in some cases (#48812)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48812

This came up in a squeeze-and-excitation model.  Starting with an NHWC
tensor T, we perform a mean operation across H and W, giving an NxC
tensor, which (after some fully connected layers) is reshaped to
NxCx1x1, then multiplied with T.  To handle this, we detect the specific
case of a binary op with one NHWC input and one contiguous input with
H,W == 1,1 and allow the op to be applied (after transposing the
contiguous input).
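At the shape level, the squeeze-and-excitation pattern described above looks like this (a pure-Python sketch of the shapes only; names are illustrative):

```python
def mean_hw(shape):
    # NxCxHxW -> NxC (mean across H and W)
    n, c, _h, _w = shape
    return (n, c)

def excite(shape):
    # NxC -> NxCx1x1 (reshape after the fully connected layers)
    n, c = shape
    return (n, c, 1, 1)

t = (2, 16, 8, 8)        # the NHWC tensor T from the description
e = excite(mean_hw(t))   # contiguous input with H, W == 1, 1
# Multiplying t * e broadcasts cleanly because e's H and W are 1 -- the
# exact case the converter now special-cases (after transposing e).
print(e)  # (2, 16, 1, 1)
```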

Test Plan: Unit test.

Reviewed By: axitkhurana

Differential Revision: D25317939

Pulled By: dreiss

fbshipit-source-id: b4c17ab3b874d1a7defa04664010ba82115f1c20
2021-04-06 13:49:25 -07:00
b057d27b0b [NNAPI] Add support for unsqueeze, cat, and mean (#48811)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48811

Test Plan: Unit tests.

Reviewed By: axitkhurana

Differential Revision: D25317936

Pulled By: dreiss

fbshipit-source-id: 9b3a0a75b8157ae35ac13d52293a67800bad0ded
2021-04-06 13:49:22 -07:00
3802edd9ab [NNAPI] Add unit test (#47521)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47521

This mostly goes op-by-op.  We construct a simple model containing the
op (in various configurations for complex ops) and verify that it can be
converted to NNAPI.  Additionally, if libneuralnetworks is available, we
also run both the eager model and NNAPI model and ensure that their
outputs are equal (allowing for some slight numerical differences).

serializer.py has 94% coverage.  And most of the uncovered lines are
error cases, defensive code, or dead code that I might want to use
later.  prepare.py has 56% coverage, but probably closer to 75-80% if we
could collect coverage from TorchScript.

Test Plan:
Ran tests with NNAPI available.  Made various tweaks to the codebase to
make sure tests properly detected bugs.

Reviewed By: axitkhurana

Differential Revision: D25317940

Pulled By: dreiss

fbshipit-source-id: 709125af820440bfa7a73bab3304395f115f717f
2021-04-06 13:49:19 -07:00