Fixes some files in #123062
Run lintrunner on files:
test/test_nnapi.py,
test/test_numba_integration.py,
test/test_numpy_interop.py,
test/test_openmp.py,
test/test_optim.py
```bash
$ lintrunner -a --take UFMT --all-files
ok No lint issues.
Successfully applied all patches.
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126845
Approved by: https://github.com/ezyang
Adds a ruff lint rule to ban raising raw exceptions. Most of these should at the very least be a RuntimeError, ValueError, TypeError, or some other more specific error. There are hundreds of instances of these bad exception types already in the codebase, so I have noqa'd most of them. Hopefully this error code will get committers to rethink what exception type they should raise when they submit a PR.
I also encourage people to gradually fix the existing noqas that have been added, so they can be removed over time and our exception typing can be improved.
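For illustration (the exact rule code is an assumption; ruff's TRY002 is the rule that flags raising a bare `Exception`, and the function name here is hypothetical), the banned pattern and its fix look like:
```python
def load_checkpoint(path):
    if not path:
        # Before: `raise Exception("path is empty")`, which the new rule
        # flags (or which must be silenced with a noqa comment).
        # After: raise a specific exception type instead.
        raise ValueError("path is empty")
```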
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124570
Approved by: https://github.com/ezyang
Summary:
Fixes https://github.com/pytorch/pytorch/issues/66232
This should be the last immediate task. I anticipate test ownership will change over time, but this is the last big thing needed to close it out.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67859
Reviewed By: soulitzer
Differential Revision: D32210534
Pulled By: janeyx99
fbshipit-source-id: 7fd835d87d9d35d49ec49de1fcfa29b085133e99
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63797
Fix a typo in the NNAPI memory format test.
Test Plan:
pytest test/test_nnapi.py::TestNNAPI
Imported from OSS
Reviewed By: Amyh11325
Differential Revision: D30495473
fbshipit-source-id: 8edad7c01a080847a64a2797e077ec4d6077552a
Summary:
Previously, the NNAPI converter failed on pointwise binary ops with one constant value and one tensor.
Code suggestions from dreiss
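A minimal sketch (module name hypothetical) of the previously failing case:
```python
import torch

class AddConst(torch.nn.Module):
    def forward(self, x):
        # One constant operand plus one tensor operand, the pointwise
        # binary case the converter used to reject.
        return x + 0.25
```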
Test Plan:
pytest test/test_nnapi.py::TestNNAPI::test_pointwise_binary
Imported from OSS
Reviewed By: anshuljain1
Differential Revision: D28893881
fbshipit-source-id: 59240373fb03c6fdafa4cb2fa4d8408dd20092f6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61796
We can easily handle NNAPI conversion for NHWC inputs that have 1 channel, or whose H and W are both 1.
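For illustration, the newly handled shapes (values are arbitrary):
```python
import torch

# NHWC inputs where C == 1, or where H == W == 1: in both cases the
# NHWC<->NCHW permutation cannot reorder any data, so conversion is easy.
single_channel = torch.zeros(1, 1, 224, 224)  # N, C=1, H, W
unit_spatial = torch.zeros(1, 128, 1, 1)      # N, C, H=1, W=1
```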
Test Plan:
pytest test/test_nnapi.py::TestNNAPI::test_flatten
Imported from OSS
Reviewed By: saketh-are
Differential Revision: D29827735
fbshipit-source-id: 65dee4b42fceef1b032bf5dd1c4cc6e020d01e14
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61782
This PR depends on https://github.com/pytorch/pytorch/pull/61787
### Summary:
Added more comprehensive tests for the Android NNAPI delegate.
Previously, there was only one basic test for lowering a PReLU module with the NNAPI delegate. Now, more tests are inherited from `test_nnapi.py`, the file for testing NNAPI conversion and execution without the delegate.
**test_backend_nnapi.py**
Test file for Android NNAPI delegate.
- `TestNnapiBackend` class inherits tests from `test_nnapi.py` and overrides the model conversion to use the delegate API.
- Includes an extra test for passing input arguments as Tensors and Tensor Lists.
- Has extra setup for loading the NNAPI delegate library and changing the default dtype from float64 to float32 (dtype is typically float32 by default, but not in delegate backend unit tests)
**test_nnapi.py**
Test file for Android NNAPI without the delegate.
- Some code was refactored to allow overriding only the NNAPI conversion call (see the sketch after this list).
- An extra function was added to allow the NNAPI delegate unit test to turn off the model execution step. Once the NNAPI delegate's execution implementation is complete, this may no longer be necessary.
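A minimal sketch of that override pattern (the hook name and compile-spec layout are assumptions based on the description above):
```python
import torch
from test_nnapi import TestNNAPI  # assumed importable from the test directory

class TestNnapiBackend(TestNNAPI):
    def call_lowering_to_nnapi(self, traced_module, args):
        # Override only the conversion step: lower through the delegate
        # API instead of calling the NNAPI converter directly.
        compile_spec = {"forward": {"inputs": args}}
        return torch._C._jit_to_backend("nnapi", traced_module, compile_spec)
```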
### Test Plan:
I ran `python test/test_jit.py TestNnapiBackend` and `python test/test_nnapi.py` to run both test files.
Test Plan: Imported from OSS
Reviewed By: raziel, iseeyuan
Differential Revision: D29772005
fbshipit-source-id: 5d14067a4f6081835699b87a2ece5bd6bed00c6b
Summary: As title
Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_cat
Reviewed By: anshuljain1
Differential Revision: D29480747
fbshipit-source-id: 161803054ff1a4c2c750fc30a5f0fc6d8a24b2c9
Summary:
Same as title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61021
Test Plan: pytest test/test_nnapi.py::TestNNAPI
Reviewed By: anshuljain1
Differential Revision: D29480746
fbshipit-source-id: 7217c8f3a811db8c3c373f3e7ca31caf9502ef22
Summary:
Add support for aten::slice op in the NNAPI model converter
* If start = 0; end = max -> identity
* Flexible shapes can be passed through
* Flexible shapes can't be sliced over
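A minimal sketch (module name illustrative) exercising the rules above:
```python
import torch

class SliceModule(torch.nn.Module):
    def forward(self, x):
        # Slicing a fixed dimension converts directly; a slice with
        # start = 0 and end = max would lower to an identity instead.
        return x[:, :, 1:4]
```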
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59364
Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_slice
Reviewed By: anshuljain1
Differential Revision: D28881039
fbshipit-source-id: 3c1c630ff27b5bba6eda403d87570c61d43ae90e
Summary:
* Add support for aten::detach op in the NNAPI model converter as a no-op
* Also add flexible op support for add_pointwise_simple_unary_op
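A minimal sketch (module name illustrative) of the detach case:
```python
import torch

class DetachModule(torch.nn.Module):
    def forward(self, x):
        # aten::detach is lowered as a no-op, so the NNAPI model simply
        # passes x through.
        return x.detach()
```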
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58543
Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_detach
Reviewed By: anshuljain1
Differential Revision: D28531942
fbshipit-source-id: 4387dbbbadd8ce6b690841f3a903e68a380b849d
Summary:
Add support for the aten::flatten op in the NNAPI model converter. Startup-time
variable size support isn't included, since the target shape goes in as an
input to the NNAPI op. Runtime variable size support is to follow soon.
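A minimal sketch (module name illustrative) of the supported case:
```python
import torch

class FlattenModule(torch.nn.Module):
    def forward(self, x):
        # start/end dims must be resolvable at conversion time, since the
        # target shape becomes an input to the underlying NNAPI op.
        return torch.flatten(x, start_dim=1, end_dim=-1)
```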
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60885
Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_flatten
Reviewed By: anshuljain1
Differential Revision: D29451725
fbshipit-source-id: 8902745f7758c8cc88ad4b4ce02b8301ff894bd4
Summary:
Add support for the aten::div op in the NNAPI model converter, along with a
variable size input test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58541
Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_div
Reviewed By: anshuljain1
Differential Revision: D28531943
fbshipit-source-id: e96342146f6de216f7b88443618edfc54963747c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58540
Add support for the aten::to op in the NNAPI model converter for simple
cases like `to("cpu")` and `to("gpu")`.
Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_to
Reviewed By: anshuljain1
Differential Revision: D28531941
fbshipit-source-id: 0c934f7aceaff2669307c3426efe32046d8c44f3
Summary:
Add support for aten::softmax op in the NNAPI model converter with
flexible size
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58539
Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_softmax
Reviewed By: anshuljain1
Differential Revision: D28531946
fbshipit-source-id: 8633f3e3f7f52795f9866ff16ad0867ea36a19e8
Summary:
Add support for the aten::avgpool2d op in the NNAPI model converter with
variable size support.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58538
Test Plan: pytest test/test_nnapi.py::TestNNAPI::test_avgpool2d
Reviewed By: anshuljain1
Differential Revision: D28531944
fbshipit-source-id: 43ff8c9389365698c282f204042b49c7ec84d824
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57563
Add flexible size support for upsample_nearest2d op in nnapi model conversion
Test Plan:
pytest test/test_nnapi.py
Imported from OSS
Reviewed By: dreiss
Differential Revision: D28200847
fbshipit-source-id: 901fe3f6e68e4c16ece730f3ffa68dc88c6ed6c3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57562
Add flexible size support for qadd op in nnapi model conversion
Test Plan:
pytest test/test_nnapi.py
Imported from OSS
Reviewed By: dreiss
Differential Revision: D28200849
fbshipit-source-id: d5b2ea8e9eb8ae405ff2c960f7549cef60bc0991
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57561
Add flexible size support for conv2d op in nnapi model conversion
Test Plan:
pytest test/test_nnapi.py
Imported from OSS
Reviewed By: dreiss
Differential Revision: D28200848
fbshipit-source-id: d94ccf48a3d8453aa8e96c7cac02948c4cd870cc
Summary:
As this diff shows, currently there are a couple hundred instances of raw `noqa` in the codebase, which just ignore all errors on a given line. That isn't great, so this PR changes all existing instances of that antipattern to qualify the `noqa` with respect to a specific error code, and adds a lint to prevent more of this from happening in the future.
Interestingly, some of the examples the `noqa` lint catches are genuine attempts to qualify the `noqa` with a specific error code, such as these two:
```
test/jit/test_misc.py:27: print(f"{hello + ' ' + test}, I'm a {test}") # noqa E999
test/jit/test_misc.py:28: print(f"format blank") # noqa F541
```
However, those are still wrong because they are [missing a colon](https://flake8.pycqa.org/en/3.9.1/user/violations.html#in-line-ignoring-errors), which actually causes the error code to be completely ignored:
- If you change them to anything else, the warnings will still be suppressed.
- If you add the necessary colons then it is revealed that `E261` was also being suppressed, unintentionally:
```
test/jit/test_misc.py:27:57: E261 at least two spaces before inline comment
test/jit/test_misc.py:28:35: E261 at least two spaces before inline comment
```
I did try using [flake8-noqa](https://pypi.org/project/flake8-noqa/) instead of a custom `git grep` lint, but it didn't seem to work. This PR is definitely missing some of the functionality that flake8-noqa is supposed to provide, though, so if someone can figure out how to use it, we should do that instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56272
Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI run (before this PR was finished) failed:
- https://github.com/pytorch/pytorch/runs/2365189927
Reviewed By: janeyx99
Differential Revision: D27830127
Pulled By: samestep
fbshipit-source-id: d6dcf4f945ebd18cd76c46a07f3b408296864fcb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54701
We need NNAPI models to support inputs (and, by extension, intermediate
values and outputs) whose shape is only determined at load time. For
example, a vision model's input shape might be dependent on the aspect
ratio of the device camera. While NNAPI has full support for variable
shapes (by setting components of the operand shape to 0), the guidance
we have received is that vendor-provided drivers for real hardware are
not able to support this efficiently. Therefore, we take a hybrid
approach where shapes are calculated at model load time to
semi-dynamically construct our NNAPI model. While this doesn't let us
have truly dynamic input shapes, it does allow us to ensure that the
vendor driver only sees fixed shapes, so we get maximum performance.
In this initial commit, only PReLU supports dynamic shapes. Additional
operators will be converted in separate diffs.
- In order to convert a flexible-shape model, the user supplies inputs
with shapes containing dimensions of size 0 for the flexible
dimensions.
- During conversion, we generate code to compute the shapes of all
intermediates and outputs as a function of the input shapes.
- We no longer run the input model to produce the output templates.
Instead, we generate code to return properly-sized templates, given
the input shapes.
- All of this generated code goes into a "ShapeComputeModule" that is
used by the NnapiModule during initialization.
- The ShapeComputeModule mutates the serialized model to fill in the
computed sizes for each operand. This requires us to change the dtype
for the serialized model to int32, but this should be fine because
everything in it is already 4-byte aligned.
- NnapiInitWrapper no longer exists. Instead, initialization is
performed on the first run, based on the real arguments. We plan to
provide an API for doing eager initialization.
- Unit test updated to allow separate arguments to be given for trace,
conversion, and inference. A flexible-shape test case was added for
PReLU.
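A minimal sketch of that flow (the entry point is assumed to be `torch.backends._nnapi.prepare.convert_model_to_nnapi`; exact usage may differ):
```python
import torch
from torch.backends._nnapi.prepare import convert_model_to_nnapi

# Trace with concrete shapes, then convert with size-0 dimensions marking
# the flexible ones; the generated ShapeComputeModule fills in the real
# sizes when the model is first run with actual inputs.
traced = torch.jit.trace(torch.nn.PReLU(), torch.zeros(1, 16, 8, 8))
flexible_input = torch.zeros(0, 16, 8, 8)  # batch dimension is flexible
nnapi_model = convert_model_to_nnapi(traced, flexible_input)
```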
Test Plan: Unit test
Reviewed By: axitkhurana
Differential Revision: D27536796
Pulled By: dreiss
fbshipit-source-id: 105585f247987b1e6ec6946a6fe44401237cb0a0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54697
Previously, models being converted to NNAPI were expected to take inputs
as separate arguments, but the generated NNAPI model could only take
multiple inputs as a list. Now the generated model always takes inputs
(single or multiple) as separate tensor arguments.
Previously, models being converted to NNAPI were expected to return
outputs as a single tensor or tuple of tensors, but the generated NNAPI
model would return multiple outputs as a list. Now the generated model
returns a tuple as well (or single tensor).
Internally, we decide what output format to use (single tensor or tuple)
based on the conversion process, rather than by running the model.
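A minimal sketch of the new calling convention, using a plain function as a stand-in for the generated model:
```python
import torch

def nnapi_model(x, y):  # stand-in for a converted model, illustration only
    return x + y, x * y

# New: separate tensor arguments in, a tuple (or single tensor) out.
out_sum, out_prod = nnapi_model(torch.ones(2), torch.ones(2))
# Old: inputs passed as one list, outputs returned as a list:
#     outs = nnapi_model([x, y])
```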
Test Plan: Unit test
Reviewed By: axitkhurana
Differential Revision: D27536790
Pulled By: dreiss
fbshipit-source-id: c0f93c85d450757e568985947cc2f32043795859
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48812
This came up in a squeeze-and-excitation model. Starting with an NHWC
tensor T, we perform a mean operation across H and W, giving an NxC
tensor, which (after some fully connected layers) is reshaped to
NxCx1x1, then multiplied with T. To handle this, we detect the specific
case of a binary op with one NHWC input and one contiguous input with
H,W == 1,1 and allow the op to be applied (after transposing the
contiguous input).
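A minimal sketch of that pattern (layer sizes and names illustrative):
```python
import torch

class SEGate(torch.nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.fc = torch.nn.Linear(channels, channels)

    def forward(self, t):                # t: (N, C, H, W), NHWC in memory
        s = t.mean(dim=(2, 3))           # squeeze: -> (N, C)
        s = torch.sigmoid(self.fc(s))    # excitation
        # Reshape to NxCx1x1 and multiply back into the NHWC tensor T,
        # the binary-op case this diff handles.
        return t * s.reshape(s.size(0), s.size(1), 1, 1)
```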
Test Plan: Unit test.
Reviewed By: axitkhurana
Differential Revision: D25317939
Pulled By: dreiss
fbshipit-source-id: b4c17ab3b874d1a7defa04664010ba82115f1c20
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47521
This mostly goes op-by-op. We construct a simple model containing the
op (in various configurations for complex ops) and verify that it can be
converted to NNAPI. Additionally, if libneuralnetworks is available, we
also run both the eager model and NNAPI model and ensure that their
outputs are equal (allowing for some slight numerical differences).
serializer.py has 94% coverage, and most of the uncovered lines are
error cases, defensive code, or dead code that I might want to use
later. prepare.py has 56% coverage, but probably closer to 75-80% if we
could collect coverage from TorchScript.
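A minimal sketch of that test pattern (helper name and tolerances are illustrative):
```python
import torch
from torch.backends._nnapi.prepare import convert_model_to_nnapi

def check_op(module, arg, can_run_nnapi=False):
    traced = torch.jit.trace(module, arg)
    nnapi_module = convert_model_to_nnapi(traced, arg)
    if can_run_nnapi:  # requires libneuralnetworks at runtime
        # Allow slight numerical differences between eager and NNAPI.
        torch.testing.assert_close(
            nnapi_module(arg), module(arg), rtol=1e-3, atol=1e-5
        )
```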
Test Plan:
Ran tests with NNAPI available. Made various tweaks to the codebase to
make sure tests properly detected bugs.
Reviewed By: axitkhurana
Differential Revision: D25317940
Pulled By: dreiss
fbshipit-source-id: 709125af820440bfa7a73bab3304395f115f717f