This change introduces a mechanism to test ONNX export based on sample inputs registered in OpInfo, similar to how MPS and other components of PyTorch are tested. It provides test coverage on ops and dtypes previously unattainable with manually created test models. This is the best way for us to discover gaps in exporter support, especially for ops with partial existing support.
This test is adapted from https://github.com/pytorch/pytorch/blob/master/test/test_mps.py
This PR also:
- Updates `sqrt` to support integer inputs to match PyTorch behavior
- Adds pytest-subtests for unittest subtests support in the new test file
I enabled only a few ops (`t`, `ceil`, and `sqrt`) because otherwise too many tests would fail, due to (1) unsupported dtypes in the exporter, (2) unimplemented dtype support in onnxruntime, and (3) unexpected inputs to `verification.verify`.
Subsequent PRs should first improve `verification.verify` so that it accepts any legal input to a PyTorch model, then incrementally fix the symbolic functions to enable more test cases.
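A minimal sketch of the OpInfo-driven pattern, assuming direct iteration over `op_db` and plain `torch.onnx.export` (the actual test additionally verifies outputs via `verification.verify` and runs per-sample subtests):
```python
import torch
from torch.testing._internal.common_methods_invocations import op_db

TESTED_OPS = {"t", "ceil", "sqrt"}  # the only ops enabled so far


class SingleOpModel(torch.nn.Module):
    """Wraps one functional op so it can be exported like a model."""

    def __init__(self, op, kwargs):
        super().__init__()
        self.op = op
        self.kwargs = kwargs

    def forward(self, *args):
        return self.op(*args, **self.kwargs)


for op_info in op_db:
    if op_info.name not in TESTED_OPS:
        continue
    # OpInfo registers representative sample inputs per op and dtype.
    for sample in op_info.sample_inputs("cpu", torch.float32):
        model = SingleOpModel(op_info.op, sample.kwargs)
        inputs = (sample.input, *sample.args)
        torch.onnx.export(model, inputs, f"{op_info.name}.onnx")
```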
Fixes #85363
Design: #88118
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86182
Approved by: https://github.com/BowenBao
This PR adds an internal wrapper on the [beartype](https://github.com/beartype/beartype) library to perform runtime type checking in `torch.onnx`. It uses beartype when it is found in the environment and is reduced to a no-op when beartype is not found.
Setting the env var `TORCH_ONNX_EXPERIMENTAL_RUNTIME_TYPE_CHECK=ERRORS` turns the feature on; setting `TORCH_ONNX_EXPERIMENTAL_RUNTIME_TYPE_CHECK=DISABLED` disables all checks. When the variable is not set and `beartype` is installed, a warning message is emitted.
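A minimal sketch of the wrapper's control flow, with names chosen for illustration (the actual implementation lives in `torch/onnx/_internal/_beartype.py`):
```python
import functools
import os
import traceback
import warnings

try:
    from beartype import beartype as _beartype_decorator
except ImportError:
    _beartype_decorator = None  # beartype not installed


def beartype(func):
    """Decorate func with beartype if available; otherwise a no-op."""
    mode = os.environ.get("TORCH_ONNX_EXPERIMENTAL_RUNTIME_TYPE_CHECK")
    if _beartype_decorator is None or mode == "DISABLED":
        return func  # beartype missing or explicitly disabled: no-op
    checked = _beartype_decorator(func)
    if mode == "ERRORS":
        return checked  # let BeartypeCallHintParamViolation propagate

    # Default: coerce type-check errors into warnings, then run the
    # original function so user code behaves exactly as before.
    @functools.wraps(func)
    def _coerce_errors_to_warnings(*args, **kwargs):
        from beartype import roar

        try:
            return checked(*args, **kwargs)
        except roar.BeartypeCallHintParamViolation:
            warnings.warn(traceback.format_exc())
            return func(*args, **kwargs)

    return _coerce_errors_to_warnings
```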
Now when users call an API with invalid arguments, e.g.
```python
torch.onnx.export(conv, y, path, export_params=True, training=False)
# training should take a TrainingMode enum, not a bool
```
they get
```
Traceback (most recent call last):
File "bisect_m1_error.py", line 63, in <module>
main()
File "bisect_m1_error.py", line 59, in main
reveal_error()
File "bisect_m1_error.py", line 32, in reveal_error
torch.onnx.export(conv, y, cpu_model_path, export_params=True, training=False)
File "<@beartype(torch.onnx.utils.export) at 0x1281f5a60>", line 136, in export
File "pytorch/venv/lib/python3.9/site-packages/beartype/_decor/_error/errormain.py", line 301, in raise_pep_call_exception
raise exception_cls( # type: ignore[misc]
beartype.roar.BeartypeCallHintParamViolation: @beartyped export() parameter training=False violates type hint <class 'torch._C._onnx.TrainingMode'>, as False not instance of <protocol "torch._C._onnx.TrainingMode">.
```
When `TORCH_ONNX_EXPERIMENTAL_RUNTIME_TYPE_CHECK` is not set and `beartype` is installed, a warning message is emitted instead:
```
>>> torch.onnx.export("foo", "bar", "f")
<stdin>:1: CallHintViolationWarning: Traceback (most recent call last):
File "/home/justinchu/dev/pytorch/torch/onnx/_internal/_beartype.py", line 54, in _coerce_beartype_exceptions_to_warnings
return beartyped(*args, **kwargs)
File "<@beartype(torch.onnx.utils.export) at 0x7f1d4ab35280>", line 39, in export
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/beartype/_decor/_error/errormain.py", line 301, in raise_pep_call_exception
raise exception_cls( # type: ignore[misc]
beartype.roar.BeartypeCallHintParamViolation: @beartyped export() parameter model='foo' violates type hint typing.Union[torch.nn.modules.module.Module, torch.jit._script.ScriptModule, torch.jit.ScriptFunction], as 'foo' not <protocol "torch.jit.ScriptFunction">, <protocol "torch.nn.modules.module.Module">, or <protocol "torch.jit._script.ScriptModule">.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/justinchu/dev/pytorch/torch/onnx/_internal/_beartype.py", line 63, in _coerce_beartype_exceptions_to_warnings
return func(*args, **kwargs)
File "/home/justinchu/dev/pytorch/torch/onnx/utils.py", line 482, in export
_export(
File "/home/justinchu/dev/pytorch/torch/onnx/utils.py", line 1422, in _export
with exporter_context(model, training, verbose):
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/home/justinchu/dev/pytorch/torch/onnx/utils.py", line 177, in exporter_context
with select_model_mode_for_export(
File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/home/justinchu/dev/pytorch/torch/onnx/utils.py", line 95, in select_model_mode_for_export
originally_training = model.training
AttributeError: 'str' object has no attribute 'training'
```
We see the error is caught right where the type mismatch happens, an improvement over what would otherwise surface much later as `AttributeError: 'str' object has no attribute 'training'`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83673
Approved by: https://github.com/BowenBao
* Move memory-heavy tests from `test_pytorch_onnx_onnxruntime.py` to
`test_models_onnxruntime.py`. The former is run in parallel in CI,
while the latter is not. One consequence is that the moved tests are
now only covered under default-opset export.
* Refactor and create a base class for tests that export a model to ONNX
and verify it with ONNX Runtime. The new base class is parameterized
with `opset_version` and `is_script`; see the sketch after this list.
Further work can be done to refactor the existing test classes in
`test_pytorch_onnx_onnxruntime.py`.
See #75630
* Reduce unnecessarily large tensor size in
`test_pytorch_onnx_onnxruntime.py` to further reduce memory usage
and test time.
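A minimal sketch of the parameterized base class, assuming the `parameterized` package and illustrative opset values:
```python
import unittest

import torch
from parameterized import parameterized_class


@parameterized_class(
    [
        {"opset_version": opset, "is_script": is_script}
        for opset in (9, 12)  # illustrative opset versions
        for is_script in (False, True)
    ]
)
class _TestONNXRuntime(unittest.TestCase):
    opset_version: int
    is_script: bool

    def run_test(self, model, inputs):
        if self.is_script:
            model = torch.jit.script(model)
        torch.onnx.export(
            model, inputs, "model.onnx", opset_version=self.opset_version
        )
        # ...then run "model.onnx" under ONNX Runtime and compare its
        # outputs against model(*inputs).
```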
After this PR, the running time for `test_pytorch_onnx_onnxruntime.py`
is reduced from `1338.82s (0:22:18)` to `225.07s (0:03:45)`,
benchmarked on 10900x with `-n 10`.
Fixes #79179
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79640
Approved by: https://github.com/justinchuby, https://github.com/garymm
Should fix #78844
Custom-op-related tests use inline C++ extensions to build custom
operators from C++ source snippets. Only two test cases became flaky
after the switch to parallel runs, and both use inline C++ extensions.
Reverting these tests to run in a single process to try to resolve the flakiness.
Reverts the test skip previously added in #78936.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78944
Approved by: https://github.com/janeyx99, https://github.com/garymm
Currently `torch.onnx.export(.., operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK)` only emits ATen ops through explicit requests (e.g. `g.at()` calls) inside each op's symbolic function. This is gated on specific conditions such as `operator_export_type == OperatorExportTypes.ONNX_ATEN_FALLBACK` or `is_caffe2_aten_fallback()`.
This PR extends the ATen fallback mechanism to scenarios where the symbolic function raises `RuntimeError` during export; a sketch of the control flow follows below. The idea is that partial implementations of existing ONNX ops can fall back to ATen as a last resort. That is valuable because each operator can have many input combinations, and not all of them are always implemented.
A minor fix was done to make sure the `overload_name` attribute is added to explicit ATen op fallback requests when a symbolic is not registered to a particular op.
P.S.: The behavior for builds with `BUILD_CAFFE2=1` is unchanged to ensure backward compatibility.
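A Python sketch of the fallback described above (the actual change lives in the C++ exporter; the function and attribute names here are illustrative):
```python
def run_symbolic_with_fallback(g, op_name, symbolic_fn, inputs, attrs):
    """Try the registered symbolic; fall back to an ATen op on RuntimeError."""
    try:
        return symbolic_fn(g, *inputs, **attrs)
    except RuntimeError:
        # Last resort: keep the op as aten::<op_name> in the exported graph.
        # overload_name lets consumers resolve the exact ATen schema.
        return g.at(op_name, *inputs, overload_name=attrs.get("overload_name", ""))
```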
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74759
Approved by: https://github.com/garymm, https://github.com/msaroufim
Summary:
And add a new tool to update it in the future, which follows the policy
of using "latest as of 18 months ago" (sketched below). This policy is meant to balance:
* recent enough to increase the odds of being able to successfully
export
* old enough to increase the odds of exported model being runnable by
different ONNX implementations
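A minimal sketch of the policy computation, assuming a hypothetical table of approximate opset release dates (not the actual tool):
```python
import datetime

# Hypothetical/approximate release dates for a few ONNX opset versions.
OPSET_RELEASE_DATES = {
    12: datetime.date(2020, 5, 8),
    13: datetime.date(2020, 11, 4),
    14: datetime.date(2021, 4, 23),
}


def default_opset(today: datetime.date) -> int:
    """Newest opset released at least ~18 months before `today`."""
    cutoff = today - datetime.timedelta(days=18 * 30)
    return max(v for v, released in OPSET_RELEASE_DATES.items() if released <= cutoff)
```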
Related changes:
* test_models.py: explicitly fix opset_version to 9 rather than relying on default. Caffe2 doesn't support newer versions.
* symbolic_helper.py:
* Remove a misleading comment
* Remove unnecessary check in `_set_opset_version`
* Use a range to define `_onnx_stable_opsets`
* test_pytorch_common.py:
* Rename a variable from min -> max. I think it was a copy-paste error.
* Make skip test messages more informative.
* Remove unused `skipIfONNXShapeInference`. More on that below.
* test_pytorch_onnx_onnxruntime.py:
* Make all the `TestCase` classes explicitly specify opset version.
* Make `test_unsupported_pad` respect `opset_version` by using `run_test`
* Unrelated simplification: make it obvious that all tests run with `onnx_shape_inference=True`. AFAICT this was already the case.
* There was one test (test_tolist) that was entirely disabled because it asked to be skipped whenever `onnx_shape_inference=True`, which was always the case. I changed the model being tested so as to preserve the intended test coverage but still have the test actually pass.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73898
Reviewed By: msaroufim
Differential Revision: D35264615
Pulled By: malfet
fbshipit-source-id: cda8fbdffe4cc8210d8d96e659e3a9adf1b5f1d2
(cherry picked from commit b5e639e88828d34442282d0b50c977e610a2ba3a)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67805
Also fix Reduce ops on binary_cross_entropy_with_logits.
The graph says the output is a scalar, but with `keepdims=1`
(the default) the output would be a tensor of rank 1. We set
`keepdims=0` to make it clear that we want a scalar output.
This previously went unnoticed because ONNX Runtime does not strictly
enforce shape inference mismatches if the model is not using the latest
opset version.
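A quick demonstration of the shape difference the fix is about:
```python
import torch

x = torch.rand(3)
print(x.mean(dim=0, keepdim=True).shape)   # torch.Size([1]) -- rank 1, not a scalar
print(x.mean(dim=0, keepdim=False).shape)  # torch.Size([])  -- a true scalar
```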
Test Plan: Imported from OSS
Reviewed By: msaroufim
Differential Revision: D32181304
Pulled By: malfet
fbshipit-source-id: 1462d8a313daae782013097ebf6341a4d1632e2c
Co-authored-by: Bowen Bao <bowbao@microsoft.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66513
These were missed in the migration of ONNX CI to GitHub Actions.
Adds ORT tests with 2 shards for the ONNX workflow.
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D31599433
Pulled By: seemethere
fbshipit-source-id: 73dce0d3017c4280e64f0c8578e2be7ef6a168d6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66513
These were missed in the migration of ONNX CI to GitHub Actions.
Adds ORT tests with 2 shards for the ONNX workflow.
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D31591512
Pulled By: seemethere
fbshipit-source-id: 4a8bb3f0e62ff98ee77d3d8afc905f4e02db6f24
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57597
* Special post-processing for onnx::Cast and onnx::ConstantOfShape
* Update `test_pytorch_onnx_shape_inference.py` to be a unit test over shape inference patterns.
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D28393529
Pulled By: SplitInfinity
fbshipit-source-id: fc26032ddb842d4e299447da39564b28049752ed
Co-authored-by: BowenBao <bowbao@microsoft.com>
Summary:
This PR is step 2 (after https://github.com/pytorch/pytorch/issues/56708) to having JIT coverage--it actually uses the plug-in in CI!
Disclaimer: note that this will mark the entire JIT'd function/method as covered without seeking proof that the
compiled code has been executed. This means that even if the code chunk is merely compiled and not run, it will get
marked as covered.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56310
Test Plan:
We should see coverage improvements in CI after. A file to look out for would be `torch/jit/quantized.py`, which should have more coverage after this PR, which it does!
d3283ccd8c/torch/jit/quantized.py vs https://codecov.io/gh/pytorch/pytorch/src/master/torch/jit/quantized.py
More generally, the whole jit folder got a ~3% increase in coverage, I believe.
Reviewed By: walterddr
Differential Revision: D28000672
Pulled By: janeyx99
fbshipit-source-id: 6712979d63a5e1224a92ee9bd9679ec62cf1cbba
Summary:
This PR is step 2 (after https://github.com/pytorch/pytorch/issues/56708) to having JIT coverage--it actually uses the plug-in in CI!
Disclaimer: note that this will mark the entire JIT'd function/method as covered without seeking proof that the
compiled code has been executed. This means that even if the code chunk is merely compiled and not run, it will get
marked as covered.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56310
Test Plan:
We should see coverage improvements in CI after. A file to look out for would be `torch/jit/quantized.py`, which should have more coverage after this PR, which it does!
d3283ccd8c/torch/jit/quantized.py vs https://codecov.io/gh/pytorch/pytorch/src/master/torch/jit/quantized.py
More generally, the whole jit folder got a ~3% increase in coverage, I believe.
Reviewed By: ezyang
Differential Revision: D27967517
Pulled By: janeyx99
fbshipit-source-id: 53fd8431d772c2447191135c29d1b166ecd42f50
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53306
* [ONNX] Fix for sequence of mutations in blocks (#51577)
Fixes consecutive mutations in a tensor inside blocks.
Also, support append and pop in blocks.
* Support inplace operations + indexing
* Clean up old pass for remove mutations
* Add loop test
* Fixes for set attr in loops
* Removing the new jit API flag
* [ONNX] Redesign onnx pass to enable shape type dependent pattern conversion - cont (#51795)
With the introduction of ONNX shape inference, shape and type are inferred on the fly as operators get converted from ATen to ONNX while running symbolic functions. This resolves the shape/type requirement for the symbolic functions. The pre-ONNX passes, however, cannot be supported by shape inference, since at that stage the operators in the graph are still ATen operators.
This PR is to update the design of ONNX pass, to enable a mechanism of capturing subgraphs of ATen operators of certain patterns, and convert them later, when shape/type information of upstream operators are available.
The new design requires pre-ONNX passes that need shape/type to be written in two parts: encapsulation and conversion.
The encapsulation part finds the nodes matching a pattern, like how pre-ONNX passes were written previously. But instead of converting the nodes, it encapsulates them into a sub-block of a new placeholder node. This part is called before the ONNX pass, so it runs before symbolic functions are called.
The conversion part is called inside the ONNX pass. There, `run_symbolic_func` is called for each node in topological order. When it reaches the placeholder node, the conversion part is invoked. It converts the nodes inside the sub-block based on the pattern; by that time, shape/type of the upstream operators is available. After the conversion is complete, the placeholder node is removed and the nodes inside its sub-block are converted: `run_symbolic_func` is called for these nodes, and they are converted from ATen operators to ONNX operators.
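An illustrative Python sketch of the two-phase flow (the real passes are C++ JIT passes; all helper names here are hypothetical):
```python
def encapsulate_patterns(graph):
    """Phase 1, run before the ONNX pass: wrap matched ATen nodes in a
    placeholder node's sub-block instead of converting them immediately."""
    for pattern_nodes in find_pattern(graph):           # hypothetical matcher
        placeholder = create_placeholder_node(graph)    # hypothetical helper
        move_into_subblock(placeholder, pattern_nodes)  # hypothetical helper


def convert_placeholder(placeholder):
    """Phase 2, invoked when the ONNX pass reaches the placeholder: upstream
    shape/type info is now available, so the pattern can be converted."""
    for node in placeholder_subblock_nodes(placeholder):  # hypothetical helper
        run_symbolic_func(node)  # ATen -> ONNX, as for ordinary nodes
    inline_subblock_and_remove(placeholder)  # hypothetical helper
```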
This PR includes several other fixes, listed below.
* ~~replace helper.cpp with onnx_utils.cpp for holding utility functions.~~
* fix EraseNumberTypes on Bool type; the code was outdated, written back when the Bool type did not exist.
* ~~enable onnx shape inference in export with parameter/initializer data.~~
* other code clean ups.
* fix insertion of identity nodes for loop opset 13 sequence output.
~~PR depends on #51603~~
* Fix after merge
* clang
* Fix clang
* Fix clang
* Fix warning message.
* Fixes for non-model param attributes
* Fix for caffe2
* Additional test
* clang
* Skip test for lower opsets
* fix clang-tidy
* Update init.cpp
* Update remove_inplace_ops_for_onnx.cpp
* Update remove_inplace_ops_for_onnx.cpp
* Update remove_inplace_ops_for_onnx.cpp
* Fix for clang formatting
Test Plan: Imported from OSS
Reviewed By: pbelevich, malfet
Differential Revision: D26922416
Pulled By: SplitInfinity
fbshipit-source-id: e7108620b39b6404c594910786c4d275fee59d84
Co-authored-by: Bowen Bao <bowbao@microsoft.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51521
* Add loop & if nodes to the list of nodes that can produce sequence-type output.
* Switch from `[]` to `at()` to avoid segfaults from out-of-range access.
Test Plan: Imported from OSS
Reviewed By: pbelevich
Differential Revision: D26203112
Pulled By: SplitInfinity
fbshipit-source-id: e990eeed933124b195be0be159271e33fb485063
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51517
Fix get/set attributes when getting/setting a model parameter.
This PR also fixes inplace ops in If blocks.
Test Plan: Imported from OSS
Reviewed By: pbelevich
Differential Revision: D26203116
Pulled By: SplitInfinity
fbshipit-source-id: bed6ee6dd92b5b43febc8c584a6872290f8fe33f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50911
Need to replace the dtype of export-created scalars from float to double. (In torch's implicit conversion logic, Python numbers are doubles.)
The test case is skipped in CI because the current CI job environment does not have CUDA support.
Test Plan: Imported from OSS
Reviewed By: pbelevich
Differential Revision: D26050889
Pulled By: SplitInfinity
fbshipit-source-id: 1fdde23a68d4793e6b9a82840acc213e5c3aa760
Summary:
Handle sequence output shape and type inference.
This PR fixes the value type of sequence outputs. Prior to this, all sequence-type model outputs were unfolded in the exported ONNX model.
This PR also enables shape inference for sequence outputs to represent the dynamic shape of these values.
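A minimal sketch of a model with a sequence-type output that exercises this path (file name illustrative; ONNX sequence ops require opset >= 11):
```python
import torch


class SplitModel(torch.nn.Module):
    def forward(self, x):
        # In TorchScript, torch.split returns List[Tensor], so the model's
        # output is an ONNX Sequence rather than a fixed tuple of tensors.
        return torch.split(x, 2, dim=0)


torch.onnx.export(
    torch.jit.script(SplitModel()),
    torch.randn(5, 3),
    "split.onnx",
    opset_version=11,
)
```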
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46542
Reviewed By: ezyang
Differential Revision: D24924236
Pulled By: bzinodev
fbshipit-source-id: 506e70a38cfe31069191d7f40fc6375239c6aafe
Summary:
Update the pass that handles `prim::ListUnpack` in the peephole file so that it also covers the case where the input to the node is of `ListType`.
Fixes https://github.com/pytorch/pytorch/issues/45816
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46264
Reviewed By: mrshenli
Differential Revision: D24566070
Pulled By: bzinodev
fbshipit-source-id: 32555487054f6a7fe02cc17c66bcbe81ddf9623e
Summary:
Currently, no test reports are uploaded to CI because the paths for the `onnx` runs are incorrect. This PR attempts to change that.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47315
Reviewed By: malfet
Differential Revision: D24727607
Pulled By: janeyx99
fbshipit-source-id: f6d91698fdb15a39e01ef812032d4cd30621f864
Summary:
* Support propagating `dim_param` in ONNX by encoding it as `ShapeSymbol` in the `SymbolicShape` of outputs. If export is called with `dynamic_axes` provided, shape inference starts with these axes set as dynamic (see the sketch after this list).
* Add new test file `test_pytorch_onnx_shape_inference.py`, reusing all test cases from `test_pytorch_onnx_onnxruntime.py` but focusing on validating shapes for all nodes in the graph. Currently this is not enabled in CI, since there are still quite a few existing issues and corner cases to fix. The test defaults to running only at opset 12.
* Bug fixes, e.g. for div, _len, and the peephole.cpp passes for PackPadded and LogSoftmaxCrossEntropy.
* This PR depends on existing PRs such as #44332.
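A minimal sketch of how `dynamic_axes` surfaces as `dim_param` in the exported graph (file name illustrative):
```python
import onnx
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(3, 4)
torch.onnx.export(
    model, (x,), "linear.onnx",
    input_names=["x"], output_names=["y"],
    dynamic_axes={"x": {0: "batch"}, "y": {0: "batch"}},
)
graph = onnx.load("linear.onnx").graph
# The batch dimension is symbolic ("batch") rather than fixed at 3:
print(graph.input[0].type.tensor_type.shape.dim[0].dim_param)  # "batch"
```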
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44920
Reviewed By: eellison
Differential Revision: D23958398
Pulled By: bzinodev
fbshipit-source-id: 00479d9bd19c867d526769a15ba97ec16d56e51d
Summary:
Among all ONNX tests, the ONNX Runtime tests take the most time on CI (almost 60%).
This is because we are testing larger models (mainly torchvision RCNNs) against multiple ONNX opsets.
I decided to divide the tests between two jobs for older/newer opsets, which reduces the test time from 2h to around 1h10min.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33242
Reviewed By: hl475
Differential Revision: D19866498
Pulled By: houseroad
fbshipit-source-id: 446c1fe659e85f5aef30efc5c4549144fcb5778c
Summary:
ONNX CI uses virtualenv and PyTorch's does not, so the `--user` flag was causing problems in ONNX CI.
Fixing it by moving the flag to PyTorch-only scripts; ninja will be installed separately in ONNX CI.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22946
Reviewed By: bddppq
Differential Revision: D16297781
Pulled By: houseroad
fbshipit-source-id: 52991abac61beaf3cfbcc99af5bb1cd27b790485
Summary:
This is an extension to the original PR https://github.com/pytorch/pytorch/pull/21765
1. Increase the coverage of support for different opsets, along with comments and blacklisting.
2. Add backend tests for both Caffe2 and ONNX Runtime on opsets 7 and 8.
3. Reuse ONNX model tests from Caffe2 for ONNX Runtime.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22421
Reviewed By: zrphercule
Differential Revision: D16225518
Pulled By: houseroad
fbshipit-source-id: 01ae3eed85111a83a0124e9e95512b80109d6aee