Commit Graph

44 Commits

bd39e47fee [ONNX] Default to dynamo export (#159646)
Set dynamo=True and enable fallback.

1. Implemented compatible behavior so that `BytesIO` objects are accepted as `f`
2. Updated tests to explicitly set dynamo=False

#151693
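
A minimal usage sketch under assumed names (the `Linear` model and file name are illustrative; per the notes above, a `BytesIO` object is accepted as `f` and `dynamo=False` keeps the TorchScript-based exporter):

```python
import io
import torch

model = torch.nn.Linear(4, 2)
args = (torch.randn(1, 4),)

# dynamo=True is now the default; a BytesIO object is accepted as `f` for compatibility
buffer = io.BytesIO()
torch.onnx.export(model, args, buffer, dynamo=True)

# Tests that still target the TorchScript-based exporter opt out explicitly
torch.onnx.export(model, args, "model.onnx", dynamo=False)
```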

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159646
Approved by: https://github.com/titaiwangms
2025-09-02 22:45:55 +00:00
524b78d4f6 [ONNX] Refactor torchscript based exporter (#161323)
Refactor the TorchScript-based exporter logic, moving it to a single (private) location for better code management. The original public module and method APIs are preserved.

- Updated module paths in `torch/csrc/autograd/python_function.cpp` accordingly
- Removed `check_onnx_broadcast` from `torch/autograd/_functions/utils.py` because it is private and unused

@albanD / @soulitzer could you review changes in `torch/csrc/autograd/python_function.cpp` and
`torch/autograd/_functions/utils.py`? Thanks!

## BC Breaking
- **Deprecated members in `torch.onnx.verification` are removed**

Differential Revision: [D81236421](https://our.internmc.facebook.com/intern/diff/D81236421)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161323
Approved by: https://github.com/titaiwangms, https://github.com/angelayi
2025-09-02 16:10:30 +00:00
82c7a1eb4b Revert "[ONNX] Default to dynamo export (#159646)"
This reverts commit 11b6ceb7b4f81ba02f88652136a93d685c399191.

Reverted https://github.com/pytorch/pytorch/pull/159646 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/159646#issuecomment-3198507767))
2025-08-18 21:41:32 +00:00
11b6ceb7b4 [ONNX] Default to dynamo export (#159646)
Set dynamo=True and enable fallback.

1. Implemented compatible behavior so that `BytesIO` objects are accepted as `f`
2. Updated tests to explicitly set dynamo=False

#151693

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159646
Approved by: https://github.com/titaiwangms
2025-08-16 04:48:58 +00:00
d8c8ba2440 Fix unused Python variables in test/[e-z]* (#136964)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136964
Approved by: https://github.com/justinchuby, https://github.com/albanD
2024-12-18 23:02:30 +00:00
fbe6f42dcf [BE][Easy][8/19] enforce style for empty lines in import segments in test/[k-p]*/ (#129759)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129759
Approved by: https://github.com/justinchuby, https://github.com/ezyang
2024-07-31 02:09:20 +00:00
26f4f10ac8 [5/N][Easy] fix typo for usort config in pyproject.toml (kown -> known): sort torch (#127126)
The `usort` config in `pyproject.toml` has no effect due to a typo. Fixing the typo makes `usort` do more and generates the changes in this PR. Except for `pyproject.toml`, all changes were generated by `lintrunner -a --take UFMT --all-files`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127126
Approved by: https://github.com/kit1980
2024-05-27 14:49:57 +00:00
55c0ab2887 Revert "[5/N][Easy] fix typo for usort config in pyproject.toml (kown -> known): sort torch (#127126)"
This reverts commit 7763c83af67eebfdd5185dbe6ce15ece2b992a0f.

Reverted https://github.com/pytorch/pytorch/pull/127126 on behalf of https://github.com/XuehaiPan due to Broken CI ([comment](https://github.com/pytorch/pytorch/pull/127126#issuecomment-2133044286))
2024-05-27 09:22:08 +00:00
7763c83af6 [5/N][Easy] fix typo for usort config in pyproject.toml (kown -> known): sort torch (#127126)
The `usort` config in `pyproject.toml` has no effect due to a typo. Fixing the typo makes `usort` do more and generates the changes in this PR. Except for `pyproject.toml`, all changes were generated by `lintrunner -a --take UFMT --all-files`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127126
Approved by: https://github.com/kit1980
ghstack dependencies: #127122, #127123, #127124, #127125
2024-05-27 04:22:18 +00:00
52fad83335 [onnx.export] Avoid linear look up in env for exist_in_env (#124909)
This PR is part of a series of PRs to significantly speed up torch.onnx.export for models with many nodes (e.g. LLM). See #121422 for more analysis.

- As part of torch.onnx.export, a reverse look-up is made in `env`. This is done for each node, and each look-up costs time proportional to the graph size, which incurs an overall O(N^2) time complexity.
- A pragmatic solution is simply to keep a separate data structure to make this de facto constant time. So, this introduces a set containing all the values of `env` (see the sketch below). Open to other ideas. Ideally `exist_in_env` wouldn't be needed at all, but to preserve current behavior exactly I'm not sure how that can be done.
- Resolves (4) in #121422.
- This code change and the choice of py::set look a bit more natural on top of #123063, where the env is changed from a std::unordered_map to a py::dict.
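
A simplified Python sketch of the idea described above (the actual change is in the C++ exporter using `py::dict`/`py::set`; the names here are illustrative):

```python
# `env` maps TorchScript values to their ONNX counterparts. exist_in_env asks
# "is this value already one of env's values?" once per node, so a linear scan
# over env.values() makes the whole export O(N^2).
env = {}
env_values = set()  # mirrors env.values() for de facto constant-time membership

def add_to_env(ts_value, onnx_value):
    env[ts_value] = onnx_value
    env_values.add(onnx_value)

def exist_in_env(onnx_value):
    # Previously: a linear scan over env.values() per query.
    return onnx_value in env_values
```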

Partially fixes #121422
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124909
Approved by: https://github.com/srikris-sridhar, https://github.com/justinchuby
2024-05-09 22:38:00 +00:00
f9947830bb [ONNX] Remove the deprecated function in symbolic_helper (#109681)
These three functions in symbolic_helper are deprecated and should be removed after PyTorch 2.0.

The clean-up will be split into several patches to ensure safety. See: https://github.com/pytorch/pytorch/pull/107208

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109681
Approved by: https://github.com/thiagocrepaldi
2023-09-20 19:31:39 +00:00
cd31c170c9 Revert "[ONNX] Remove deprecated functions (#107208)"
This reverts commit 263ca7d69bb9b3b58ae0f9b4d27864587611389c.

Reverted https://github.com/pytorch/pytorch/pull/107208 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/107208#issuecomment-1726183104))
2023-09-19 17:26:48 +00:00
263ca7d69b [ONNX] Remove deprecated functions (#107208)
The usage of some functions is deprecated. This PR drops them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107208
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi
2023-09-14 19:09:56 +00:00
bd1229477d [ONNX] Add initial support for FP8 ONNX export (#107962)
This PR resurrects @tcherckez-nvidia's #106379 with changes to resolve conflicts against newer `main` and defines our own constants for the new ONNX types to [avoid breaking Meta's internal usage of an old ONNX](https://github.com/pytorch/pytorch/pull/106379#issuecomment-1675189340).

- `::torch::onnx::TensorProto_DataType_FLOAT8E4M3FN=17`
- `::torch::onnx::TensorProto_DataType_FLOAT8E5M2=19`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107962
Approved by: https://github.com/justinchuby, https://github.com/titaiwangms
2023-09-08 20:40:39 +00:00
cdc9127733 [ONNX] Perform Shape inference on added "Cast" node (#106093)
This commit fixes a bug where some "If" nodes blocked shape inference during onnx graph building.

In fixup_onnx_controlflow, a "Cast" node is added to conditions in "If" and "Loop" nodes if the condition type is not bool.

This commit performs shape inference on this new "Cast" node, which allows its output to be marked as "reliable" in ConstantValueMap during further shape inference. This would eventually have happened when shape inference is performed on the entire graph, but the inferred shapes are also useful to have during onnx graph building, since they allow some ops (like Squeeze) to export into simpler subgraphs.

Also adds a test for this.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106093
Approved by: https://github.com/thiagocrepaldi
2023-07-31 18:20:19 +00:00
ec33733701 [ONNX] Improve shape inference for Slice (#105755)
Previously, if 'starts', 'ends', or 'steps' was dynamic, then shape inference would give up, even for dimensions which are not being sliced.

This commit improves on that by setting the output shape to be the same as the input shape for dimensions which are not being sliced, and adds a new test to cover this case.
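
A conceptual sketch of that rule (illustrative Python only; the actual logic lives in the exporter's C++ shape inference), where `None` marks an unknown/dynamic dimension:

```python
def slice_output_shape(input_shape, sliced_axes):
    # Dimensions that are not sliced keep the input size; sliced dimensions
    # stay dynamic when 'starts'/'ends'/'steps' are not constant.
    return [None if i in sliced_axes else d for i, d in enumerate(input_shape)]

# Slicing only axis 0 of an [8, 3, 224, 224] input:
print(slice_output_shape([8, 3, 224, 224], {0}))  # [None, 3, 224, 224]
```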

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105755
Approved by: https://github.com/thiagocrepaldi, https://github.com/BowenBao
2023-07-25 02:58:20 +00:00
60a68477a6 Bump black version to 23.1.0 (#96578)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96578
Approved by: https://github.com/ezyang
2023-03-15 06:27:59 +00:00
5ed7c701a3 [ONNX] Remove the deprecated monkey patches to torch.Graph (#94747)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94747
Approved by: https://github.com/BowenBao, https://github.com/Skylion007
2023-02-14 00:08:09 +00:00
d540442e36 [ONNX] Fix 'prim::PackPadded' shape inference (#91829)
In the `peephole` pass, user nodes of the output of `prim::PackPadded` are modified to consume
the input of `prim::PackPadded` instead, hence the corresponding logic in shape/type inference. However,
only the first output requires this workaround.

Fixes #91528
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91829
Approved by: https://github.com/titaiwangms
2023-01-11 07:35:55 +00:00
6daf60be5a [ONNX] Add setType from user into InferredType and Reliable in ConstantValueMap (#88622)
The `setType` API is not respected in the current exporter because graph-level shape/type inference simply overrides every non-ONNX op shape we had from node-level shape/type inference. To address this issue, this PR (1) makes a custom op with `setType` **reliable** in ConstantValueMap to secure its shape/type information in the pass `_C._jit_pass_onnx`, and (2) if an op that is not a valid ONNX node carries shape/type in the graph-level pass `_C._jit_pass_onnx_graph_shape_type_inference`, recognizes it as reliable (a usage sketch follows the list below).

1. In #62856, the refactor in onnx.cpp introduced a regression on custom ops, as that was the step where we should update custom op shape/type information into ConstantValueMap for the remaining ops.

2. Add another condition besides IsValidONNXNode for custom op setType in shape_type_inference.cpp: if all the node outputs have shape (not all dynamic), we treat it as a custom-set type.

3. ~However, this PR won't solve the [issue](https://github.com/pytorch/pytorch/issues/87738#issuecomment-1292831219) that, in node-level shape/type inference, the exporter emits the warning for an unknown custom op, since we process its symbolic_fn after this warning, even though it would have shape/type if setType were used correctly. That will be left for another issue to solve. #84661~ Add `no_type_warning` in UpdateReliable(), which only warns if a non-ONNX node with no given type appears.
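
For context, a minimal sketch of how `setType` is typically used from a custom symbolic function (the op name, domain, and static shape below are hypothetical; this mirrors the documented custom-operator pattern rather than code in this PR):

```python
import torch

class MyRelu(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)

    @staticmethod
    def symbolic(g, x):
        out = g.op("custom_ops::MyRelu", x)
        # Propagate dtype and an (assumed) static shape so the exporter can mark
        # this non-ONNX node as reliable instead of discarding its type.
        out.setType(x.type().with_dtype(torch.float32).with_sizes([2, 3]))
        return out
```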

Fixes #81693
Fixes #87738

NOTE: not confident that this won't break anything. Please share your thoughts if you have a robust test in mind.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88622
Approved by: https://github.com/BowenBao
2022-11-19 17:16:59 +00:00
500fd65531 [ONNX] Create common ExportTestCase base class (#88145)
Refactor out a common base class, `ExportTestCase`, for the common pieces of `setUp`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88145
Approved by: https://github.com/justinchuby, https://github.com/abock, https://github.com/AllenTiTaiWang
2022-11-10 21:51:59 +00:00
edb99df2e0 [ONNX] Fix reduce node shape inference (#85765)
Fix logic in `ProcessReduceNode`. Previously a scalar was assigned as the output shape of reduce nodes
when the `axes` attribute was not provided, regardless of the value of the `keepdims_i` attribute, thereby
incorrectly assuming that all output axes should be folded.
Since the input rank is known, this fix populates `axes` as `[0, 1, ..., input_rank - 1]` when axes is not
provided.
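
A small illustrative sketch of the corrected rule (plain Python, not the actual C++ in `ProcessReduceNode`):

```python
def reduce_output_shape(input_shape, axes=None, keepdims=1):
    # When `axes` is absent, reduce over all dims [0, 1, ..., rank - 1] and
    # let `keepdims` decide whether they fold away or collapse to 1.
    if axes is None:
        axes = list(range(len(input_shape)))
    if keepdims:
        return [1 if i in axes else d for i, d in enumerate(input_shape)]
    return [d for i, d in enumerate(input_shape) if i not in axes]

print(reduce_output_shape([2, 3, 4]))              # [1, 1, 1], not a scalar
print(reduce_output_shape([2, 3, 4], keepdims=0))  # []
print(reduce_output_shape([2, 3, 4], axes=[1]))    # [2, 1, 4]
```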

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85765
Approved by: https://github.com/abock
2022-09-29 03:38:52 +00:00
2fa8142cf9 [ONNX] Rename constants for clarity (#84645)
Rename constants to make them clearer. Fix their style to upper case.

Removed `onnx_stable_opsets` because it can be computed from `ONNX_MIN_OPSET` and `ONNX_MAX_OPSET`.

Fixes #84643

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84645
Approved by: https://github.com/BowenBao
2022-09-09 01:22:14 +00:00
88e1c5c1d8 Apply ufmt linter to all py files under test/onnx (#81335)
Same as https://github.com/pytorch/pytorch/pull/81285 but for `test/onnx`. The merge conflicts in `lintrunner.toml` are expected. I will resolve them depending on the merge order of the PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81335
Approved by: https://github.com/BowenBao, https://github.com/kit1980
2022-07-15 18:51:38 +00:00
773d80747c [ONNX] Clean up unit tests, rename files and improve import style (#81141)
- Rename `test_pytorch_common` -> `pytorch_test_common`, `test_onnx_common` -> `onnx_test_common`, removing the test_ prefix to show that the files are not test cases
- Remove import * in `test_pytorch_common` and adjust to import from `testing._internal.common_utils` (where functions are actually defined) instead
- Import modules only in `test_pytorch_onnx_onnxruntime` (the other tests have too many cases to handle in a single PR; the skips are exceptions)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81141
Approved by: https://github.com/BowenBao
2022-07-12 00:00:49 +00:00
d2fbfe7fce [ONNX] subscribe onnx to our custom test infra (#79546)
Remove as many references to unittest as can easily be done, in favor of our custom infra.

Left a todo where I could not easily replace unittest.main with run_tests()
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79546
Approved by: https://github.com/seemethere
2022-06-15 15:00:04 +00:00
134459161d [ONNX] Improve shape inference supporting inferred values and unspecified optionals (#78999)
Extend to support the following in onnx shape inference:
* Utilizing inferred constant values. Provides more information than just the shape and type of the input.
   E.g. enables `onnx::Resize` when the `scales` input is constructed by `onnx::Concat` of constants (see the sketch below).
* `prim::Constant`, especially the one that represents `None`, which later represents an unspecified optional input in ONNX.
   E.g. enables `onnx::Resize` when the second optional input `roi` is not provided.
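
For instance, a module along these lines (an assumed illustration, not a test from this PR) typically lowers to `onnx::Resize` with a `scales` input assembled from constants and no `roi` provided:

```python
import torch

class Upsample(torch.nn.Module):
    def forward(self, x):
        # Usually exported as onnx::Resize; `roi` is left unspecified and the
        # `scales` input is built from constant values.
        return torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest")
```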

Fixes #69346
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78999
Approved by: https://github.com/justinchuby, https://github.com/garymm
2022-06-13 23:16:18 +00:00
563c2719bf [ONNX] Refactor to remove inline imports - attempt 2 (#77448)
Re-land
- #77142

(diff: https://github.com/pytorch/pytorch/compare/c08b8f0..justinchuby:justinchu/remove-patch2)

Fixed:
- Delay importing symbolic opsets in the registry.

Tested locally with torchvision
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77448
Approved by: https://github.com/garymm
2022-05-16 14:44:24 +00:00
6b366dd3c1 Revert "[ONNX] Refactor to remove inline imports (#77142)"
This reverts commit c08b8f0967efc2eec078da4541c5fdd003fbdd75.

Reverted https://github.com/pytorch/pytorch/pull/77142 on behalf of https://github.com/malfet
2022-05-13 19:44:17 +00:00
c08b8f0967 [ONNX] Refactor to remove inline imports (#77142)
Reduce circular dependencies

- Lift constants and flags from `symbolic_helper` to `_constants` and `_globals`
    - Standardized constant naming to make it consistent
- Make `utils` strictly dependent on `symbolic_helper`, removing inline imports from symbolic_helper
- Move side effects from `utils` to `_patch_torch`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77142
Approved by: https://github.com/garymm, https://github.com/BowenBao
2022-05-13 03:46:33 +00:00
a812c4cd96 [ONNX] Relax node constraint for onnx shape inference (#77379)
`None` as an input is legal per the ONNX spec for representing
optional inputs. For [example](https://github.com/onnx/onnx/blob/main/docs/Operators.md#inputs-2---3-7), `constant_value` for `ONNX::Pad`.
This PR removes the constraint check that was applied prior
to calling onnx shape inference. For the issue below, that
constraint prevents the onnx shape inference of `ONNX::Pad`,
which leads to falling back on an incorrect constant traced
shape.
Prior to this change, for the unit test in this PR, the ONNX shape inference
for `ONNX::Pad` would be skipped and would return `None` instead.

Fixes https://github.com/pytorch/vision/issues/5971

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77379
Approved by: https://github.com/garymm
2022-05-13 00:12:16 +00:00
5dd1c67776 [ONNX] Format ONNX python with black
Format all onnx python code with black and isort with

```sh
isort torch/onnx/ test/onnx
black torch/onnx/ test/onnx
```

Updated lintrunner config to include these paths.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76754
Approved by: https://github.com/suo, https://github.com/BowenBao
2022-05-05 00:19:22 +00:00
57c7bf7fee [ONNX] Remove redundant warning for reshape
Fixes #73129.

The warning is not actionable and appears to be a potential false alarm.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73265
Approved by: https://github.com/shubhambhokare1, https://github.com/garymm
2022-03-14 20:58:26 +00:00
a482fd7b2e [ONNX] Fix onnx gather shape inference
The previous code set `only_rank_available=true` for Gather, resulting in actual inferred shape values being overridden with symbols.

Fixes #68003

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73607
Approved by: https://github.com/fatcat-z, https://github.com/garymm
2022-03-08 22:12:39 +00:00
956bafef8b [onnx export] Add broadcast to matmul shape inference (#70534)
Reuse the same broadcast code from the function `ProcessBroadcastNode`.
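
The broadcasting being covered follows `torch.matmul`'s batch-dimension broadcasting, for example:

```python
import torch

a = torch.randn(2, 1, 3, 4)
b = torch.randn(5, 4, 6)
# Batch dims (2, 1) and (5,) broadcast to (2, 5); the matrix dims give (3, 6).
print(torch.matmul(a, b).shape)  # torch.Size([2, 5, 3, 6])
```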

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72990
2022-02-18 18:44:19 +00:00
a6517c20cf [ONNX] Improve Expand shape inference (#69264)
Extend shape inference support for `Expand` when the value of the `shape` argument is unknown: infer the rank of the output of `Expand`, and set its shape to dynamic, if the shape of the `shape` argument is known.

Without this, shape inference aborts and falls back to the static shape provided by the tracer, which is incorrect in many cases.

Co-authored-by: BowenBao <bowbao@microsoft.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72985
2022-02-18 18:24:28 +00:00
03afd86295 [ONNX] Fix lstm reshape shape inference regression
Fixes #72399
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72532
2022-02-11 19:40:22 +00:00
5347dab851 Set test owners for onnx tests (#66860)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66860

Reviewed By: malfet

Differential Revision: D31964696

Pulled By: janeyx99

fbshipit-source-id: 4e77d1bda92d9107ca0b90a06d24fa4477ceaffa
2021-10-27 12:50:45 -07:00
1da628bdb7 [ONNX] Update slice process shape to support rank only inference (#65782) (#66149)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66149

The updated logic will be able to infer the rank of the slice output when only the rank is known for the slice input. This enables cases where `ConstantValueMap::HasRank(input)` is `True` while `ConstantValueMap::HasShape(input)` is `False`.

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31423840

Pulled By: malfet

fbshipit-source-id: 17b2b24aa63435d5212ebe6bdf66ae3c348c4e3b

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-10-22 13:46:26 -07:00
0a6828a306 [ONNX] use consistent quoting for string literals (#57757) (#58695)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58695

As PEP8 says: "Pick a rule and stick to it." [1]

[1] https://www.python.org/dev/peps/pep-0008/#string-quotes

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714811

Pulled By: SplitInfinity

fbshipit-source-id: c95103aceb1725c17c034dc6fc8216627f189548

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-27 12:06:42 -07:00
dc0071dfa5 [ONNX] Special post process for onnx::Cast and onnx::ConstantOfShape shape type inference (#55962) (#57597)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57597

* Special post process for onnx::Cast and onnx::ConstantOfShape
* Update `test_pytorch_onnx_shape_inference.py` to be a unit test over shape inference patterns.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393529

Pulled By: SplitInfinity

fbshipit-source-id: fc26032ddb842d4e299447da39564b28049752ed

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-05-13 13:42:44 -07:00
f6df18f6ca Clean up future imports for Python 2 (#53349)
Summary:
See https://github.com/pytorch/pytorch/issues/42919

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53349

Reviewed By: malfet

Differential Revision: D27039089

Pulled By: bugra

fbshipit-source-id: 8063dc184248604506a8dbb1bcb73da8ec85bb18
2021-03-14 15:56:13 -07:00
57d1df071f [ONNX] Support inplace operations on inplace indexing (#52063) (#53306)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53306

* [ONNX] Fix for sequence of mutations in blocks (#51577)

Fixes consecutive mutations in a tensor inside blocks.
Also, support append and pop in blocks.

* Support inplace operations + indexing

* Clean up old pass for remove mutations

* Add loop test

* Fixes for set attr in loops

* Removing the new jit API flag

* [ONNX] Redesign onnx pass to enable shape type dependent pattern conversion - cont (#51795)

With the introduction of ONNX shape inference, shape and type are inferred on the fly as operators get converted from ATen to ONNX when running symbolic functions. This resolves the shape/type requirement for the symbolic functions. The pre-onnx passes, however, cannot be supported by shape inference, since at that stage the operators in the graph are still ATen operators.

This PR updates the design of the ONNX pass to enable a mechanism for capturing subgraphs of ATen operators of certain patterns and converting them later, when shape/type information of upstream operators is available.

The new design will require pre-onnx passes that need shape/type to be written in two parts: encapsulation and conversion.

    The encapsulation part will find the nodes of the patterns, like how pre-onnx passes were written previously. But instead of converting the nodes, it will encapsulate them into a sub-block of a new placeholder node. This part is called before the onnx pass, so it runs before calling symbolic functions.

    The conversion part will be called inside the onnx pass. In the onnx pass, run_symbolic_func is called for each node in topological order. When it reaches the placeholder node, the conversion part is invoked. It converts the nodes inside the sub-block based on the pattern; by that time, shape/type of upstream operators will be available. After the conversion is complete, the placeholder node is removed, run_symbolic_func is called for the nodes that were inside its sub-block, and they are converted from ATen operators to ONNX operators.

This PR includes several other fixes, listed below.
* ~~replace helper.cpp with onnx_utils.cpp for holding utility functions.~~
* fix EraseNumberTypes on Bool type; the code was outdated from a time when the Bool type didn't exist.
* ~~enable onnx shape inference in export with parameter/initializer data.~~
* other code clean ups.
* fix insertion of identity nodes for loop opset 13 sequence output.

~~PR depends on #51603~~

* Fix after merge

* clang

* Fix clang

* Fix clang

* Fix warning message.

* Fixes for non-model param attributes

* Fix for caffe2

* Additional test

* clang

* Skip test for lower opsets

* fix clang-tidy

* Update init.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Fix for clang formatting

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922416

Pulled By: SplitInfinity

fbshipit-source-id: e7108620b39b6404c594910786c4d275fee59d84

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-03-12 02:49:11 -08:00
3da4cea658 [ONNX] Add dim_param support in export with onnx shape inference (#44920)
Summary:
* Support propagating `dim_param` in ONNX by encoding it as `ShapeSymbol` in the `SymbolicShape` of outputs. If export is called with `dynamic_axes` provided, shape inference will start with these axes set as dynamic (see the sketch below).
* Add a new test file `test_pytorch_onnx_shape_inference.py`, reusing all test cases from `test_pytorch_onnx_onnxruntime.py`, but focusing on validating shapes for all nodes in the graph. Currently this is not enabled in CI, since there are still quite a few existing issues and corner cases to fix. By default the test runs only at opset 12.
* Bug fixes, such as div, _len, and peephole.cpp passes for PackPadded, and LogSoftmaxCrossEntropy.
* This PR depends on existing PRs such as #44332.
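
A minimal export sketch showing the `dynamic_axes` entry point this builds on (the model, names, and opset below are illustrative); the listed axes start out dynamic and surface as `dim_param` entries in the exported graph:

```python
import io
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(3, 4)
buf = io.BytesIO()
torch.onnx.export(
    model, (x,), buf,
    input_names=["input"], output_names=["output"],
    # Axis 0 is exported as the symbolic dimension "batch" instead of the literal 3.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=12,
)
```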

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44920

Reviewed By: eellison

Differential Revision: D23958398

Pulled By: bzinodev

fbshipit-source-id: 00479d9bd19c867d526769a15ba97ec16d56e51d
2020-09-30 21:56:24 -07:00