58 Commits

524b78d4f6 [ONNX] Refactor torchscript based exporter (#161323)
Refactor the TorchScript-based exporter logic, moving it to a single (private) location for better code management. The original public module and method APIs are preserved.

- Updated module paths in `torch/csrc/autograd/python_function.cpp` accordingly
- Removed `check_onnx_broadcast` from `torch/autograd/_functions/utils.py` because it is private and unused

@albanD / @soulitzer could you review changes in `torch/csrc/autograd/python_function.cpp` and
`torch/autograd/_functions/utils.py`? Thanks!

## BC Breaking
- **Deprecated members in `torch.onnx.verification` are removed**

Differential Revision: [D81236421](https://our.internmc.facebook.com/intern/diff/D81236421)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161323
Approved by: https://github.com/titaiwangms, https://github.com/angelayi
2025-09-02 16:10:30 +00:00
c0582fd0f8 Remove unused Python variables in torch/[b-z]* (#136963)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136963
Approved by: https://github.com/ezyang
2024-10-19 16:45:22 +00:00
abcd329359 [BE] typing for decorators - onnx/symbolic_helper (#131565)
See #131429
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131565
Approved by: https://github.com/justinchuby, https://github.com/oulgen, https://github.com/zou3519, https://github.com/titaiwangms
2024-07-24 16:39:47 +00:00
5a0068cc69 [BE] mypy: disallow untyped decorators (#131428)
Untyped decorators strip the types from their decorated function, so even if the underlying function is fully typed, callers to it don't get any benefit from the type annotations.

Step 1 - Enable the error and override in all the offending files.

#131429

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131428
Approved by: https://github.com/justinchuby, https://github.com/oulgen
2024-07-23 21:50:55 +00:00
e880cb2fe0 [ONNX] Remove beartype usage (#130484)
beartype has served us well in identifying type errors and ensuring we call internal functions with the correct arguments (thanks!). However, the value of having beartype is diminished because of the following:

1. When beartype improved its support for `Dict[]` type checking, it discovered typing mistakes in some functions that were previously uncaught. This caused the exporter to fail with newer versions of beartype where it used to succeed. Since we cannot fix PyTorch and release a new version just because of this, it creates confusion for users that have beartype in their environment when they use torch.onnx.
2. beartype adds an additional call line to the traceback, which makes the already deep dynamo stack even larger, affecting readability when users diagnose errors with the traceback.
3. Since the typing annotations need to be evaluated, we cannot use new syntaxes like `|` because we need to maintain compatibility with Python 3.8. We don't want to wait for PyTorch to take py310 as the lowest supported Python before using the new typing syntaxes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130484
Approved by: https://github.com/titaiwangms
2024-07-18 22:07:40 +00:00
0851de5b16 Revert "[ONNX] Remove beartype usage (#130484)"
This reverts commit 1794c35912025aa44b0d70f67ff664b4f7bd1014.

Reverted https://github.com/pytorch/pytorch/pull/130484 on behalf of https://github.com/clee2000 due to test_sympy_utils failure is real https://github.com/pytorch/pytorch/actions/runs/9961499559/job/27523758780 1794c35912.  Dr CI is matching with commits in current commit? ([comment](https://github.com/pytorch/pytorch/pull/130484#issuecomment-2231575577))
2024-07-16 18:41:51 +00:00
1794c35912 [ONNX] Remove beartype usage (#130484)
beartype has served us well in identifying type errors and ensuring we call internal functions with the correct arguments (thanks!). However, the value of having beartype is diminished because of the following:

1. When beartype improved its support for `Dict[]` type checking, it discovered typing mistakes in some functions that were previously uncaught. This caused the exporter to fail with newer versions of beartype where it used to succeed. Since we cannot fix PyTorch and release a new version just because of this, it creates confusion for users that have beartype in their environment when they use torch.onnx.
2. beartype adds an additional call line to the traceback, which makes the already deep dynamo stack even larger, affecting readability when users diagnose errors with the traceback.
3. Since the typing annotations need to be evaluated, we cannot use new syntaxes like `|` because we need to maintain compatibility with Python 3.8. We don't want to wait for PyTorch to take py310 as the lowest supported Python before using the new typing syntaxes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130484
Approved by: https://github.com/titaiwangms
2024-07-16 17:34:36 +00:00
0effcb70ef Revert "[ONNX] Remove beartype usage (#130484)"
This reverts commit f44739cf42e22a569bd1bdb0c113f8a069c17a41.

Reverted https://github.com/pytorch/pytorch/pull/130484 on behalf of https://github.com/huydhn due to Sorry for reverting your change but those failures show up in trunk after the commit landed f44739cf42, I am reverting it to see if it fix trunk ([comment](https://github.com/pytorch/pytorch/pull/130484#issuecomment-2226812311))
2024-07-13 07:52:59 +00:00
f44739cf42 [ONNX] Remove beartype usage (#130484)
beartype has served us well in identifying type errors and ensuring we call internal functions with the correct arguments (thanks!). However, the value of having beartype is diminished because of the following:

1. When beartype improved its support for `Dict[]` type checking, it discovered typing mistakes in some functions that were previously uncaught. This caused the exporter to fail with newer versions of beartype where it used to succeed. Since we cannot fix PyTorch and release a new version just because of this, it creates confusion for users that have beartype in their environment when they use torch.onnx.
2. beartype adds an additional call line to the traceback, which makes the already deep dynamo stack even larger, affecting readability when users diagnose errors with the traceback.
3. Since the typing annotations need to be evaluated, we cannot use new syntaxes like `|` because we need to maintain compatibility with Python 3.8. We don't want to wait for PyTorch to take py310 as the lowest supported Python before using the new typing syntaxes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130484
Approved by: https://github.com/titaiwangms
2024-07-13 00:08:25 +00:00
27f9d3b0a1 Flip default value for mypy disallow_untyped_defs [8/11] (#127845)
See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127845
Approved by: https://github.com/oulgen
ghstack dependencies: #127842, #127843, #127844
2024-06-08 18:49:56 +00:00
bbe846f430 Add symbolic_opset19.py and symbolic_opset20.py to support opset 19/20, extend opset 18 support (#118828)
Start to fix https://github.com/pytorch/pytorch/issues/114801
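For illustration, targeting the new opsets directly should then work (a minimal sketch assuming a torch build with this change; the model, shapes, and file name are arbitrary):

```python
import torch

model = torch.nn.Linear(4, 4)
x = torch.randn(1, 4)
# With this change the TorchScript exporter can target opset 19/20.
torch.onnx.export(model, (x,), "linear_opset20.onnx", opset_version=20)
```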

Co-authored-by: Thiago Crepaldi <thiagofc@microsoft.com>
Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118828
Approved by: https://github.com/thiagocrepaldi
2024-03-22 18:01:33 +00:00
f543093e06 [ONNX] Fix output mismatch issue of repeat_interleave when dim is None (#116689)
`input` was introduced but got mixed up with `self` in the `repeat_interleave` symbolic, which caused the output mismatch when `dim` is None.
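A minimal repro sketch of the `dim=None` case (shapes and file name are arbitrary):

```python
import torch

class Model(torch.nn.Module):
    def forward(self, x):
        # dim=None: the input is flattened before each element is repeated
        return torch.repeat_interleave(x, 2)

x = torch.arange(4).reshape(2, 2)
torch.onnx.export(Model(), (x,), "repeat_interleave.onnx", opset_version=13)
```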

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116689
Approved by: https://github.com/thiagocrepaldi
2024-01-03 18:38:00 +00:00
a3e9b80082 Fix torch.diagonal for torch.onnx.export when dim1<0 or dim2<0 (#111130)
In many cases, `torch.diagonal` is called with negative dims (e.g. `dim1=-2, dim2=-1`), and ONNX export always fails in these cases.
This PR fixes that bug.
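A minimal sketch of the previously failing pattern (shapes and file name are arbitrary):

```python
import torch

class Diag(torch.nn.Module):
    def forward(self, x):
        # Negative dims such as (-2, -1) used to break ONNX export
        return torch.diagonal(x, dim1=-2, dim2=-1)

torch.onnx.export(Diag(), (torch.randn(2, 3, 3),), "diagonal.onnx", opset_version=13)
```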
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111130
Approved by: https://github.com/thiagocrepaldi
2023-10-13 22:05:53 +00:00
8dcdc74915 torch->onnx export support: quantized::linear_relu (#109755)
- Adds support for quantized::linear_relu (see the sketch after this list)
  - Adds weight unpacking pattern matcher
  - Adds to export for opset 10 and 13.
- Adds QAT test modeled after conv2d+relu fusion test
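A rough eager-mode quantization sketch that produces a quantized::linear_relu op after fusion and conversion (module names, shapes, and calibration are placeholders; the PR's own test uses QAT):

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.fc = torch.nn.Linear(4, 4)
        self.relu = torch.nn.ReLU()
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.fc(self.quant(x))))

m = M().eval()
m.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
torch.ao.quantization.fuse_modules(m, [["fc", "relu"]], inplace=True)  # -> LinearReLU
torch.ao.quantization.prepare(m, inplace=True)
m(torch.randn(8, 4))  # calibrate observers
torch.ao.quantization.convert(m, inplace=True)  # lowers to quantized::linear_relu
torch.onnx.export(m, (torch.randn(1, 4),), "linear_relu_q.onnx", opset_version=13)
```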

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109755
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2023-09-21 23:24:20 +00:00
ebb8aa9c0b Correct output_padding for quantized tconv (torch->onnx) (#104207)
- In #102759, support for `quantized::conv_transposeNd` was introduced. It incorrectly set `output_padding` to all zeros. It turns out `output_padding` can be specified in PyTorch, but the parameter was not being unpacked correctly and thus never showed up in the Python torch->onnx code.
- This PR adds unpacking of `output_padding` in `unpack_quantized_weights.cpp` when needed. It also adds the parameter to the Python functions and uses it, removing the all-zero defaults (see the sketch after this list).
- Another issue with #102759 is that it only added these new ops to opset 10, without the ability to specify `axis` in opset 13. This PR fixes that as well.
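For reference, here is the float analogue of the parameter in question (a minimal sketch; the PR's fix concerns the quantized variant, where `output_padding` had been silently dropped):

```python
import torch

class TConv(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # output_padding resolves the output-size ambiguity of a strided
        # transposed convolution; it must survive into the exported graph.
        self.tconv = torch.nn.ConvTranspose2d(
            4, 4, kernel_size=3, stride=2, padding=1, output_padding=1
        )

    def forward(self, x):
        return self.tconv(x)

torch.onnx.export(TConv(), (torch.randn(1, 4, 8, 8),), "tconv.onnx", opset_version=13)
```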

Fixes #104206

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104207
Approved by: https://github.com/BowenBao
2023-06-29 13:40:48 +00:00
3362c1d240 [ONNX] add cast operator after reduce to match desired dtype (#100700)
This PR conditionally inserts a cast operator after a reduction operation to match the specified dtype in the exported ONNX model. The code changes affect **opset 9** and **opset 13**.

I understand there's an [automatic upcast to int64](c91a41fd68/torch/onnx/symbolic_opset9.py (L783)) before reduction, most likely to prevent overflow, so I left that alone and only conditionally add a cast back to the desired dtype.

## Test int32
```python
import torch
import onnx
a = torch.tensor([10, 20, 30, 80], dtype=torch.int32)
def test():
    class SumInt32(torch.nn.Module):
        def forward(self, a):
            return torch.sum(a, dtype=torch.int32)

    sumi = SumInt32().eval()
    assert sumi(a).dtype == torch.int32
    print("Torch model output type matches input type")

    torch.onnx.export(sumi, (a), "/tmp/sumi_int32.onnx", opset_version=12)
    model = onnx.load("/tmp/sumi_int32.onnx")

    assert model.graph.output[0].type.tensor_type.elem_type == onnx.TensorProto.INT32
    print("ONNX model output type matches input type")
test()
```
![sumi_int32 onnx](https://user-images.githubusercontent.com/10516699/236499220-59b64821-5807-4f69-b0e2-90ae34280e03.png)

## Test int64

```python
import onnx
import torch

a = torch.tensor([10, 20, 30, 80], dtype=torch.int64)

def test():
    class SumInt64(torch.nn.Module):
        def forward(self, a):
            return torch.sum(a, dtype=torch.int64)

    sumi = SumInt64().eval()
    assert sumi(a).dtype == torch.int64
    print("Torch model output type matches input type")
    torch.onnx.export(sumi, (a), "/tmp/sumi_int64.onnx", opset_version=12)
    model = onnx.load("/tmp/sumi_int64.onnx")
    assert model.graph.output[0].type.tensor_type.elem_type == onnx.TensorProto.INT64
    print("ONNX model output type matches input type")

test()

```
![sum_int64 onnx](https://user-images.githubusercontent.com/10516699/236422133-15f9cda3-242f-46da-9b23-c2e920f27078.png)

Fixes https://github.com/pytorch/pytorch/issues/100097

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100700
Approved by: https://github.com/thiagocrepaldi
2023-05-06 00:05:57 +00:00
40df6e1647 [ONNX] Simplify repeat_interleave export for scalar-valued 'repeat' (#100575)
This PR simplifies the ONNX export of torch.repeat_interleave when 'repeat' is a scalar value (so each index in the input is repeated the same number of times). (Issue #100438)

Here is a before/after of a simple model export:
```python
# Model + export code
import torch

class RepeatInterleaveModel(torch.nn.Module):
    def forward(self, x):
        return x.repeat_interleave(2, dim=-1)

args = (torch.rand((2, 2, 16)),)
model = RepeatInterleaveModel()
torch.onnx.export(model, args, "repeat_interleave.onnx", opset_version=17)
```

**Before (static shapes)**
![repeat_interleave onnx(1)](https://user-images.githubusercontent.com/46343317/236014996-00726832-1e76-4fb4-950d-4b54cc5cc20c.png)

-----
**Before (dynamic shapes, second graph is Loop body)**
<p float="left">
  <img src="https://user-images.githubusercontent.com/46343317/236029895-20b0ae0a-240f-466d-bb01-e619ec5967ad.png" width="45%" />
  <img src="https://user-images.githubusercontent.com/46343317/236029915-e67b808a-029b-4997-bc05-1ce59eec409a.png" width="47%" />
</p>

-----
**After (for both static and dynamic shapes)**
<img src="https://user-images.githubusercontent.com/46343317/236015235-633811cb-09a2-435d-a293-1b2bcb7dea50.png" width="66%" />

-----

This PR also fixes a bug where the exporter throws an exception when the input has dynamic shapes and the 'dim' parameter is not specified to torch.repeat_interleave. Also adds a new test case to cover this. (Issue #100429)

Fixes #100438 and #100429

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100575
Approved by: https://github.com/BowenBao
2023-05-05 17:00:42 +00:00
1c110652a8 [ONNX] Support aten::tile in torchscript exporter (#99927)
Fixes #99692
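For illustration, a minimal export sketch of the newly supported op (shapes, file name, and opset are arbitrary choices):

```python
import torch

class Tile(torch.nn.Module):
    def forward(self, x):
        return torch.tile(x, (2, 3))

torch.onnx.export(Tile(), (torch.randn(2, 2),), "tile.onnx", opset_version=13)
```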
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99927
Approved by: https://github.com/justinchuby
2023-04-25 22:58:18 +00:00
e5664c652a [ONNX] Support aten::scaled_dot_product_attention in torchscript exporter (#99658)
Fixes #97262

### <samp>🤖 Generated by Copilot at d06d195</samp>

### Summary
🆕 Adds tests and annotations for a new operator; 🚀 adds support for exporting it to ONNX; 📝 fixes a minor formatting issue.
This pull request adds ONNX opset 14 support for the `nn.functional.scaled_dot_product_attention` operator, which is used for self-attention in transformer models. It does so by adding tests and annotations in `test/onnx/test_op_consistency.py`, and by adding a symbolic function in `torch/onnx/symbolic_opset14.py` that reuses an existing implementation (a usage sketch follows the walkthrough below).

> _To export `scaled_dot_product_attention`_
> _To ONNX opset 14, we need some extension_
> _We import some modules and types_
> _And add a symbolic that pipes_
> _The existing code with some annotation_

### Walkthrough
*  Implement the `nn.functional.scaled_dot_product_attention` operator for ONNX opset 14 ([link](https://github.com/pytorch/pytorch/pull/99658/files?diff=unified&w=0#diff-244955d820ec138d5ddffb20ee6f517cc4c5d281f19ccb53d8db47043b5ac46fR122-R292))
*  Add imports for modules and types needed for the operator implementation ([link](https://github.com/pytorch/pytorch/pull/99658/files?diff=unified&w=0#diff-244955d820ec138d5ddffb20ee6f517cc4c5d281f19ccb53d8db47043b5ac46fL17-R23))
*  Add a command to run the pytest module for testing the operator consistency ([link](https://github.com/pytorch/pytorch/pull/99658/files?diff=unified&w=0#diff-e968c9cb6fc6631cab526cb3a9fe66358c4c6e757e2a223a224b976471bcb753R13))
*  Add the operator to the list of operators tested for consistency ([link](https://github.com/pytorch/pytorch/pull/99658/files?diff=unified&w=0#diff-e968c9cb6fc6631cab526cb3a9fe66358c4c6e757e2a223a224b976471bcb753R311))
*  Add annotations to indicate the operator's limitations and issues ([link](https://github.com/pytorch/pytorch/pull/99658/files?diff=unified&w=0#diff-e968c9cb6fc6631cab526cb3a9fe66358c4c6e757e2a223a224b976471bcb753L333-R339), [link](https://github.com/pytorch/pytorch/pull/99658/files?diff=unified&w=0#diff-e968c9cb6fc6631cab526cb3a9fe66358c4c6e757e2a223a224b976471bcb753R354-R358))
*  Remove an empty line at the end of `test/onnx/test_op_consistency.py` ([link](https://github.com/pytorch/pytorch/pull/99658/files?diff=unified&w=0#diff-e968c9cb6fc6631cab526cb3a9fe66358c4c6e757e2a223a224b976471bcb753L441))
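A minimal usage sketch (shapes and file name are arbitrary; opset 14 is the minimum this PR targets):

```python
import torch
import torch.nn.functional as F

class SDPA(torch.nn.Module):
    def forward(self, q, k, v):
        return F.scaled_dot_product_attention(q, k, v)

q, k, v = (torch.randn(2, 4, 8, 16) for _ in range(3))
torch.onnx.export(SDPA(), (q, k, v), "sdpa.onnx", opset_version=14)
```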

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99658
Approved by: https://github.com/justinchuby
2023-04-22 02:36:39 +00:00
8062735f78 [ONNX] Support aten::unflatten in torchscript exporter (#99056)
Fixes #98857
Fixes #98190
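A minimal export sketch of the newly supported op (shapes, file name, and opset are arbitrary choices):

```python
import torch

class Unflatten(torch.nn.Module):
    def forward(self, x):
        # Splits dim 1 of a (4, 6) tensor into (2, 3)
        return x.unflatten(1, (2, 3))

torch.onnx.export(Unflatten(), (torch.randn(4, 6),), "unflatten.onnx", opset_version=13)
```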
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99056
Approved by: https://github.com/BowenBao
2023-04-13 22:19:02 +00:00
a8f40b39ce Update all ONNX symbolics with new JitScalarType API (#87245)
Fixes https://github.com/pytorch/pytorch/issues/84365 and more

This PR addresses not only the issue above, but the entire family of issues related to `torch._C.Value.type()` parsing when `scalarType()` or `dtype()` is not available.

This issue exists before `JitScalarType` was introduced, but the new implementation refactored the bug in because the new api `from_name` and `from_dtype` requires parsing `torch._C.Value.type()` to get proper inputs, which is exactly the root cause for this family of bugs.

Therefore `from_name` and `from_dtype` must be called only when the implementor knows the `name` and `dtype` without parsing a `torch._C.Value`. To handle the corner cases hidden within `torch._C.Value`, a new `from_value` API was introduced; it should be used in favor of the former ones for most cases. The new API is safer and doesn't require type parsing from the user, which could trigger JIT asserts in the core of PyTorch.
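A rough sketch of the intended call pattern inside a symbolic function (the helper name is hypothetical; `JitScalarType` lives in the private `torch.onnx._type_utils` module):

```python
from torch.onnx import _type_utils

def _cast_to_input_type(g, input, other):
    # from_value reads type information from a jit Value directly, instead
    # of hand-parsing value.type() for scalarType()/dtype().
    scalar_type = _type_utils.JitScalarType.from_value(input)
    return g.op("Cast", other, to_i=scalar_type.onnx_type())
```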

Although CI is passing for all tests, please carefully review all of the symbolics/helpers refactoring to make sure the meaning/intention of the old calls is not changed in the new calls.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87245
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-11-03 03:01:33 +00:00
5deeb09d4e [ONNX] Annotate all g as GraphContext (#85491)
- Use g.opset to test export opset version
- Annotate all `g` as GraphContext

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85491
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-09-28 22:39:28 +00:00
3d2316670f [ONNX] Create GraphContext and load g.op method to the class (#84728)
This PR creates the `GraphContext` class, relays all graph methods to `_C.Graph`, and implements the `g.op` method. The GraphContext object is passed into the symbolic functions in place of `_C.Graph` for compatibility with existing symbolic functions.

This way (1) we can type annotate all `g` args because the method is defined, (2) we can use additional context information in symbolic functions, and (3) there is no more monkey patching on `_C.Graph`.
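A minimal sketch of what a symbolic function looks like with the context object (the module path, `torch.onnx._internal.jit_utils` in recent releases, and the opset branching are assumptions for illustration):

```python
from torch.onnx._internal import jit_utils

def symbolic_softmax(g: jit_utils.GraphContext, input, dim):
    # The context wraps the graph and carries export state such as the
    # target opset, so symbolic functions can branch on g.opset.
    if g.opset >= 13:
        return g.op("Softmax", input, axis_i=dim)
    raise RuntimeError("opset < 13 path omitted from this sketch")
```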

Also

- Fix return type of `_jit_pass_fixup_onnx_controlflow_node`
- Create `torchscript.py` to house torch.Graph related functions
- Change `GraphContext.op` to create nodes in the Block instead of the Graph
- Create `add_op_with_blocks` to handle scenarios where we need to directly manipulate sub-blocks. Update loop and if symbolic functions to use this function.

## Discussion

Should we put all the context inside `SymbolicContext` and make it an attribute in the `GraphContext` class? This way we only define two attributes `GraphContext.graph` and `GraphContext.context`. Currently all context attributes are directly defined in the class.

### Decision

Keep GraphContext flat and note that it will change in the future.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84728
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-09-28 22:21:55 +00:00
2f50d2f685 [ONNX] Update docs on symbolic registration (#85290)
- Move inline instructions on editing symbolic functions to the README
- Add a line on using the symbolic function registration decorator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85290
Approved by: https://github.com/BowenBao
2022-09-22 13:37:11 +00:00
76d60778eb [ONNX] Use decorators for symbolic function registration (#84448)
This is the 4th PR in the series of #83787. It enables the use of `@onnx_symbolic` across `torch.onnx`.

- **Backward breaking**: Removed some symbolic functions from `__all__` because of the use of  `@onnx_symbolic` for registering the same function on multiple aten names.
- Decorate all symbolic functions with `@onnx_symbolic` (see the sketch after this list)
- Move Quantized and Prim ops out from classes to functions defined in the modules. Eliminate the need for `isfunction` checking, speeding up the registration process by 60%.
    - Remove the outdated unit test `test_symbolic_opset9.py`
- Symbolic function registration moved from the first call to `_run_symbolic_function` to init time.
- Registration is fast:
  ![image](https://user-images.githubusercontent.com/11205048/189164959-f3fca173-19bc-4682-b150-f13a586387bf.png)
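Registration then looks roughly like this (a sketch based on the pattern used in the opset modules; the `torch.onnx._internal.registration` module is internal and may differ across versions):

```python
import functools

from torch.onnx._internal import registration

_onnx_symbolic = functools.partial(registration.onnx_symbolic, opset=9)

@_onnx_symbolic("aten::relu")  # registered at import time, not on first export
def relu(g, self):
    return g.op("Relu", self)
```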

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84448
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-09-22 06:25:24 +00:00
388368b699 [ONNX] Fix type annotations and enable type checking for all apis (#84091)
Enable runtime type checking for all torch.onnx public APIs, symbolic functions and most helpers (minus two that do not have a checkable type: `_.JitType` does not exist) by adding the beartype decorator. Fix type annotations to make unit tests green.

Profile:

export `torchvision.models.alexnet(pretrained=True)`

```
with runtime type checking: 21.314 / 10 passes
without runtime type checking: 20.797 / 10 passes

+ 2.48%
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84091
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2022-09-03 01:40:18 +00:00
d8cc8368ab Revert "[ONNX] Fix type annotations and enable type checking for all apis (#84091)"
This reverts commit 6446da17305960088dfae501d5c7358af068fa81.

Reverted https://github.com/pytorch/pytorch/pull/84091 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-08-28 12:28:58 +00:00
6446da1730 [ONNX] Fix type annotations and enable type checking for all apis (#84091)
Enable runtime type checking for all torch.onnx public APIs, symbolic functions and most helpers (minus two that do not have a checkable type: `_.JitType` does not exist) by adding the beartype decorator. Fix type annotations to make unit tests green.

Profile:

export `torchvision.models.alexnet(pretrained=True)`

```
with runtime type checking: 21.314 / 10 passes
without runtime type checking: 20.797 / 10 passes

+ 2.48%
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84091
Approved by: https://github.com/BowenBao
2022-08-27 04:40:41 +00:00
3dfb8dfcf3 [ONNX] Use errors.SymbolicValueError for more context (#83332)
Replace runtime errors in torch.onnx with `errors.SymbolicValueError` for more context around jit values.

- Extend `_unimplemented`, `_onnx_unsupported`, `_onnx_opset_unsupported`, `_onnx_opset_unsupported_detailed` errors to include JIT value information
- Replace plain RuntimeError with `errors.SymbolicValueError` (usage sketched below)
- Clean up: Use `_is_bool` to replace string comparison on jit types
- Clean up: Remove the todo `Remove type ignore after #81112`
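A short sketch of the replacement pattern (the helper name and message text are illustrative):

```python
from torch.onnx import errors

def _unsupported(value):
    # SymbolicValueError attaches the jit Value, so the message carries
    # node and graph context instead of a bare RuntimeError string.
    raise errors.SymbolicValueError(
        "ONNX export of this operator is unsupported for the given input", value
    )
```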

#77316
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83332
Approved by: https://github.com/AllenTiTaiWang, https://github.com/thiagocrepaldi, https://github.com/BowenBao
2022-08-23 05:39:17 +00:00
f5701a1f9a [ONNX] Remove unused patching methods (#83006)
### Description

Remove unused patching methods:

- `torch._C.Graph.constant`
- unpatch `torch._C.Node.__getitem__` and move the helper function to `symbolic_helper`

Add typing annotations

### Issue

#76254

### Testing

Unit tested
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83006
Approved by: https://github.com/BowenBao
2022-08-09 19:24:03 +00:00
c6cdca5c68 [ONNX] Reland #81953 Type utility for converting among JIT, torch and ONNX data types (#82995)
Re-land #81953

Add `_type_utils` for handling data type conversion among JIT, torch and ONNX.

- Replace dictionary / list indexing with methods in ScalarType
- Breaking: **Remove ScalarType from `symbolic_helper`** and move it to `_type_utils`
- Deprecated: "cast_pytorch_to_onnx", "pytorch_name_to_type", "scalar_name_to_pytorch", "scalar_type_to_onnx", "scalar_type_to_pytorch_type" in `symbolic_helper`
- Deprecate the type mappings and lists. Remove all internal references
- Move _cast_func_template to opset 9 and remove its reference elsewhere (clean up). Added documentation for easy discovery

Why: List / dictionary indexing and lookup are error-prone and convoluted.
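For illustration, the method-based lookups that replace the old dictionaries look roughly like this (a sketch; `_type_utils` is a private module and the mapped names follow the deprecated helpers listed above):

```python
import torch
from torch.onnx import _type_utils

scalar_type = _type_utils.JitScalarType.from_dtype(torch.float32)
scalar_type.onnx_type()    # ONNX tensor type, replaces scalar_type_to_onnx
scalar_type.dtype()        # torch.float32, replaces scalar_type_to_pytorch_type
scalar_type.scalar_name()  # "Float", replaces the name-based lookup tables
```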
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82995
Approved by: https://github.com/kit1980
2022-08-08 23:43:43 +00:00
b170a52a09 Revert "[ONNX] Type utility for converting among JIT, torch and ONNX data types (#81953)"
This reverts commit 6ddf4c6f5891600a67cfc9d1092d4538bca848b3.

Reverted https://github.com/pytorch/pytorch/pull/81953 on behalf of https://github.com/kit1980 due to Broke internal builds by removing functions without deprecation
2022-08-07 20:15:28 +00:00
6ddf4c6f58 [ONNX] Type utility for converting among JIT, torch and ONNX data types (#81953)
Add `_type_utils` for handling data type conversion among JIT, torch and ONNX.

- Replace dictionary / list indexing with methods in ScalarType
- Breaking: **Remove ScalarType from `symbolic_helper`** and move it to `_type_utils`
- Breaking: **Remove "cast_pytorch_to_onnx", "pytorch_name_to_type", "scalar_name_to_pytorch", "scalar_type_to_onnx", "scalar_type_to_pytorch_type"** from `symbolic_helper`
- Deprecate the type mappings and lists. Remove all internal references
- Move _cast_func_template to opset 9 and remove its reference elsewhere (clean up). Added documentation for easy discovery

Why: List / dictionary indexing and lookup are error-prone and convoluted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81953
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-08-05 22:24:45 +00:00
6ea422dd0b Format torch/onnx with ufmt (#82137)
This is the last batch for the new ufmt (black + usort) linter. After this, black linter can finally be replaced. The previous PR to format ONNX tests was #81335
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82137
Approved by: https://github.com/kit1980, https://github.com/AllenTiTaiWang
2022-07-25 22:42:21 +00:00
e4c3e98a48 Add onnx support for torch.tensor_split (#77437)
Fixes #73454
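A minimal export sketch of the newly supported op (shapes, file name, and opset choice are arbitrary):

```python
import torch

class Split(torch.nn.Module):
    def forward(self, x):
        # Splits a length-12 tensor into 3 equal chunks
        return torch.tensor_split(x, 3)

torch.onnx.export(Split(), (torch.arange(12.0),), "tensor_split.onnx", opset_version=13)
```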

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77437
Approved by: https://github.com/garymm
2022-06-08 06:06:34 +00:00
0d76299ff7 [ONNX] Clean up module imports (#77423)
Cleaning up onnx module imports to prepare for updating `__init__`.

- Simplify importing the `_C` and `_C._onnx` namespaces
- Remove alias of the symbolic_helper module in imports
- Remove any module-level function imports. Import modules instead
    - Alias `symbolic_opsetx` as `opsetx`
- Fix some docstrings

Requires:
- https://github.com/pytorch/pytorch/pull/77448
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77423
Approved by: https://github.com/BowenBao
2022-05-20 01:56:24 +00:00
563c2719bf [ONNX] Refactor to remove inline imports - attempt 2 (#77448)
Re-land
- #77142

(diff: https://github.com/pytorch/pytorch/compare/c08b8f0..justinchuby:justinchu/remove-patch2)

Fixed:
- Delay importing the symbolic opsets in the registry.

Tested locally with torchvision
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77448
Approved by: https://github.com/garymm
2022-05-16 14:44:24 +00:00
6b366dd3c1 Revert "[ONNX] Refactor to remove inline imports (#77142)"
This reverts commit c08b8f0967efc2eec078da4541c5fdd003fbdd75.

Reverted https://github.com/pytorch/pytorch/pull/77142 on behalf of https://github.com/malfet
2022-05-13 19:44:17 +00:00
c08b8f0967 [ONNX] Refactor to remove inline imports (#77142)
Reduce circular dependencies

- Lift constants and flags from `symbolic_helper` to `_constants` and `_globals`
    - Standardized constant naming to make it consistent
- Make `utils` strictly dependent on `symbolic_helper`, removing inline imports from symbolic_helper
- Move side effects from `utils` to `_patch_torch`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77142
Approved by: https://github.com/garymm, https://github.com/BowenBao
2022-05-13 03:46:33 +00:00
5dd1c67776 [ONNX] Format ONNX python with black
Format all onnx python code with black and isort with

```sh
isort torch/onnx/ test/onnx
black torch/onnx/ test/onnx
```

Updated lintrunner config to include these paths.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76754
Approved by: https://github.com/suo, https://github.com/BowenBao
2022-05-05 00:19:22 +00:00
8d31706b9e [ONNX] Support restricted quantized range for activation.
PyTorch restricts activations to be in the range (0, 127).
In ONNX, the supported ranges are (0, 255) and (-128, 127),
respectively for uint8 and int8. This PR extends support for the
(0, 127) range by adding additional clipping when it is detected.
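The idea in plain tensor code (a conceptual sketch of the clipping, not the exporter's actual implementation):

```python
import torch

def quantize_restricted(x, scale, zero_point):
    # Quantize into the full uint8 range, which is what ONNX QuantizeLinear assumes...
    q = torch.clamp(torch.round(x / scale) + zero_point, 0, 255)
    # ...then clip to PyTorch's restricted (0, 127) activation range.
    return torch.clamp(q, 0, 127)
```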

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76055

Approved by: https://github.com/garymm
2022-04-25 01:17:21 +00:00
cada2cd3ae [ONNX] Support per channel quantization
Extends quantization support with per-channel quantization.
An extra attribute `axis` is present on per-channel quantized tensors,
most commonly on the quantized weights of Convolution or Linear modules.
The PR adds support to correctly parse the `axis` attribute and map it to
the ONNX representation in `QuantizeLinear` and `DequantizeLinear`.
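For reference, a per-channel quantized tensor in PyTorch carries the axis that maps onto the ONNX attribute (a minimal sketch; values are arbitrary):

```python
import torch

w = torch.randn(8, 4)
scales = torch.rand(8) * 0.1 + 0.01
zero_points = torch.zeros(8, dtype=torch.int64)
qw = torch.quantize_per_channel(w, scales, zero_points, axis=0, dtype=torch.qint8)
# qw.q_per_channel_axis() == 0 becomes the `axis` attribute on
# QuantizeLinear/DequantizeLinear in the exported graph.
```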

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76002

Approved by: https://github.com/garymm
2022-04-25 01:14:57 +00:00
6305e572ed [ONNX] Support dynamic scale & zero_point for fake_quantize_per_tensor_affine
Dynamic scale & zero_point require opset 13 `ONNX::QuantizeLinear`
and `ONNX::DequantizeLinear`.
Also improves the error message when scale is not constant in the opset 10 symbolic function.
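A sketch of the dynamic case, where scale and zero_point arrive as graph inputs rather than constants (shapes, quantization range, and file name are arbitrary):

```python
import torch

class FakeQuant(torch.nn.Module):
    def forward(self, x, scale, zero_point):
        # Tensor-valued scale/zero_point make them dynamic in the graph
        return torch.fake_quantize_per_tensor_affine(x, scale, zero_point, 0, 255)

args = (torch.randn(4), torch.tensor(0.1), torch.tensor(0, dtype=torch.int32))
torch.onnx.export(FakeQuant(), args, "fake_quant.onnx", opset_version=13)
```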
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75697
Approved by: https://github.com/garymm
2022-04-13 19:17:58 +00:00
56aa1ab010 [ONNX] Remove dangling print in repeat_interleave
Fixes #74086

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74125
Approved by: https://github.com/BowenBao
2022-03-11 22:53:18 +00:00
eb22d06e5e [ONNX] Use human readable enum for dtype scalars (#66822) (#67807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67807

Also make quoting of string literals consistent.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181309

Pulled By: malfet

fbshipit-source-id: e1053701e3589f0310d8b5ef920359c03c6713f0
2021-11-08 14:37:05 -08:00
a0fc14c20f [ONNX] Add diagonal symbolic (#64454) (#66144)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66144

* Add logic and tests

* minor edits

* Eliminate expand ops

* Fix flake and editing

* Modified error message

* Add overrun check

* Add overrun descriptions

* Remove emptyline

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424095

fbshipit-source-id: 5b8ef6ac21c32d43c3dbc8e51e1ef30bffb19c25
2021-10-22 13:46:18 -07:00
136abf5aff [ONNX] Update sum symbolic to handle dtypes (#64289) (#66141)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66141

* Update aten::sum symbolic for dtype

* Remove nesting and modify operator tests

* Fix expect files

[ONNX] Fix expect files added in #64289 (#65356)

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424091

fbshipit-source-id: d4af21e9f0d7e1c68bf6ef2f3e385db84b4c53f3
2021-10-22 13:46:12 -07:00
db0771b05d [ONNX] Update repeat_interleave for dynamic repeats (#59979) (#62764)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62764

Fixes #58733

- Support dynamic interleave for cases with dynamic repeat values (see the sketch below)
- Moved the repeat_interleave symbolic from opset 11 to opset 13, since sequence-typed loop outputs are needed for this change
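A sketch of the dynamic-repeats case (input names, shapes, and axis names are arbitrary choices):

```python
import torch

class Model(torch.nn.Module):
    def forward(self, x, repeats):
        # Per-row repeat counts supplied at runtime
        return torch.repeat_interleave(x, repeats, dim=0)

args = (torch.randn(3, 2), torch.tensor([1, 2, 3]))
torch.onnx.export(
    Model(), args, "repeat_dynamic.onnx", opset_version=13,
    input_names=["x", "repeats"], dynamic_axes={"repeats": {0: "n"}},
)
```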

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30375179

Pulled By: msaroufim

fbshipit-source-id: 787f96bf91d124fd0483761088c5f4ae930d96a9

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-08-20 12:46:54 -07:00
5d00c374dd [ONNX] Sum empty tensor could not be exported to ONNX successfully. (#58141) (#59537)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59537

PyTorch sum over empty tensor gives 0, while ONNX produces an error.

torch.sum is translated into the onnx::ReduceSum op. Per the definition of ReduceSum, this updates the keepdims attribute for this scenario.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb, ansley

Differential Revision: D29046604

Pulled By: SplitInfinity

fbshipit-source-id: 6f5f3a66cb8eda8b5114b8474dda6fcdbae73469

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-06-15 12:24:16 -07:00
0a6828a306 [ONNX] use consistent quoting for string literals (#57757) (#58695)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58695

As PEP8 says: "Pick a rule and stick to it." [1]

[1] https://www.python.org/dev/peps/pep-0008/#string-quotes

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714811

Pulled By: SplitInfinity

fbshipit-source-id: c95103aceb1725c17c034dc6fc8216627f189548

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-27 12:06:42 -07:00