Commit Graph

597 Commits

b319fa3fd9 [ONNX] Opt into ruff fmt (#134120)
Add ONNX directory to use ruff format.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134120
Approved by: https://github.com/XuehaiPan, https://github.com/Skylion007
2024-08-22 22:44:03 +00:00
b0171c3920 Revert "[ONNX] Opt into ruff fmt (#134120)"
This reverts commit 0870398fa8c3e097640f31cb8a8e2e2d3e522d33.

Reverted https://github.com/pytorch/pytorch/pull/134120 on behalf of https://github.com/albanD due to Breaks main branch lint ([comment](https://github.com/pytorch/pytorch/pull/134120#issuecomment-2305089756))
2024-08-22 15:48:14 +00:00
0870398fa8 [ONNX] Opt into ruff fmt (#134120)
Add ONNX directory to use ruff format.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134120
Approved by: https://github.com/XuehaiPan, https://github.com/Skylion007
2024-08-21 21:43:55 +00:00
221350e3a4 Add None return type to init -- tests (#132352)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132352
Approved by: https://github.com/ezyang
ghstack dependencies: #132335, #132351
2024-08-01 15:44:51 +00:00
9e473fd868 Make adding Buffers more like adding Parameters (#125971)
Add semantics for creating a buffer object similar to those for creating a parameter. This is done by introducing a new `Buffer` class that can be used for type disambiguation. The underlying functionality of registering a buffer remains the same, as the `register_buffer` method has not been changed. The `persistent` parameter in the `Buffer` type indicates whether a buffer object should be persistent or not. Other non-test changes have to do with getting the new `Buffer` type recognized by inductor and dynamo. The remaining changes are test changes to make sure that the `Buffer` type can be used as a drop-in replacement for `register_buffer`, as it just leads to `register_buffer` being called. The new functionality still allows normal tensors to be used as buffers, so these changes are intended to be backwards compatible.
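A minimal sketch of the intended usage, assuming the `Buffer` class is exposed as `torch.nn.Buffer` as described above:

```python
import torch
from torch import nn


class RunningStats(nn.Module):
    def __init__(self):
        super().__init__()
        # Assigning a Buffer registers it, analogous to nn.Parameter;
        # equivalent to: self.register_buffer("mean", torch.zeros(8), persistent=False)
        self.mean = nn.Buffer(torch.zeros(8), persistent=False)


stats = RunningStats()
print(dict(stats.named_buffers()))  # {'mean': tensor([0., 0., 0., 0., 0., 0., 0., 0.])}
```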

Fixes #35735

Co-authored-by: Mikayla Gawarecki <mikaylagawarecki@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125971
Approved by: https://github.com/albanD, https://github.com/anijain2305, https://github.com/mlazos
2024-07-31 10:32:40 +00:00
ae708e9791 [ONNX] Remove the deprecated SymbolicContext (#132184)
Remove the deprecated SymbolicContext class from torch.onnx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132184
Approved by: https://github.com/titaiwangms
2024-07-31 04:24:32 +00:00
fbe6f42dcf [BE][Easy][8/19] enforce style for empty lines in import segments in test/[k-p]*/ (#129759)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129759
Approved by: https://github.com/justinchuby, https://github.com/ezyang
2024-07-31 02:09:20 +00:00
973037be6a [BE][Easy] apply autofix for ruff rules unnecessary-collection-call (C408): list() / tuple() / dict() (#130199)
This PR changes the empty collection factory call to Python literals:

- `list()` -> `[]`
- `tuple()` -> `()`
- `dict()` -> `{}`

The Python literals are more performant and safer. For example, the bytecode for building an empty dictionary:

```bash
$ python3 -m dis - <<EOS
import collections

d1 = {}
d2 = dict()

dict = collections.OrderedDict
d3 = dict()
EOS
```

```text
  0           0 RESUME                   0

  1           2 LOAD_CONST               0 (0)
              4 LOAD_CONST               1 (None)
              6 IMPORT_NAME              0 (collections)
              8 STORE_NAME               0 (collections)

  3          10 BUILD_MAP                0
             12 STORE_NAME               1 (d1)

  4          14 PUSH_NULL
             16 LOAD_NAME                2 (dict)
             18 CALL                     0
             26 STORE_NAME               3 (d2)

  6          28 LOAD_NAME                0 (collections)
             30 LOAD_ATTR                8 (OrderedDict)
             50 STORE_NAME               2 (dict)

  7          52 PUSH_NULL
             54 LOAD_NAME                2 (dict)
             56 CALL                     0
             64 STORE_NAME               5 (d3)
             66 RETURN_CONST             1 (None)
```

The dict literal `{}` needs only one bytecode op, `BUILD_MAP`, while the factory call `dict()` needs three: `PUSH_NULL + LOAD_NAME + CALL`. Also, the factory call is not safe if users override the `dict` name in `locals` or `globals` (see the example of replacing it with `OrderedDict` above).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130199
Approved by: https://github.com/malfet
2024-07-11 17:30:28 +00:00
705346bf8d [ONNX] Skip optimizer when it fails (#127349)
Continues #127039.

(1) Skip the optimizer when it fails.
(2) Update onnx, ort, and onnx-script.
(3) The update to onnx-script enables the actual optimizer and rewriter in this PR; https://github.com/pytorch/pytorch/pull/123379 did not update onnx-script.
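A conceptual sketch of the skip-on-failure behavior (a hypothetical helper, not the exporter's actual code):

```python
import warnings


def optimize_or_skip(onnx_model, optimize):
    # If the optimizer raises, keep the unoptimized model rather than
    # failing the whole export.
    try:
        return optimize(onnx_model)
    except Exception as exc:
        warnings.warn(f"ONNX optimizer failed ({exc!r}); keeping the unoptimized model")
        return onnx_model
```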
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127349
Approved by: https://github.com/justinchuby
2024-05-30 07:08:45 +00:00
26f4f10ac8 [5/N][Easy] fix typo for usort config in pyproject.toml (kown -> known): sort torch (#127126)
The `usort` config in `pyproject.toml` has no effect due to a typo. Fixing the typo makes `usort` do more and generates the changes in this PR. Except for `pyproject.toml`, all changes are generated by `lintrunner -a --take UFMT --all-files`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127126
Approved by: https://github.com/kit1980
2024-05-27 14:49:57 +00:00
55c0ab2887 Revert "[5/N][Easy] fix typo for usort config in pyproject.toml (kown -> known): sort torch (#127126)"
This reverts commit 7763c83af67eebfdd5185dbe6ce15ece2b992a0f.

Reverted https://github.com/pytorch/pytorch/pull/127126 on behalf of https://github.com/XuehaiPan due to Broken CI ([comment](https://github.com/pytorch/pytorch/pull/127126#issuecomment-2133044286))
2024-05-27 09:22:08 +00:00
7763c83af6 [5/N][Easy] fix typo for usort config in pyproject.toml (kown -> known): sort torch (#127126)
The `usort` config in `pyproject.toml` has no effect due to a typo. Fixing the typo makes `usort` do more and generates the changes in this PR. Except for `pyproject.toml`, all changes are generated by `lintrunner -a --take UFMT --all-files`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127126
Approved by: https://github.com/kit1980
ghstack dependencies: #127122, #127123, #127124, #127125
2024-05-27 04:22:18 +00:00
edea2b81b5 [ONNX] Adds Support for Some Bitwise Ops in Onnx Exporter (#126229)
Addresses #126194

Adds support for
- "aten::bitwise_right_shift"
- "aten::bitwise_left_shift"
- "aten::bitwise_and"

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126229
Approved by: https://github.com/justinchuby
2024-05-22 07:47:43 +00:00
c5fafe9f48 [BE]: TRY002 - Ban raising vanilla exceptions (#124570)
Adds a ruff lint rule to ban raising raw exceptions. Most of these should, at the very least, be runtime errors, value errors, type errors, or some other more specific error. There are hundreds of instances of these bad exception types already in the codebase, so I have noqa'd most of them. Hopefully this error code will get committers to rethink what exception type they should raise when they submit a PR.
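For illustration, the kind of change the rule pushes toward (a sketch, not taken from the PR):

```python
def load_checkpoint(path: str) -> bytes:
    if not path:
        # Flagged by TRY002: a vanilla exception carries no type information
        raise Exception("path must not be empty")  # noqa: TRY002
    with open(path, "rb") as f:
        return f.read()


def load_checkpoint_fixed(path: str) -> bytes:
    if not path:
        raise ValueError("path must not be empty")  # specific, catchable type
    with open(path, "rb") as f:
        return f.read()
```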

I also encourage people to gradually fix the existing noqas that have been added, so they can be removed over time and our exception typing can be improved.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124570
Approved by: https://github.com/ezyang
2024-04-21 22:26:40 +00:00
bbe846f430 Add symbolic_opset19.py and symbolic_opset20.py to support opset 19/20, extend opset 18 support (#118828)
Start to fix https://github.com/pytorch/pytorch/issues/114801

Co-authored-by: Thiago Crepaldi <thiagofc@microsoft.com>
Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118828
Approved by: https://github.com/thiagocrepaldi
2024-03-22 18:01:33 +00:00
26431db939 [ONNX] Perform implicit casting of constants for the onnx::where operator (#118733) (#120619)
This PR fixes the problem of having the `Where` operator bound to different types in cases where the dtype is not explicitly set. The PR extends the implicit casting to the onnx::Where operator to fix the issue, and includes the corresponding unit test.
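A case of the kind the PR addresses, where a branch value's dtype is implicit (a sketch, not from the PR's tests):

```python
import torch


class WhereModel(torch.nn.Module):
    def forward(self, cond, x):
        # The scalar 0.5 carries no explicit dtype; lowering to onnx::Where
        # requires implicitly casting it to match x (here float16).
        return torch.where(cond, x, 0.5)


cond = torch.tensor([True, False])
x = torch.tensor([1.0, 2.0], dtype=torch.float16)
torch.onnx.export(WhereModel(), (cond, x), "where.onnx")
```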

Fixes #118733

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120619
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2024-03-04 19:27:30 +00:00
cd9a1934fb [ONNX] Bump to onnx1.15.0 and ort1.17.0 in CI (#119106)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119106
Approved by: https://github.com/thiagocrepaldi, https://github.com/titaiwangms
2024-02-08 19:26:13 +00:00
a3cec6a7fa [ONNX] Eliminate redundant TODOs (#119060)
Remove TODOs created by titaiwangms/AllenTiTaiWang/titaiwang:

1. Resolved TODOs
2. Turned TODOs into NOTEs if they are not actionable
3. Merged duplicated TODOs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119060
Approved by: https://github.com/kit1980, https://github.com/thiagocrepaldi
2024-02-02 23:37:52 +00:00
f543093e06 [ONNX] Fix output mismatch issue of repeat_interleave when dim is None (#116689)
`input` is introduced but it is mixed up with `self` in `repeat_interleave`, which causes the output mismatch issue.
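For reference, the `dim=None` semantics involved (eager behavior):

```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
# With dim=None, the input is flattened before each element is repeated.
print(torch.repeat_interleave(x, 2))  # tensor([1, 1, 2, 2, 3, 3, 4, 4])
```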

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116689
Approved by: https://github.com/thiagocrepaldi
2024-01-03 18:38:00 +00:00
bd10fea79a [BE]: Enable F821 and fix bugs (#116579)
Fixes #112371

I tried to fix as many of the bugs as I could, but for a few I could not figure out the proper fix, so I left them with noqas.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116579
Approved by: https://github.com/ezyang
2024-01-01 08:40:46 +00:00
ee5d981249 [BE]: Enable RUFF PERF402 and apply fixes (#115505)
* Enable PERF402. Makes code more efficient and succinct by removing useless list copies that could be accomplished either via a list constructor or an extend call. All test cases have noqa added, since performance is not as sensitive in that folder.
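
The pattern the rule targets, sketched for illustration:

```python
items = ["a", "b", "c"]

# Flagged by PERF402: copying element by element into a new list
names = []
for item in items:
    names.append(item)

# Preferred: a list constructor (or list.extend for an existing list)
names = list(items)
```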

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115505
Approved by: https://github.com/malfet
2023-12-20 18:01:24 +00:00
794545c11f [BE]: Enable RUF015 codebase wide (#115507)
Constant-time access of the first value in a collection, instead of converting the collection to a list to get the first item, which is linear. The rule is turned on, which automatically autofixes and enforces this.
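
A sketch of the rewrite RUF015 performs:

```python
data = {"a": 1, "b": 2}

# Flagged by RUF015: builds a full list just to read one element
first_key = list(data)[0]

# Preferred: constant time, no intermediate list
first_key = next(iter(data))
```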

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115507
Approved by: https://github.com/malfet
2023-12-11 15:51:01 +00:00
9bab96c78c [ONNX] Consider negative dim in _index_fill_reshape_helper (#114050)
Fix export issue of index_copy op with negative dim.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114050
Approved by: https://github.com/thiagocrepaldi
2023-11-22 15:40:57 +00:00
b88abb1674 [ONNX] Fix export issue of aten::layer_norm in opset 17 (#114058)
For torch.nn.LayerNorm, weight and bias can be None (when the parameter elementwise_affine is False or bias is False), but for the ONNX op LayerNormalization from opset 17, weight and bias cannot be None.
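
A minimal case that hits this path (the export call is illustrative):

```python
import torch

# No affine parameters: TorchScript sees weight=None and bias=None, but
# ONNX LayerNormalization (opset 17) requires a scale input.
ln = torch.nn.LayerNorm(8, elementwise_affine=False)
x = torch.randn(2, 8)
torch.onnx.export(ln, (x,), "layer_norm.onnx", opset_version=17)
```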

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114058
Approved by: https://github.com/thiagocrepaldi
2023-11-21 22:45:50 +00:00
275a4521a9 [ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)
Fixes https://github.com/pytorch/pytorch/issues/104594.

The reason for the exporter behavior in the originally posted issue is as follows:
the ONNX model tracks shape-related computations that were done in PyTorch with Python
numbers as tensor computations. This is the only way for ONNX to track them properly,
since ONNX only has tensor types; otherwise the computation result would be tracked
statically as a constant, and the model would not work for another input that differs in shape.

For type promotion logic, scalars should be treated differently from tensors.
The exporter mistook the shape-related scalars as tensors in this case and promoted them incorrectly.

This PR fixes the behavior and relaxes the criteria for scalar recognition. For floating point,
previously only a value from a model initializer with dtype torch.double and rank 0 was
treated as a scalar. Now this is relaxed to any intermediate value, as well as to dtype torch.float.
The previous assumption was that a Python number is traced with dtype torch.double, which also
no longer appears to hold.

NOTE that this might introduce a regression where a real 0-rank tensor is now recognized as a
scalar. The downside is that the model will drop in accuracy for these cases, as certain
computations will happen in lower-precision data types.
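
For reference, the eager-mode promotion rule the exporter should mirror (a small illustration, not taken from the PR):

```python
import torch

x = torch.rand(3, dtype=torch.float16)

# A Python float participates in promotion as a scalar: the fp16 tensor
# dtype wins.
print((x * 2.0).dtype)                   # torch.float16

# An fp32 tensor operand of rank >= 1 promotes the result to fp32.
print((x * torch.tensor([2.0])).dtype)   # torch.float32
```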

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113404
Approved by: https://github.com/justinchuby
2023-11-15 20:32:55 +00:00
0fd856ca22 Revert "[ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)"
This reverts commit 39ca5a3226331428465a84d53d5b50dfb4406cfe.

Reverted https://github.com/pytorch/pytorch/pull/113404 on behalf of https://github.com/jeanschmidt due to sorry it is breaking CI jobs on main ([comment](https://github.com/pytorch/pytorch/pull/113404#issuecomment-1808314277))
2023-11-13 14:56:35 +00:00
39ca5a3226 [ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)
Fixes https://github.com/pytorch/pytorch/issues/104594.

The reason for the exporter behavior in the originally posted issue is as follows:
the ONNX model tracks shape-related computations that were done in PyTorch with Python
numbers as tensor computations. This is the only way for ONNX to track them properly,
since ONNX only has tensor types; otherwise the computation result would be tracked
statically as a constant, and the model would not work for another input that differs in shape.

For type promotion logic, scalars should be treated differently from tensors.
The exporter mistook the shape-related scalars as tensors in this case and promoted them incorrectly.

This PR fixes the behavior and relaxes the criteria for scalar recognition. For floating point,
previously only a value from a model initializer with dtype torch.double and rank 0 was
treated as a scalar. Now this is relaxed to any intermediate value, as well as to dtype torch.float.
The previous assumption was that a Python number is traced with dtype torch.double, which also
no longer appears to hold.

NOTE that this might introduce a regression where a real 0-rank tensor is now recognized as a
scalar. The downside is that the model will drop in accuracy for these cases, as certain
computations will happen in lower-precision data types.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113404
Approved by: https://github.com/justinchuby
2023-11-11 15:08:07 +00:00
3cb6cf1e8a Revert "[ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)"
This reverts commit f2cd68102a56cd0427f25b748bbe3b463d43807b.

Reverted https://github.com/pytorch/pytorch/pull/113404 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing in trunk f2cd68102a, may be a landrace or flaky of sort ([comment](https://github.com/pytorch/pytorch/pull/113404#issuecomment-1806613497))
2023-11-11 02:09:22 +00:00
e8e3afb784 [ONNX] Refactor MaxPool to support dynamic inputs (#113318)
In https://github.com/pytorch/pytorch/pull/106270, the solution managed to solve the [`ceil_mode` corner issue](https://github.com/onnx/onnx/issues/5711) with the usage of `get_pool_ceil_padding`. However, padding the ceil on the converter side only works when we already know the input shapes, so a regression appeared for dynamic inputs.

This PR (1) refactors the code with the torchlib implementation, (2) adds a dynamic shapes test, and (3) disables the corner tests with comments saying to re-enable them when the [real fix from ONNX](https://github.com/onnx/onnx/pull/5741) is merged.
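A sketch of the dynamic-input case the refactor targets (the export call is illustrative):

```python
import torch


class Pool(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.max_pool2d(
            x, kernel_size=3, stride=2, ceil_mode=True
        )


x = torch.randn(1, 3, 32, 32)
# Spatial dims marked dynamic: ceil padding computed at convert time from
# a fixed shape would no longer be valid here.
torch.onnx.export(
    Pool(), (x,), "max_pool.onnx",
    input_names=["x"],
    dynamic_axes={"x": {2: "height", 3: "width"}},
)
```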
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113318
Approved by: https://github.com/thiagocrepaldi
2023-11-10 23:23:49 +00:00
f2cd68102a [ONNX] Fix scalar type promotion between fp16 tensor and fp32 scalar (#113404)
Fixes https://github.com/pytorch/pytorch/issues/104594.

The reason for the exporter behavior in the originally posted issue is as follows:
the ONNX model tracks shape-related computations that were done in PyTorch with Python
numbers as tensor computations. This is the only way for ONNX to track them properly,
since ONNX only has tensor types; otherwise the computation result would be tracked
statically as a constant, and the model would not work for another input that differs in shape.

For type promotion logic, scalars should be treated differently from tensors.
The exporter mistook the shape-related scalars as tensors in this case and promoted them incorrectly.

This PR fixes the behavior and relaxes the criteria for scalar recognition. For floating point,
previously only a value from a model initializer with dtype torch.double and rank 0 was
treated as a scalar. Now this is relaxed to any intermediate value, as well as to dtype torch.float.
The previous assumption was that a Python number is traced with dtype torch.double, which also
no longer appears to hold.

NOTE that this might introduce a regression where a real 0-rank tensor is now recognized as a
scalar. The downside is that the model will drop in accuracy for these cases, as certain
computations will happen in lower-precision data types.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113404
Approved by: https://github.com/justinchuby
2023-11-10 22:31:25 +00:00
9ab6ac5bc1 [ONNX] Fix aten::new_zeros due to TorchScript behavior change on Pytorch 2.1 Fix #110935 (#110956)
Fixes #110597

Summary:

* Generic code: `torch._C.Value.node().mustBeNone()` is encapsulated into the high-level API `JitScalarType.from_value`; `_is_none` was also extended to accept either `None` or a `torch._C.Value` whose node `mustBeNone()`, so users don't have to call into the TorchScript API manually when implementing operators (see the sketch below).
* Specific to `new_zeros` (and `*_like` and `new_*` ops): when checking `dtype`, we must always use `_is_none`, which applies the check proposed by #110935.
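
A sketch of the pattern this enables (a hypothetical helper; the names come from torch.onnx internals):

```python
from torch.onnx import symbolic_helper
from torch.onnx._type_utils import JitScalarType


def resolve_scalar_type(value, dtype):
    # `_is_none` covers both Python None and a torch._C.Value whose node
    # mustBeNone(), so symbolic functions never touch TorchScript directly.
    if symbolic_helper._is_none(dtype):
        return JitScalarType.from_value(value)
    return dtype
```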
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110956
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2023-10-16 18:28:20 +00:00
a3e9b80082 Fix torch.diagonal for torch.onnx.export when dim1<0 or dim2<0 (#111130)
In many cases, torch.diagonal is called with negative dims (dim1=-2, dim2=-1), and ONNX export always fails in these cases.
This PR fixes the bug.
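The equivalence the fix relies on, in eager terms:

```python
import torch

x = torch.randn(2, 3, 4)
# Negative dims resolve against the tensor rank: here -2 -> 1 and -1 -> 2.
assert torch.equal(
    torch.diagonal(x, dim1=-2, dim2=-1),
    torch.diagonal(x, dim1=1, dim2=2),
)
```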
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111130
Approved by: https://github.com/thiagocrepaldi
2023-10-13 22:05:53 +00:00
8dcdc74915 torch->onnx export support: quantized::linear_relu (#109755)
- Adds support for quantized::linear_relu
  - Adds weight unpacking pattern matcher
  - Adds to export for opset 10 and 13.
- Adds QAT test modeled after conv2d+relu fusion test

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109755
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2023-09-21 23:24:20 +00:00
504dceacb1 [ONNX] Fix indexing issue of meshgrid op (#109350)
The tensor_list should be unpacked before swapping the elements for 'xy' indexing.
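The semantics being matched, checked in eager mode:

```python
import torch

a, b = torch.arange(3), torch.arange(4)

# indexing="xy" is indexing="ij" with the first two inputs swapped and
# the first two outputs swapped back.
gx, gy = torch.meshgrid(a, b, indexing="xy")
gb, ga = torch.meshgrid(b, a, indexing="ij")
assert torch.equal(gx, ga) and torch.equal(gy, gb)
```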

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109350
Approved by: https://github.com/thiagocrepaldi
2023-09-15 19:49:43 +00:00
3f88e3105f Reland: Remove remaining global set_default_dtype calls from tests (#108088)
Fixes #68972

Relands #107246

To avoid causing Meta-internal CI failures, this PR avoids always asserting that the default dtype is float in the `TestCase.setUp/tearDown` methods. Instead, the assert is only done if `TestCase._default_dtype_check_enabled == True`. `_default_dtype_check_enabled` is set to True in the `if __name__ == "__main__":` blocks of all the relevant test files that have required changes for this issue
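
The opt-in pattern described above, sketched for a single test file (helper names from torch.testing._internal):

```python
from torch.testing._internal.common_utils import TestCase, run_tests

if __name__ == "__main__":
    # Opt this file into the float default-dtype assertion in setUp/tearDown.
    TestCase._default_dtype_check_enabled = True
    run_tests()
```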

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108088
Approved by: https://github.com/ezyang
2023-09-07 03:04:34 +00:00
35f4bb9a25 [ONNX] Return input itself for non-fp inputs and support decimals for aten::round op (#107920)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107920
Approved by: https://github.com/justinchuby
2023-08-26 05:54:52 +00:00
161ea463e6 Revert "Remove remaining global set_default_dtype calls from tests (#107246)"
This reverts commit aa8ea1d787a9d21b064b664c5344376265feea6c.

Reverted https://github.com/pytorch/pytorch/pull/107246 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/107246#issuecomment-1693838522))
2023-08-25 19:34:55 +00:00
aa8ea1d787 Remove remaining global set_default_dtype calls from tests (#107246)
Fixes #68972

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107246
Approved by: https://github.com/ezyang
2023-08-24 16:10:48 +00:00
387556318e [ONNX] Cap opset version at 17 for torch.onnx.export (#107829)
Cap the opset version at 17 for torch.onnx.export and suggest that users use the dynamo exporter. Warn users instead of failing hard, because we should still allow users to create custom symbolic functions for opset > 17.

Also updates the default opset version by running `tools/onnx/update_default_opset_version.py`.
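
Illustrative behavior under the cap (a sketch; the exact warning text is not taken from the PR):

```python
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)

# Requesting opset > 17 with the TorchScript exporter now emits a warning
# (pointing to the dynamo exporter) instead of failing hard.
torch.onnx.export(model, (x,), "linear.onnx", opset_version=18)
```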

Fixes #107801 Fixes #107446
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107829
Approved by: https://github.com/BowenBao
2023-08-24 07:21:10 +00:00
71be8f2223 Revert "Add initial support for FP8 ONNX export (#106379)"
This reverts commit 08704f96f08da5a52f65a7c3001d6ce4aae0102e.

Reverted https://github.com/pytorch/pytorch/pull/106379 on behalf of https://github.com/kit1980 due to breaking multiple internal builds ([comment](https://github.com/pytorch/pytorch/pull/106379#issuecomment-1675192700))
2023-08-11 18:22:35 +00:00
08704f96f0 Add initial support for FP8 ONNX export (#106379)
Add support for ONNX_NAMESPACE::TensorProto_DataType_FLOAT8E5M2 and ONNX_NAMESPACE::TensorProto_DataType_FLOAT8E4M3FN to enable export of torch models that use FP8 (E4M3 and E5M2) to ONNX (opset 19)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106379
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi, https://github.com/malfet
2023-08-10 01:02:45 +00:00
bc88028e8e Back out "Reland "Make adding buffers more like adding parameters (#104069)" (#106224)" (#106743)
Summary:
Original commit changeset: 81319beb97f3

Original Phabricator Diff: D47961182

Test Plan: revert to maintain backward compat with legacy ads_dper3 production package. Read details in: S357822

Reviewed By: atuljangra

Differential Revision: D48131623

@diff-train-skip-merge
(D48131623 landed internally)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106743
Approved by: https://github.com/malfet
2023-08-08 15:27:34 +00:00
2f35715f0d [onnx] Fix output shape mismatch issue of max_pool (#106270)
For onnx MaxPool with ceil_mode=1, sliding windows that start in the right padded region are not ignored, which causes a different output shape from torch.
Therefore, we need to add a Pad op beforehand and not set ceil_mode for the MaxPool op, as is done in symbolic_opset9 when converting torch max_pool to onnx MaxPool.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106270
Approved by: https://github.com/thiagocrepaldi
2023-07-31 21:03:08 +00:00
d8e5f2aa6d Reland "Make adding buffers more like adding parameters (#104069)" (#106224)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106224
Approved by: https://github.com/atalman, https://github.com/albanD
2023-07-31 17:18:56 +00:00
d5d6eb2d46 [ONNX] Refactor AvgPool to support dynamic shapes (#105683)
In #87892, to pick up the corner cases found in #71549, the PR fell back the implementation of AvgPool to the opset 9 approach. However, it introduced a regression on the dynamic shape cases found in #101397. This PR refactors the AvgPool op with the same implementation we have in onnxscript: https://github.com/microsoft/onnxscript/pull/754.

However, the corner case with `count_include_pad` remains unsolved in onnxruntime: https://github.com/microsoft/onnxruntime/issues/16203. The calculation of the last value of each dimension differs between ORT and PyTorch. The fix is demonstrated in https://github.com/microsoft/onnxruntime/pull/16752, which supports AvgPool since opset 19.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105683
Approved by: https://github.com/thiagocrepaldi
2023-07-21 20:22:08 +00:00
79c5e33349 [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105436
Approved by: https://github.com/malfet, https://github.com/albanD
2023-07-21 07:38:46 +00:00
c6653b65d8 Back out "Make adding buffers more like adding parameters (#104069)" (#105581)
Summary:
D47537831 is breaking pyper tests: https://fb.workplace.com/groups/802176577445480/posts/1018902842439518/

with `TypeError: register_buffer() takes 3 positional arguments but 4 were given`

Original commit changeset: d4b4069fbd38

Original Phabricator Diff: D47537831

Test Plan:
```
buck2 run //caffe2/torch/fb/training_toolkit/integration_tests/training_lifecycle/cogwheel_tests/pyper_release_v2:cogwheel_smallworld_inline_cvr_infer_pyper_pyper__canary_offline_training-launcher -- --run-harness-in-tupperware --build-fbpkg ads_dper3 --build-fbpkg training_platform
```

Reviewed By: atalman

Differential Revision: D47600140

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105581
Approved by: https://github.com/mikaylagawarecki
2023-07-20 03:39:53 +00:00
32d422f335 Make adding buffers more like adding parameters (#104069)
Add semantics for creating a buffer object similar to those for creating a parameter. This is done by introducing a new `Buffer` class that can be used for type disambiguation. The underlying functionality of registering a buffer remains the same, as the `register_buffer` method has not been changed. The `persistent` parameter in the `Buffer` type indicates whether a buffer object should be persistent or not. Other non-test changes have to do with getting the new `Buffer` type recognized by inductor and dynamo. The remaining changes are test changes to make sure that the `Buffer` type can be used as a drop-in replacement for `register_buffer`, as it just leads to `register_buffer` being called. The new functionality still allows normal tensors to be used as buffers, so these changes are intended to be backwards compatible.

Fixes #35735

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104069
Approved by: https://github.com/mikaylagawarecki
2023-07-17 17:59:05 +00:00
bf40561ab4 [ONNX] Support 'aten::randint' in torchscript onnx exporter (#105089)
Export as 'ONNX::RandomUniform', which produces a floating-point result,
then round it to an integer with 'ONNX::Cast'.
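
The exporter's strategy, expressed in eager PyTorch for illustration:

```python
import torch

low, high = 0, 10
# Sample uniform floats in [low, high) (ONNX RandomUniform), then cast
# down to integers (ONNX Cast).
u = torch.empty(4).uniform_(low, high)
r = u.to(torch.int64)
print(r)  # integers in [0, 10), like torch.randint(low, high, (4,))
```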

Fixes https://github.com/microsoft/onnx-converters-private/issues/173
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105089
Approved by: https://github.com/thiagocrepaldi
2023-07-13 01:50:03 +00:00
8c0b9a2d69 [ONNX] Export dynamic step size for aten::slice() (#104385)
This commit improves the export of aten::slice() to ONNX in the following ways:

1. The step size can be an input tensor rather than a constant.
2. Fixes a bug where using a 1-D, 1-element torch tensor as an index created a broken ONNX model.

This commit also adds tests for the new functionality.
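
A sketch of the first improvement, assuming a scripted function is exported so the step stays a runtime value (filename hypothetical):

```python
import torch


@torch.jit.script
def strided(x: torch.Tensor, step: int) -> torch.Tensor:
    # Under scripting, `step` is a runtime value, so the exporter must
    # pass it to ONNX Slice as a tensor input rather than a constant.
    return x[::step]


torch.onnx.export(strided, (torch.arange(10), 2), "strided_slice.onnx")
```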

Fixes #104314

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104385
Approved by: https://github.com/thiagocrepaldi
2023-07-06 21:38:59 +00:00