9fff8155c3
[2/N] Fix clang-tidy readability checks (#164652)
...
This PR applies clang-tidy readability checks to jit sources and all headers in the code base.
`readability-redundant-inline-specifier`, which detects redundant `inline` specifiers on function and variable declarations, is suppressed because it would incur too many changes: many in-class method definitions are marked `inline`.
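For illustration, a minimal sketch of the pattern this check flags (hypothetical class, not taken from the PR):
```
struct Foo {
  // Flagged: a method defined inside the class body is implicitly
  // inline, so the explicit specifier is redundant.
  inline int bar() const { return x_; }

  // Preferred: same semantics without the redundant specifier.
  int baz() const { return x_ + 1; }

  int x_ = 0;
};
```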
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164652
Approved by: https://github.com/Skylion007
2025-10-06 01:06:01 +00:00
2c5ed6e7c0
Revert "[2/N] Fix clang-tidy readability checks ( #164652 )"
...
This reverts commit 3c5ca685d6f5b6f3971c0cd20a054aa355610419.
Reverted https://github.com/pytorch/pytorch/pull/164652 on behalf of https://github.com/izaitsevfb due to a conflict with the revert of https://github.com/pytorch/pytorch/pull/162659 ([comment](https://github.com/pytorch/pytorch/pull/164652#issuecomment-3369346707))
2025-10-05 21:36:57 +00:00
3c5ca685d6
[2/N] Fix clang-tidy readability checks (#164652)
...
This PR applies clang-tidy readability checks to jit sources and all headers in the code base.
`readability-redundant-inline-specifier`, which detects redundant `inline` specifiers on function and variable declarations, is suppressed because it would incur too many changes: many in-class method definitions are marked `inline`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164652
Approved by: https://github.com/Skylion007
2025-10-05 07:05:11 +00:00
541584d22e
[BE][8/16] fix typos in torch/ (torch/csrc/jit/) (#156318)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156318
Approved by: https://github.com/albanD
2025-07-02 22:55:29 +00:00
ed5f4a4fa8
Replace size() checks with empty() (#153805)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153805
Approved by: https://github.com/nareshrajkumar866, https://github.com/Skylion007
Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
2025-05-19 16:20:57 +00:00
8f291e8c00
Fix clang-tidy warnings in torch/jit (#146963)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146963
Approved by: https://github.com/davidberard98
2025-02-15 03:36:59 +00:00
d8f99f39cb
Avoid unnecessary tensor constructions (#139039)
...
Because `Variable` is just an alias of `Tensor`, these constructions are unnecessary.
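For context, a hedged sketch of the kind of construction being avoided (hypothetical call site; `torch::autograd::Variable` is declared as `using Variable = at::Tensor;`):
```
#include <torch/csrc/autograd/variable.h>

void consume(const at::Tensor& t);  // hypothetical consumer

void example(const at::Tensor& t) {
  // Before: wrapping t constructs a redundant temporary, since
  // Variable is just an alias for at::Tensor.
  //   consume(torch::autograd::Variable(t));

  // After: pass the tensor through directly.
  consume(t);
}
```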
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139039
Approved by: https://github.com/Skylion007
2024-10-29 02:23:23 +00:00
af0bc75460
Remove deprecated alias macro (1/3) (#137556)
...
**Detailed Descriptions:**
- Remove AT_ERROR Macro
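For context, a hedged sketch of the replacement pattern; `AT_ERROR(...)` raised a `c10::Error`, and `TORCH_CHECK` with a failing condition is the usual substitute:
```
#include <cstdint>
#include <c10/util/Exception.h>

void check_positive(int64_t n) {
  // Before (deprecated alias macro):
  //   AT_ERROR("expected a positive value, got ", n);
  // After: TORCH_CHECK raises the same c10::Error when the condition fails.
  TORCH_CHECK(n > 0, "expected a positive value, got ", n);
}
```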
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137556
Approved by: https://github.com/ezyang
2024-10-21 17:32:32 +00:00
fddabc6e0b
C10_UNUSED to [[maybe_unused]] (#6357) (#138364)
...
Summary:
Pull Request resolved: https://github.com/pytorch/executorch/pull/6357
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138364
Approved by: https://github.com/Skylion007, https://github.com/eqy
2024-10-19 13:17:43 +00:00
b7f798caa4
Use C10_UNUSED instead of (void)X (#137239)
...
Summary:
Auto-generated with
```
buck run //scripts/rbarnes/regex_multiline_replacer:regex_multiline_replacer -- --find '^(\s*for\s*\()(const.*\n)\s*\(void\)[A-Za-z]+;\s*//\s*Suppress.*\s*\n(.*)' --replace '\1C10_UNUSED \2\3' `find caffe2/ -regex ".*\.\(cpp\|h\)"`
```
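For context, a minimal sketch of the pattern being rewritten (hypothetical loop, not taken from the diff); the commit above (#138364) later migrates `C10_UNUSED` itself to the standard `[[maybe_unused]]`:
```
#include <cstdint>
#include <c10/macros/Macros.h>
#include <c10/util/irange.h>

void step();  // hypothetical work item

void iterate(int64_t n) {
  // Before: suppress the unused-variable warning by hand.
  //   for (const auto i : c10::irange(n)) {
  //     (void)i; // Suppress unused variable warning
  //     step();
  //   }
  // After: annotate the loop variable instead.
  for (C10_UNUSED const auto i : c10::irange(n)) {
    step();
  }
}
```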
Differential Revision: D33432600
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137239
Approved by: https://github.com/Skylion007
2024-10-15 14:32:59 +00:00
ec3f52dd27
[21/N] Fix clang-tidy warnings in jit (#134537)
...
Follows #133399
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134537
Approved by: https://github.com/Skylion007
2024-08-28 03:22:01 +00:00
73604eed0c
[20/N] Fix clang-tidy warnings in jit (#133399)
...
Follows #133067
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133399
Approved by: https://github.com/Skylion007
2024-08-26 17:43:52 +00:00
f4dcf2ae93
[1/N] Change #include <c10/util/Optional.h> to #include <optional> (#128301)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128301
Approved by: https://github.com/ezyang, https://github.com/r-barnes
2024-07-08 07:03:53 +00:00
846bb30e13
Revert "[1/N] Change #include <c10/util/Optional.h> to #include <optional> ( #128301 )"
...
This reverts commit bd72e28314d8d63bb347becb8309f5ac7761c6b5.
Reverted https://github.com/pytorch/pytorch/pull/128301 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it fails XLA build bd72e28314. Please rebase your PR before relanding because I think the failure is hidden by an unrelated broken trunk XLA failure from your current base commit ([comment](https://github.com/pytorch/pytorch/pull/128301#issuecomment-2169035822))
2024-06-15 01:58:20 +00:00
bd72e28314
[1/N] Change #include <c10/util/Optional.h> to #include <optional> (#128301)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128301
Approved by: https://github.com/ezyang
2024-06-14 23:21:01 +00:00
ed327876f5
[codemod] c10::optional -> std::optional (#126135)
...
Generated by running the following from PyTorch root:
```
find . -regex ".*\.\(cpp\|h\|cu\|hpp\|cc\|cxx\)$" | grep -v "build/" | xargs -n 50 -P 4 perl -pi -e 's/c10::optional/std::optional/'
```
`c10::optional` is just an alias for `std::optional`. This removes usages of that alias in preparation for eliminating it entirely.
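A minimal before/after sketch of the codemod (hypothetical function, not from the diff):
```
#include <cstdint>
#include <optional>

// Before:
//   c10::optional<int64_t> maybe_dim(bool has) { ... }
// After: a drop-in replacement, since c10::optional was already an
// alias for std::optional.
std::optional<int64_t> maybe_dim(bool has) {
  if (has) {
    return 0;  // implicitly wraps the value in std::optional
  }
  return std::nullopt;
}
```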
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126135
Approved by: https://github.com/Skylion007, https://github.com/malfet, https://github.com/albanD, https://github.com/aaronenyeshi
2024-05-14 19:35:51 +00:00
d17be10df1
make torch.amp.autocast more generic (#125103)
...
# Motivation
As discussed in [#124479](https://github.com/pytorch/pytorch/pull/124479), `torch.amp.autocast` can NOT be completely equivalent to `torch.cuda.amp.autocast` and `torch.cpu.amp.autocast`, since `torch.amp.autocast` lacks their per-device default `dtype` (`torch.bfloat16` for CPU and `torch.float16` for CUDA). We would like `torch.amp.autocast` to be generic enough to help developers and customers write device-agnostic code, because there are not enough reasons to add a device-specific `torch.xxx.amp.autocast` for each backend.
# Solution
When `None` is passed as `dtype`, use `torch.get_autocast_dtype` to get the related dtype for each backend. Meanwhile, `torch.get_autocast_dtype` also needs to be supported in the JIT path for backward compatibility.
# Additional Context
With this PR, `torch.amp.autocast(device_type='cuda')` is equivalent to `torch.cuda.amp.autocast`.
Two new unit tests cover this change in the eager and JIT paths respectively.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125103
Approved by: https://github.com/albanD, https://github.com/jgong5, https://github.com/gujinghui
2024-05-08 12:13:26 +00:00
25f321b84f
Refactor autocast C++ APIs to be device-agnostic (#124359)
...
# Motivation
This PR aims to refactor autocast **C++** APIs to be device-agnostic and deprecate the device-specific autocast **C++** APIs.
On the C++ side:
- `is_enabled()` -> `is_enabled(device_type)`.
- `set_enabled(new_enabled)` -> `set_enabled(device_type, new_enabled)`.
- `get_autocast_dtype()` -> `get_autocast_dtype(device_type)`
- `set_autocast_dtype(dtype)` -> `set_autocast_dtype(device_type, dtype)`
The following C++ APIs are deprecated and should be removed in PyTorch 2.5:
- `is_cpu_enabled`
- `set_cpu_enabled`
- `get_autocast_cpu_dtype`
- `set_autocast_cpu_dtype`
- `is_xpu_enabled`
- `set_xpu_enabled`
- `get_autocast_xpu_dtype`
- `set_autocast_xpu_dtype`
- `is_ipu_enabled`
- `set_ipu_enabled`
- `get_autocast_ipu_dtype`
- `set_autocast_ipu_dtype`
- `is_hpu_enabled`
- `set_hpu_enabled`
- `get_autocast_hpu_dtype`
- `set_autocast_hpu_dtype`
- `is_xla_enabled`
- `set_xla_enabled`
- `get_autocast_xla_dtype`
- `set_autocast_xla_dtype`
- `is_privateuseone_enabled`
- `set_privateuseone_enabled`
- `get_autocast_privateuseone_dtype`
- `set_autocast_privateuseone_dtype`
On the Python side, 4 generic autocast APIs are provided:
- `torch.is_autocast_enabled(device_type)`
- `torch.set_autocast_enabled(device_type, new_enabled)`
- `torch.get_autocast_dtype(device_type)`
- `torch.set_autocast_dtype(device_type, dtype)`
# Additional Context
We will submit another PR to refactor autocast **Python** APIs based on this PR.
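As a hedged sketch of the device-agnostic calls after this refactor (assuming the declarations live in `at::autocast` behind the header below, which this log does not confirm):
```
#include <ATen/autocast_mode.h>

void configure_cuda_autocast() {
  // The device-agnostic entry points take an explicit DeviceType.
  at::autocast::set_enabled(at::kCUDA, true);
  at::autocast::set_autocast_dtype(at::kCUDA, at::kHalf);

  // Querying state works the same way for any backend.
  bool enabled = at::autocast::is_enabled(at::kCUDA);
  at::ScalarType dtype = at::autocast::get_autocast_dtype(at::kCUDA);
  (void)enabled;
  (void)dtype;
}
```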
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124359
Approved by: https://github.com/jgong5, https://github.com/albanD
2024-04-23 10:38:50 +00:00
375ec25f55
Add missing aten::sort.any op for assistant lm models (#123982)
...
Differential Revision: D56084098
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123982
Approved by: https://github.com/JacobSzwejbka
2024-04-23 01:35:07 +00:00
5f5778476a
rename ort to maia (#123265)
...
Fixes #123264
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123265
Approved by: https://github.com/albanD
2024-04-23 00:33:25 +00:00
7fc292930c
Add support for torch.Generator type in TorchScript (#110413)
...
- Add support for `torch.Generator` type in TorchScript
- Add `generator` args to all `torch.nn.init` functions that call `uniform_` or `normal_`
- Add support for `torch.Generator` in LTC's TorchScript backend (CC: @wconstab)
CC: @eellison @davidberard98 @GlebKazantaev @behzad-a
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110413
Approved by: https://github.com/wconstab, https://github.com/albanD, https://github.com/glebk-cerebras, https://github.com/davidberard98
2023-11-21 23:07:21 +00:00
252e68a83b
Revert "Add support for torch.Generator
type in TorchScript ( #110413 )"
...
This reverts commit 54493fe8c4b1cca4c5ff993b99eb3e3dbc984226.
Reverted https://github.com/pytorch/pytorch/pull/110413 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is, unfortunately, still breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/110413#issuecomment-1811625557))
2023-11-15 00:51:23 +00:00
54493fe8c4
Add support for torch.Generator type in TorchScript (#110413)
...
- Add support for `torch.Generator` type in TorchScript
- Add `generator` args to all `torch.nn.init` functions that call `uniform_` or `normal_`
- Add support for `torch.Generator` in LTC's TorchScript backend (CC: @wconstab)
CC: @eellison @davidberard98 @GlebKazantaev @behzad-a
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110413
Approved by: https://github.com/wconstab, https://github.com/albanD, https://github.com/glebk-cerebras, https://github.com/davidberard98
2023-11-13 23:18:14 +00:00
9a28a7b498
Revert "Add support for torch.Generator
type in TorchScript ( #110413 )"
...
This reverts commit 27e31ab6e86259b27d816d6fb6e7a69de526a0e4.
Reverted https://github.com/pytorch/pytorch/pull/110413 on behalf of https://github.com/PaliC due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/110413#issuecomment-1799003164))
2023-11-07 15:53:32 +00:00
27e31ab6e8
Add support for torch.Generator type in TorchScript (#110413)
...
- Add support for `torch.Generator` type in TorchScript
- Add `generator` args to all `torch.nn.init` functions that call `uniform_` or `normal_`
- Add support for `torch.Generator` in LTC's TorchScript backend (CC: @wconstab)
CC: @eellison @davidberard98 @GlebKazantaev @behzad-a
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110413
Approved by: https://github.com/wconstab, https://github.com/albanD, https://github.com/glebk-cerebras, https://github.com/davidberard98
2023-11-06 21:27:02 +00:00
ad8aef0f98
[BE] [3/N] Use nested namespaces (#110314)
...
Mostly in torch/csrc/jit/runtime and in `ATen/cuda/`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110314
Approved by: https://github.com/seemethere
2023-09-30 02:23:48 +00:00
ac603bc2f8
[Reland] Eliminate invocations of c10::stoi,c10::stod,c10::stoull,c10::stoll (#109566)
...
This is a reland of #87603, with the definitions of `c10::stoXX` kept for further investigation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109566
Approved by: https://github.com/huydhn
2023-09-19 07:15:25 +00:00
4d44d8c00a
Revert "Eliminate c10::stoi,c10::stod,c10::stoull,c10::stoll ( #109179 )"
...
This reverts commit 852f1b8417e80b72a7d1c4a772f66af28da02913.
Reverted https://github.com/pytorch/pytorch/pull/109179 on behalf of https://github.com/huydhn due to Sorry for reverting your change but this is breaking the periodic buck build, so please fix the issue and reland the change https://github.com/pytorch/pytorch/actions/runs/6207458526/job/16852695272 ([comment](https://github.com/pytorch/pytorch/pull/109179#issuecomment-1724168571))
2023-09-18 18:41:12 +00:00
852f1b8417
Eliminate c10::stoi,c10::stod,c10::stoull,c10::stoll (#109179)
...
We can remove these functions in favor of their `std::` equivalents.
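A minimal sketch of the substitution (hypothetical call site):
```
#include <cstdint>
#include <string>

int64_t parse_index(const std::string& s) {
  // Before: return c10::stoll(s);
  // After: the standard-library function behaves the same here.
  return std::stoll(s);
}
```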
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109179
Approved by: https://github.com/colesbury
2023-09-16 07:22:50 +00:00
8289ad8e5e
Support is_mtia attribute. (#108307) (#108310)
...
Summary:
FBGEMM uses `self.iter.is_cuda` to check whether a tensor is on CUDA. This diff enables the analogous `self.iter.is_mtia` for tensors with the MTIA device key.
Test Plan: See diff D48693225
Reviewed By: jackm321
Differential Revision: D48809191
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108310
Approved by: https://github.com/albanD
2023-09-01 01:25:40 +00:00
12ca224662
Add hacked_twin overloads for _unsafe indexing functions (#104127)
...
Fixes #104037
This hacky workaround already exists for the normal overloads.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104127
Approved by: https://github.com/ezyang
2023-07-05 01:05:27 +00:00
4e204ff87b
Added is_xla (#103100)
...
This change creates `is_xla`, which is congruent with `is_cuda` and `is_cpu`. Useful in situations like: https://github.com/pytorch/pytorch/pull/102858
```
>>> x = torch.tensor([1], device=xm.xla_device())
>>> x.is_xla
True
>>> x.is_cpu
False
>>> x = torch.tensor([1])
>>> x.is_cpu
True
>>> x.is_xla
False
```
Attn: @albanD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103100
Approved by: https://github.com/albanD
2023-06-22 23:31:04 +00:00
d997969b8b
[Reland] Add sym_size/stride/numel/storage_offset to native_function.yaml (#103107)
...
Differential Revision: D46459100
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103107
Approved by: https://github.com/angelayi, https://github.com/soulitzer
2023-06-12 19:18:49 +00:00
20cf42de2c
Revert "[Reland] Add sym_size/stride/numel/storage_offset to native_function.… ( #100749 )"
...
This reverts commit bb454891ed5ce97f580ae52e20f8e9ff2d0f3bf5.
2023-05-16 18:17:02 -07:00
bb454891ed
[Reland] Add sym_size/stride/numel/storage_offset to native_function.… (#100749)
...
…yaml (#91… (#91919)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91919
Approved by: https://github.com/ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92402
Reviewed By: ezyang
Differential Revision: D42565586
Pulled By: SherlockNoMad
fbshipit-source-id: 1c2986e45307e076d239836a1b45441a9fa3c9d9
ghstack-source-id: 969f4928486e04c57aaf98e20e3c3ca946c51613
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100749
Approved by: https://github.com/zhxchen17, https://github.com/albanD
2023-05-12 22:57:42 +00:00
bc3108c2e2
make torch/csrc/jit/runtime/register_prim_ops.cpp data_ptr-correct (#100832)
...
make torch/csrc/jit/runtime/register_prim_ops.cpp data_ptr-correct
Test Plan: Rely on CI.
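For context, a hedged sketch of what "data_ptr-correct" means here (assuming the `const_data_ptr`/`mutable_data_ptr` accessors; the actual call sites are not shown in this log):
```
#include <ATen/ATen.h>

double sum_values(const at::Tensor& t) {
  // Read-only access: const_data_ptr documents that the buffer is
  // never written, unlike the catch-all data_ptr.
  const double* p = t.const_data_ptr<double>();
  double acc = 0.0;
  for (int64_t i = 0; i < t.numel(); ++i) {
    acc += p[i];
  }
  return acc;
}

void zero_first(at::Tensor& t) {
  // Mutation goes through mutable_data_ptr, making the write explicit.
  t.mutable_data_ptr<double>()[0] = 0.0;
}
```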
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100832
Approved by: https://github.com/ezyang
2023-05-09 15:07:54 +00:00
bbf180af9f
Add new aten::device variant to TorchScript (#97023)
...
Fixes #96627
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97023
Approved by: https://github.com/jgong5, https://github.com/BowenBao, https://github.com/davidberard98
2023-04-06 14:19:00 +00:00
555ab310dc
Add itemsize and nbytes properties to Tensor (#98322)
...
Adds `itemsize` and `nbytes` properties to `Tensor`, matching the corresponding properties in NumPy.
Fixes https://github.com/pytorch/pytorch/issues/12728
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98322
Approved by: https://github.com/ezyang
2023-04-05 12:11:55 +00:00
f7bd5d0ccb
Revert "[Reland] Add sym_size/stride/numel/storage_offset to native_function.yaml (#91… ( #92402 )"
...
This reverts commit 965f4ea3bac8186b99119e73b9ff00e390a5d28b.
Reverted https://github.com/pytorch/pytorch/pull/92402 on behalf of https://github.com/zhxchen17 due to Caused a regression for an export model.
2023-02-03 03:12:43 +00:00
965f4ea3ba
[Reland] Add sym_size/stride/numel/storage_offset to native_function.yaml (#91… (#92402)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91919
Approved by: https://github.com/ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92402
Approved by: https://github.com/ezyang
2023-02-01 04:47:49 +00:00
0247ed27cc
Apply Clang-Tidy readability-container-size-empty (#93236)
...
Not only is this change usually shorter and more readable, it can also yield better performance: `size()` is not always a constant-time operation (e.g., on linked lists), but `empty()` always is.
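A minimal sketch of the rewrite (hypothetical container):
```
#include <vector>

bool has_work(const std::vector<int>& queue) {
  // Before: return queue.size() != 0;
  // After: empty() states the intent and is guaranteed O(1) for
  // every standard container.
  return !queue.empty();
}
```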
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93236
Approved by: https://github.com/malfet
2023-01-29 23:28:19 +00:00
befe815466
Revert "Add sym_size/stride/numel/storage_offset to native_function.yaml ( #91919 )"
...
This reverts commit 0388400f3f8a8ecae2f809ba40ca3ddd5a8b9028.
Reverted https://github.com/pytorch/pytorch/pull/91919 on behalf of https://github.com/atalman due to breaking an internal build
2023-01-17 21:03:18 +00:00
0388400f3f
Add sym_size/stride/numel/storage_offset to native_function.yaml (#91919)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91919
Approved by: https://github.com/ezyang
2023-01-17 03:39:57 +00:00
3916d7a575
Apply modernize-use-emplace to aten, c10, torch (#91077)
...
Apply the clang-tidy check modernize-use-emplace. This is slightly more efficient because it constructs elements in place, and it is the recommended style in the parts of the codebase covered by clang-tidy. This manually applies the check to the rest of the codebase. Pinging @ezyang as this is related to my other PRs he reviewed, like #89000.
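A minimal sketch of the rewrite (hypothetical vector of pairs):
```
#include <string>
#include <utility>
#include <vector>

void fill(std::vector<std::pair<std::string, int>>& out) {
  // Before: builds a temporary pair, then moves it into the vector.
  //   out.push_back(std::make_pair(std::string("answer"), 42));
  // After: constructs the element in place from the arguments.
  out.emplace_back("answer", 42);
}
```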
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91077
Approved by: https://github.com/ezyang
2022-12-19 07:49:56 +00:00
3b6588ab74
Consistent compute numel/contiguous strategy with SymInts (#85858)
...
Previously, our handling of contiguity was inconsistent in the following ways:
- is_strides_like 2d/3d and is_non_overlapping_and_dense were always computed based on sizes_and_strides_, even if you had symbolic ints
- Furthermore, even if you set a custom policy for strides, these quantities were not overridable by subclasses
- Furthermore, we didn't even store these fields on ExtraMeta
- We duplicated implementations of compute_contiguous (plain, channels last, channels last 3d)
- We inconsistently called refresh_numel()/refresh_contiguous(), versus recomputing them ourselves
This refactor establishes a consistent strategy for all of the boolean fields and for numel computation. After this refactor:
- All layout boolean fields are interposable via the strides policy and can be overridden from Python; you will never access a garbage field
- All layout boolean fields are on ExtraMeta
- You can always call refresh_numel/refresh_contiguous, no matter whether your Tensor is contiguous or not
- The numel/layout boolean fields are always populated consistently with the sizes/strides fields (either on Tensor or ExtraMeta), even if you have a custom policy
- There is only one implementation of the actual computation logic
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: [D39907696](https://our.internmc.facebook.com/intern/diff/D39907696)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85858
Approved by: https://github.com/albanD
2022-09-30 21:26:34 +00:00
00ce302c07
Performance optimizations to proxy tensor (#85049)
...
- Lazily allocate FX nodes for size/stride accessors on proxy tensor
- Properly track derived computations on strides/numel/etc.
- Remove the unnecessary tree_map at the end of the proxy tensor trace that checks invariants; we will just have to be smart (it's too expensive)
- Avoid tree_map in sym proxy tracing
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85049
Approved by: https://github.com/wconstab
2022-09-16 00:28:50 +00:00
db7784e722
[Static Runtime] Schema checks for index_put (#84152)
...
Summary:
`index_put` can take a list of tensors, but Static Runtime always tries to convert its argument to a list of optional tensors. This was causing crashes for some users. Add some schema checks to prevent this, and add a new overload for the new case.
Also, I found a clear bug in the JIT interpreter (mutating the argument when it's not supposed to), so I fixed that too.
Test Plan: New unit test
Differential Revision: D39072214
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84152
Approved by: https://github.com/tenpercent
2022-08-31 01:20:14 +00:00
eda217ab67
Reland symint_numel (#84281)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84281
Approved by: https://github.com/ezyang
2022-08-30 21:53:34 +00:00
44a975335e
Revert "Re-land sym_numel ( #82374 ) ( #82726 ) ( #82731 ) ( #82855 )" ( #84207 )
...
This reverts commit bfebf254dd92f3ed35154597166e7e71fb04f31b.
Differential Revision: [D39104562](https://our.internmc.facebook.com/intern/diff/D39104562)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84207
Approved by: https://github.com/robieta
2022-08-30 13:22:58 +00:00
7ebdb4c72f
Refactored ops on size to be dispatcher ops (#83719)
...
An example of how the graph looks now:
```
def forward(self, x_1):
size = torch.ops.math.size(x_1, 0)
size_1 = torch.ops.math.size(x_1, 1); x_1 = None
ones = torch.ops.aten.ones.default([1], device = device(type='cpu'), pin_memory = False)
expand_sym_int = torch.ops.aten.expand.SymInt(ones, [size, size_1]); ones = size = size_1 = None
cos_default = torch.ops.aten.cos.default(expand_sym_int); expand_sym_int = None
return (cos_default,)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83719
Approved by: https://github.com/ezyang
2022-08-23 15:48:00 +00:00