fdab48a7c1
Enable all PIE rules on ruff ( #165814 )
...
This PR enables all PIE rules on ruff. Some rules from this family were already enabled; the newly added rules are:
```
PIE796 Enum contains duplicate value: {value}
PIE808 Unnecessary start argument in range
```
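For illustration, a hedged sketch of code these two rules would flag (hypothetical snippets, not from the PyTorch codebase):
```python
from enum import Enum

class Color(Enum):          # PIE796: enum contains duplicate value
    RED = 1
    CRIMSON = 1             # same value as RED, so CRIMSON silently becomes an alias

for i in range(0, 10):      # PIE808: the start argument 0 is unnecessary
    print(i)                # range(10) is equivalent
```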
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165814
Approved by: https://github.com/ezyang
2025-10-18 07:36:18 +00:00
24520b8386
Revert "Enable all PIE rules on ruff ( #165814 )"
...
This reverts commit c79dfdc6550e872783aa5cb5fc9e86589bf18872.
Reverted https://github.com/pytorch/pytorch/pull/165814 on behalf of https://github.com/cyyever due to Need to cover more files ([comment](https://github.com/pytorch/pytorch/pull/165814#issuecomment-3417931863 ))
2025-10-18 07:21:08 +00:00
c79dfdc655
Enable all PIE rules on ruff ( #165814 )
...
This PR enables all PIE rules on ruff. Some rules from this family were already enabled; the newly added rules are:
```
PIE796 Enum contains duplicate value: {value}
PIE808 Unnecessary start argument in range
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165814
Approved by: https://github.com/ezyang
2025-10-18 06:40:12 +00:00
0256f91558
[BUG] MaxUnpool2d/3d should check output dim before accessing its elements ( #163507 )
...
Fixes #163409
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163507
Approved by: https://github.com/malfet , https://github.com/Skylion007
2025-09-22 21:36:48 +00:00
ce5637be29
Fix invalid indices bug for max_unpool2d/3d on MPS ( #163036 )
...
Fixes #163035
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163036
Approved by: https://github.com/kulinseth , https://github.com/malfet
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com >
2025-09-19 05:13:21 +00:00
29ea6254a0
[Bug] Add more boundary check for FractionalMaxPool3d ( #161876 )
...
This PR aims to fix the bug mentioned at [#161853 ](https://github.com/pytorch/pytorch/issues/161853#issuecomment-3240695121 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161876
Approved by: https://github.com/malfet
2025-09-16 06:59:02 +00:00
468c1f9e9d
Revert "[nn] Assert parsed iterable arguments are an appropriate length ( #162340 )"
...
This reverts commit b5e6e58050bd2a15f4173cfffa00c7e32e382b49.
Reverted https://github.com/pytorch/pytorch/pull/162340 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it seems to break an MPS test on ExecuTorch ([comment](https://github.com/pytorch/pytorch/pull/162340#issuecomment-3282676242 ))
2025-09-11 21:22:57 +00:00
b5e6e58050
[nn] Assert parsed iterable arguments are an appropriate length ( #162340 )
...
Fixes #162327
Pull Request resolved: https://github.com/pytorch/pytorch/pull/162340
Approved by: https://github.com/Skylion007
2025-09-10 15:15:49 +00:00
e06b110f73
[Testing] Add MPS to NATIVE_DEVICES ( #153835 )
...
This would allow me to enable more OpInfo tests against the MPS device eventually. This was supposed to be a very simple change, but it actually required minor adjustments to lots of test files, namely:
- Introduce `all_mps_types_and`, which is very similar to `all_types_and` but skips `float64` (a hedged sketch follows after this list)
- Decorate lots of tests with `@dtypesIfMPS(*all_mps_types())`
- Skip `test_from_dlpack_noncontinguous` as it currently crashes (needs to be fixed)
- Add lots of `expectedFailureIfMPS`
- Delete all `@onlyNativeDeviceTypesAnd("mps")`
<sarcasm> I love how well documented this variable is </sarcasm>
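A hedged, hypothetical sketch of such a helper, based only on the description above (the real helper lives in PyTorch's testing internals and may differ):
```python
import torch

# Hypothetical reimplementation for illustration only, not the actual helper.
# Like all_types_and, but without float64, which the MPS backend does not support.
_MPS_BASE_DTYPES = (
    torch.float32, torch.float16, torch.bfloat16,
    torch.int64, torch.int32, torch.int16, torch.int8, torch.uint8,
)

def all_mps_types_and(*extra_dtypes):
    return _MPS_BASE_DTYPES + tuple(extra_dtypes)

# Tests would then be decorated roughly as @dtypesIfMPS(*all_mps_types_and(torch.bool)).
print(all_mps_types_and(torch.bool))
```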
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153835
Approved by: https://github.com/Skylion007
2025-08-05 18:57:35 +00:00
52b9af163c
Add avg_pool3d for MPS ( #158877 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158877
Approved by: https://github.com/malfet
2025-07-29 15:22:22 +00:00
2b19d85d70
FractionalMaxPool3d add kernel_size check ( #155549 )
...
Fixes #96316
## Test Result
```python
>>> import torch
>>> from torch.func import jacrev, grad, vmap
>>>
>>> torch.manual_seed(420)
<torch._C.Generator object at 0x7fe4767810d0>
>>>
>>> input = torch.randn(1, 1, 5, 5, 5, requires_grad=True)
>>>
>>> def func(input):
... model = torch.nn.FractionalMaxPool3d(kernel_size=0, output_size=(1, 1, 1))
... output = model(input)
... return output
...
>>>
>>> func(input).sum().backward()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in func
File "/home/zong/code/pytorch/torch/nn/modules/pooling.py", line 1054, in __init__
raise ValueError(f"kernel_size must greater than 0, but got {kernel_size}")
ValueError: kernel_size must greater than 0, but got 0
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155549
Approved by: https://github.com/albanD
2025-07-10 04:55:06 +00:00
510c398a4f
Add max_pool3d backward pass for MPS ( #157498 )
...
Note on backward precision over fp16:
A float16 number has 10 bits of mantissa, 5 bits of exponent, and 1 sign bit. For a positive number with mantissa $m$ and exponent $e$ expressed in base 10, the value that the float16 format represents is $(1 + m/1024) \cdot 2^{e}$. ([source](https://en.wikipedia.org/wiki/Half-precision_floating-point_format))
Consider adding two numbers $a$ and $b$ with arbitrary mantissas, and say their exponents are $e_a = 1$ (so $2 \le a \lt 4$) and $e_b = -3$ (so $0.125 \le b \lt 0.25$). Assume that the result has the same exponent as $a$. Since the exponents differ by 4, we effectively need to truncate the 4 rightmost bits of $b$'s mantissa, which introduces a maximum error on the order of $(2^4/1024) \cdot 2^{-3} \approx 0.002$.
The error is nearly the same if $e_b = -2$ (so $0.25 \le b \lt 0.5$), where the 3 rightmost bits are truncated, giving a maximum error on the order of $(2^3/1024) \cdot 2^{-2} \approx 0.002$. The same holds for $e_b = -1$.
So if we add up nine different numbers that all have exponents -3, -2, or -1, and they sum to a number with exponent 1, we would expect a maximum error several times greater than 0.002. In my comments above, summing those particular nine numbers in different ways gave results ranging between 3.1758 and 3.1816, a difference of $0.0058 \approx 2.9 \times 0.002$.
That's within the acceptable bounds, and we can safely just increase the error tolerance used in test_output_grad_match for the case of max_pool3d_backward with float16.
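To make the argument concrete, a small self-contained sketch (illustrative only, not taken from the PR):
```python
# Summing the same nine float16 values in different orders can give results
# that differ by a few thousandths, as argued above.
import torch

torch.manual_seed(0)
vals = torch.empty(9).uniform_(0.125, 0.5).to(torch.float16)  # exponents -3..-1

ascending = vals.sort().values
descending = ascending.flip(0)

def running_sum(xs):
    s = torch.tensor(0.0, dtype=torch.float16)
    for x in xs:
        s = s + x  # each add rounds the partial sum to the nearest float16
    return s.item()

# The two sums may disagree in the third decimal place, which is why the
# float16 tolerance for max_pool3d_backward was loosened.
print(running_sum(ascending), running_sum(descending))
```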
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157498
Approved by: https://github.com/malfet
2025-07-07 19:46:44 +00:00
496bbf38be
add grad_output shape check for adaptive_avg_pool2d_backward ( #145241 )
...
Fix https://github.com/pytorch/pytorch/issues/145070 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145241
Approved by: https://github.com/malfet , https://github.com/eqy
2025-03-20 14:10:31 +00:00
df458be4e5
[4/N] Apply py39 ruff and pyupgrade fixes ( #143257 )
...
```torch/fx/passes/annotate_getitem_nodes.py``` was changed to support the new type hinting annotations.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143257
Approved by: https://github.com/justinchuby , https://github.com/albanD
2025-01-04 10:47:51 +00:00
d2b83aa122
add grad_output shape check for fractional_max_pool2d_backward ( #141666 )
...
Fix https://github.com/pytorch/pytorch/issues/141102 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141666
Approved by: https://github.com/mingfeima , https://github.com/malfet
2024-12-19 22:47:02 +00:00
fa1a4a91e9
add batch_size check for max_pool2d_backward ( #141657 )
...
Fix https://github.com/pytorch/pytorch/issues/140923 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141657
Approved by: https://github.com/mingfeima , https://github.com/malfet
2024-12-19 06:01:41 +00:00
b588a78ca3
add grad_output shape check for adaptive_max_pool2d_backward and adaptive_max_pool3d_backward ( #141663 )
...
Fix https://github.com/pytorch/pytorch/issues/141099 , https://github.com/pytorch/pytorch/issues/141100 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141663
Approved by: https://github.com/mingfeima , https://github.com/malfet
2024-12-18 17:44:27 +00:00
c947a7d38e
Fix unused Python variables in test/nn ( #143396 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143396
Approved by: https://github.com/mikaylagawarecki
2024-12-18 03:30:54 +00:00
cb71bcc542
Replace clone.detach with detach.clone ( #140264 )
...
Fixes #64532
As stated in the issue, replace `clone.detach` with `detach.clone`
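A minimal illustration of the two orderings (both produce the same detached copy; the preferred order simply avoids recording the clone in the autograd graph):
```python
import torch

x = torch.randn(3, requires_grad=True)

# Preferred: detach first, then clone; the copy is never tracked by autograd.
y = x.detach().clone()

# Discouraged: clone first records the copy in the autograd graph, and the
# subsequent detach throws that bookkeeping away.
z = x.clone().detach()

print(y.requires_grad, z.requires_grad)  # False False
```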
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140264
Approved by: https://github.com/soulitzer
2024-11-13 07:01:02 +00:00
279ddfc6ee
Add type check for dilation in torch.quantized_max_pool3d() ( #137845 )
...
Fixes #136716
repro:
```python
import torch
input = torch.randn([1, 1, 1, 1, 1])
input = torch.quantize_per_tensor(input, 0.1, 10, torch.qint32)
torch.quantized_max_pool3d(input, (1, 1, 1), (1, 1, 1), (0, 0, 0), (-3, 1, 1)) # crash
input = torch.randn([1, 1, 1, 1, 1])
input = torch.quantize_per_tensor(input, 0.1, 10, torch.qint32)
result = torch.nn.functional.max_pool3d(input, (1, 1, 1), (1, 1, 1), (0, 0, 0), (-3, 1, 1)) # crash
```
result:
```
RuntimeError: Expected dilation >= 1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137845
Approved by: https://github.com/albanD
2024-10-21 16:15:57 +00:00
e27c0048db
Enable additional tests for MPS CI runs ( #134356 )
...
As part of the follow-up to https://github.com/pytorch/pytorch/issues/133520 , this adapts existing unused tests for use in MPS CI runs, focusing on NHWC & other memory format tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134356
Approved by: https://github.com/malfet , https://github.com/eqy , https://github.com/huydhn
2024-10-04 21:52:38 +00:00
fbe6f42dcf
[BE][Easy][8/19] enforce style for empty lines in import segments in test/[k-p]*/ ( #129759 )
...
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501 . Most changes are auto-generated by the linter.
You can review these PRs via:
```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129759
Approved by: https://github.com/justinchuby , https://github.com/ezyang
2024-07-31 02:09:20 +00:00
5f912f480c
Fix max_pool2d decomposition for empty list and integer limits ( #129106 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129106
Approved by: https://github.com/peterbell10 , https://github.com/lezcano , https://github.com/malfet
ghstack dependencies: #129096 , #129097
2024-06-24 22:19:42 +00:00
a625705290
Enable UFMT on all of test/nn ( #123809 )
...
Part of: #123062
Ran lintrunner on:
- `test/nn`
with command:
```bash
lintrunner -a --take UFMT --all-files
```
Co-authored-by: Edward Z. Yang <ezyang@fb.com >
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123809
Approved by: https://github.com/mikaylagawarecki
2024-04-12 18:32:25 +00:00
b5e83b8c50
Fix edge case for size 1 channels dim in AdaptiveMaxPool ( #116482 )
...
Fixes https://github.com/pytorch/pytorch/issues/107842
Unlike `AdaptiveAvgPool`, `AdaptiveMaxPool` does not have a CUDA kernel for ChannelsLast. We work around this by calling `contiguous()` on the input. However, there is an edge case when the channels dimension has size 1.
```python
>>> t = torch.randn(2, 1, 3, 3)
>>> t.stride()
(9, 9, 3, 1)
>>> t_c = t.to(memory_format=torch.channels_last)
>>> t_c.stride()
(9, 1, 3, 1) # (CHW, 1, CW, C)
>>> t_c.is_contiguous()
True # contiguity check doesn't check strides for singleton dimensions
```
The CUDA kernel treats the batch (`B`) and channels (`C`) dimensions as implicitly flattened and increments the data pointer for `input` to the start of the next plane using the channels-dimension stride, see
669b182d33/aten/src/ATen/native/cuda/AdaptiveMaxPooling2d.cu (L67)
If our input falls into the aforementioned edge case, the `data_ptr` will not be incremented correctly. The simple fix is to calculate the stride for the channels dimension as $\prod_{i > 1} \text{size}(i)$.
An analogous fix applies to the 3D case.
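A hedged Python sketch of the stride computation described above (for illustration only; the actual fix lives in the CUDA kernel launcher):
```python
import torch

# The edge case: channels_last layout with a size-1 channels dimension.
t = torch.randn(2, 1, 3, 3).to(memory_format=torch.channels_last)
print(t.stride())        # (9, 1, 3, 1); stride of the singleton channels dim is 1
print(t.is_contiguous()) # True, so the contiguous() workaround is a no-op

# Using t.stride(1) to step between (B*C) planes would advance by only 1 element.
# The fix derives the plane stride from the sizes instead: prod(size(i)) for i > 1.
plane_stride = t.size(2) * t.size(3)
print(plane_stride)      # 9, the correct per-plane increment
```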
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116482
Approved by: https://github.com/albanD
2023-12-28 15:02:29 +00:00
362bc6d7cb
Fixed a segfault issue when passing an empty kernel to quantized_max_pool1d ( #116342 )
...
Fixes #116323 .
Reused the same check as for `max_pool1d`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116342
Approved by: https://github.com/jerryzh168
2023-12-27 01:22:49 +00:00
6de28e92d2
[BE]: Apply FURB118 (prev): replaces unnecessary lambdas with operator. ( #116027 )
...
This replaces a bunch of unnecessary lambdas with the operator package. This is semantically equivalent, but the operator package is faster, and arguably more readable. When the FURB rules are taken out of preview, I will enable it as a ruff check.
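A typical rewrite of the kind FURB118 suggests (an illustrative example, not taken from the diff):
```python
import operator

pairs = [(2, "b"), (1, "a"), (3, "c")]

# Before: an unnecessary lambda.
by_first = sorted(pairs, key=lambda p: p[0])

# After: the equivalent operator-based form, which is slightly faster and clearer.
by_first = sorted(pairs, key=operator.itemgetter(0))
print(by_first)
```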
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116027
Approved by: https://github.com/malfet
2023-12-20 19:35:08 +00:00
a7bfa04da6
Revert "More markDynamoStrictTest ( #115870 )"
...
This reverts commit 7f686c8fe127cc7db07134297fa09be20ab87918.
Reverted https://github.com/pytorch/pytorch/pull/115870 on behalf of https://github.com/jeanschmidt due to Breaking internal tests and builds, please check diff ([comment](https://github.com/pytorch/pytorch/pull/115870#issuecomment-1862997125 ))
2023-12-19 15:40:57 +00:00
7f686c8fe1
More markDynamoStrictTest ( #115870 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115870
Approved by: https://github.com/voznesenskym
ghstack dependencies: #115845 , #115855 , #115856 , #115857 , #115858
2023-12-15 05:26:54 +00:00
a8acd6c410
Add Half support for AvgPool2d on CPU ( #109578 )
...
Add Half support for AvgPool2d (both channels last and channels first) on CPU
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109578
Approved by: https://github.com/mingfeima , https://github.com/albanD
2023-12-12 12:59:47 +00:00
7963aaac41
add Half support for AdaptiveAvgPool2d and AdaptiveMaxPool2d on CPU ( #102079 )
...
### Testing
Single core:
AdaptiveMaxPool2d:
shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
input size: (2, 56, 264, 264), output size: (100, 100) | 71.5826 | 78.7460 | 85.7195 | 7.3925 | 6.0618 | 6.2596
input size: (2, 56, 264, 264), output size: (50, 50) | 28.122 | 30.8572 | 36.6366 | 6.2645 | 3.4781 | 3.6628
input size: (32, 32, 100, 100), output size: (50, 50) | 109.2978 | 115.0330 | 121.9500 | 13.4329 | 10.2769 | 12.1975
input size: (16, 4, 300, 300), output size: (100, 100) | 34.1849 | 36.5876 | 40.9862 | 4.7719 | 4.3362 | 4.1417
28 cores:
AdaptiveMaxPool2d:
shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
input size: (2, 56, 264, 264), output size: (100, 100) | 3.1809 | 3.5057 | 3.6728 | 0.6657 | 0.3138 | 0.2934
input size: (2, 56, 264, 264), output size: (50, 50) | 1.2779 | 1.3869 | 1.5238 | 0.4223 | 0.1775 | 0.1825
input size: (32, 32, 100, 100), output size: (50, 50) | 4.7942 | 4.9670 | 5.2330 | 1.7146 | 0.6477 | 0.7001
input size: (16, 4, 300, 300), output size: (100, 100) | 1.9522 | 2.0879 | 2.3155 | 0.4370 | 0.3175 | 0.2828
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102079
Approved by: https://github.com/jgong5 , https://github.com/malfet
2023-11-20 03:01:00 +00:00
40c44c2307
Force specialization on INT_LIST ( #111216 )
...
Follow up on https://github.com/pytorch/pytorch/pull/95479
Fixes https://github.com/pytorch/pytorch/issues/111198
Fixes https://github.com/pytorch/pytorch/issues/111197
Fixes https://github.com/pytorch/pytorch/issues/111188
Fixes https://github.com/pytorch/pytorch/issues/111201
Fixes https://github.com/pytorch/pytorch/issues/111202
I can also do this for some other types, will do this stacked on top.
Signed-off-by: Edward Z. Yang <ezyang@meta.com >
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111216
Approved by: https://github.com/voznesenskym
2023-10-19 12:55:18 +00:00
f2a1b93549
Back out "[quant] Support integer implementations for adaptive_avg_pool2d ( #104226 )" ( #110316 )
...
Summary:
Original commit changeset: acdb5b34e3aa
Original Phabricator Diff: D47321689
Test Plan: opinfo tests in CI
Differential Revision: D49789403
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110316
Approved by: https://github.com/kimishpatel
2023-10-03 16:59:23 +00:00
42f94d7e9f
add Half support for maxpool on CPU ( #98819 )
...
### Testing
Single socket (28 cores):
shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
size: (1, 56, 264, 264), kernel: 3, stride: 1, mem_format: contig | 4.12895 | 6.9669 | 5.30297 | 0.55775 | 1.98917 | 0.72233
size: (1, 56, 264, 264), kernel: 3, stride: 1, mem_format: CL | 0.85093 | 1.88813 | 1.38063 | 5.5742 | 36.5086 | 10.58552
size: (32, 16, 200, 200), kernel: 3, stride: 1, mem_format: contig | 22.37212 | 37.90383 | 30.94482 | 6.85868 | 10.6116 | 3.9993
size: (32, 16, 200, 200), kernel: 3, stride: 1, mem_format: CL | 5.41658 | 4.71098 | 4.66578 | 6.69875 | 14.7171 | 5.1167
size: (32, 32, 100, 100), kernel: 3, stride: 1, mem_format: contig | 10.69831 | 18.0468 | 13.71657 | 2.61192 | 4.96172 | 1.68635
size: (32, 32, 100, 100), kernel: 3, stride: 1, mem_format: CL | 2.52637 | 2.0096 | 2.0055 | 2.60314 | 7.2093 | 2.49843
size: (4, 19, 10, 16, 16), kernel: 3, stride: 1, mem_format: contig | 0.47605 | 0.88398 | 0.65326 | 0.06525 | 0.115489 | 0.0674
size: (4, 19, 10, 16, 16), kernel: 3, stride: 1, mem_format: CL3d | 0.10902 | 0.25293 | 0.157475 | 0.11386 | 0.53319 | 0.17836
Single core:
shape | fp32 forward / ms | fp16 forward / ms | bf16 forward / ms | fp32 backward / ms | fp16 backward / ms | bf16 backward / ms
-- | -- | -- | -- | -- | -- | --
size: (1, 56, 264, 264), kernel: 3, stride: 1, mem_format: contig | 90.9809 | 163.473 | 126.1276 | 6.57721 | 41.40833 | 11.82505
size: (1, 56, 264, 264), kernel: 3, stride: 1, mem_format: CL | 9.88405 | 38.39137 | 29.62069 | 7.10636 | 36.97535 | 11.0525
size: (32, 16, 200, 200), kernel: 3, stride: 1, mem_format: contig | 476.782 | 855.4769 | 648.2248 | 46.6488 | 219.2586 | 67.10599
size: (32, 16, 200, 200), kernel: 3, stride: 1, mem_format: CL | 80.29271 | 91.33854 | 87.80345 | 48.81692 | 203.9974 | 63.39004
size: (32, 32, 100, 100), kernel: 3, stride: 1, mem_format: contig | 235.2113 | 419.0799 | 315.4284 | 20.6049 | 107.1524 | 32.39169
size: (32, 32, 100, 100), kernel: 3, stride: 1, mem_format: CL | 29.47653 | 33.54905 | 32.82823 | 22.59674 | 98.5586 | 30.05763
size: (4, 19, 10, 16, 16), kernel: 3, stride: 1, mem_format: contig | 7.90684 | 13.9208 | 10.03272 | 0.23725 | 1.35269 | 0.41728
size: (4, 19, 10, 16, 16), kernel: 3, stride: 1, mem_format: CL3d | 2.33638 | 3.36894 | 2.64635 | 0.26535 | 1.244 | 0.38895
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98819
Approved by: https://github.com/mingfeima , https://github.com/mikaylagawarecki
2023-09-05 18:23:41 +00:00
3267996372
add channel last 3d support for maxpool3d on CPU ( #97775 )
...
### Testing
Single socket (28 cores):
shape | fp32 forward / ms | bf16 forward / ms | fp32 backward / ms | bf16 backward / ms
-- | -- | -- | -- | --
size: (1, 56, 264, 264), kernel: 3, stride: 1, mem_format: contig | 3.959584 | 5.493402 | 0.557232 | 0.568485
size: (1, 56, 264, 264), kernel: 3, stride: 1, mem_format: CL | 0.815511 | 1.351261 | 5.710506 | 10.57506
size: (32, 32, 100, 100), kernel: 3, stride: 1, mem_format: contig | 10.63426 | 15.28637 | 2.67656 | 1.71365
size: (32, 32, 100, 100), kernel: 3, stride: 1, mem_format: CL | 2.63570 | 2.05532 | 2.55452 | 2.33923
size: (4, 19, 10, 16, 16), kernel: 3, stride: 1, mem_format: contig | 0.375469 | 0.479748 | 0.066364 | 0.065155
size: (4, 19, 10, 16, 16), kernel: 3, stride: 1, mem_format: CL3d | 0.112197 | 0.112326 | 0.111697 | 0.145364
Single core:
shape | fp32 forward / ms | bf16 forward / ms | fp32 backward / ms | bf16 backward / ms
-- | -- | -- | -- | --
size: (1, 56, 264, 264), kernel: 3, stride: 1, mem_format: contig | 92.16582 | 128.6513 | 6.684325 | 12.21541
size: (1, 56, 264, 264), kernel: 3, stride: 1, mem_format: CL | 10.14318 | 29.80297 | 7.350142 | 11.25323
size: (32, 32, 100, 100), kernel: 3, stride: 1, mem_format: contig | 238.55453 | 331.89967 | 19.694657 | 32.78853
size: (32, 32, 100, 100), kernel: 3, stride: 1, mem_format: CL | 30.17079 | 32.75628 | 22.44543 | 30.17796
size: (4, 19, 10, 16, 16), kernel: 3, stride: 1, mem_format: contig | 7.474389 | 9.937217 | 0.236015 | 0.434229
size: (4, 19, 10, 16, 16), kernel: 3, stride: 1, mem_format: CL3d | 2.318954 | 2.469444 | 0.262125 | 0.401361
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97775
Approved by: https://github.com/jgong5 , https://github.com/mikaylagawarecki
2023-08-26 00:21:27 +00:00
d9460bb8f8
Update test_MaxUnpool_index_errors XFAIL after #107483 ( #107658 )
...
After https://github.com/pytorch/pytorch/pull/107483 , which reverted https://github.com/pytorch/pytorch/pull/95300 , these tests no longer XFAIL. So now we know the root cause of https://github.com/pytorch/pytorch/issues/103854 .
As this is failing slow jobs in trunk at the moment, i.e. 6981bcbc35 , I'm moving these tests back.
### Testing
Run locally and all tests passes.
```
PYTORCH_TEST_WITH_SLOW=1 PYTORCH_TEST_SKIP_FAST=1 python test/nn/test_pooling.py -k test_MaxUnpool_index_errors
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107658
Approved by: https://github.com/PaliC
2023-08-22 22:36:35 +00:00
de8bd108b4
[BE] Enable ruff's UP rules in pyproject.toml ( #105437 )
...
Signed-off-by: Justin Chu <justinchu@microsoft.com >
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105437
Approved by: https://github.com/huydhn , https://github.com/malfet , https://github.com/Skylion007
2023-07-21 19:14:52 +00:00
79c5e33349
[BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ ( #105436 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105436
Approved by: https://github.com/malfet , https://github.com/albanD
2023-07-21 07:38:46 +00:00
1a661639f7
[quant] Support integer implementations for adaptive_avg_pool2d ( #104226 )
...
Summary:
This is needed for representing quantized models in the PT2 export quantization flow
Test Plan:
tested by opinfo, python test/test_ops.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104226
Approved by: https://github.com/jgong5 , https://github.com/andrewor14
2023-07-07 19:36:31 +00:00
f27a9129e7
XFAIL test_MaxUnpool_index_errors CUDA slow tests ( #103905 )
...
This has been failing in trunk for a while. Let's XFAIL it while continuing the investigation https://github.com/pytorch/pytorch/issues/103854 . We might not need this PR if the fix is on the way.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103905
Approved by: https://github.com/mikaylagawarecki
2023-06-22 18:05:10 +00:00
3f656ad7bb
[CUDA] Do accumulation for Adaptive Average Pooling in opmath_t ( #99378 )
...
Fix for an issue surfaced from the discuss forum: https://discuss.pytorch.org/t/adaptiveavgpool2d-causes-some-data-to-contain-inf/177420
CC @ptrblck @ngimel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99378
Approved by: https://github.com/ngimel
2023-04-28 20:43:12 +00:00
8aa34602f7
Jetson Update for CI Redo ( #94549 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94549
Approved by: https://github.com/ezyang , https://github.com/malfet
2023-02-21 17:13:38 +00:00
b005ec62b9
[BE] Remove dependency on six and future ( #94709 )
...
Remove the Python 2 and 3 compatibility libraries [six](https://pypi.org/project/six) and [future](https://pypi.org/project/future) and `torch._six`. We only support Python 3.8+ now. It's time to retire them.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94709
Approved by: https://github.com/malfet , https://github.com/Skylion007
2023-02-14 09:14:14 +00:00
0bf78b57c0
fix: max_unpool3d buffer overflow ( #94372 )
...
Fixes #88032
Previously, `output_size` was accessed before the shape length check, which led to a buffer overflow.
The fix is simply to perform the length check first.
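A minimal Python sketch of the reordering (illustrative pseudocode; the real check is implemented in ATen, not in Python):
```python
# Hypothetical illustration of "check the length before touching elements".
def unpool3d_output_size_check(output_size):
    if len(output_size) != 3:  # validate the length first ...
        raise ValueError(
            f"output_size must have 3 elements, got {len(output_size)}"
        )
    d, h, w = output_size      # ... and only then access individual elements
    return d, h, w

print(unpool3d_output_size_check((2, 4, 4)))
```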
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94372
Approved by: https://github.com/albanD
2023-02-08 19:48:25 +00:00
ccd8b66b0a
[testing] add ErrorInputs for adaptive_{avg, max}_poolnd ( #90924 )
...
Ref: https://github.com/pytorch/pytorch/pull/88906#discussion_r1040157313
Covers:
- [x] adaptive_avg_pool1d
- [x] adaptive_avg_pool2d
- [x] adaptive_avg_pool3d
- [x] adaptive_max_pool1d
- [x] adaptive_max_pool2d
- [x] adaptive_max_pool3d
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90924
Approved by: https://github.com/mruberry
2023-01-12 05:24:01 +00:00
7cd900eb97
[fix] adaptive_{avg, max}_pool variants: cuda & cpu ( #88906 )
...
Fixes #78868
#### TODO
- [x] add tests
- [x] adaptive_avg_pool2d
- [x] adaptive_avg_pool3d
- [x] adaptive_max_pool2d
- [x] fix adaptive_max_pool3d_cuda
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88906
Approved by: https://github.com/mruberry
2022-12-13 20:57:00 +00:00
c6942dbbfb
add shape check for random_samples in fractional_max_pool{2d|3d} ( #89992 )
...
This PR adds shape checks for `random_samples` in fractional_max_pool2d and fractional_max_pool3d,
to provide meaningful errors instead of a segfault when the input is invalid.
For more details, please check https://github.com/pytorch/pytorch/issues/89648
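A hedged example of the call the check guards (the expected `_random_samples` shape of `(N, C, 2)` for the 2d variant is assumed from the functional API; see the linked issue for the original repro):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)

# Well-formed samples: one (h, w) pair per (batch, channel) plane.
samples = torch.rand(1, 3, 2)
out, idx = F.fractional_max_pool2d(
    x, kernel_size=2, output_size=(4, 4),
    return_indices=True, _random_samples=samples,
)
print(out.shape)  # torch.Size([1, 3, 4, 4])

# With the new check, a mismatched shape (e.g. torch.rand(1, 1, 2)) should be
# rejected with a clear error instead of reading out of bounds.
```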
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89992
Approved by: https://github.com/jgong5 , https://github.com/ezyang
2022-12-06 14:14:41 +00:00
ce856cee7e
[test_nn] fix missing class attributes for NNTestCase ( #89200 )
...
Missed setting these class variables 😓
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89200
Approved by: https://github.com/albanD
2022-11-22 22:55:44 +00:00
8fb470e81a
[fix] max_pool1d: shape check ( #85594 )
...
Fixes #76587
Before PR:
```python
import torch
max_pool = torch.nn.MaxPool1d(3)
t = torch.rand([17, 0, 50], dtype=torch.float32) # note requires_grad is False
max_pool(t) # Worked and returned tensor of shape [17, 0, 48].
```
After PR
```python
import torch
max_pool = torch.nn.MaxPool1d(3)
t = torch.rand([17, 0, 50], dtype=torch.float32) # note requires_grad is False
max_pool(t) # Errors with `max_pool1d: Expected 2D or 3D (batch mode) tensor with optional 0 dim batch size for input, but got: [17, 0, 48]`
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85594
Approved by: https://github.com/mruberry
2022-09-29 15:40:09 +00:00
4382da5d5e
Remove assertEqualIgnoreType from test_pooling ( #85112 )
...
Fix TODOs related to https://github.com/pytorch/pytorch/issues/38095 in test_pooling.py.
This PR correctly casts the expected outputs to satisfy the asserts. If you'd prefer feeding `exact_dtype=False` as an argument instead, I can update accordingly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85112
Approved by: https://github.com/kit1980
2022-09-16 22:04:42 +00:00