Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62737
ReductionOpInfo is a specialization of OpInfo for reduction operators. For now, it is designed to work with reductions that return a single tensor and reduce all elements along one or more dimensions to a single value. In particular, this excludes operators such as `max` and `min`, which return multiple tensors, and `quantile`, which can return multiple values.
fixes https://github.com/pytorch/pytorch/issues/49746
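As a rough illustration of this scope (an assumed example, not code from the PR), the covered operators collapse one or more dimensions into a single result tensor, whereas `max`, `min`, and `quantile` can produce more than one output:

```python
import torch

x = torch.randn(3, 4)

# Fits the ReductionOpInfo model: a single tensor, one value per reduced slice.
print(torch.sum(x, dim=1).shape)        # torch.Size([3])
print(torch.amax(x, dim=(0, 1)).shape)  # torch.Size([]) -- all dims reduced

# Excluded: `max`/`min` with a dim return a (values, indices) pair ...
values, indices = torch.max(x, dim=1)

# ... and `quantile` can return multiple values per reduced slice.
q = torch.quantile(x, torch.tensor([0.25, 0.75]), dim=1)
print(q.shape)                          # torch.Size([2, 3])
```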
Test Plan: Imported from OSS
Reviewed By: ejguan
Differential Revision: D30406568
Pulled By: heitorschueroff
fbshipit-source-id: 218b1da1902f67bcf4c3681e2a0f0029a25d51f1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61083
BFloat16 `nansum` is already supported on CUDA, so it seems reasonable to support it on CPU as
well. This also changes `test_nansum` to compare against `torch.sum`, since NumPy
doesn't support BFloat16. Note that `test_nansum_vs_numpy` still checks against
NumPy, so that comparison is still covered.
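For reference, a minimal sketch (assumed, not the actual test code) of the kind of comparison `test_nansum` now performs for BFloat16, where `torch.sum` over a NaN-zeroed input serves as the reference instead of NumPy:

```python
import torch

x = torch.tensor([1.0, float('nan'), 2.0, float('nan')], dtype=torch.bfloat16)

actual = torch.nansum(x)
# Reference: zero out NaNs, then use torch.sum (NumPy has no bfloat16).
expected = torch.sum(torch.where(torch.isnan(x), torch.zeros_like(x), x))
assert torch.equal(actual, expected)  # both equal 3.0 in bfloat16
```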
Test Plan: Imported from OSS
Reviewed By: navahgar
Differential Revision: D30006227
Pulled By: heitorschueroff
fbshipit-source-id: 1449730e1936417e7de1f8b3cf8cdcc15518873c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50903
First part of #50010. Also fixes #51127.
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D27911345
Pulled By: mruberry
fbshipit-source-id: 7138fddc935802918ab9ff19f4bc1b9f4d745d41
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55640
Mean is broken for complex types: since #53218 it allocates the result as a real
tensor, which discards the imaginary component. This wasn't picked up in testing
because the `_test_dim_ops` tests are defined as closures inside `_test_dim_ops`
instead of as methods on the test class, so they never get run.
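A quick illustration (an assumed example, not the fixed test itself) of the expected behavior the broken tests should have caught:

```python
import torch

x = torch.tensor([1 + 1j, 3 + 3j], dtype=torch.complex64)
m = x.mean()

# The result must stay complex and keep the imaginary component.
assert m.dtype == torch.complex64
assert m == torch.tensor(2 + 2j, dtype=torch.complex64)
```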
For best results, view diff with "Hide whitespace changes".
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D27671127
Pulled By: mruberry
fbshipit-source-id: 4a1f6fea1048919fda7339c867ee78e88f2d7bd2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49267
This PR builds upon https://github.com/pytorch/pytorch/pull/48711 by RockingJavaBean. The original PR introduced a BC-breaking change by making the interpolation parameter positional, so previous invocations of torch.quantile that did not include the interpolation parameter failed after it landed.
To avoid BC-breaking changes, we preserve the original signatures and make the interpolation parameter in the new signatures kwarg-only. For now, interpolation cannot have a default value, to avoid ambiguity with the deprecated signature. However, due to limitations of codegen and C++, we cannot have a required argument after optional ones, so this PR also makes dim and keepdim required arguments. Once we can remove the old signatures, the dim, keepdim, and interpolation parameters in the new signature will get their default values back.
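A minimal sketch (with made-up values) of the two call styles described above: the original positional call without `interpolation` keeps working, and the new overload takes `interpolation` as a keyword only:

```python
import torch

x = torch.arange(8, dtype=torch.float32)

# Original-style call: no interpolation argument.
torch.quantile(x, 0.3, dim=0, keepdim=False)

# New-style call: interpolation must be passed by keyword.
torch.quantile(x, 0.3, dim=0, keepdim=False, interpolation='linear')
```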
__TODO__
---
- [ ] Run backward compat tests
This reverts commit 2f1d1eb7df5e8032392b73751c84025a2aa3d1ee.
Test Plan: Imported from OSS
Reviewed By: glaringlee
Differential Revision: D27337117
Pulled By: heitorschueroff
fbshipit-source-id: 7fe31f22027645e0d6cb3cab0392d532a4b362c9
Summary:
1. Enabled `BFloat16` support for `argmax` & `argmin` on both CPU & CUDA (see the sketch after this list)
2. Added `OpInfo`s for `argmax` & `argmin`
3. Enabled `test_argminmax_multiple` for `float16`. It can't be enabled for `bfloat16`, as comparison is done with numpy, which doesn't currently support `bfloat16`.
4. Enabled `test_dim_arg_reduction_scalar` for `float16` & `bfloat16`.
5. Enabled `test_reduction_vectorize_along_output` for `bfloat16`.
6. Enabled `test_reduction_vectorize_along_input_corner` for `bfloat16`.
7. Enabled `test_dim_reduction` for both `float16` and `bfloat16`, except for `prod`, which isn't supported on CPU for either dtype.
8. Unskipped `TestCommonCPU.test_variant_consistency_jit` for dtype `bfloat16` for `amax` & `amin`, as they're passing.
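A small sketch (assumed example) of item 1, BFloat16 `argmax`/`argmin` now working on CPU:

```python
import torch

x = torch.tensor([[1.0, 5.0, 2.0],
                  [4.0, 0.0, 3.0]], dtype=torch.bfloat16)

print(torch.argmax(x, dim=1))  # tensor([1, 0])
print(torch.argmin(x, dim=1))  # tensor([0, 1])
# Indices are int64 regardless of the input dtype.
print(torch.argmax(x).dtype)   # torch.int64
```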
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52582
Reviewed By: anjali411
Differential Revision: D27204704
Pulled By: heitorschueroff
fbshipit-source-id: cdad5df494d070f8e1a8fb83939441a91124b4d9
Summary:
1. Enabled `amax` & `amin` for `float16` & `bfloat16` dtypes for both CPU & CUDA (see the sketch after this list).
2. Added `OpInfo`s for `amax` & `amin`.
3. Enabled `test_min_with_inf` & `test_max_with_inf` for both `float16` & `bfloat16`, as they also use `torch.amin` & `torch.amax` respectively.
4. Enabled `test_amax` & `test_amin` for `float16` but not for `bfloat16`, as comparison is done with `numpy`, which doesn't support `bfloat16`.
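A small sketch (assumed example) of item 1, `amax`/`amin` on the reduced-precision dtypes; unlike `max`/`min` with a dim argument, they return only values, not indices:

```python
import torch

for dtype in (torch.float16, torch.bfloat16):
    x = torch.tensor([[1.0, -2.0],
                      [3.0, 0.5]], dtype=dtype)
    print(torch.amax(x, dim=0))  # tensor([3.0000, 0.5000], dtype=...)
    print(torch.amin(x, dim=1))  # tensor([-2.0000, 0.5000], dtype=...)
```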
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52579
Reviewed By: pbelevich
Differential Revision: D26784194
Pulled By: heitorschueroff
fbshipit-source-id: 1050de3e155b83f282fb30b0db6658eead89936c
Summary:
Fixes https://github.com/pytorch/pytorch/issues/50790.
Added `min()` & `max()` support for `Float16` & `BFloat16`.
CUDA already supported these ops on `Float16`, so the other three combinations had to be enabled.
`OpInfo`s for `min` & `max` were also added, and their sample inputs were removed from `method_tests()`.
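A minimal sketch (assumed example) of the newly enabled combinations, `min`/`max` on `Float16` and `BFloat16` on CPU as well as CUDA:

```python
import torch

for dtype in (torch.float16, torch.bfloat16):
    x = torch.tensor([2.0, -1.0, 4.0], dtype=dtype)
    print(x.min(), x.max())               # tensor(-1.) tensor(4.)
    # The dim variant still returns both values and indices.
    values, indices = x.max(dim=0)
    print(values.item(), indices.item())  # 4.0 2
```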
### MORE INFO
The (slightly) long-term goal is to add dispatch for `min()` & `max()` related operations on CPU & CUDA for `Float16` & `BFloat16`,
wherever they aren't present already:
1. `amin()`
2. `argmax()`
3. `amax()`
4. `argmin()`
5. `torch._aminmax()`
6. `torch.clamp()` on CPU (already supported on CUDA)
7. `min()` (in this PR)
8. `max()` (in this PR)
9. `minimum()`
10. `maximum()`
I'll submit separate PRs for the other ops.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51244
Reviewed By: jbschlosser
Differential Revision: D26503455
Pulled By: anjali411
fbshipit-source-id: c32247f214e9272ca2e4322a23337874e737b140
Summary:
BC-breaking note:
This PR changes the behavior of the any and all functions to always return a bool tensor. Previously these functions were only defined on bool and uint8 tensors, and when called on uint8 tensors they would also return a uint8 tensor. (When called on a bool tensor they would return a bool tensor.)
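A small sketch (assumed example) of the behavior described in the note:

```python
import torch

x = torch.tensor([0, 1, 2], dtype=torch.uint8)
print(x.any().dtype)  # torch.bool with this PR (previously torch.uint8)
print(x.all().dtype)  # torch.bool with this PR (previously torch.uint8)

# any/all also work for other dtypes, still returning bool.
y = torch.tensor([0.0, 1.5, 0.0])
print(y.any(), y.all())  # tensor(True) tensor(False)
```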
PR summary:
https://github.com/pytorch/pytorch/pull/44790#issuecomment-725596687
Fixes items 2 and 3 from the comment above.
Also fixes https://github.com/pytorch/pytorch/issues/48352
Changes
* Output dtype is always `bool` (consistent with NumPy) **BC-breaking** (previously used to match the input dtype)
* Uses vectorized version for all dtypes on CPU
* Enables test for complex
* Update doc for `torch.all` and `torch.any`
TODO
* [x] Update docs
* [x] Benchmark
* [x] Raise issue on XLA
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47878
Reviewed By: albanD
Differential Revision: D25714324
Pulled By: mruberry
fbshipit-source-id: a87345f725297524242d69402dfe53060521ea5d
Summary:
BC-breaking note:
This PR changes the behavior of the any and all functions to always return a bool tensor. Previously these functions were only defined on bool and uint8 tensors, and when called on uint8 tensors they would also return a uint8 tensor. (When called on a bool tensor they would return a bool tensor.)
PR summary:
https://github.com/pytorch/pytorch/pull/44790#issuecomment-725596687
Fixes items 2 and 3 from the comment above.
Also fixes https://github.com/pytorch/pytorch/issues/48352
Changes
* Output dtype is always `bool` (consistent with NumPy) **BC-breaking** (previously used to match the input dtype)
* Uses vectorized version for all dtypes on CPU
* Enables test for complex
* Update doc for `torch.all` and `torch.any`
TODO
* [x] Update docs
* [x] Benchmark
* [x] Raise issue on XLA
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47878
Reviewed By: H-Huang
Differential Revision: D25421263
Pulled By: mruberry
fbshipit-source-id: c6c681ef94004d2bcc787be61a72aa059b333e69
Summary:
Fix https://github.com/pytorch/pytorch/issues/48523
Related https://github.com/pytorch/pytorch/issues/38349
**BC-breaking Note:**
This PR updates PyTorch's quantile function to add the interpolation methods `lower`, `higher`, `nearest`, and `midpoint`, which are also supported by NumPy.
A new `interpolation` parameter is added to the signatures of both the `torch.quantile` and `torch.nanquantile` functions.
- `quantile(input, q, dim=None, interpolation='linear', keepdim=False, *, out=None) -> Tensor`
- `nanquantile(input, q, dim=None, interpolation='linear', keepdim=False, *, out=None) -> Tensor`
The function signatures follow the NumPy style for the moment, keeping `out` at the end to be consistent with PyTorch.
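A minimal sketch (with assumed values) of the new interpolation modes, following the NumPy definitions for a quantile that falls between two data points:

```python
import torch

x = torch.tensor([0.0, 10.0])

for mode in ('linear', 'lower', 'higher', 'nearest', 'midpoint'):
    print(mode, torch.quantile(x, 0.75, interpolation=mode))
# linear   -> 7.5   (fractional position 0.75 between the two points)
# lower    -> 0.0
# higher   -> 10.0
# nearest  -> 10.0
# midpoint -> 5.0
```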
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48711
Reviewed By: H-Huang
Differential Revision: D25428587
Pulled By: heitorschueroff
fbshipit-source-id: e98d24f6a651d302eb94f4ff4da18e38bdbf0124
Summary:
Creates multiple new test suites to reduce the number of tests in test_torch.py, consistent with previously created test suites like test_unary_ufuncs.py and test_linalg.py.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47356
Reviewed By: ngimel
Differential Revision: D25202268
Pulled By: mruberry
fbshipit-source-id: 75fde3ca76545d1b32b86d432a5cb7a5ba8f5bb6