Commit Graph

1006 Commits

Author SHA1 Message Date
c382ad47dd Deprecate torch.cross default behaviour (#108760)
This one is long overdue. We may be able to change the default in a few years :hopeful:.

**BC-breaking note**

This PR deprecates `torch.cross`'s default dim in favor of
`torch.linalg.cross`.
An upgrade guide is added to the documentation for `torch.cross`.

Note this PR DOES NOT remove `torch.cross`.
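A small migration sketch (shapes assumed for illustration): pass `dim` explicitly, or use the `linalg` variant whose default is already `-1`.

```python
import torch

a, b = torch.randn(4, 3), torch.randn(4, 3)

# torch.linalg.cross defaults to dim=-1; passing dim explicitly to
# torch.cross avoids the deprecated default entirely.
c1 = torch.linalg.cross(a, b)
c2 = torch.cross(a, b, dim=-1)
assert torch.equal(c1, c2)
```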

Fixes https://github.com/pytorch/pytorch/issues/108664

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108760
Approved by: https://github.com/albanD
2023-09-14 19:36:29 +00:00
61f0578787 Update take_along_dim docs to include dim=None case (#109120)
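A hedged illustration of the documented case (values assumed):

```python
import torch

t = torch.tensor([[10, 30, 20],
                  [60, 40, 50]])
idx = torch.tensor([0, 5])

# With dim=None, both input and indices are flattened before indexing
print(torch.take_along_dim(t, idx))  # tensor([10, 50])
```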
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109120
Approved by: https://github.com/lezcano
ghstack dependencies: #108879, #108880
2023-09-13 23:13:09 +00:00
b2cba439b4 Introduce Tensor overload to linspace and logspace (#104889)
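A minimal sketch of the new overload (values assumed):

```python
import torch

start, end = torch.tensor(0.0), torch.tensor(1.0)

# 0-dim Tensors are now accepted where only Python scalars worked before
print(torch.linspace(start, end, steps=5))
# tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])
```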
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104889
Approved by: https://github.com/zou3519
ghstack dependencies: #107958
2023-09-11 23:30:40 +00:00
03fd3544a2 fixed lgamma documentation error (#108719)
Fixes #108527

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108719
Approved by: https://github.com/zou3519
2023-09-11 22:29:06 +00:00
a7f5abeade Revert "Introduce Tensor overload to linspace and logspace (#104889)"
This reverts commit 57e52393213b6b4fba3b334654b96396a2904087.

Reverted https://github.com/pytorch/pytorch/pull/104889 on behalf of https://github.com/clee2000 due to sorry have to revert this to revert https://github.com/pytorch/pytorch/pull/107958 ([comment](https://github.com/pytorch/pytorch/pull/104889#issuecomment-1714305768))
2023-09-11 17:33:48 +00:00
57e5239321 Introduce Tensor overload to linspace and logspace (#104889)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104889
Approved by: https://github.com/zou3519
ghstack dependencies: #107958
2023-09-11 15:29:39 +00:00
e5e653a660 Revert "docs: Match open bracket with close bracket in unsqueeze (#95215)"
This reverts commit 9d04d376d81be2f01e5ea6b68943390346f2494c.

Reverted https://github.com/pytorch/pytorch/pull/95215 on behalf of https://github.com/kit1980 due to Incorrect assumptions ([comment](https://github.com/pytorch/pytorch/pull/95215#issuecomment-1708852420))
2023-09-06 18:04:10 +00:00
fe3309b4b8 Add optional is_coalesced argument to sparse coo tensor factory function. (#107638)
Resolves https://github.com/pytorch/pytorch/issues/107097

After this PR, instead of
```python
torch.sparse_coo_tensor(indices, values, size)._coalesced_(is_coalesced)
```
(which does not work in the autograd context; see #107097), use
```python
torch.sparse_coo_tensor(indices, values, size, is_coalesced=is_coalesced)
```

All sparse coo factory functions that take indices as input support the `is_coalesced` argument.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107638
Approved by: https://github.com/cpuhrsch
2023-08-26 07:24:29 +00:00
e00bd83124 Fix the example of torch.slice_scatter (#107849)
Fixes #107681
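A small usage sketch of the corrected semantics (shapes assumed):

```python
import torch

a = torch.zeros(8, 8)
b = torch.ones(2, 8)

# Embeds b into rows 6..7 along dim 0; returns a new tensor, a is unchanged
out = a.slice_scatter(b, start=6)
print(out[6:])  # two rows of ones
```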
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107849
Approved by: https://github.com/drisspg
2023-08-25 04:19:49 +00:00
8a7a6867b9 [PyTorch][Tensor] Introduce tensor.dim_order (#106835)
Summary:
This is a stride-based attribute for a tensor, available in Python.

This can help inspect tensors generated using `torch.empty_permuted(.., physical_layout, ...)`, where `physical_layout` should match the `dim_order` returned here. `empty_permuted` will be renamed to use dim_order as the param name in the future. It also helps the ExecuTorch export pipeline implement dim_order-based tensors.
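A hedged sketch of the attribute in action (shapes assumed):

```python
import torch

# Contiguous NCHW tensor: physical order matches logical order
print(torch.empty(2, 3, 4, 5).dim_order())  # (0, 1, 2, 3)

# channels_last tensor: the channel dim is physically innermost
t = torch.empty(2, 3, 4, 5).to(memory_format=torch.channels_last)
print(t.dim_order())  # (0, 2, 3, 1)
```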

Differential Revision: D48134476

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106835
Approved by: https://github.com/ezyang
2023-08-25 00:06:03 +00:00
e9af315e02 Fix torch.bucketize docs for "right" (#104474)
The docs correctly (i.e matching actual op behavior) state that

`right = False` means `boundaries[i-1] < input[m][n]...[l][x] <= boundaries[i]`.

However they previously stated that
`If 'right' is False (default), then the left boundary is closed.`

which contradicts the `boundaries[i-1] < input[m][n]...[l][x] <= boundaries[i]` statement.

This modifies the docs to say `... then the left boundary is OPEN.` and also clarifies that this is the opposite behavior of numpy.digitize.
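A worked example of the two conventions (values assumed):

```python
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
v = torch.tensor([3, 6, 9])

# right=False: boundaries[i-1] < v <= boundaries[i]  (left boundary open)
print(torch.bucketize(v, boundaries))              # tensor([1, 3, 4])

# right=True: boundaries[i-1] <= v < boundaries[i]  (left boundary closed)
print(torch.bucketize(v, boundaries, right=True))  # tensor([2, 3, 5])
```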

Fixes #91580
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104474
Approved by: https://github.com/aakhundov, https://github.com/svekars
2023-08-17 03:08:07 +00:00
a5d841ef01 asarray: take the default device into consideration. (#106779)
Fix: #106773

This PR makes it so `asarray` takes the default device into consideration when called with
a Python sequence as the data.
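A hedged sketch of the fixed behavior (assumes a CUDA build; any non-CPU default device shows the same effect):

```python
import torch

torch.set_default_device("cuda")

# Before the fix, a Python sequence always produced a CPU tensor;
# now the default device is honored.
print(torch.asarray([1.0, 2.0, 3.0]).device)  # cuda:0
```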
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106779
Approved by: https://github.com/rgommers, https://github.com/lezcano
2023-08-11 13:16:42 +00:00
f725e6374d doc: fix fake quantize per channel doc (#105955)
Fixes another doc bug for `fake_quantize_per_channel`.

The function doc now matches the implementation in `aten/src/ATen/native/quantized/FakeQuantPerChannelAffine.cpp` (L32 at commit e7142700ed).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105955
Approved by: https://github.com/kit1980
2023-07-26 19:17:41 +00:00
6d43c89f37 [BE]: Update Ruff to 0.0.280 (#105724)
Removes unused loop values in Python dictionary iteration. Automated fix from Ruff master.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105724
Approved by: https://github.com/ezyang, https://github.com/janeyx99
2023-07-22 23:03:34 +00:00
7b211ff8dd doc: fix fake_quantize_per_channel_affine (#105241)
Fixes #105085

Fixes an error in the formula.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105241
Approved by: https://github.com/jcaip
2023-07-22 00:49:28 +00:00
79c5e33349 [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105436
Approved by: https://github.com/malfet, https://github.com/albanD
2023-07-21 07:38:46 +00:00
64c39ece65 Fix a docstring of resolve_neg (#104151)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104151
Approved by: https://github.com/malfet
2023-07-19 03:55:20 +00:00
b88b742db8 fixed torch.manual_seed note (#105175)
Fixes https://github.com/pytorch/pytorch/issues/87509

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105175
Approved by: https://github.com/ezyang
2023-07-13 23:43:44 +00:00
f987d11fa7 Reland: Make torch.empty* deterministic by filling with NaN or max int (#104995)
Relands #101849 after #104302 reverted it.

torchrec PR https://github.com/pytorch/torchrec/pull/1269 fixes the torchrec failure that caused #101849 to be reverted

Part of #82004
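A minimal sketch of the relanded behavior:

```python
import torch

torch.use_deterministic_algorithms(True)

# Floating-point tensors are filled with NaN, integer tensors with the
# dtype's max value, so reads of "uninitialized" memory are reproducible.
print(torch.empty(3))                     # tensor([nan, nan, nan])
print(torch.empty(2, dtype=torch.int32))  # tensor([2147483647, 2147483647], dtype=torch.int32)
```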

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104995
Approved by: https://github.com/albanD
2023-07-13 22:18:03 +00:00
3ff111a4b4 doc: fix fake_quantize_per_tensor_affine docs (#104453)
Fixes #82800

Fixes the wrong `fake_quantize_per_tensor_affine` example and the wrong formula.
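A hedged restatement of the corrected formula as plain tensor ops (parameter values assumed for illustration):

```python
import torch

def fake_quant_sketch(x, scale, zero_point, quant_min, quant_max):
    # Quantize, clamp to the quantized range, then dequantize,
    # all while staying in floating point.
    q = torch.round(x / scale + zero_point).clamp(quant_min, quant_max)
    return (q - zero_point) * scale

x = torch.tensor([-1.0, 0.0, 1.0])
out = torch.fake_quantize_per_tensor_affine(x, 0.1, 0, -128, 127)
assert torch.allclose(out, fake_quant_sketch(x, 0.1, 0, -128, 127))
```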

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104453
Approved by: https://github.com/kit1980
2023-06-30 22:59:00 +00:00
a78bddac01 Revert D46920584: Multisect successfully blamed D46920584 for test or build failures (#104269) (#104302)
Summary:

This diff is reverting D46920584
D46920584: Make `torch.empty*` deterministic by filling with NaN or max int value (#101849) by generatedunixname499836121 has been identified as causing the following test or build failures:

Tests affected:
- [torchrec/distributed/composable/tests:test_fsdp - torchrec.distributed.composable.tests.test_fsdp.FullyShardTest: test_composable_checkpoint](https://www.internalfb.com/intern/test/281475062923125/)

Here's the Multisect link:
https://www.internalfb.com/multisect/2341386
Here are the tasks that are relevant to this breakage:

We're generating a revert to back out the changes in this diff; please note the backout may land if someone accepts it.

If you believe this diff has been generated in error you may Commandeer and Abandon it.

Test Plan: NA

Reviewed By: huydhn, osalpekar

Differential Revision: D46997394

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104302
Approved by: https://github.com/osalpekar
2023-06-29 20:20:58 +00:00
a6b9a61a6a Added a note to torch.round doc to indicate the return type (#97227)
Added a note to the torch.round doc to indicate the return type of the output tensor.

Fixes #89056

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97227
Approved by: https://github.com/albanD
2023-06-29 20:02:59 +00:00
2642f31e4c Make torch.empty* deterministic by filling with NaN or max int value (#101849)
Part of #82004

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101849
Approved by: https://github.com/lezcano, https://github.com/albanD, https://github.com/kulinseth
2023-06-21 02:53:22 +00:00
d52d1fd5ba add description for unexpected case (#103500)
Fixes #88547

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103500
Approved by: https://github.com/mingfeima, https://github.com/mikaylagawarecki
2023-06-20 19:02:45 +00:00
e82616d900 Add generator argument in torch.randn signature (#102075)
Fixes the documentation issue in `torch.randn`: the `generator` argument was missing from the documented signature.
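A brief sketch of the argument the signature now documents:

```python
import torch

g = torch.Generator().manual_seed(0)

a = torch.randn(2, 2, generator=g)
b = torch.randn(2, 2, generator=torch.Generator().manual_seed(0))
assert torch.equal(a, b)  # same seed, same random stream
```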

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102075
Approved by: https://github.com/kit1980, https://github.com/soulitzer
2023-06-14 23:37:19 +00:00
a0885dff98 Link torch.cat in docstring of torch.stack and vice versa (#103421)
torch.cat and torch.stack are similar enough that they should point to each other.
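A quick sketch of the difference (shapes assumed):

```python
import torch

a, b = torch.zeros(2, 3), torch.ones(2, 3)

print(torch.cat((a, b), dim=0).shape)    # torch.Size([4, 3])    existing dim grows
print(torch.stack((a, b), dim=0).shape)  # torch.Size([2, 2, 3]) new dim inserted
```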

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103421
Approved by: https://github.com/malfet, https://github.com/svekars, https://github.com/kit1980
2023-06-14 23:31:22 +00:00
2a3e45a2a8 Docs: update default device description (#101283)
Closes #101274

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101283
Approved by: https://github.com/albanD
2023-05-16 17:07:31 +00:00
b3b333205f Fix asarray doc examples. (#100971)
Fixes issue raised on [PyTorch discuss](https://discuss.pytorch.org/t/confused-on-an-example-on-pytorch-official-documentation/178785).

**Summary:** the examples in the `asarray` docs have a few mistakes that make them not work. This PR fixes those.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100971
Approved by: https://github.com/Skylion007, https://github.com/lezcano
2023-05-12 11:52:10 +00:00
2a6a159c0c Modify repeat_interleave docs to highlight potential overloading (#99650)
Fixes #99259, drawing attention to the fact that `input` is optional by putting a variation of the method signature at the top of the file and by modifying the input arguments.

Note that I'm not certain how to get the additional signature at the same level of indentation as the first one, but I think this change does a good job of highlighting that the argument is optional.

Would be happy to iterate on this if there are any issues.
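A hedged sketch of the two overloads in question:

```python
import torch

# Overload with an input tensor: each element is repeated
print(torch.repeat_interleave(torch.tensor([1, 2, 3]), repeats=2))
# tensor([1, 1, 2, 2, 3, 3])

# Overload without an input: repeats alone yields the expanded indices
print(torch.repeat_interleave(torch.tensor([1, 2, 3])))
# tensor([0, 1, 1, 2, 2, 2])
```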

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99650
Approved by: https://github.com/mikaylagawarecki
2023-05-01 17:53:03 +00:00
c11441fda3 Update torch.arange doc. (#99963)
To always exclude `end` without being affected by rounding error, `epsilon` should be subtracted rather than added.
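An illustrative sketch of the end-exclusion guarantee (values assumed):

```python
import torch

# `end` is always excluded: values satisfy start <= v < end
print(torch.arange(0, 10, 3))  # tensor([0, 3, 6, 9])

# With fractional steps, floating-point rounding can still make the last
# element land very close to `end`; torch.linspace is safer in that case.
print(torch.arange(0.0, 1.0, 0.25))  # tensor([0.0000, 0.2500, 0.5000, 0.7500])
```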

Fixes #99853

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99963
Approved by: https://github.com/kit1980
2023-04-26 04:18:56 +00:00
5c16dfd708 Add half to real param description in torch.complex docs (#99938)
Fixes #89733 according to the issue description
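A short sketch of the now-documented case:

```python
import torch

real = torch.tensor([1.0, 2.0], dtype=torch.half)
imag = torch.tensor([3.0, 4.0], dtype=torch.half)

# half inputs yield a complex32 result, the case the docs now mention
print(torch.complex(real, imag).dtype)  # torch.complex32
```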

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99938
Approved by: https://github.com/Skylion007
2023-04-25 21:23:16 +00:00
efc90c797d improvements to torch.gradient docs (#98824)
Fixes #98693

Clarified docs for `torch.gradient` on `h_l` and how the gradient is computed. For the mathematical equations, I followed this reference: https://www.dam.brown.edu/people/alcyew/handouts/numdiff.pdf.
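A hedged restatement of the second-order estimate described there, with $h_l$ and $h_r$ the spacings to the left and right neighbors of $x$:

```latex
f'(x) \approx \frac{h_l^2 f(x + h_r) + (h_r^2 - h_l^2) f(x) - h_r^2 f(x - h_l)}{h_r h_l (h_r + h_l)}
```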

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98824
Approved by: https://github.com/ngimel, https://github.com/kit1980
2023-04-12 23:43:40 +00:00
9d04d376d8 docs: Match open bracket with close bracket in unsqueeze (#95215)
I was going to fix something else that I thought was an issue, but it isn't, so I'm just leaving this tiny thing in case it's wanted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95215
Approved by: https://github.com/Skylion007, https://github.com/kit1980
2023-02-24 03:56:59 +00:00
ce950b412f Reland "Add torch.empty_permuted (#95069)" (#95208)
This reverts commit 92e03cd583c027a4100a13682cf65771b80569da.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95208
Approved by: https://github.com/albanD
2023-02-21 18:02:48 +00:00
92e03cd583 Revert "Add torch.empty_permuted (#95069)"
This reverts commit bedeb1f014795c497f11942ff4c772431d1c157a.

Reverted https://github.com/pytorch/pytorch/pull/95069 on behalf of https://github.com/jeanschmidt due to Breaking internal builds. More in https://fburl.com/phabricator/ztrxrroq
2023-02-21 12:05:20 +00:00
bedeb1f014 Add torch.empty_permuted (#95069)
torch.empty_permuted is a generalized version of torch.empty(memory_format=...), where you can pass an arbitrary physical layout as a tuple of dims to allow you to setup dense, non-overlapping tensors with non-standard memory format. Check the docblock for a full description of semantics.

The initial motivation for this PR is with guard-less unbacked SymInts. Traditionally, the way we allocate dense tensors with arbitrary layout is with `empty_strided`. However, `empty_strided` does not know that the given strides are actually contiguous, and must test this manually to find out if it is the case. With `empty_permuted`, this is known statically to be the case and helps us skip some 0/1 guards.

However, I also think torch.empty_permuted is a useful API in its own right. It is technically possible to simulate this with an empty and a permute; however, there are some downsides:

* The manual incant is tricky to work out. To allocate an NHWC tensor, the invocation is `torch.empty(N, H, W, C).permute(0, 3, 1, 2)`; the permute call has to take NHWC to NCHW, and is the *inverse* of the permutation people are typically thinking of when they talk about NHWC (0, 2, 3, 1). Instead, torch.empty_permuted lets you write `torch.empty_permuted((N, C, H, W), (0, 2, 3, 1))`, providing the intuitive permutation. It can literally be read off as NHWC if you assign N=0, C=1, H=2, W=3.
* An empty(requires_grad=True).permute() is no longer a leaf tensor. You can force it to be a leaf with a detach(), but it is more straightforward and less error-prone to allow directly allocating a tensor with the correct permutation.

It is also technically possible to simulate this with empty_strided. However, this requires the user to manually compute the contiguous output strides and is bad from a reduction of guards perspective. For what it's worth, this is one of the more common uses of as_strided in the wild, and it would be nice to get rid of it.

A nice enhancement of this feature would be to accept `physical_layout` anywhere `memory_format` is accepted. However, this would be a pretty involved change, so I'm doing the easy thing instead.
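A small sketch contrasting the two spellings (shapes assumed):

```python
import torch

N, C, H, W = 2, 3, 4, 5

a = torch.empty_permuted((N, C, H, W), (0, 2, 3, 1))  # logical NCHW, physical NHWC
b = torch.empty(N, H, W, C).permute(0, 3, 1, 2)       # the manual incant

assert a.shape == b.shape == (N, C, H, W)
assert a.stride() == b.stride()  # both dense in NHWC physical order
```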

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95069
Approved by: https://github.com/malfet, https://github.com/ngimel, https://github.com/albanD, https://github.com/dagitses
2023-02-20 00:23:10 +00:00
fba13d94a1 Remove deprecated torch.symeig (#70988)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.symeig`.

- [x] XLA PR: https://github.com/pytorch/xla/pull/4498
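A hedged migration sketch for existing callers:

```python
import torch

A = torch.randn(3, 3)
A = A + A.T  # symmetric input

# Old (removed): e, v = torch.symeig(A, eigenvectors=True)
# Replacement: eigenvalues in ascending order, eigenvectors as columns
L, Q = torch.linalg.eigh(A)
```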

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70988
Approved by: https://github.com/lezcano, https://github.com/kit1980, https://github.com/malfet
2023-01-31 11:59:11 +00:00
00b3f22210 Add missing scalar example in docs of torch.where (#93145)
[`torch.where(condition, x, y)`](https://pytorch.org/docs/stable/generated/torch.where.html) accepts `x` and `y` as either `Tensor` or Scalar, but the Scalar example is missing in the docs. I simply add the example.
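The kind of example being added, sketched here (values assumed):

```python
import torch

x = torch.tensor([-1.0, 0.5, 2.0])

# y passed as a Python scalar instead of a tensor
print(torch.where(x > 0, x, 0.0))  # tensor([0.0000, 0.5000, 2.0000])
```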

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93145
Approved by: https://github.com/ngimel
2023-01-28 03:46:44 +00:00
acdd462b1a Revert "Remove deprecated torch.symeig (#70988)"
This reverts commit d70ed68162521341060b06985620cdbef04a8fa9.

Reverted https://github.com/pytorch/pytorch/pull/70988 on behalf of https://github.com/kit1980 due to Failing XLA tests, forward fix unsuccessful
2023-01-24 19:03:40 +00:00
3f64c96655 asarray: Add support for NumPy scalars (#90914)
Follow-up from: Quansight-Labs/numpy_pytorch_interop#3

This PR adds support for NumPy scalars for `torch.asarray`.

**Before:** `asarray` treats the scalar as an object that implements the buffer protocol, and thus interprets the data as the default data type (`float32`):

```python
>>> torch.asarray(numpy.float64(0.5))
tensor([0.0000, 1.7500])
```

**After:** `asarray` identifies the NumPy scalar and does the "right" thing, i.e. creates a 0-dimensional tensor from the NumPy scalar that doesn't share its memory:

```python
>>> torch.asarray(numpy.float64(0.5))
tensor(0.5000, dtype=torch.float64)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90914
Approved by: https://github.com/lezcano, https://github.com/mruberry
2023-01-24 08:09:30 +00:00
d70ed68162 Remove deprecated torch.symeig (#70988)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.symeig`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70988
Approved by: https://github.com/lezcano, https://github.com/kit1980
2023-01-23 22:51:40 +00:00
30876229a7 [mta] Backward of unary foreach functions (#89591)
As per the title, this PR defines the backward of those functions.

This doesn't implement forward-mode automatic differentiation, as the current codegen (`tools/autograd/gen_variable_type.py`, L1513 at commit a747326423) doesn't seem to handle `ArrayRef<Tensor>`.

Related:
- https://github.com/pytorch/pytorch/issues/53796
- https://github.com/pytorch/pytorch/issues/58833

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89591
Approved by: https://github.com/albanD
2023-01-23 08:28:06 +00:00
fb1427ea8f squeeze: allow squeezing multiple dimensions at once (#89017)
Ref #70924

This addresses part 1 of the issue, allowing `torch.squeeze` to be
passed a tuple of dimensions. e.g.
```python
x.squeeze(0).squeeze(0)
```
can now be written
```python
x.squeeze((0, 1))
```
(assuming x has at least 2 dimensions)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89017
Approved by: https://github.com/albanD
2023-01-17 14:20:15 +00:00
a4a0195c6c Fix torch.where signature mismatch that was caused by torchgen (#91627)
Fixes #91003

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91627
Approved by: https://github.com/albanD
2023-01-13 16:17:55 +00:00
b3e4f5029b Add check-sparse-tensor-invariants flag to Context - 2nd try. (#92094)
This PR is a copy of https://github.com/pytorch/pytorch/pull/90849, whose merge was reverted.

The PR adds a "check sparse tensor invariants" flag to Context that, when enabled, triggers sparse tensor data invariant checks in the unsafe methods for constructing sparse COO/CSR/CSC/BSR/BSC tensors. The feature includes the following changes to the UI:

`torch.sparse.check_sparse_tensor_invariants` class provides different ways to enable/disable the invariant checking.

`torch.sparse_coo/csr/csc/bsr/bsc/compressed_tensor` functions have a new optional argument `check_invariants` to enable/disable the invariant checks explicitly. When the `check_invariants` argument is specified, the global state of the feature is temporarily overridden.
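A brief sketch of both entry points (values assumed):

```python
import torch

i = torch.tensor([[0, 1, 1],
                  [2, 0, 2]])
v = torch.tensor([3.0, 4.0, 5.0])

# Per-call opt-in via the new keyword argument
t = torch.sparse_coo_tensor(i, v, (2, 3), check_invariants=True)

# Scoped opt-in via the context manager class
with torch.sparse.check_sparse_tensor_invariants():
    t = torch.sparse_coo_tensor(i, v, (2, 3))
```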

The PR fixes https://github.com/pytorch/pytorch/issues/90833

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92094
Approved by: https://github.com/cpuhrsch
2023-01-13 14:50:33 +00:00
c7a22bb7c7 Revert "Add check-sparse-tensor-invariants flag to Context. (#90849)"
This reverts commit b9a035c1c58630f3eef5242cb4849881b8376b39.

Reverted https://github.com/pytorch/pytorch/pull/90849 on behalf of https://github.com/DanilBaibak due to Break internal build
2023-01-12 09:58:16 +00:00
b9a035c1c5 Add check-sparse-tensor-invariants flag to Context. (#90849)
This PR adds a "check sparse tensor invariants" flag to Context that, when enabled, triggers sparse tensor data invariant checks in the unsafe methods for constructing sparse COO/CSR/CSC/BSR/BSC tensors. The feature includes the following changes to the UI:

- `torch.enable_check_sparse_tensor_invariants` and `torch.is_check_sparse_tensor_invariants_enabled` functions to globally enable/disable the invariant checks and to retrieve the state of the feature, respectively
- `torch.sparse_coo/csr/csc/bsr/bsc/compressed_tensor` functions have a new optional argument `check_invariants` to enable/disable the invariant checks explicitly. When the `check_invariants` argument is specified, the global state of the feature is temporarily overridden.

The PR also fixes https://github.com/pytorch/pytorch/issues/90833

# Main issue

*The following content is outdated after merging the PRs in this ghstack but kept for the record.*

The importance of this feature is that, when enabling the invariant checks by default, say via

<details>

```
$ git diff
diff --git a/torch/__init__.py b/torch/__init__.py
index c8543057c7..19a91d0482 100644
--- a/torch/__init__.py
+++ b/torch/__init__.py
@@ -1239,3 +1239,8 @@ if 'TORCH_CUDA_SANITIZER' in os.environ:

 # Populate magic methods on SymInt and SymFloat
 import torch.fx.experimental.symbolic_shapes
+
+# temporarily enable sparse tensor arguments validation in unsafe
+# constructors:
+
+torch._C._set_check_sparse_tensor_invariants(True)
```

</details>

a massive number of test failures/errors occur in test_sparse_csr.py tests:
```
$ pytest -sv test/test_sparse_csr.py
<snip>
==== 4293 failed, 1557 passed, 237 skipped, 2744 errors in 69.71s (0:01:09) ====
```
This means that we are silently constructing sparse compressed tensors that do not satisfy the sparse tensor invariants. In particular, the following errors are raised:

```
AssertionError: "resize_as_sparse_compressed_tensor_: self and src must have the same layout" does not match "expected values to be a strided and contiguous tensor"

RuntimeError: CUDA error: device-side assert triggered

RuntimeError: `col_indices[..., crow_indices[..., i - 1]:crow_indices[..., i]] for all i = 1, ..., nrows are sorted and distinct along the last dimension values` is not satisfied.

RuntimeError: expected col_indices to be a strided and contiguous tensor

RuntimeError: expected row_indices to be a strided and contiguous tensor

RuntimeError: expected values to be a strided and contiguous tensor

RuntimeError: for_each: failed to synchronize: cudaErrorAssert: device-side assert triggered

RuntimeError: tensor dimensionality must be sum of batch, base, and dense dimensionalities (=0 + 2 + 0) but got 3
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90849
Approved by: https://github.com/amjames, https://github.com/cpuhrsch
2023-01-11 01:05:14 +00:00
2a64365a29 Fix rendering of std/var docs (#91730)
Due to the indentation, "versionchanged" is being rendered as if it were an argument.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91730
Approved by: https://github.com/albanD, https://github.com/lezcano
2023-01-05 22:17:37 +00:00
df4b3b13bc Revert "squeeze: allow squeezing multiple dimensions at once (#89017)"
This reverts commit e26cb06681f4ae92ba28c802cbea263f9a97c2ff.

Reverted https://github.com/pytorch/pytorch/pull/89017 on behalf of https://github.com/mehtanirav due to Internal breakages
2023-01-05 19:25:08 +00:00
e26cb06681 squeeze: allow squeezing multiple dimensions at once (#89017)
Ref #70924

This addresses part 1 of the issue, allowing `torch.squeeze` to be
passed a tuple of dimensions. e.g.
```python
x.squeeze(0).squeeze(0)
```
can now be written
```python
x.squeeze((0, 1))
```
(assuming x has at least 2 dimensions)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89017
Approved by: https://github.com/albanD
2023-01-04 14:40:56 +00:00