Commit Graph

242 Commits

f7b9a46880 Deprecate torch.lu
**BC-breaking note**:

This PR deprecates `torch.lu` in favor of `torch.linalg.lu_factor`.
An upgrade guide has been added to the documentation for `torch.lu`.

Note this PR DOES NOT remove `torch.lu`.
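
For reference, a minimal migration sketch (illustrative, not part of this commit):

```python
import torch

A = torch.randn(3, 3)

# Deprecated:
LU, pivots = torch.lu(A)

# Replacement (returns a named tuple that unpacks the same way):
LU, pivots = torch.linalg.lu_factor(A)
```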

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77636

Approved by: https://github.com/malfet
2022-06-07 22:50:14 +00:00
2c5bf12584 Revert "stft: remove non-center overload and python functional wrapper"
This reverts commit d23ecbfc9ac157560611b242f015743f189dbf48.

Reverted https://github.com/pytorch/pytorch/pull/73434 on behalf of https://github.com/albanD
2022-05-09 19:59:46 +00:00
1467e0dd5d Revert "Deprecate torch.lu"
This reverts commit a5bbfd94fb91c078416a99b95eb7b45d3ea81b6f.

Reverted https://github.com/pytorch/pytorch/pull/73804 on behalf of https://github.com/malfet
2022-05-09 19:06:44 +00:00
a5bbfd94fb Deprecate torch.lu
**BC-breaking note**:

This PR deprecates `torch.lu` in favor of `torch.linalg.lu_factor`.
An upgrade guide has been added to the documentation for `torch.lu`.

Note this PR DOES NOT remove `torch.lu`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73804

Approved by: https://github.com/IvanYashchuk, https://github.com/mruberry
2022-05-05 19:17:11 +00:00
dfde877c0b Add type hints for a few random functions/classes
Adds type hints for a few functions/classes that we use in [TorchGeo](https://github.com/microsoft/torchgeo).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74171
Approved by: https://github.com/jbschlosser, https://github.com/anjali411
2022-05-04 13:53:00 +00:00
d23ecbfc9a stft: remove non-center overload and python functional wrapper
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73434

Approved by: https://github.com/anjali411
2022-05-03 14:30:35 +00:00
77f23d6460 Revert "stft: remove non-center overload and python functional wrapper"
This reverts commit 6b7d89c4f13902f45bbac112dd7835387a35eec7.

Reverted https://github.com/pytorch/pytorch/pull/73434 on behalf of https://github.com/osalpekar
2022-04-23 23:21:27 +00:00
6b7d89c4f1 stft: remove non-center overload and python functional wrapper
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73434

Approved by: https://github.com/anjali411
2022-04-23 00:17:01 +00:00
80fe96c860 Revert "Add type hints for a few random functions/classes"
This reverts commit cdb40eb528c08cf401bf0fae096295a1614d6a3f.

Reverted https://github.com/pytorch/pytorch/pull/74171 on behalf of https://github.com/zengk95
2022-04-21 21:07:15 +00:00
cdb40eb528 Add type hints for a few random functions/classes
Adds type hints for a few functions/classes that we use in [TorchGeo](https://github.com/microsoft/torchgeo).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74171
Approved by: https://github.com/jbschlosser
2022-04-21 20:09:40 +00:00
39717d3034 Remove histogramdd functional wrapper
Merge once the forward compatibility period is expired for the histogramdd
operator.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74201

Approved by: https://github.com/ezyang, https://github.com/albanD
2022-04-14 20:56:24 +00:00
715e07b97f Revert "Remove histogramdd functional wrapper"
This reverts commit 8cc338e5c2e3f1b6512dc3b33d33281ac5f4357c.

Reverted https://github.com/pytorch/pytorch/pull/74201 on behalf of https://github.com/suo
2022-04-14 03:56:48 +00:00
8cc338e5c2 Remove histogramdd functional wrapper
Merge once the forward compatibility period is expired for the histogramdd
operator.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74201

Approved by: https://github.com/ezyang
2022-04-14 02:47:39 +00:00
3471b0eb3d Revert "Remove histogramdd functional wrapper"
This reverts commit 7c9017127f0c8063d03b7df0e32061016be0b045.

Reverted https://github.com/pytorch/pytorch/pull/74201 on behalf of https://github.com/malfet
2022-04-13 12:54:24 +00:00
7c9017127f Remove histogramdd functional wrapper
Merge once the forward compatibility period is expired for the histogramdd
operator.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74201

Approved by: https://github.com/ezyang
2022-04-13 03:02:59 +00:00
0a1bc5f501 Miscellaneous __torch_function__ fixes
I figured these out by unconditionally turning on a no-op torch function
mode on the test suite and then fixing errors as they showed up.  Here's
what I found:

- `_parse_to` failed an internal assert when `__torch_function__`'ed because it
  reports its name to the argument parser as "to"; added a name override
  so we know how to find the correct name

- Infix operator magic methods on Tensor did not uniformly handle
  `__torch_function__` or convert `TypeError` to `NotImplemented`.  Now, we
  always do the `__torch_function__` handling in
  `_wrap_type_error_to_not_implemented`, and your implementation of
  `__torch_function__` gets its `TypeError`s converted to `NotImplemented`
  (for better or for worse; see
  https://github.com/pytorch/pytorch/issues/75462)

- A few cases where code was testing if a Tensor was Tensor-like in the
  wrong way now use `is_tensor_like` (in grad and in distributions).
  Also updated the docs for `has_torch_function` to push people to use
  `is_tensor_like`.

- is_grads_batched was dropped from grad in handle_torch_function, now
  fixed

- Report that you have a torch function even if torch function is
  disabled if a mode is enabled.  This makes it possible for a mode
  to return NotImplemented, pass to a subclass which does some
  processing and then pass back to the mode even after the subclass
  disables __torch_function__ (so the tensors are treated "as if"
  they are regular Tensors).  This brings the C++ handling behavior
  in line with the Python behavior.

- Make the Python implementation of overloaded types computation match
  the C++ version: when torch function is disabled, there are no
  overloaded types (because they all report they are not overloaded).
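
For reference, the no-op torch function mode described above can be written roughly as follows (a minimal sketch using `torch.overrides.TorchFunctionMode` as it exists in current PyTorch; the exact test-suite hook in this PR may have differed):

```python
import torch
from torch.overrides import TorchFunctionMode

class NoopMode(TorchFunctionMode):
    def __torch_function__(self, func, types, args=(), kwargs=None):
        # Pass every call through unchanged; merely routing through the
        # mode is enough to exercise the __torch_function__ code paths.
        return func(*args, **(kwargs or {}))

with NoopMode():
    x = torch.randn(3)
    y = x + x        # infix operators also dispatch through the mode
    z = x.to("cpu")  # exercises the _parse_to path mentioned above
```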

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75484

Approved by: https://github.com/zou3519
2022-04-11 16:52:16 +00:00
c5023ea1d5 stft: Implement center padding in ATen
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73432

Approved by: https://github.com/ezyang
2022-04-01 01:14:52 +00:00
9cc848c3f8 fix typo torch.functional.py
Fixes a typo in the `torch.functional.py` documentation; this is not tracked by an existing issue.
Thanks for your time!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74796
Approved by: https://github.com/mrshenli, https://github.com/osalpekar
2022-03-30 00:45:03 +00:00
c7f9da5752 Add C++ implementation of histogramdd
This creates a `histogramdd` operator with overloads matching the `Union`
behaviour used in the functional variant. Moving it into C++ is preferred because
torch function can be handled automatically, instead of needing to differentiate
between the overloads manually.

This also adds a new return type: `std::tuple<Tensor, std::vector<Tensor>>`. For
which I've updated `wrap` to be completely generic for tuples and removed the
old manual definitions.
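
For context, the Python-level behavior being matched looks like this (a usage sketch, not code from this PR):

```python
import torch

x = torch.randn(1000, 3)  # 1000 samples in 3 dimensions
hist, bin_edges = torch.histogramdd(x, bins=[5, 5, 5])

print(hist.shape)      # torch.Size([5, 5, 5])
print(len(bin_edges))  # 3 -- one edges tensor per dimension
```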

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74200

Approved by: https://github.com/ezyang
2022-03-29 02:17:21 +00:00
905efa82ff [fix] torch.broadcast_shapes should not handle shapes with negative dimensions. (#72999)
Summary:
Hi,
The PR fixes https://github.com/pytorch/pytorch/issues/68957. It aims to include the following:
- Fixes the code in `torch/functional.py`.
- Adds the missing tests for negative input values and non-iterable inputs.

~#### TODO~
~- [x] Add OpInfo~
EDIT: `broadcast_shapes` doesn't take any tensor inputs, so we don't need an OpInfo here. Thanks, kshitij12345, for the guidance.

#### Earlier
```python
>>> shapes = [1, -12]
>>> torch.broadcast_shapes(*shapes)
torch.Size([-12])    # MUST RAISE ERROR
```

#### Now
```python
>>> shapes = [1, -12]
>>> torch.broadcast_shapes(*shapes)
RuntimeError: Trying to create tensor with negative dimension -12: [-12]
```

#### NumPy's Output
```python
>>> shapes = [1, -12]
>>> numpy.broadcast_shapes(*shapes)
ValueError: negative dimensions are not allowed
```

#### `torch.broadcast_tensor()` Output
As mentioned in the [doc](https://pytorch.org/docs/stable/generated/torch.broadcast_shapes.html):
```python
>>> shapes = [1, -12]
>>> torch.broadcast_tensors(*map(torch.empty, shapes))[0].shape
RuntimeError: Trying to create tensor with negative dimension -12: [-12]
```

Looking forward to hearing from you and your questions. Thanks! :)

cc: mruberry kshitij12345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72999

Reviewed By: albanD

Differential Revision: D34543995

Pulled By: ngimel

fbshipit-source-id: e32b1f266500a5e002c8f353b1e02f44c23d4f6e
(cherry picked from commit a6253ce6bb8455a3c89398f12b7d790a0b7e8d95)
2022-03-03 18:33:06 +00:00
9ad0578c59 Remove istft python functional wrapper (#71993)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71993

I've kept the symbol `torch.functional.istft` because it looks like public API,
but it could just as easily be moved to `_torch_docs.py`.

This is split into its own PR until TorchScript starts recognizing `input`
as a keyword argument.

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D34461399

Pulled By: anjali411

fbshipit-source-id: 3275fb74bef2fa0e030e61f7ee188daf8b5b2acf
(cherry picked from commit 5b4b083de894eba9ab16cea53b77746bcfd0fe32)
2022-03-03 17:31:00 +00:00
0947521268 Update stft tests to support latest librosa (#72833)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72833

Closes #72550

The latest version of librosa breaks backward compatibility in two
ways:
- Everything except the input tensor is now keyword-only
- `pad_mode` now defaults to `'constant'` for zero-padding

https://librosa.org/doc/latest/generated/librosa.stft.html

This changes the test to match the old behavior even when using the new
library, and updates the documentation to explicitly say that
`torch.stft` doesn't exactly follow the librosa API. This was always
true (`torch.stft` has new arguments, a different default window, and
supports complex input), but it can't hurt to be explicit.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D34386897

Pulled By: mruberry

fbshipit-source-id: 6adc23f48fcb368dacf70602e9197726d6b7e0c1
(cherry picked from commit b5c5ed41963022c9f26467279ed098fb905fa00a)
2022-02-23 02:31:42 +00:00
a35b4b49d2 Add linalg.lu_factor (#66933)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66933

This PR exposes `torch.lu` as `torch.linalg.lu_factor` and
`torch.linalg.lu_factor_ex`.

This PR also adds support for matrices with zero-sized dimensions, both
in the matrix itself and in the batch dimensions. Note that in this case
the function simply returns empty tensors of the correct size.

We add a test and an OpInfo for the new function.

This PR also adds documentation for this new function, in line with
the documentation in the rest of `torch.linalg`.
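
A usage sketch of the new functions (illustrative):

```python
import torch

A = torch.randn(2, 3, 3)  # a batch of matrices

LU, pivots = torch.linalg.lu_factor(A)           # errors out on singular inputs
LU, pivots, info = torch.linalg.lu_factor_ex(A)  # returns LAPACK info codes instead

# Zero-sized inputs are supported and return empty tensors of the right shape:
LU0, piv0 = torch.linalg.lu_factor(torch.empty(0, 3, 3))
```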

Fixes https://github.com/pytorch/pytorch/issues/56590
Fixes https://github.com/pytorch/pytorch/issues/64014

cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D32834069

Pulled By: mruberry

fbshipit-source-id: 51ef12535fa91d292f419acf83b800b86ee9c7eb
2022-01-05 20:32:12 -08:00
6ae34ea6f8 Revert D32521980: Add linalg.lu_factor
Test Plan: revert-hammer

Differential Revision:
D32521980 (b10929a14a)

Original commit changeset: 26a49ebd87f8

fbshipit-source-id: e1a6bb9c2ece9bd78190fe17e16a46e3358c5c82
2021-11-28 17:22:15 -08:00
b10929a14a Add linalg.lu_factor (#66933)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66933

This PR exposes `torch.lu` as `torch.linalg.lu_factor` and
`torch.linalg.lu_factor_ex`.

This PR also adds support for matrices with zero-sized dimensions, both
in the matrix itself and in the batch dimensions. Note that in this case
the function simply returns empty tensors of the correct size.

We add a test and an OpInfo for the new function.

This PR also adds documentation for this new function, in line with
the documentation in the rest of `torch.linalg`.

Fixes https://github.com/pytorch/pytorch/issues/56590
Fixes https://github.com/pytorch/pytorch/issues/64014

cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D32521980

Pulled By: mruberry

fbshipit-source-id: 26a49ebd87f8a41472f8cd4e9de4ddfb7f5581fb
2021-11-27 17:52:48 -08:00
5c3529a86d [lint] small pass to make lint clean (#68367)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68367

- bmm_test.py was using syntax not allowed in 3.6
- Some suppressions were not placed on the correct line.

With this change,
```
lintrunner --paths-cmd='git grep -Il .'
```
passes successfully.

Test Plan: Imported from OSS

Reviewed By: janeyx99, mrshenli

Differential Revision: D32436644

Pulled By: suo

fbshipit-source-id: ae9300c6593d8564fb326822de157d00f4aaa3c2
2021-11-16 10:27:00 -08:00
7b0408684b Fix linter (#67122)
Summary:
Fixes regression introduced by 7e5aa0d35a

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67122

Reviewed By: seemethere

Differential Revision: D31872569

Pulled By: malfet

fbshipit-source-id: ada0137db9a46cbec573489c9c37a94f3a7576ae
2021-10-22 16:02:36 -07:00
7e5aa0d35a fixed unique arguments documentation (#66132)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66132

Differential Revision: [D31397746](https://our.intern.facebook.com/intern/diff/D31397746/)

<img width="875" alt="Screen Shot 2021-10-05 at 12 10 39 PM" src="https://user-images.githubusercontent.com/17888388/136276286-3df20681-7b7a-4a91-97d6-4f1ac3722121.png">

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D31734476

Pulled By: samdow

fbshipit-source-id: 8999443c7f9b24394d7543652b8350261c1f8b3a
2021-10-22 14:50:02 -07:00
33790c4e06 Implement histogramdd on CPU (#65318)
Summary:
Implements `torch.histogramdd` analogous to `numpy.histogramdd`.

Builds on https://github.com/pytorch/pytorch/pull/58780, generalizing the existing `torch.histogram` kernel to handle D-dimensional inputs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65318

Reviewed By: soulitzer

Differential Revision: D31654555

Pulled By: saketh-are

fbshipit-source-id: 14b781fac0fd3698b052dbd6f0fda46e50d4c5f1
2021-10-21 16:09:31 -07:00
aaffcfe9cd implement "xy" indexing for torch.meshgrid (#62724)
Summary:
This is step 4/7 of https://github.com/pytorch/pytorch/issues/50276. This allows the use of `"xy"` indexing but doesn't change any defaults.
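
For illustration, the difference between the two modes (a sketch):

```python
import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5])

gx, gy = torch.meshgrid(x, y, indexing="ij")    # matrix indexing: shapes (3, 2)
gx2, gy2 = torch.meshgrid(x, y, indexing="xy")  # Cartesian indexing: shapes (2, 3)
```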

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62724

Reviewed By: heitorschueroff

Differential Revision: D30995290

Pulled By: dagitses

fbshipit-source-id: 08a6a6144b20bc019f68bc3c52e3bbf967976d8f
2021-09-17 08:31:17 -07:00
2c57bbf521 add support for indexing to meshgrid (#62722)
Summary:
This is step 3/7 of https://github.com/pytorch/pytorch/issues/50276. It only adds support for the argument but doesn't implement new indexing modes yet.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62722

Test Plan:
Verified this is not FC breaking by adding logging to both meshgrid
overloads and then calling meshgrid twice:

`meshgrid(*tensors)`
  and
`meshgrid(*tensors, indexing='ij')`

This confirmed that the former signature triggered the original native
function and the latter signature triggered the new native function.

Reviewed By: H-Huang

Differential Revision: D30394313

Pulled By: dagitses

fbshipit-source-id: e265cb114d8caae414ee2305dc463b34fdb57fa6
2021-09-16 09:59:49 -07:00
ff6b475d4a [fix] don't expose unique_dim in torch (#63080)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/62793

This is mostly a quick fix. I think the more correct fix could be updating `unique_dim` to `_unique_dim`, which could be BC-breaking for C++ users (maybe). Maybe there is something else I am missing.

~~Not sure how to add a test for it.~~ Have tested it locally.

We can add a test like the following. Tested this locally: it currently fails but passes with the fix.
```python
def test_wildcard_import(self):
    exec('from torch import *')
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63080

Reviewed By: gchanan

Differential Revision: D30738711

Pulled By: zou3519

fbshipit-source-id: b86d0190e45ba0b49fd2cffdcfd2e3a75cc2a35e
2021-09-14 18:19:17 -07:00
2c258d91cc Fix torch.istft length mismatch and window runtime error (#63469)
Summary:
The PR fixes two issues:
- See https://github.com/pytorch/pytorch/issues/62747 and https://github.com/pytorch/audio/issues/1409: the lengths mismatch when the given ``length`` parameter is longer than expected. Adds padding logic consistent with librosa.
- See https://github.com/pytorch/pytorch/issues/62323: the current implementation checks that the min value of window_envelop.abs() is greater than zero. In librosa, the signal is normalized only at non-zero values, via indexing, like:
```
approx_nonzero_indices = ifft_window_sum > util.tiny(ifft_window_sum)
y[approx_nonzero_indices] /= ifft_window_sum[approx_nonzero_indices]
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63469

Reviewed By: fmassa

Differential Revision: D30695827

Pulled By: nateanl

fbshipit-source-id: d034e53f0d65b3fd1dbd150c9c5acf3faf25a164
2021-09-02 09:31:47 -07:00
d4593d9d08 document why wrappers exist in torch.functional (#62847)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/62844.

These wrappers are not super obvious, but ultimately stem from the lack of support for functions with variadic args in native_functions.yaml. https://github.com/pytorch/pytorch/issues/62845 tracks that issue.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62847

Reviewed By: VitalyFedyunin

Differential Revision: D30305016

Pulled By: dagitses

fbshipit-source-id: 716fcecb0417b770bc92cfd8c54f7ead89070896
2021-08-18 11:51:21 -07:00
0f2f6a79cb clarify the documentation of torch.meshgrid (#62977)
Summary:
Also warn about the behavior differences from `numpy.meshgrid`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62977

Reviewed By: mruberry, ngimel

Differential Revision: D30220930

Pulled By: dagitses

fbshipit-source-id: ae6587b41792721cae2135376c58121b4634e296
2021-08-18 04:01:22 -07:00
22f78144c7 Extends warning on norm docs (#63310)
Summary:
torch.norm has a couple of documentation issues, like https://github.com/pytorch/pytorch/issues/44552 and https://github.com/pytorch/pytorch/issues/38595, but since it's deprecated this PR simply clarifies that the documentation (and implementation) of torch.norm may be incorrect. This should be additional encouragement for users to migrate to torch.linalg.vector_norm and torch.linalg.matrix_norm.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63310

Reviewed By: ngimel

Differential Revision: D30337997

Pulled By: mruberry

fbshipit-source-id: 0fdcc438f36e4ab29e21e0a64709e4f35a2467ba
2021-08-16 22:23:45 -07:00
dbcfd7739f Make torch.lu differentiable for wide/tall inputs + jit (#61564)
Summary:
As per title.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61564

Reviewed By: astaff

Differential Revision: D30338136

Pulled By: mruberry

fbshipit-source-id: f01436fc90980544cdfa270feee16bb3dda21b93
2021-08-16 11:40:57 -07:00
acdad8bc63 [docs] Merge note block in torch.lu documentation (#63156)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63156

**Summary**
This commit merges the four successive `Note` blocks that appear in the
documentation for `torch.lu`. Each one only has one line in it, so all
of them have been merged into one block with a bulleted list that
contains the original items.

**Test Plan**
Continuous integration.

*Before*
<img width="888" alt="Captura de Pantalla 2021-08-12 a la(s) 10 48 39 a  m" src="https://user-images.githubusercontent.com/4392003/129244443-b7d1594e-8833-4c20-a911-e1bf7ca88a8d.png">

*After*
<img width="932" alt="Captura de Pantalla 2021-08-12 a la(s) 10 48 46 a  m" src="https://user-images.githubusercontent.com/4392003/129244462-1f39dcdb-90e0-4fd9-a95f-343b0b6be1f1.png">

**Fixes**
This commit fixes #62339.

Test Plan: Imported from OSS

Reviewed By: navahgar, pbelevich

Differential Revision: D30292633

Pulled By: SplitInfinity

fbshipit-source-id: cb9071165629bfe7316b1d2fe952e4354c75d48f
2021-08-13 12:11:35 -07:00
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
a46d4212bf Allow dims=0 in torch.tensordot call (#61331)
Summary:
In one of my previous PRs that rewrote the `tensordot` implementation, I mistakenly treated empty `dims_a` and `dims_b` as illegal values. This turns out not to be true: empty `dims_a` and `dims_b` are supported, and in fact common when `dims` is passed as an integer. This PR removes the unnecessary check.
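
For illustration (a sketch of the now-allowed call):

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(4, 5)

# dims=0 contracts nothing, i.e. it computes an outer product:
print(torch.tensordot(a, b, dims=0).shape)  # torch.Size([2, 3, 4, 5])
```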

Fixes https://github.com/pytorch/pytorch/issues/61096

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61331

Reviewed By: eellison

Differential Revision: D29578910

Pulled By: gmagogsfm

fbshipit-source-id: 96e58164491a077ddc7a1d6aa6ccef8c0c9efda2
2021-07-10 17:05:20 -07:00
4e347f1242 [docs] Fix backticks in docs (#60474)
Summary:
There is a very common error when writing docs: One forgets to write a matching `` ` ``, and something like ``:attr:`x` `` is rendered in the docs. This PR fixes most (all?) of these errors (and a few others).

I found these running ``grep -r ">[^#<][^<]*\`"`` on the `docs/build/html/generated` folder. The regex finds an HTML tag that does not start with `#` (as python comments in example code may contain backticks) and that contains a backtick in the rendered HTML.

This regex has not given any false positive in the current codebase, so I am inclined to suggest that we should add this check to the CI. Would this be possible / reasonable / easy to do malfet ?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60474

Reviewed By: mrshenli

Differential Revision: D29309633

Pulled By: albanD

fbshipit-source-id: 9621e0e9f87590cea060dd084fa367442b6bd046
2021-06-24 06:27:41 -07:00
4caca7a15b Improved torch.einsum testing and fixed bug (#59731)
Summary:
Improved torch.einsum testing and fixed a bug where lower case letters appeared before upper case letters in the sorted order, which is inconsistent with NumPy.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59731

Reviewed By: SplitInfinity, ansley

Differential Revision: D29183078

Pulled By: heitorschueroff

fbshipit-source-id: a33980d273707da2d60a387a2af2fa41527ddb68
2021-06-17 04:48:47 -07:00
58412740ae Added doc for torch.einsum sublist format (#57038)
Summary:
Adds documentation for the new sublist format for `torch.einsum`

closes https://github.com/pytorch/pytorch/issues/21412

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57038

Reviewed By: mruberry

Differential Revision: D28994431

Pulled By: heitorschueroff

fbshipit-source-id: 3dfb154fe6e4c440ac67c2dd92727bb5ecfe289e
2021-06-10 08:01:56 -07:00
72ae924fad Added sublist support for torch.einsum (#56625)
Summary:
This PR adds an alternative way of calling `torch.einsum`. Instead of specifying the subscripts as letters in the `equation` parameter, one can now specify the subscripts as a list of integers as in `torch.einsum(operand1, subscripts1, operand2, subscripts2, ..., [subscripts_out])`. This would be equivalent to `torch.einsum('<subscripts1>,<subscripts2>,...,->[<subscript_out>]', operand1, operand2, ...)`
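
For illustration, the two equivalent forms (a sketch):

```python
import torch

a = torch.randn(3, 4)
b = torch.randn(4, 5)

# Sublist form: integers label the subscripts instead of letters.
c = torch.einsum(a, [0, 1], b, [1, 2], [0, 2])

# Equivalent equation-string form:
c2 = torch.einsum('ij,jk->ik', a, b)
```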

TODO
- [x] Update documentation
- [x] Add more error checking
- [x] Update tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56625

Reviewed By: zou3519

Differential Revision: D28062616

Pulled By: heitorschueroff

fbshipit-source-id: ec50ad34f127210696e7c545e4c0675166f127dc
2021-05-21 08:36:45 -07:00
9123229684 Cleanup functional.py after lu_unpack was removed (#58669)
Summary:
Remove code in functional.py that became unused after PR c790fd2bf8

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58669

Reviewed By: driazati

Differential Revision: D28572377

Pulled By: heitorschueroff

fbshipit-source-id: c90d80ead5f3d69100667488bc6b14ef54b95b54
2021-05-20 13:06:30 -07:00
c790fd2bf8 ATen lu_unpack. Required for making torch.lu_solve differentiable. (#46913)
Summary:
Backward methods for `torch.lu` and `torch.lu_solve` require the `torch.lu_unpack` method.
However, while `torch.lu` is a Python wrapper over a native function (so its gradient is
implemented via `autograd.Function`), `torch.lu_solve` is a native function and cannot
access `torch.lu_unpack`, which is implemented in Python.

Hence this PR presents a native (ATen) `lu_unpack` version. With this function it is also possible to update the gradients for `torch.lu` so that backward+JIT is supported (there is no JIT for `autograd.Function`).
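
For context, a usage sketch of `torch.lu_unpack` (using the `torch.lu` API current at the time):

```python
import torch

A = torch.randn(3, 3)
LU, pivots = torch.lu(A)
P, L, U = torch.lu_unpack(LU, pivots)

print(torch.allclose(P @ L @ U, A))  # True: recovers the original matrix
```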

~~The interface for this method is different from the original `torch.lu_unpack`, so it is decided to keep it hidden.~~

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46913

Reviewed By: albanD

Differential Revision: D28355725

Pulled By: mruberry

fbshipit-source-id: 281260f3b6e93c15b08b2ba66d5a221314b00e78
2021-05-11 22:53:21 -07:00
0da5421837 Doc deprecate norm and add seealso to linalg.norm (#57986)
Summary:
**BC-breaking note**

This PR updates the deprecation notice for torch.norm to point users to the new torch.linalg.vector_norm and torch.linalg.matrix_norm functions.
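
A minimal migration sketch (illustrative):

```python
import torch

x = torch.randn(4)
torch.linalg.vector_norm(x, ord=2)      # instead of torch.norm(x)

M = torch.randn(3, 3)
torch.linalg.matrix_norm(M, ord='fro')  # instead of torch.norm(M, 'fro')
```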

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57986

Reviewed By: nikithamalgifb

Differential Revision: D28353625

Pulled By: heitorschueroff

fbshipit-source-id: 5de77d89f0e84945baa5fea91f73918dc7eeafd4
2021-05-11 12:02:12 -07:00
43f6deb6e4 Deprecate chain_matmul (#57735)
Summary:
This one's easy. I also included a bugfix.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57735

Reviewed By: bdhirsh

Differential Revision: D28318277

Pulled By: mruberry

fbshipit-source-id: c3c4546a11ba5b555b99ee79b1ce6c0649fa7323
2021-05-11 00:09:36 -07:00
3c87fe9b14 Revert D28117714: [pytorch][PR] ATen lu_unpack. Required for making torch.lu_solve differentiable.
Test Plan: revert-hammer

Differential Revision:
D28117714 (5c67d8dfd3)

Original commit changeset: befd33db12ec

fbshipit-source-id: 295b2134935542a903a73f90a7998239dfe6cc81
2021-05-09 23:20:06 -07:00