Commit Graph

408 Commits

3113a1de4a Fix some tensor operators to return NotImplemented for invalid inputs (#58216)
Summary:
Same as https://github.com/pytorch/pytorch/issues/57934. (cc/ albanD)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58216

Reviewed By: ailzhang

Differential Revision: D28494886

Pulled By: albanD

fbshipit-source-id: 380205867ee1cde90e1c6fcfe2a31749e1243530
2021-05-19 13:09:57 -07:00
ad97fd8031 Support symbolic diff for leaky_relu (#58337)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58337

Supports symbolic differentiation for `leaky_relu`.
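
A minimal sketch, assuming the standard leaky_relu derivative, of what the symbolic gradient computes (not the actual JIT graph):

```python
import torch
import torch.nn.functional as F

x = torch.randn(5, requires_grad=True)
F.leaky_relu(x, negative_slope=0.01).sum().backward()

# d/dx leaky_relu(x) is 1 where x > 0 and negative_slope elsewhere.
xd = x.detach()
manual = torch.where(xd > 0, torch.ones_like(xd), torch.full_like(xd, 0.01))
assert torch.allclose(x.grad, manual)
```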

Test Plan:
test/test_jit.py
test/test_ops.py

Reviewed By: Krovatkin

Differential Revision: D28458898

fbshipit-source-id: bdde74d689d2c2ea1f59507456c2efa4e38de1cc
2021-05-18 14:13:40 -07:00
9afe9fba29 Reland OpInfo support for forward AD (#58304)
Summary:
Third attempt to land this. Using the ci-all label to ensure we test everything.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58304

Reviewed By: heitorschueroff

Differential Revision: D28474343

Pulled By: albanD

fbshipit-source-id: 8230fa3c0a8d3633f09999e7c2f47dbdc5fe57e9
2021-05-17 12:33:27 -07:00
4bcaa5ae20 Revert D28412496: Revert "Revert D28387767: Add forward AD test for op info"
Test Plan: revert-hammer

Differential Revision:
D28412496 (4f28c0b590)

Original commit changeset: 5b8e30b5e807

fbshipit-source-id: 5a47aad4d5428e97e2d2b4acb4192909360870cd
2021-05-14 08:26:03 -07:00
4f28c0b590 Revert "Revert D28387767: Add forward AD test for op info" (#58230)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58230

This reverts commit f88297c66bd36d075e9d50eb09a81bea74a669c6.

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D28412496

Pulled By: albanD

fbshipit-source-id: 5b8e30b5e80771dedf999c3aaa9791fc9026f8c1
2021-05-14 06:55:44 -07:00
04970057d8 Code-dedup in PowKernel (#57873)
Summary:
Both the CPU and CUDA versions of PowKernel reimplement functionality that
already exists in UnaryOps, such as sqrt, rsqrt and reciprocal (see the sketch below).

Found this out while looking at the sluggish compilation of PowKernel.cu:
 - Before the change, it took 11m5s and produced a 7.6MB .o file
 - After the change, compilation finished in 10m20s and produced a 6.4MB .o file
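
A minimal sketch (not the kernel code itself) of the functional equivalences that make the dedup possible:

```python
import torch

x = torch.rand(1000) + 0.1
# Routing these common exponents to the existing unary ops gives matching results:
assert torch.allclose(x.pow(0.5),  x.sqrt())
assert torch.allclose(x.pow(-0.5), x.rsqrt())
assert torch.allclose(x.pow(-1.0), x.reciprocal())
```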

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57873

Reviewed By: ezyang

Differential Revision: D28304929

Pulled By: malfet

fbshipit-source-id: ac499476280de55a92044b1b041b1246eea74c64
2021-05-13 19:52:34 -07:00
3072c97017 Gelu Backward, Contribution from Kevin Stephano (#58249)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58249

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D28425629

Pulled By: Krovatkin

fbshipit-source-id: 494ab165d548aa76f036344ab1c19c5fd64bae82
2021-05-13 19:39:39 -07:00
f3ead05d77 hardtanh (#57750)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57750

Test Plan: Imported from OSS

Reviewed By: huiguoo

Differential Revision: D28425975

fbshipit-source-id: a5e3dfbd6c77c595528c052e0b4325ef452983eb
2021-05-13 19:39:37 -07:00
c524448dd1 init hardshrink (#57749)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57749

Added to an FX test.

Test Plan: Imported from OSS

Reviewed By: huiguoo

Differential Revision: D28425974

fbshipit-source-id: 195c7a1944decb7a2a99c2831cab38485f32be17
2021-05-13 19:38:05 -07:00
d304bb070a Gelu Backward, Contribution from Kevin Stephano (#58249)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58249

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D28425381

Pulled By: Krovatkin

fbshipit-source-id: 21b7ac972220b6c35b285e3b66f05eb392002408
2021-05-13 16:36:44 -07:00
a49406b331 Fixed batched version of torch.linalg.cond for singular inputs (#58040)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58040

This PR uses `torch.linalg.inv_ex` to determine the non-invertible inputs and returns a condition number of infinity for such inputs.

Added OpInfo entry for `torch.linalg.cond`.
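
A small sketch (assumed, not from the test plan) of the new behavior for a batch containing a singular matrix:

```python
import torch

A = torch.eye(3, dtype=torch.float64).expand(2, 3, 3).clone()
A[1, 2, 2] = 0                     # make the second matrix in the batch singular
c = torch.linalg.cond(A, p='fro')  # no error is raised for the batch
# c[0] is finite (3.0 for the identity); c[1] is inf for the singular matrix.
```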

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28405146

Pulled By: mruberry

fbshipit-source-id: 524b9a38309851fa6461cb787ef3fba5aa7d5328
2021-05-13 09:42:17 -07:00
c1430c3425 Add torch.linalg.inv_ex without checking for errors by default (#58039)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58039

The new function has the following signature
`inv_ex(Tensor input, *, bool check_errors=False) -> (Tensor inverse, Tensor info)`.
When `check_errors=True`, an error is thrown if the matrix is not invertible; when `check_errors=False`, responsibility for checking the result is on the user.

`linalg_inv` is implemented using calls to `linalg_inv_ex` now.
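
A minimal usage sketch of the new function:

```python
import torch

A = torch.randn(3, 3, dtype=torch.float64)
inverse, info = torch.linalg.inv_ex(A)   # no error checking by default
if info.item() == 0:                     # info == 0 means the inversion succeeded
    pass                                 # inverse is valid here

singular = torch.zeros(3, 3, dtype=torch.float64)
_, info = torch.linalg.inv_ex(singular)  # info != 0, but no exception is raised
# torch.linalg.inv_ex(singular, check_errors=True)  # would raise an error instead
```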

Resolves https://github.com/pytorch/pytorch/issues/25095

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28405148

Pulled By: mruberry

fbshipit-source-id: b8563a6c59048cb81e206932eb2f6cf489fd8531
2021-05-13 09:42:15 -07:00
9e156b01e5 linalg.eig backwards and linalg.eigvals (#57276)
Summary:
This PR adds backwards support for `eig` and `eigvals`.
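
A small example (assumed) of what this enables:

```python
import torch

A = torch.randn(4, 4, dtype=torch.float64, requires_grad=True)
w = torch.linalg.eigvals(A)   # complex eigenvalues of a real matrix
w.abs().sum().backward()      # reduce to a real scalar before calling backward
# A.grad now holds the gradient of the summed eigenvalue magnitudes.
```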

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57276

Reviewed By: ngimel

Differential Revision: D28405056

Pulled By: mruberry

fbshipit-source-id: 27ef03f139f44d75f4d319b0f3e77e99eea9bb01
2021-05-13 09:42:13 -07:00
ab5c273950 Remove the matmul complex backward skip (#58138)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58138

related https://github.com/pytorch/pytorch/issues/55754

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D28403156

Pulled By: anjali411

fbshipit-source-id: dca4dd643f190b314a8a4c01c698c6a1e5229f6f
2021-05-13 07:48:08 -07:00
7a95cccbc7 Revert D28393469: [pytorch][PR] Enable ceil, floor, frac, round & trunc for BFloat16 on CUDA
Test Plan: revert-hammer

Differential Revision:
D28393469 (e6d8f45523)

Original commit changeset: b0f02ade7c6e

fbshipit-source-id: 5e900f240e738168b9db9a617c6a75c949ad36d6
2021-05-12 23:29:34 -07:00
6b1eeef601 OpInfo: squeeze (#58080)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58080

Reviewed By: agolynski

Differential Revision: D28379485

Pulled By: mruberry

fbshipit-source-id: 2b288036f595a5bd6b948a072494ee87f82322ce
2021-05-12 21:29:31 -07:00
f88297c66b Revert D28387767: Add forward AD test for op info
Test Plan: revert-hammer

Differential Revision:
D28387767 (26b6d044cd)

Original commit changeset: 369d76921c84

fbshipit-source-id: 91ac961339bdd5e1e2530d2655364f9fe46cdafb
2021-05-12 20:41:25 -07:00
ce1a8620d9 Enabled roll & diag for BFloat16 dtype on CUDA (#57916)
Summary:
Enabled `roll` & `diag` for BFloat16 dtype on CUDA

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57916

Reviewed By: agolynski

Differential Revision: D28393534

Pulled By: ngimel

fbshipit-source-id: fc1d8555b23a75f8b24c2ad826f89cd4e14cf487
2021-05-12 20:29:17 -07:00
f9aa6b2432 Enable lerp for BFloat16 on CUDA (#57907)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57907

Reviewed By: agolynski

Differential Revision: D28393597

Pulled By: ngimel

fbshipit-source-id: 27ebfaf175c9eeb8d411ce782fdbc468082c6af3
2021-05-12 20:23:52 -07:00
e6d8f45523 Enable ceil, floor, frac, round & trunc for BFloat16 on CUDA (#57910)
Summary:
Enable `ceil`, `floor`, `frac`, `round` & `trunc` for BFloat16 on CUDA

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57910

Reviewed By: agolynski

Differential Revision: D28393469

Pulled By: ngimel

fbshipit-source-id: b0f02ade7c6e2ed122aa5d80f6d442823dc1f221
2021-05-12 20:22:19 -07:00
c4a486f4b1 Enable atan2 & hypot for BFloat16 on CUDA (#57905)
Summary:
Enable `atan2` & `hypot` for BFloat16 on CUDA.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57905

Reviewed By: agolynski

Differential Revision: D28393706

Pulled By: ngimel

fbshipit-source-id: e505e5f098d35e4f7417508443cb0fedf6562dd1
2021-05-12 20:19:14 -07:00
26b6d044cd Add forward AD test for op info (#57701)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57701

The new OpInfo flag has the following semantics:
- If it says that it supports forward AD, we run gradcheck with forward AD to ensure it is correct
- If it says that it does not support it, we check that the corresponding error is raised

All the added tests take 3s to run for CPU builds and 1min for GPU builds, which should be pretty negligible compared to the test_ops runtime for each of these architectures.
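
Roughly the check this drives, assuming gradcheck's `check_forward_ad` argument (a sketch, not the actual test code):

```python
import torch
from torch.autograd import gradcheck

x = torch.randn(3, dtype=torch.float64, requires_grad=True)
# For an op that claims forward-AD support, run gradcheck in forward-AD mode.
assert gradcheck(torch.sin, (x,), check_forward_ad=True)
```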

Test Plan: Imported from OSS

Reviewed By: agolynski

Differential Revision: D28387767

Pulled By: albanD

fbshipit-source-id: 369d76921c8460aa4548f9b5159b7297994672f5
2021-05-12 18:49:18 -07:00
d09abf004c OpInfo: narrow (#58082)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58082

Reviewed By: agolynski

Differential Revision: D28379371

Pulled By: mruberry

fbshipit-source-id: 484e560b1e6ceba234e497585ed308a27cd8b7a0
2021-05-12 15:39:15 -07:00
5e83c62a9e Revert D28351931: [pytorch][PR] Fix some tensor operators to return NotImplemented for invalid inputs
Test Plan: revert-hammer

Differential Revision:
D28351931 (35521a2629)

Original commit changeset: 985457a44dba

fbshipit-source-id: 10724c219e53648f10a70719e25bcf774c6c7852
2021-05-12 13:58:03 -07:00
35521a2629 Fix some tensor operators to return NotImplemented for invalid inputs (#57934)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/57719.

This PR fixes `torch.Tensor{__rsub__, __rdiv__, __rtruediv__, __pow__, __rmatmul__}` to return `NotImplemented` instead of raising a `TypeError`.

cc/ mruberry: The first commit of this PR is the same as 1d209db1cc except for the commit message.
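
A small illustration (with a hypothetical `Exponent` class) of why returning `NotImplemented` matters: Python can then fall back to the other operand's reflected method.

```python
import torch

class Exponent:
    """Hypothetical operand type that only defines the reflected power."""
    def __init__(self, value):
        self.value = value
    def __rpow__(self, base):        # called once Tensor.__pow__ defers
        return base ** self.value

t = torch.tensor([2.0, 3.0])
# Tensor.__pow__ does not recognize Exponent; by returning NotImplemented
# instead of raising TypeError, it lets Python call Exponent.__rpow__.
print(t ** Exponent(2))              # tensor([4., 9.])
```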

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57934

Reviewed By: mruberry

Differential Revision: D28351931

Pulled By: albanD

fbshipit-source-id: 985457a44dba24d2496794dfb8c1661cbcd4ff8f
2021-05-12 11:03:23 -07:00
d212bf1863 Enable BFloat16 for nan_to_num on CUDA (#58063)
Summary:
Enabled BFloat16 for `nan_to_num` on CUDA. For comparison with NumPy, a [workaround suggested](https://github.com/pytorch/pytorch/issues/57982#issuecomment-839150556) by ngimel is being used: the OpInfo's `sample.kwargs` sets two NumPy kwargs, `posinf` & `neginf`, for `BFloat16`.
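
A minimal sketch, assuming a CUDA device is available (the explicit `posinf`/`neginf` values mirror what the NumPy comparison needs, since NumPy has no bfloat16):

```python
import torch

x = torch.tensor([float('nan'), float('inf'), float('-inf'), 1.5],
                 device='cuda', dtype=torch.bfloat16)
y = torch.nan_to_num(x)  # NaN -> 0, +/-inf -> bfloat16's max/min finite values
y2 = torch.nan_to_num(x, posinf=torch.finfo(torch.bfloat16).max,
                      neginf=torch.finfo(torch.bfloat16).min)
```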

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58063

Reviewed By: mruberry

Differential Revision: D28373478

Pulled By: ngimel

fbshipit-source-id: 6493b560d83632a8519c1d3bfc5c54be9b935fb9
2021-05-12 09:50:26 -07:00
ff982ef73d OpInfo: reshape, reshape_as and minor clean-up (#57460)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57460

Reviewed By: nairbv

Differential Revision: D28151675

Pulled By: anjali411

fbshipit-source-id: 2b3bcadab3ff5d1761b2922b63afd70a354e785c
2021-05-12 06:05:21 -07:00
c790fd2bf8 ATen lu_unpack. Required for making torch.lu_solve differentiable. (#46913)
Summary:
Backward methods for `torch.lu` and `torch.lu_solve` require the `torch.lu_unpack` method.
However, while `torch.lu` is a Python wrapper over a native function, so its gradient can be implemented via `autograd.Function`, `torch.lu_solve` is a native function and therefore cannot access `torch.lu_unpack`, which is implemented in Python.

Hence this PR provides a native (ATen) version of `lu_unpack`. With this function it also becomes possible to update the gradients for `torch.lu` so that backward+JIT is supported (there is no JIT support for `autograd.Function`).

~~The interface for this method is different from the original `torch.lu_unpack`, so it is decided to keep it hidden.~~
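
A minimal usage sketch of the (now native) function:

```python
import torch

A = torch.randn(3, 3, dtype=torch.float64)
LU, pivots = torch.lu(A)                # packed LU factorization with pivots
P, L, U = torch.lu_unpack(LU, pivots)   # recover the explicit P, L and U factors
assert torch.allclose(P @ L @ U, A)
```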

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46913

Reviewed By: albanD

Differential Revision: D28355725

Pulled By: mruberry

fbshipit-source-id: 281260f3b6e93c15b08b2ba66d5a221314b00e78
2021-05-11 22:53:21 -07:00
8b816e9010 Implement gradient for PyTorch (#54617)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56129

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54617

Reviewed By: anjali411

Differential Revision: D28057452

Pulled By: iramazanli

fbshipit-source-id: 9bd86679282d34f5e5393e6447121586517eb4f0
2021-05-11 18:52:20 -07:00
aaca12bcc2 Deprecate in docs torch.svd and change svd -> linalg_svd (#57981)
Summary:
This PR adds a note to the documentation that torch.svd is deprecated, together with an upgrade guide on how to use `torch.linalg.svd` and `torch.linalg.svdvals` (Lezcano's instructions from https://github.com/pytorch/pytorch/issues/57549).
In addition, all usage of the old svd function is replaced with the new one from the torch.linalg module, except in the `at::linalg_pinv` function, which fails the XLA CI build (https://github.com/pytorch/xla/issues/2755, see the failure in draft PR https://github.com/pytorch/pytorch/pull/57772).
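
A short migration sketch of the upgrade path described in the note:

```python
import torch

A = torch.randn(5, 3, dtype=torch.float64)

U, S, V = torch.svd(A)   # old API: returns V, reduced matrices by default

# New API: returns Vh (V's conjugate transpose) and full matrices by default,
# so pass full_matrices=False to match torch.svd's behavior.
U2, S2, Vh = torch.linalg.svd(A, full_matrices=False)
assert torch.allclose(U2 @ torch.diag(S2) @ Vh, A)

S3 = torch.linalg.svdvals(A)   # when only the singular values are needed
```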

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57981

Reviewed By: ngimel

Differential Revision: D28345558

Pulled By: mruberry

fbshipit-source-id: 02dd9ae6efe975026e80ca128e9b91dfc65d7213
2021-05-11 18:04:10 -07:00
502eb664ae OpInfo: chunk (#57935)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57935

Reviewed By: ngimel

Differential Revision: D28346217

Pulled By: mruberry

fbshipit-source-id: 331995aa18fd2983fc2122a9af31fba43ab9839c
2021-05-11 10:16:10 -07:00
4fb8676cea Add dot implementation for BFloat16 on CUDA (#57903)
Summary:
Enabled `dot` for BFloat16 on CUDA (version 11+).
This also enables `matmul` & `vdot` for BFloat16.
Backward for `matmul` isn't supported for `BFloat16`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57903

Reviewed By: mruberry

Differential Revision: D28346031

Pulled By: ngimel

fbshipit-source-id: 0917e9e0d6cf3694f45fe1c7e76370581502036a
2021-05-11 09:46:58 -07:00
067147ac7d Enable BFloat16 for logaddexp & logaddexp2 on CUDA (#57908)
Summary:
Enabled BFloat16 for `logaddexp` & `logaddexp2` on CUDA, with a [workaround](https://github.com/pytorch/pytorch/pull/57908#issuecomment-837320532) suggested by zasdfgbnm.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57908

Reviewed By: mruberry

Differential Revision: D28344976

Pulled By: ngimel

fbshipit-source-id: edef654b5819b236fbd9996f962115beb6e147e1
2021-05-11 09:44:15 -07:00
fa318911be Enable geometric ops, exp2, expm1, rsqrt & erfc for BFloat16 on CUDA (#57913)
Summary:
Ops enabled for BFloat16 on CUDA (12 in total):

`acos`
`asin`
`atan`
`cosh`
`sin`
`sinh`
`tan`
`sinc`
`exp2`
`erfc`
`expm1`
`rsqrt`

Enabled backward for `cos` on CUDA.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57913

Reviewed By: mruberry

Differential Revision: D28342969

Pulled By: ngimel

fbshipit-source-id: 3c140fe408cbf93b21296a52d95ef0a0ccd96503
2021-05-11 09:43:05 -07:00
a0d686c9cd OpInfo: select (#57731)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57731

Reviewed By: bdhirsh

Differential Revision: D28318229

Pulled By: mruberry

fbshipit-source-id: ec9058fd188b82de80d3a2f1a1ba07f36d8d0741
2021-05-10 21:18:58 -07:00
3c87fe9b14 Revert D28117714: [pytorch][PR] ATen lu_unpack. Required for making torch.lu_solve differentiable.
Test Plan: revert-hammer

Differential Revision:
D28117714 (5c67d8dfd3)

Original commit changeset: befd33db12ec

fbshipit-source-id: 295b2134935542a903a73f90a7998239dfe6cc81
2021-05-09 23:20:06 -07:00
5c67d8dfd3 ATen lu_unpack. Required for making torch.lu_solve differentiable. (#46913)
Summary:
Backward methods for `torch.lu` and `torch.lu_solve` require the `torch.lu_unpack` method.
However, while `torch.lu` is a Python wrapper over a native function, so its gradient can be implemented via `autograd.Function`, `torch.lu_solve` is a native function and therefore cannot access `torch.lu_unpack`, which is implemented in Python.

Hence this PR provides a native (ATen) version of `lu_unpack`. With this function it also becomes possible to update the gradients for `torch.lu` so that backward+JIT is supported (there is no JIT support for `autograd.Function`).

~~The interface for this method is different from the original `torch.lu_unpack`, so it is decided to keep it hidden.~~

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46913

Reviewed By: astaff

Differential Revision: D28117714

Pulled By: mruberry

fbshipit-source-id: befd33db12ecc147afacac792418b6f4948fa4a4
2021-05-09 19:12:56 -07:00
4cf2c646c2 Added torch.linalg.matrix_norm (#57127)
Summary:
This PR is focused on  the API for `linalg.matrix_norm` and delegates computations to `linalg.norm` for the moment.

The main difference between the norms is when `dim=None`. In this case
- `linalg.norm` will compute a vector norm on the flattened input if `ord=None`, otherwise it requires the input to be either 1D or 2D in order to disambiguate between vector and matrix norm
- `linalg.vector_norm` will flatten the input
- `linalg.matrix_norm` will compute the norm over the last two dimensions, treating the input as batch of matrices

In future PRs, the computations will be moved to `torch.linalg.matrix_norm` and `torch.norm` and `torch.linalg.norm` will delegate computations to either `linalg.vector_norm` or `linalg.matrix_norm` based on the arguments provided.
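
A small illustration of the `dim=None` behaviors listed above:

```python
import torch

A = torch.randn(4, 3, 3)        # a batch of matrices

torch.linalg.matrix_norm(A)     # norm over the last two dims -> shape (4,)
torch.linalg.vector_norm(A)     # flattens the input -> a single scalar
torch.linalg.norm(A[0])         # 1D/2D input required to disambiguate; here a matrix (Frobenius) norm
```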

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57127

Reviewed By: mrshenli

Differential Revision: D28186736

Pulled By: mruberry

fbshipit-source-id: 99ce2da9d1c4df3d9dd82c0a312c9570da5caf25
2021-05-09 04:50:33 -07:00
2043093217 Add correction parameter to std/var (#50903)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50903

First part of #50010. Also fixes #51127.
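
A small sketch of the new keyword-only argument (assuming the `correction` spelling added by this PR):

```python
import torch

x = torch.randn(100)
# correction generalizes Bessel's correction: the divisor is N - correction.
v_pop    = torch.var(x, correction=0)   # population variance (unbiased=False)
v_sample = torch.var(x, correction=1)   # sample variance (unbiased=True, the default)
```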

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27911345

Pulled By: mruberry

fbshipit-source-id: 7138fddc935802918ab9ff19f4bc1b9f4d745d41
2021-05-07 14:40:28 -07:00
9e6b7e6e6e OpInfo: expand and expand_as (#57606)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57606

Reviewed By: albanD

Differential Revision: D28249191

Pulled By: mruberry

fbshipit-source-id: d985ab4e8a99b116c45953e621092929a9a8028e
2021-05-07 02:50:00 -07:00
4cb3c60c20 OpInfo: float_power (#57648)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54295 (`float_power`)

cc: mruberry kshitij12345 krshrimali

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57648

Reviewed By: albanD

Differential Revision: D28249489

Pulled By: mruberry

fbshipit-source-id: 0ae5ce0d8b154724ae59f5f5b4412e34b0128d0a
2021-05-07 02:09:47 -07:00
6eec730a73 [testing] atan2: Enable cases where self broadcasts (#57608)
Summary:
Just a follow-up

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57608

Reviewed By: albanD

Differential Revision: D28249409

Pulled By: mruberry

fbshipit-source-id: a1ce2cd736ac5547cecb3e21aaa50637917284bc
2021-05-07 01:48:44 -07:00
159a2404bd fft: Increase tolerance for nd-fft tests (#57576)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56820

The test only fails for inverse n-dim functions with `norm="forward"`. The relative error isn't actually any bigger than for other norm modes, though. It's just that the magnitude of the result is larger, so the absolute tolerance is smaller relative to each element. So, I just increase the relative tolerance to compensate.

This `precisionOverride` is already applied to `fftn` and `rfftn` for exactly the same reason.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57576

Reviewed By: albanD

Differential Revision: D28249222

Pulled By: mruberry

fbshipit-source-id: 734c7c1ae8236b253d6e3cd2218c05d21901c567
2021-05-07 01:30:32 -07:00
ee79413b6a [testing] change unaryufunc default dtypes (#57616)
Summary:
Reference: https://github.com/pytorch/pytorch/pull/56646#pullrequestreview-644839124

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57616

Reviewed By: albanD

Differential Revision: D28249129

Pulled By: mruberry

fbshipit-source-id: 2cfc837fd49100d2b1b2a09d9ca6db93e089e099
2021-05-07 01:20:49 -07:00
1f1e2dab6b Remove optional type for ord parameter in vector_norm (#57662)
Summary:
As per discussion here https://github.com/pytorch/pytorch/pull/57127#discussion_r624948215

Note that we cannot remove the optional type from the `dim` parameter because the default is to flatten the input tensor, which cannot be easily captured by a value other than `None`.

### BC Breaking Note
This PR changes the `ord` parameter of `torch.linalg.vector_norm` so that it no longer accepts `None` arguments. The default behavior of `2` is equivalent to the previous default of `None`.
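
A quick illustration of the new default:

```python
import torch

x = torch.randn(5)
# ord=None is no longer accepted; the default is the 2-norm.
assert torch.allclose(torch.linalg.vector_norm(x),
                      torch.linalg.vector_norm(x, ord=2))
```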

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57662

Reviewed By: albanD, mruberry

Differential Revision: D28228870

Pulled By: heitorschueroff

fbshipit-source-id: 040fd8055bbe013f64d3c8409bbb4b2c87c99d13
2021-05-06 17:53:25 -07:00
35fab44eaf Add CUDA support for torch.ormqr (#57316)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57316

CUDA support is implemented using cuSOLVER.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28242071

Pulled By: mruberry

fbshipit-source-id: 6f0a1c50c21c376d2ee2907bddb618c6a600db1f
2021-05-06 04:45:54 -07:00
59d794b2c3 Port CPU torch.ormqr to ATen (#57315)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57315

This PR ports `torch.ormqr` from TH to ATen.
The CUDA path will be implemented in a follow-up PR.
With the ATen port, support for complex and batched inputs is added.
The tests are rewritten and an OpInfo entry is added.

We can implement the least-squares solver with geqrf + ormqr + triangular_solve (see the sketch below). So it's useful to have this function modernized, at least for internal code.
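
A minimal sketch of that composition for a least-squares solve of A x = b (assumes A is m x n with m >= n and full column rank):

```python
import torch

A = torch.randn(5, 3, dtype=torch.float64)
b = torch.randn(5, 2, dtype=torch.float64)

QR, tau = torch.geqrf(A)                       # packed QR factorization
Qtb = torch.ormqr(QR, tau, b, transpose=True)  # apply Q^T to b without forming Q
x = torch.triangular_solve(Qtb[:3], QR[:3, :3], upper=True).solution
```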

Resolves https://github.com/pytorch/pytorch/issues/24748

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D28242070

Pulled By: mruberry

fbshipit-source-id: f070bb6ac2f5a3269b163b22f7354e9089ed3061
2021-05-06 04:44:40 -07:00
7627dd568a hardswish reland (#57652)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57652

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D28226724

Pulled By: eellison

fbshipit-source-id: 585a91ffab7a855b5600e79130a37be25ef9b354
2021-05-05 17:21:43 -07:00
049152faa9 Make torch.linalg.eigvalsh differentiable (#57189)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57189

`torch.linalg.eigvalsh` now supports autograd. This is achieved by computing the eigenvectors internally if the input requires grad; otherwise the eigenvectors are not computed and the operation is faster.
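
A minimal example of the newly supported autograd path:

```python
import torch

A = torch.randn(3, 3, dtype=torch.float64)
A = (A + A.transpose(-2, -1)).requires_grad_(True)   # symmetric input
w = torch.linalg.eigvalsh(A)   # eigenvectors are computed internally for the backward
w.sum().backward()
```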

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D28199708

Pulled By: albanD

fbshipit-source-id: 12ac56f50137398613e186abd49f82c8ab83532e
2021-05-05 13:12:18 -07:00
babae61f2f Make torch.linalg.svdvals differentiable (#57188)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57188

`torch.linalg.svdvals` now supports autograd. This is achieved by computing the singular vectors internally if the input requires grad; otherwise the vectors are not computed and the operation is faster.
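
Similarly, a minimal example:

```python
import torch

A = torch.randn(4, 3, dtype=torch.float64, requires_grad=True)
s = torch.linalg.svdvals(A)   # singular vectors are computed internally only for the backward
s.sum().backward()            # for distinct singular values, A.grad equals U @ Vh
```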

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D28199709

Pulled By: albanD

fbshipit-source-id: cf39cf40965c606927db5331ce16743178fa711f
2021-05-05 13:11:15 -07:00