a58876ace7
Remove split functional wrapper ( #74727 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74727
Approved by: https://github.com/albanD , https://github.com/khabinov
2022-08-10 17:57:48 +00:00
be5b3df6cc
Update std_mean/var_mean/nanmean/nansum signatures with int[1]? dim ( #82912 )
...
### Description
Change the type of the `dim` arg for `std_mean/var_mean/nanmean/nansum` to `int[1]?` in `native_functions.yaml`
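Not from the PR itself, but a quick sketch of what the `int[1]?` signature permits at the Python level (output values assume the tensor below):
```python
import torch

x = torch.tensor([[1., float('nan')], [3., 4.]])

# `dim` may now be a single int, a sequence of ints, or omitted/None
torch.nansum(x, dim=0)       # tensor([4., 4.]) -- NaN treated as zero
torch.nansum(x, dim=(0, 1))  # tensor(8.)
torch.nansum(x)              # tensor(8.) -- full reduction
```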
### Issue
Part of #29137
### Testing
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82912
Approved by: https://github.com/albanD
2022-08-10 16:58:26 +00:00
5ca9b2b6fa
Enable dim=None for torch.var ( #82765 )
...
### Description
Add support for `dim=None` in `torch.var`
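For illustration (a sketch, not from the PR):
```python
import torch

x = torch.arange(6, dtype=torch.float32).reshape(2, 3)

# An explicit dim=None now means "reduce over all dimensions",
# which previously could only be spelled by omitting `dim` altogether.
torch.var(x, dim=None)                # same result as torch.var(x)
torch.var(x, dim=None, keepdim=True)  # shape (1, 1) instead of a scalar
```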
### Issue
Part of #29137
### Testing
N/A
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82765
Approved by: https://github.com/albanD
2022-08-04 20:47:27 +00:00
ff5399e528
Revise sparse docs regarding Sparse Compressed tensors ( #82108 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82108
Approved by: https://github.com/bhosmer
2022-07-29 18:15:09 +00:00
8def154e00
Fix multiple docstring type mistakes ( #82474 )
...
### Description
* Docstrings using `(tuple of ints)` show up as `(tuple of python:ints)`, so I fixed them by making the `int` singular. Example: https://pytorch.org/docs/stable/generated/torch.permute.html#torch.permute
* A docstring type in JIT had one of its types incorrectly highlighted as code. Example: https://pytorch.org/docs/stable/generated/torch.jit.script.html#torch.jit.script
* I found some docstring type usages of `string` that had not yet been converted to `str` after #82410
* Some docstrings incorrectly listed their defaults inside the docstring types.
* I also found a docstring that was missing its type (the sketch after this list illustrates the intended conventions).
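A minimal, hypothetical docstring showing the conventions these fixes converge on (the function and parameters are invented for illustration):
```python
def permute_example(dims, out=None):
    """Hypothetical docstring illustrating the conventions applied here.

    Args:
        dims (tuple of int): the desired ordering of dimensions.
            Singular ``int``, so Sphinx does not render ``python:ints``.
        out (Tensor, optional): the output tensor. The default is stated
            in the description, not inside the type. Default: ``None``.
    """
```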
### Testing
No testing should be required.
---
In the developer guidelines, there should probably be standards listed for the docstring types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82474
Approved by: https://github.com/albanD
2022-07-29 17:45:37 +00:00
fd84c458f4
Add torch.unflatten and improve its docs ( #81399 )
...
unflatten now has a free function version, torch.unflatten, in addition to
the method torch.Tensor.unflatten.
Updated the docs to reflect this and polished them a little.
For consistency, changed the signature of the int version of unflatten in
native_functions.yaml.
Some override tests were failing because unflatten is unusual in that its
.int and .Dimname overloads take different numbers of arguments, so this
required some changes to test/test_override.py.
Removed support for mixing integer and string arguments
when specifying dimensions in unflatten.
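A brief usage sketch of the new free function (not from the PR):
```python
import torch

x = torch.randn(2, 12)

# New free-function form...
torch.unflatten(x, 1, (3, 4)).shape  # torch.Size([2, 3, 4])
# ...equivalent to the existing method form:
x.unflatten(1, (3, 4)).shape         # torch.Size([2, 3, 4])
```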
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81399
Approved by: https://github.com/Lezcano , https://github.com/ngimel
2022-07-29 15:02:42 +00:00
2bfae07a79
Enable dim=None for torch.mean ( #81286 )
...
Part of #79525
This will require coordination with XLA before merging, just like #79881
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81286
Approved by: https://github.com/albanD
2022-07-28 22:34:56 +00:00
357b7d589c
Fix docstring inconsistencies: string -> str, boolean -> bool ( #82410 )
...
### Description
Throughout the PyTorch docs and codebase, the `string` type in docstrings is referred to by two separate names. This leads to inconsistent docs, as you can see here: https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html#torch.nn.Conv3d
This PR fixes the issue by ensuring that all mentions of the string type in docstrings use the same format that Sphinx generates hyperlinks for.
### Testing
No testing should be required for this change
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82410
Approved by: https://github.com/jbschlosser
2022-07-28 21:29:57 +00:00
026ef78b22
[DOC] fix a mistake in torch.dist document ( #82104 )
...
In the documentation (https://pytorch.org/docs/stable/generated/torch.dist.html ), torch.dist(x, y, 0) is shown returning inf.
However, it should return 4.0, and PyTorch (1.12) does return 4.0.
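A sketch of the corrected behavior, assuming two tensors that differ in all four entries:
```python
import torch

x = torch.tensor([1., 2., 3., 4.])
y = torch.zeros(4)

# With p=0, torch.dist counts the nonzero entries of (x - y):
torch.dist(x, y, 0)  # tensor(4.), not inf
```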
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82104
Approved by: https://github.com/samdow
2022-07-26 13:51:31 +00:00
12cb26509a
Apply ufmt to torch internal ( #81643 )
...
This is a big-bang PR; merge conflicts are expected and will be addressed at merge.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81643
Approved by: https://github.com/ezyang
2022-07-22 02:19:50 +00:00
e505796a2c
[Array API] Add linalg.vecdot ( #70542 )
...
This PR adds the function `linalg.vecdot` specified by the [Array API](https://data-apis.org/array-api/latest/API_specification/linear_algebra_functions.html#function-vecdot )
For the complex case, it chooses to implement \sum x_i y_i. See the
discussion in https://github.com/data-apis/array-api/issues/356
Edit: when it comes to testing, this function is not quite a binary op, nor a reduction op. As such, we're this close to being able to reuse the extra testing those get, but we don't quite make it. Then again, it's such a simple op that I think we'll manage without it.
Resolves https://github.com/pytorch/pytorch/issues/18027 .
cc @mruberry @rgommers @pmeier @asmeurer @leofang @AnirudhDagar @asi1024 @emcastillo @kmaehashi
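A usage sketch (real-valued, to sidestep the complex-conjugation question above):
```python
import torch

a = torch.tensor([1., 2., 3.])
b = torch.tensor([4., 5., 6.])
torch.linalg.vecdot(a, b)  # tensor(32.) == 1*4 + 2*5 + 3*6

# Batched inputs reduce along `dim` (default: the last dimension)
A, B = torch.randn(5, 3), torch.randn(5, 3)
torch.linalg.vecdot(A, B).shape  # torch.Size([5])
```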
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70542
Approved by: https://github.com/IvanYashchuk , https://github.com/mruberry
2022-07-12 14:28:54 +00:00
7f3677d723
Revert "Remove split functional wrapper ( #74727 )"
...
This reverts commit cc3126083ecc4ac5d3952ee59b5fd47e53d45718.
Reverted https://github.com/pytorch/pytorch/pull/74727 on behalf of https://github.com/mehtanirav due to breaking multiple internal builds and tests
2022-07-11 18:29:45 +00:00
23bdb570cf
Reland: Enable dim=None for torch.sum ( #79881 )
...
Part of #29137
Reland of #75845
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79881
Approved by: https://github.com/albanD , https://github.com/kulinseth
2022-07-09 00:54:42 +00:00
39f659c3ba
Revert "[Array API] Add linalg.vecdot ( #70542 )"
...
This reverts commit 74208a9c68b5892b9dde39d06350fe7b92691429.
Reverted https://github.com/pytorch/pytorch/pull/70542 on behalf of https://github.com/malfet due to Broke CUDA-10.2 for vecdot_bfloat16, see 74208a9c68
2022-07-08 22:56:51 +00:00
cc3126083e
Remove split functional wrapper ( #74727 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74727
Approved by: https://github.com/albanD
2022-07-08 19:21:22 +00:00
74208a9c68
[Array API] Add linalg.vecdot ( #70542 )
...
This PR adds the function `linalg.vecdot` specified by the [Array API](https://data-apis.org/array-api/latest/API_specification/linear_algebra_functions.html#function-vecdot )
For the complex case, it chooses to implement \sum x_i y_i. See the
discussion in https://github.com/data-apis/array-api/issues/356
Edit: when it comes to testing, this function is not quite a binary op, nor a reduction op. As such, we're this close to being able to reuse the extra testing those get, but we don't quite make it. Then again, it's such a simple op that I think we'll manage without it.
Resolves https://github.com/pytorch/pytorch/issues/18027 .
cc @mruberry @rgommers @pmeier @asmeurer @leofang @AnirudhDagar @asi1024 @emcastillo @kmaehashi
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70542
Approved by: https://github.com/IvanYashchuk , https://github.com/mruberry
2022-07-08 15:37:58 +00:00
37a5819665
Make slogdet, linalg.slogdet and logdet support metatensors ( #79742 )
...
This PR also adds complex support for `logdet`, makes all of these
functions support out=, and makes them composite, dispatching to a
single underlying function. The docs of all these functions are
improved as well.
We also use `linalg_lu_factor_ex` in these functions, which removes the
synchronisation that was present before.
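A sketch of the new capabilities (not from the PR):
```python
import torch

# Complex support for logdet:
A = torch.randn(3, 3, dtype=torch.complex64)
torch.logdet(A)

# Meta tensors: shape inference without computing anything.
sign, logabsdet = torch.linalg.slogdet(torch.empty(3, 3, device='meta'))
print(sign.shape, logabsdet.shape)  # torch.Size([]) torch.Size([])
```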
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79742
Approved by: https://github.com/IvanYashchuk , https://github.com/albanD
2022-07-01 16:09:21 +00:00
04407431ff
MAINT: Harmonize argsort params with array_api ( #75162 )
...
Closes [#70922 ](https://github.com/pytorch/pytorch/issues/70922 ).
- Does what it says on the tin.
- No non-standard implementation details.
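The harmonization adds a `stable` keyword to `torch.argsort`; a quick sketch:
```python
import torch

x = torch.tensor([2, 1, 2, 1])

# stable=True preserves the relative order of equal elements,
# matching the array API's argsort signature.
torch.argsort(x, stable=True)  # tensor([1, 3, 0, 2])
```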
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75162
Approved by: https://github.com/rgommers , https://github.com/nikitaved , https://github.com/mruberry
2022-06-09 12:32:01 +00:00
c6215b343c
Deprecate torch.lu_solve
...
**BC-breaking note**:
This PR deprecates `torch.lu_solve` in favor of `torch.linalg.lu_solve`.
An upgrade guide is added to the documentation for `torch.lu_solve`.
Note this PR DOES NOT remove `torch.lu_solve`.
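A migration sketch (assuming a well-conditioned square system):
```python
import torch

A = torch.randn(3, 3)
b = torch.randn(3, 2)

# Deprecated:
#   LU, pivots = torch.lu(A)
#   x = torch.lu_solve(b, LU, pivots)

# Replacement:
LU, pivots = torch.linalg.lu_factor(A)
x = torch.linalg.lu_solve(LU, pivots, b)
torch.allclose(A @ x, b, atol=1e-5)  # True, up to numerical error
```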
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77637
Approved by: https://github.com/malfet
2022-06-07 22:50:14 +00:00
f7b9a46880
Deprecate torch.lu
...
**BC-breaking note**:
This PR deprecates `torch.lu` in favor of `torch.linalg.lu_factor`.
An upgrade guide is added to the documentation for `torch.lu`.
Note this PR DOES NOT remove `torch.lu`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77636
Approved by: https://github.com/malfet
2022-06-07 22:50:14 +00:00
f091b3fb4b
Update torch.lu_unpack docs
...
As per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77635
Approved by: https://github.com/malfet
2022-06-07 22:50:13 +00:00
bd08d085b0
Update argmin docs to reflect the code behavior ( #78888 )
...
Fixes #78791
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78888
Approved by: https://github.com/ezyang
2022-06-07 00:44:39 +00:00
c461d8a977
[primTorch] refs: hsplit, vsplit ( #78418 )
...
As per title
TODO:
* [x] Add error inputs (already exist)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78418
Approved by: https://github.com/mruberry
2022-06-06 19:54:05 +00:00
57c117d556
update signbit docs and add -0. to reference testing for unary and binary functions. ( #78349 )
...
Fixes https://github.com/pytorch/pytorch/issues/53963
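A sketch of the negative-zero behavior the docs now describe:
```python
import torch

x = torch.tensor([-0.0, 0.0, -1.5, 2.0])

# signbit distinguishes negative zero from positive zero
torch.signbit(x)  # tensor([ True, False,  True, False])
```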
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78349
Approved by: https://github.com/mruberry
2022-06-06 13:48:08 +00:00
416f581eb1
Updating torch.log example
...
Fixes issue #78301
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78776
Approved by: https://github.com/ngimel
2022-06-03 00:57:35 +00:00
388d44314d
Fix docs for torch.real ( #78644 )
...
Non-complex types are supported:
```python
>>> import torch
>>> z = torch.zeros(5)
>>> torch.real(z.float())
tensor([0., 0., 0., 0., 0.])
>>> torch.real(z.int())
tensor([0, 0, 0, 0, 0], dtype=torch.int32)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78644
Approved by: https://github.com/mruberry , https://github.com/anjali411
2022-06-02 04:17:03 +00:00
3524428fad
DOC Corrects default value for storage_offset in as_strided ( #78202 )
...
Fixes #77730
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78202
Approved by: https://github.com/mruberry
2022-05-31 19:28:36 +00:00
3f334f0dfd
Fix asarray documentation formatting ( #78485 )
...
Fixes #78290
[Screenshot of the modified doc was attached to the PR.]
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78485
Approved by: https://github.com/ngimel
2022-05-30 19:28:10 +00:00
089203f8bc
Updates floor_divide to perform floor division ( #78411 )
...
Fixes https://github.com/pytorch/pytorch/issues/43874
This PR changes floor_divide to perform floor division instead of truncation division.
This is a BC-breaking change, but it's a "bug fix," and we've already warned users for several releases this behavior would change.
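A sketch of the behavior change:
```python
import torch

a = torch.tensor([-7, 7])
b = torch.tensor([2, -2])

# New: floor division rounds toward negative infinity
torch.floor_divide(a, b)   # tensor([-4, -4])

# The old behavior truncated toward zero, equivalent to:
torch.trunc(a / b).long()  # tensor([-3, -3])
```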
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78411
Approved by: https://github.com/ngimel
2022-05-29 21:28:45 +00:00
2df1da09e1
Add Elementwise unary ops 4 references ( #78216 )
...
Add reference implementations for `nan_to_num`, `positive`, `sigmoid`, `signbit`, `tanhshrink`
Add prims for `minimum_value(dtype)` and `maximum_value(dtype)`
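For context, a sketch of the behavior the `nan_to_num` reference must reproduce; when `posinf`/`neginf` are omitted they default to the dtype's largest and smallest finite values, which is where the new `maximum_value`/`minimum_value` prims come in:
```python
import torch

x = torch.tensor([float('nan'), float('inf'), -float('inf'), 1.0])
torch.nan_to_num(x, nan=0.0, posinf=1e6, neginf=-1e6)
# tensor([ 0.0000e+00,  1.0000e+06, -1.0000e+06,  1.0000e+00])
```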
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78216
Approved by: https://github.com/mruberry
2022-05-27 21:55:34 +00:00
07e4533403
reland of as_strided support for functionalization; introduce as_strided_scatter
...
This reverts commit a95f1edd8549b6a249ffa448df073ac4c8b81382.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78199
Approved by: https://github.com/ezyang
2022-05-24 22:40:44 +00:00
a95f1edd85
Revert "as_strided support for functionalization; introduce as_strided_scatter"
...
This reverts commit 3a921f2d267430f292a111e8bcd40c76022cfd47.
Reverted https://github.com/pytorch/pytorch/pull/77128 on behalf of https://github.com/suo due to: this broke ROCm tests on master (3a921f2d26). ROCm tests are no longer run on PRs; add a `ciflow/trunk` label if you want to run them.
2022-05-24 20:19:12 +00:00
3a921f2d26
as_strided support for functionalization; introduce as_strided_scatter
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77128
Approved by: https://github.com/ezyang
2022-05-24 18:20:31 +00:00
de86146c61
rocblas alt impl during backward pass only ( #71881 )
...
In preparation for adopting future rocblas library options, it is necessary to track when the backward pass of training is executing. The scope-based helper class `BackwardPassGuard` is provided to toggle this state.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71881
Approved by: https://github.com/albanD
2022-05-18 19:42:58 +00:00
0975174652
Fix doc about type promotion of lshift and rshift ( #77613 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77613
Approved by: https://github.com/ngimel
2022-05-17 00:28:48 +00:00
841c65f499
Unprivate _index_reduce and add documentation
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76997
Approved by: https://github.com/cpuhrsch
2022-05-13 19:48:38 +00:00
cc9d0f309e
lshift and rshift drop support for floating types ( #77146 )
...
Fixes #74358
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77146
Approved by: https://github.com/ngimel
2022-05-11 22:29:30 +00:00
890bdf13e1
Remove deprecated torch.solve ( #70986 )
...
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.solve`.
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
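A migration sketch; note that the argument order flips:
```python
import torch

A = torch.randn(3, 3)
b = torch.randn(3, 1)

# Removed:
#   x, _ = torch.solve(b, A)   # took (b, A) and also returned the LU factors

# Replacement, with the conventional (A, b) order:
x = torch.linalg.solve(A, b)
torch.allclose(A @ x, b, atol=1e-5)  # True, up to numerical error
```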
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70986
Approved by: https://github.com/Lezcano , https://github.com/albanD
2022-05-10 13:44:07 +00:00
4ceac49425
Revert "Update torch.lu_unpack docs"
...
This reverts commit 9dc8f2562f2cb5d77d23fc4829f36c0ac024f1c3.
Reverted https://github.com/pytorch/pytorch/pull/73803 on behalf of https://github.com/malfet
2022-05-09 19:09:43 +00:00
1467e0dd5d
Revert "Deprecate torch.lu"
...
This reverts commit a5bbfd94fb91c078416a99b95eb7b45d3ea81b6f.
Reverted https://github.com/pytorch/pytorch/pull/73804 on behalf of https://github.com/malfet
2022-05-09 19:06:44 +00:00
b042cc7f4d
Revert "Deprecate torch.lu_solve"
...
This reverts commit f84d4d9cf5c520879b08ad3d4f5de5278fe641ec.
Reverted https://github.com/pytorch/pytorch/pull/73806 on behalf of https://github.com/malfet
2022-05-09 19:03:26 +00:00
f84d4d9cf5
Deprecate torch.lu_solve
...
**BC-breaking note**:
This PR deprecates `torch.lu_solve` in favor of `torch.linalg.lu_solve`.
An upgrade guide is added to the documentation for `torch.lu_solve`.
Note this PR DOES NOT remove `torch.lu_solve`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73806
Approved by: https://github.com/IvanYashchuk , https://github.com/nikitaved , https://github.com/mruberry
2022-05-05 19:19:19 +00:00
a5bbfd94fb
Deprecate torch.lu
...
**BC-breaking note**:
This PR deprecates `torch.lu` in favor of `torch.linalg.lu_factor`.
An upgrade guide is added to the documentation for `torch.lu`.
Note this PR DOES NOT remove `torch.lu`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73804
Approved by: https://github.com/IvanYashchuk , https://github.com/mruberry
2022-05-05 19:17:11 +00:00
9dc8f2562f
Update torch.lu_unpack docs
...
As per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73803
Approved by: https://github.com/IvanYashchuk , https://github.com/nikitaved , https://github.com/mruberry
2022-05-05 19:12:23 +00:00
7cb7cd5802
Add linalg.lu
...
This PR modifies `lu_unpack` by:
- Using less memory when unpacking `L` and `U`
- Fusing the subtraction by `-1` with `unpack_pivots_stub`
- Defining tensors of the correct types to avoid copies
- Porting `lu_unpack` to be a structured kernel so that its `_out` version
does not incur extra copies
Then we implement `linalg.lu` as a structured kernel, as we want to
compute its derivative manually. We do so because composing the
derivatives of `torch.lu_factor` and `torch.lu_unpack` would be less efficient.
This new function and `lu_unpack` come with the full feature set:
forward and backward AD, decent docs, correctness tests, OpInfo, complex support,
support for metatensors, and support for vmap and vmap over the gradients.
I really hope we don't continue adding more features.
This PR also avoids saving some of the tensors that were previously
saved unnecessarily for the backward in `lu_factor_ex_backward` and
`lu_backward` and does some other general improvements here and there
to the forward and backward AD formulae of other related functions.
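A usage sketch of the new function alongside `lu_unpack` (not from the PR):
```python
import torch

A = torch.randn(4, 4)

# Full decomposition A = P @ L @ U in one call:
P, L, U = torch.linalg.lu(A)
torch.allclose(P @ L @ U, A, atol=1e-5)  # True

# Equivalent factors recovered from a compact lu_factor result:
LU, pivots = torch.linalg.lu_factor(A)
P2, L2, U2 = torch.lu_unpack(LU, pivots)
```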
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67833
Approved by: https://github.com/IvanYashchuk , https://github.com/nikitaved , https://github.com/mruberry
2022-05-05 09:17:05 +00:00
5adf97d492
Add docstrings to sparse compressed tensor factory functions
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76651
Approved by: https://github.com/cpuhrsch
2022-05-04 03:36:14 +00:00
ce76244200
fix where type promotion
...
Fixes #73298
I don't know whether the `where` kernel actually supports type promotion, nor am I in the mood to find out, so it's manual type promotion.
Edit: nah, I can't tell TensorIterator to "promote to common dtype" because of the bool condition, so manual type promotion is our only option.
I'll see what tests start failing and fix them.
Uses some parts from #62084
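A sketch of the promotion now applied:
```python
import torch

cond = torch.tensor([True, False, True])
a = torch.tensor([1, 2, 3])        # int64
b = torch.tensor([.5, .5, .5])     # float32

# Mixed-dtype operands are promoted to a common dtype (float32 here)
torch.where(cond, a, b)  # tensor([1.0000, 0.5000, 3.0000])
```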
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76691
Approved by: https://github.com/mruberry
2022-05-03 04:40:04 +00:00
39717d3034
Remove histogramdd functional wrapper
...
Merge once the forward compatibility period has expired for the histogramdd
operator.
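For reference, a sketch of the native operator the wrapper forwarded to (CPU only at the time):
```python
import torch

x = torch.randn(100, 3)

hist, bin_edges = torch.histogramdd(x, bins=[5, 5, 5])
hist.shape      # torch.Size([5, 5, 5])
len(bin_edges)  # 3, one edges tensor per dimension
```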
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74201
Approved by: https://github.com/ezyang , https://github.com/albanD
2022-04-14 20:56:24 +00:00
715e07b97f
Revert "Remove histogramdd functional wrapper"
...
This reverts commit 8cc338e5c2e3f1b6512dc3b33d33281ac5f4357c.
Reverted https://github.com/pytorch/pytorch/pull/74201 on behalf of https://github.com/suo
2022-04-14 03:56:48 +00:00
8cc338e5c2
Remove histogramdd functional wrapper
...
Merge once the forward compatibility period has expired for the histogramdd
operator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74201
Approved by: https://github.com/ezyang
2022-04-14 02:47:39 +00:00