27f7b65a69
[BE] Ensure generated stub files by gen_pyi are properly formatted (#150730)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150730
Approved by: https://github.com/aorenste
2025-05-17 12:30:40 +00:00
7cb5c751c3
Fix the basic description of torch.min(), torch.max(), torch.all(), torch.any() (#152658)
...
Fixes #152176
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152658
Approved by: https://github.com/malfet
2025-05-08 22:59:14 +00:00
e2f9759bd0
Fix broken URLs (#152237)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152237
Approved by: https://github.com/huydhn, https://github.com/malfet
2025-04-27 09:56:42 +00:00
0f9b02c839
[Easy][torch.Event] Fix and improve the docs of torch.Event (#151411)
...
**Changes:**
- add detailed function and class signatures
- fix the incorrect display of torch.Event.wait and torch.Event.record
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151411
Approved by: https://github.com/albanD
ghstack dependencies: #151404, #151221
2025-04-26 13:52:38 +00:00
191b0237a6
Added to docs for out_dtype arg in torch gemms (#151704)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151704
Approved by: https://github.com/bdhirsh
2025-04-21 20:09:17 +00:00
c4482565cc
Revert "[Easy][torch.Event] Fix and improve the docs of torch.Event (#151411)"
...
This reverts commit 1e1d0a4be63b354f762ee21bdccec03c1e5b371c.
Reverted https://github.com/pytorch/pytorch/pull/151411 on behalf of https://github.com/malfet due to This broke rocm tests, see 92baeecbdd (40818271233-box) ([comment](https://github.com/pytorch/pytorch/pull/151221#issuecomment-2816883409))
2025-04-19 22:06:24 +00:00
1e1d0a4be6
[Easy][torch.Event] Fix and improve the docs of torch.Event (#151411)
...
**Changes:**
- add detailed function and class signatures
- fix the incorrect display of torch.Event.wait and torch.Event.record
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151411
Approved by: https://github.com/albanD
ghstack dependencies: #151226, #151221
2025-04-19 12:21:02 +00:00
9a2624c712
Fix keepdim param optional description (#151197)
...
Fixes #151104
Fix the optional description of `dim` and `keepdim`, except for `torch.quantile`, which was already fixed in #146485
## Test Result
### Before

### After

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151197
Approved by: https://github.com/mikaylagawarecki
2025-04-16 23:15:30 +00:00
4518b30680
Clarify that x and dx are mutually exclusive in torch.trapezoid doc (#151190)
...
This PR addresses [#151105](https://github.com/pytorch/pytorch/issues/151105) by stating that x and dx are mutually exclusive parameters in torch.trapezoid()
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151190
Approved by: https://github.com/soulitzer
2025-04-15 21:42:05 +00:00
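For context, a minimal sketch of the two spacing modes (illustrative, not part of the PR): the sample points `x` and the scalar step `dx` describe spacing in two different ways, so only one of them may be given per call.

```python
import torch

y = torch.tensor([1.0, 2.0, 3.0])

# Spacing given explicitly via sample points x ...
area_x = torch.trapezoid(y, x=torch.tensor([0.0, 1.0, 2.0]))

# ... or implicitly via a uniform step dx -- but not both in one call.
area_dx = torch.trapezoid(y, dx=1.0)

assert torch.allclose(area_x, area_dx)  # both describe unit spacing
```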
5a422150c3
Add torch.triu_indices, torch.tril_indices dtype description (#150749)
...
Fixes #150675
## Test Result

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150749
Approved by: https://github.com/bdhirsh
2025-04-09 15:03:24 +00:00
732f9d7435
Optimize torch.equal
description ( #149618 )
...
Fixes #149222
## Test Result

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149618
Approved by: https://github.com/zou3519
2025-03-21 03:44:49 +00:00
1bdbf12672
Update as_strided doc (#149146)
...
Make it clearer why it is not recommended to use it and when the resulting Tensor will have undefined behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149146
Approved by: https://github.com/gchanan, https://github.com/jbschlosser
2025-03-14 19:49:57 +00:00
75d29443e7
[Docs] update bucketize documentation (#148400)
...
Fixes #144504
Clarify the documentation for `torch.bucketize` by referencing the existing table. The current version includes a somewhat confusing explanation for the `right` kwarg, whereas the existing table is much clearer.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148400
Approved by: https://github.com/benjaminglass1, https://github.com/eellison, https://github.com/albanD
2025-03-06 22:07:52 +00:00
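The two modes the table distinguishes can be sketched as follows (a small example mirroring the values in the docs table, not part of the PR):

```python
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
v = torch.tensor([3, 6, 9])

# right=False (default): returned index i satisfies boundaries[i-1] < v <= boundaries[i]
left = torch.bucketize(v, boundaries)
# right=True: returned index i satisfies boundaries[i-1] <= v < boundaries[i]
right = torch.bucketize(v, boundaries, right=True)

assert left.tolist() == [1, 3, 4]
assert right.tolist() == [2, 3, 5]
```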
6a72aaadae
Fix torch.max
optional args dim
, keepdim
description ( #147177 )
...
[`torch.max`](https://pytorch.org/docs/stable/generated/torch.max.html#torch.max) does not describe its optional args `dim` and `keepdim`, even though users may omit them.
```python
>>> import torch
>>> a = torch.randn(3,1,3)
>>> a.max()
tensor(1.9145)
>>> a.max(dim=1)
torch.return_types.max(
values=tensor([[ 1.1436, -0.0728,  1.3312],
        [-0.4049,  0.1792, -1.2247],
        [ 0.8767, -0.7888,  1.9145]]),
indices=tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]]))
```
## Changes
- Add `optional` description for `dim`, `keepdim`
- Add example of using `dim`, `keepdim`
## Test Result
### Before

### After

Pull Request resolved: https://github.com/pytorch/pytorch/pull/147177
Approved by: https://github.com/colesbury
2025-02-20 08:18:09 +00:00
bae049b439
Update addr doc (#146482)
...
Fixes https://github.com/pytorch/pytorch/issues/146399
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146482
Approved by: https://github.com/janeyx99
2025-02-18 23:25:38 +00:00
80f146dedf
Update addbmm, addmm, addmv and baddbmm description (#146689)
...
Fixes #146611, following #146482
## Test Result

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146689
Approved by: https://github.com/mikaylagawarecki
2025-02-13 01:30:50 +00:00
0c9fdd6cfb
[Docs] Fix description of input in torch.addbmm() (#146664)
...
Fixes #146613
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146664
Approved by: https://github.com/mikaylagawarecki
2025-02-11 01:22:09 +00:00
e8304f08fe
Fix torch.take_along_dim param type and default description (#146474)
...
## Changes
- Change type description to `LongTensor`, consistent with [`torch.take`](https://pytorch.org/docs/stable/generated/torch.take.html)
- Add `dim` param default value description
## Test Result
**Before**

**After**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146474
Approved by: https://github.com/mikaylagawarecki
2025-02-10 01:19:30 +00:00
a3ca5c7f4e
remove incorrect warnings from min/max documentation (#146725)
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146725
Approved by: https://github.com/wdvr, https://github.com/malfet
2025-02-08 05:10:08 +00:00
a7c2d85c18
Add overloads to diagonal docs (#144214)
...
Fixes #126827. Refactored the doc to demonstrate the case when none of the optional values are passed in. Added another example so that all overloads of the function are covered.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144214
Approved by: https://github.com/albanD
2025-01-31 15:53:59 +00:00
ad36f4f42c
Revert "Add generator parameter to rand*_like functions (#136780)"
...
This reverts commit c7b2f7dd142fc97c8ce4ad7ad591687cf295fcda.
Reverted https://github.com/pytorch/pytorch/pull/136780 on behalf of https://github.com/izaitsevfb due to internal regression ([comment](https://github.com/pytorch/pytorch/pull/136780#issuecomment-2613191933))
2025-01-24 19:00:21 +00:00
f2cfe8b59f
PEP585 update - mostly toplevels (#145178)
...
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145178
Approved by: https://github.com/bobrenjc93
2025-01-22 02:21:14 +00:00
c7b2f7dd14
Add generator parameter to rand*_like functions (#136780)
...
Fixes #128786
Fixes #101974
Fixes #27072
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136780
Approved by: https://github.com/Chillee, https://github.com/ezyang
2025-01-15 21:16:52 +00:00
6de110b862
Support with statement on torch.Stream (#140138)
...
# Motivation
We propose to support the Python with statement on `torch.Stream`. This benefits all accelerators when writing device-agnostic code. Device-specific streams will also be supported because they are generally derived from `torch.Stream`.
With this PR, we can do the following:
```python
s1 = torch.Stream()
# Set s1 to the current stream
torch.accelerator.set_stream(s1)
with torch.Stream() as s2:
    # Inside the with statement, s2 is the current stream
    assert torch.accelerator.current_stream() == s2
# Back outside, the current stream should be s1 again
assert torch.accelerator.current_stream() == s1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140138
Approved by: https://github.com/albanD
2025-01-10 02:05:19 +00:00
e4a05dec0f
[BE][Ez]: Fix docs recommending inefficient tensor op order (#144270)
...
`detach().clone()` is faster than `.clone().detach()` since the gradients are not cloned. Let's update all the documentation and tests so that users do not use the inefficient op ordering.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144270
Approved by: https://github.com/awgu, https://github.com/XuehaiPan
2025-01-07 17:31:32 +00:00
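A minimal sketch of the recommended ordering (illustrative, not taken from the PR):

```python
import torch

t = torch.ones(3, requires_grad=True)

# Preferred: detach first, so the subsequent clone runs outside autograd
# and no gradient bookkeeping is recorded for the copy.
fast = t.detach().clone()

# Same result, but the clone is recorded by autograd before detaching.
slow = t.clone().detach()

assert not fast.requires_grad and not slow.requires_grad
assert torch.equal(fast, slow)
```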
67355a1289
[Easy] Add torch.range, torch.arange params optional description (#143731)
...
Fixes #129333
**Test Result**
**Before**


**After**


Pull Request resolved: https://github.com/pytorch/pytorch/pull/143731
Approved by: https://github.com/janeyx99
2024-12-24 01:29:24 +00:00
a70191da41
Add torch.topk indices vary description (#143736)
...
Fixes #133542
**Test Result**
**Before**

**After**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143736
Approved by: https://github.com/zou3519
2024-12-23 17:16:31 +00:00
12098ad242
Add torch.cat tensors type promotion description (#141339)
...
Fixes #126964
Add a note describing the type promotion behavior of `torch.cat`
**Test Result**
**Before**

**After**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/141339
Approved by: https://github.com/albanD
2024-12-14 01:36:41 +00:00
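A small sketch of the promotion behavior the note describes (illustrative, not part of the PR):

```python
import torch

a = torch.tensor([1, 2], dtype=torch.int32)
b = torch.tensor([3.0, 4.0], dtype=torch.float32)

# Inputs with different dtypes are promoted to a common dtype before concatenation.
out = torch.cat([a, b])
assert out.dtype == torch.float32
assert out.tolist() == [1.0, 2.0, 3.0, 4.0]
```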
07edb2ec4d
Update documentation for torch.mean() to note behavior with empty tensors (#142039)
...
This PR updates the documentation for `torch.mean()` to explicitly mention that computing the mean over an empty tensor returns `nan`. This clarification helps users understand the behavior and handle it appropriately in their code.
Fixes #141057
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142039
Approved by: https://github.com/albanD
2024-12-05 17:21:53 +00:00
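The documented behavior in one line (illustrative, not part of the PR):

```python
import torch

empty = torch.tensor([])

# The mean over zero elements is 0.0 / 0, which is nan.
assert torch.isnan(empty.mean())
```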
763038db66
Clarify torch.arange floating-point rounding behavior (#141655)
...
Added documentation note clarifying the rounding behavior of `torch.arange` when using floating-point dtypes, particularly for reduced precision types like `bfloat16`. This helps users understand potential issues like repeated values and provides guidance on using integer dtypes for precise sequences.
## Changes
- Added explanatory note about floating-point rounding behavior and its effects
- Included specific mention of `bfloat16` dtype issues
- Added recommendation to use integer dtypes for precise sequences
Fixes [#137774](https://github.com/pytorch/pytorch/issues/137774)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141655
Approved by: https://github.com/cpuhrsch
2024-11-27 09:31:39 +00:00
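A short sketch of the note's recommendation (illustrative, not from the PR): with reduced-precision dtypes the step accumulates rounding error, so an integer sequence scaled afterwards is the reliable way to get exact values.

```python
import torch

# Reduced-precision steps accumulate rounding error, so values may repeat.
approx = torch.arange(0, 2, 0.25, dtype=torch.bfloat16)

# For an exact sequence, generate integers and scale afterwards.
exact = torch.arange(0, 8, dtype=torch.int64) * 0.25
assert exact.tolist() == [0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
```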
0c587c324d
DOC: Correct torch.trapezoid docstring (#141459)
...
This is super duper minor, but I believe this corrects a typo in the documentation of `torch.trapezoid`.
The documentation says the input is a 1-dimensional tensor $y_0, \dots, y_n$, but it uses summations going from 1 to n-1. Since it's summing over terms $y_i - y_{i-1}$, stopping at n-1 excludes the last partition $y_n - y_{n-1}$, which doesn't match the implementation...
```python
# (just showing that the sum does include the last term y_n - y_{n-1})
torch.trapezoid(torch.tensor([0.0, 0.0, 9999.0]))  # == 9999 / 2
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/141459
Approved by: https://github.com/colesbury
2024-11-27 01:54:14 +00:00
4fb4aa3e70
Updated docstrings referring to torch.expand to point to torch.Tensor.expand (#140045)
...
`torch.expand` was moved to `torch.Tensor.expand` but some docstrings still refer to `torch.expand`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140045
Approved by: https://github.com/mikaylagawarecki
2024-11-21 20:13:41 +00:00
8f3c71ad27
Add torch.sum dtype promotion description (#140939)
...
Fixes #82159
Add a note describing the type promotion behavior of `torch.sum`.
**Test Result**
**Before**

**After**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140939
Approved by: https://github.com/zou3519
2024-11-20 06:20:01 +00:00
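The promotion rule the note describes, in brief (illustrative, not part of the PR):

```python
import torch

x = torch.ones(4, dtype=torch.uint8)

# Integer (and bool) inputs are summed in int64 to avoid overflow ...
assert x.sum().dtype == torch.int64
# ... unless an output dtype is requested explicitly.
assert x.sum(dtype=torch.uint8).dtype == torch.uint8
assert int(x.sum()) == 4
```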
7167323644
Fix type description of torch.chunk (#140089)
...
Fixes #126278
- Change return type description of `torch.chunk` to tuple
- Add type for input parameters
**Before**

**After**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140089
Approved by: https://github.com/awgu
2024-11-08 15:21:13 +00:00
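The corrected return type can be checked directly (a quick sketch, not part of the PR):

```python
import torch

x = torch.arange(6)
parts = torch.chunk(x, 2)

# chunk returns a tuple of tensor views, not a single tensor.
assert isinstance(parts, tuple)
assert [p.tolist() for p in parts] == [[0, 1, 2], [3, 4, 5]]
```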
ff616c26fb
Optimize isclose description (#139724)
...
Fixes #139563
Make the description more user friendly.
After Change:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139724
Approved by: https://github.com/janeyx99
2024-11-06 19:30:44 +00:00
aec179e2be
Fix docs for logcumsumexp formula (#139768)
...
The previous formula was wrong and reused some indexing variables.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139768
Approved by: https://github.com/janeyx99
2024-11-06 01:19:09 +00:00
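The intended formula is the log of a running sum of exponentials; a quick numerical check (illustrative, not from the PR):

```python
import torch

x = torch.tensor([0.1, 0.5, -0.3])

# logcumsumexp(x)_i = log(sum_{j <= i} exp(x_j)), computed in a numerically stable way.
stable = torch.logcumsumexp(x, dim=0)
naive = torch.log(torch.cumsum(torch.exp(x), dim=0))

assert torch.allclose(stable, naive)
```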
7d644f025f
make equation behind torch.isclose element-wise (#138459)
...
The current formula behind torch.isclose, according to the docs, is

However, torch.isclose acts element-wise, so this formula may be misleading at first, given that the docs say `input` and `other` are the first and second tensors to compare, respectively. I propose the following change to stress the element-wise nature of the norms in the equation:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138459
Approved by: https://github.com/soulitzer
2024-11-01 18:18:33 +00:00
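The element-wise reading of the equation can be verified numerically (a small sketch, not part of the PR):

```python
import torch

a = torch.tensor([1.0, 100.0])
b = torch.tensor([1.0001, 100.5])
rtol, atol = 1e-3, 1e-8

# Element-wise: |a_i - b_i| <= atol + rtol * |b_i|
manual = (a - b).abs() <= atol + rtol * b.abs()
assert torch.equal(torch.isclose(a, b, rtol=rtol, atol=atol), manual)
assert manual.tolist() == [True, False]
```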
7b863230ea
[Docs] Optimize parameter description to declare allowed type (2/N) (#138152)
...
Inspired by issues #137422 and #103847
Optimize method parameter types in the docs to give users a clearer idea of what is expected to be passed to methods.
Previous PR:
- [x] https://github.com/pytorch/pytorch/pull/137956
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138152
Approved by: https://github.com/albanD
2024-10-18 11:18:19 +00:00
b4f7f4bf49
[Docs] Optimize parameter description to declare allowed type (1/N) (#137956)
...
Inspired by issues #137422 and #103847
Optimize method parameter types in the docs to give users a clearer idea of what is expected to be passed to methods.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137956
Approved by: https://github.com/albanD
2024-10-17 01:19:55 +00:00
abb00efc14
Add torch.squeeze parameter description to declare allowed type (#137485)
...
Fixes #137422
Add a parameter type definition in the API docs to clarify the allowed value types, preventing users from passing `None` as the `dim` value directly.
```python
>>> import torch
>>> x = torch.randn(3,1,2)
>>> x.squeeze(dim=None)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Please look up dimensions by name, got: name = None.
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137485
Approved by: https://github.com/albanD
2024-10-09 05:29:13 +00:00
4830bd0dd4
[Doc] Clarify that NaNs are not equal to each other (#137386)
...
Fixes https://github.com/pytorch/pytorch/issues/137337
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137386
Approved by: https://github.com/janeyx99, https://github.com/huydhn, https://github.com/kit1980
2024-10-05 06:19:12 +00:00
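The clarified behavior in miniature (illustrative, not part of the PR):

```python
import torch

nan = torch.tensor([float("nan")])

# IEEE-754: NaN compares unequal to everything, including itself.
assert not torch.equal(nan, nan)
assert not bool(torch.eq(nan, nan))

# Opt in to treating NaNs as equal where an API supports it:
assert bool(torch.isclose(nan, nan, equal_nan=True))
```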
e9d2765ec8
Revert "Add deterministic path for CUDA cumsum (#136224)"
...
This reverts commit d1bb8e828f280d1c66fff193c043d5bc36154577.
Reverted https://github.com/pytorch/pytorch/pull/136224 on behalf of https://github.com/atalman due to Break internal CI ([comment](https://github.com/pytorch/pytorch/pull/136224#issuecomment-2379214226))
2024-09-27 12:54:47 +00:00
d1bb8e828f
Add deterministic path for CUDA cumsum (#136224)
...
Change `cumsum` to call its decomposition when `use_deterministic_algorithms(True)` and input is CUDA.
Fixes #89492
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136224
Approved by: https://github.com/ezyang, https://github.com/justinchuby
2024-09-26 04:52:05 +00:00
e3b89ca124
Revert "Add deterministic path for CUDA cumsum (#136224)"
...
This reverts commit b1a02bf70824a4802411ddd5be1d3610e7a2e269.
Reverted https://github.com/pytorch/pytorch/pull/136224 on behalf of https://github.com/ezyang due to Failing internal CI ([comment](https://github.com/pytorch/pytorch/pull/136224#issuecomment-2374201626))
2024-09-25 14:11:01 +00:00
b1a02bf708
Add deterministic path for CUDA cumsum (#136224)
...
Change `cumsum` to call its decomposition when `use_deterministic_algorithms(True)` and input is CUDA.
Fixes #89492
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136224
Approved by: https://github.com/ezyang, https://github.com/justinchuby
2024-09-24 21:34:43 +00:00
54fc4f56ff
[Docs fix] fix syntax error in docs: torch.blackman_window (#136354)
...
Fixes #ISSUE_NUMBER
https://pytorch.org/docs/stable/generated/torch.blackman_window.html
The error is in `equal to torch.blackman_window(L + 1, periodic=False)[:-1])`: the trailing `)` should be deleted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136354
Approved by: https://github.com/soulitzer
2024-09-24 14:00:26 +00:00
fd182b90a7
Revert "Add deterministic path for CUDA cumsum (#136224)"
...
This reverts commit d45b0151e5d9a9358368b9fbd7fa454edd5d9709.
Reverted https://github.com/pytorch/pytorch/pull/136224 on behalf of https://github.com/atalman due to Failing internal CI ([comment](https://github.com/pytorch/pytorch/pull/136224#issuecomment-2369244135))
2024-09-23 19:57:13 +00:00
d45b0151e5
Add deterministic path for CUDA cumsum (#136224)
...
Change `cumsum` to call its decomposition when `use_deterministic_algorithms(True)` and input is CUDA.
Fixes #89492
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136224
Approved by: https://github.com/ezyang, https://github.com/justinchuby
2024-09-20 02:41:56 +00:00
5ca46be15e
Fix/torch cat doc attr (#135698)
...
The `torch.cat` attribute name for the tensors argument in the docs differs from the method signature, unlike other methods.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135698
Approved by: https://github.com/albanD
Co-authored-by: Alexander Jipa <azzhipa@amazon.com>
2024-09-11 22:32:55 +00:00
90e12cf63d
Fix return type of nansum example (#135435)
...
One of the examples in the documentation of `torch.nansum` contains a wrong return type. This fixes it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135435
Approved by: https://github.com/ezyang
2024-09-09 03:34:52 +00:00