0ec723acd0
Update docs for quantile to be clearer for nearest ( #162423 )
...
Correct the rounding scheme for nearest in quantile.
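For context, a minimal sketch of the 'nearest' interpolation mode (illustrative values, not taken from the PR):
```python
import torch

a = torch.tensor([0.0, 1.0, 2.0, 3.0])
# q = 0.4 maps to fractional index 0.4 * (4 - 1) = 1.2,
# which 'nearest' rounds to index 1.
torch.quantile(a, 0.4, interpolation="nearest")  # tensor(1.)
```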
Pull Request resolved: https://github.com/pytorch/pytorch/pull/162423
Approved by: https://github.com/soulitzer
2025-09-09 18:04:12 +00:00
9480cdc0b6
Modified the docs to add example for torch.is_floating_point and torch.is_complex ( #161951 )
...
The PR adds a simple, self-explanatory example to the documentation page. The example demonstrates each function's output for tensors with various data types, showing both True and False return values.
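The added example is along these lines (a sketch, not necessarily the exact snippet that was merged):
```python
import torch

torch.is_floating_point(torch.tensor([1.0], dtype=torch.float32))  # True
torch.is_floating_point(torch.tensor([1], dtype=torch.int64))      # False
torch.is_complex(torch.tensor([1 + 2j], dtype=torch.complex64))    # True
torch.is_complex(torch.tensor([1.0], dtype=torch.float32))         # False
```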
Fixes #161859
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161951
Approved by: https://github.com/zou3519
2025-09-04 18:50:19 +00:00
09587daf8c
Add missing example for torch.full_like (Issue #161899) ( #162051 )
...
Fixes #161899
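The missing example is roughly of this shape (a sketch; the merged example may differ):
```python
import torch

x = torch.zeros(2, 3, dtype=torch.int64)
torch.full_like(x, 7)
# tensor([[7, 7, 7],
#         [7, 7, 7]])
```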
Pull Request resolved: https://github.com/pytorch/pytorch/pull/162051
Approved by: https://github.com/zou3519
2025-09-04 08:45:49 +00:00
80dd397f19
Argsort doc stable kwargs ( #161986 )
...
Fixes #129311
Updated torch.argsort documentation to reflect that the 'stable' parameter is a keyword argument rather than a positional parameter.
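A small sketch of passing `stable` as a keyword argument:
```python
import torch

t = torch.tensor([1, 0, 1, 0])
torch.argsort(t, stable=True)  # tensor([1, 3, 0, 2]) -- ties keep their original order
```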
@albanD, @soulitzer
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161986
Approved by: https://github.com/soulitzer
2025-09-02 20:42:53 +00:00
b99a112688
Update optional tag for interpolation in torch.quantile() ( #161706 )
...
Fixes #146156
Re-applies the fix for the issue together with the additional change that was needed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161706
Approved by: https://github.com/soulitzer
2025-08-29 16:21:14 +00:00
620d52e882
Fix sort doc error ( #161539 )
...
Fixes #129298. Updated torch.sort documentation so that the 'stable' parameter is documented as a keyword argument, which is how it is implemented in PyTorch.
@malfet
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161539
Approved by: https://github.com/soulitzer
2025-08-27 17:01:53 +00:00
cd87f30295
DOC: Clarify documentation for torch.matmul and fix a typo ( #161424 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161424
Approved by: https://github.com/AlannaBurke
2025-08-26 18:30:57 +00:00
dae7710bf2
[cuda][cupy] Improve cupy device placement when device is provided with explicit index ( #158529 )
...
Resubmit of https://github.com/pytorch/pytorch/pull/158320, fixing a potential bug when the device index is not specified explicitly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158529
Approved by: https://github.com/ezyang
2025-08-15 00:27:42 +00:00
fe3f5fe4ea
Optimize min, max gradient behavior description ( #160312 )
...
Fixes #160273
## Test Result
<img width="897" height="593" alt="image" src="https://github.com/user-attachments/assets/6ebcdb2c-8a2c-4f0d-8195-656089e88325 " />
<img width="985" height="653" alt="image" src="https://github.com/user-attachments/assets/606a7264-e223-4d2b-8c3f-f153ce43b208 " />
<img width="903" height="607" alt="image" src="https://github.com/user-attachments/assets/0ae2f56f-820f-4194-b15c-a02a078c0487 " />
<img width="903" height="607" alt="image" src="https://github.com/user-attachments/assets/79c38a17-45ac-4808-829f-d538178de36b " />
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160312
Approved by: https://github.com/ngimel
2025-08-14 04:18:49 +00:00
87e6c4079d
Fix the Doc issue on the description of edge_order in torch.gradient() ( #159130 )
...
Fixes #159129
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159130
Approved by: https://github.com/soulitzer
2025-08-13 16:48:47 +00:00
4e0f179d0b
Update the signature and test of torch.hamming_window() ( #152682 )
...
Fixes #146590
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152682
Approved by: https://github.com/albanD
2025-08-04 17:50:42 +00:00
38895c0ac2
Update RuntimeError message in is_nonzero(input) method from bool to Boolean ( #159712 )
...
The RuntimeError message in the is_nonzero(input) method is updated from 'bool' to 'Boolean'.
**Case 1:**
t = torch.tensor([])
torch.is_nonzero(t)
**Case 2:**
t = torch.tensor([1,2])
torch.is_nonzero(t)
**Existing Error message in documentation:**
for case 1: RuntimeError: bool value of Tensor with no values is ambiguous
for case 2: RuntimeError: bool value of Tensor with more than one value is ambiguous
**Proposed Error message in documentation:**
for case 1: RuntimeError: Boolean value of Tensor with no values is ambiguous
for case 2: RuntimeError: Boolean value of Tensor with more than one value is ambiguous
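A quick sketch reproducing both messages:
```python
import torch

for t in (torch.tensor([]), torch.tensor([1, 2])):
    try:
        torch.is_nonzero(t)
    except RuntimeError as e:
        print(e)
# Boolean value of Tensor with no values is ambiguous
# Boolean value of Tensor with more than one value is ambiguous
```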
Fixes #159710
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159712
Approved by: https://github.com/malfet
2025-08-02 17:23:45 +00:00
7f649ed4f8
Add basic torch.hash_tensor op ( #154149 )
...
Added `torch.hash_tensor` reduction function with a `mode` argument that defaults to reduction with xor.
- The hash is always uint64.
- Integers will be cast to uint64 before performing the xor_sum reduction
- Floats will be upcast to double and then bitcast to uint64 before performing the xor_sum reduction
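Based on the description above, usage might look like this (a sketch only; the `mode` argument is left at its default):
```python
import torch

x = torch.arange(8)
h = torch.hash_tensor(x)  # xor_sum reduction by default
print(h.dtype)            # torch.uint64
```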
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154149
Approved by: https://github.com/albanD
2025-07-23 22:28:03 +00:00
b66f429827
Fix torch.randint, torch.mul param missing description ( #158731 )
...
A wrong separator caused the param descriptions to be truncated.
- Change the separator between each param and its description
- Remove quotes so that `torch.dtype` is displayed as a reference to the class
## Test Result
(Before/after screenshots of the rendered docs omitted.)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158731
Approved by: https://github.com/ngimel
2025-07-21 20:17:27 +00:00
944a140e90
Revert "[cuda][cupy] Improve cupy device placement when device is provided ( #158320 )"
...
This reverts commit 59f9b25f3cfc635053843372ea29ff4bf754da3f.
Reverted https://github.com/pytorch/pytorch/pull/158320 on behalf of https://github.com/wdvr due to reverting because most likely causing test/test_numba_integration.py::TestNumbaIntegration::test_from_cuda_array_interface_inferred_strides to fail ([comment](https://github.com/pytorch/pytorch/pull/158320#issuecomment-3079960616 ))
2025-07-16 19:15:33 +00:00
59f9b25f3c
[cuda][cupy] Improve cupy device placement when device is provided ( #158320 )
...
This is an improvement over https://github.com/pytorch/pytorch/pull/132595. That PR improves the case where `device` is not given. This PR tries to improve the case where `device` is given but the first step of auto-inferring the device from `cudaPointerGetAttributes` can be wrong (undesired). See https://github.com/pytorch/pytorch/issues/158316 for more details on when this can happen.
I think this is a reasonable improvement, as people expect `torch.as_tensor` + cupy should be zero-copy as much as possible. However, it does change some behaviors, because previously it might incur a device-to-device copy.
I will leave it to pytorch developers to see if the improvement is worthwhile.
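A sketch of the scenario this targets (assumes cupy is installed and at least two CUDA devices are visible):
```python
import cupy
import torch

with cupy.cuda.Device(1):
    a = cupy.zeros((3, 3))

# Explicit device index: torch.as_tensor should keep the tensor on the
# array's own device and stay zero-copy instead of copying across devices.
t = torch.as_tensor(a, device="cuda:1")
```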
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158320
Approved by: https://github.com/ezyang
2025-07-16 07:12:36 +00:00
05d7288e31
Fix incorrect bin edge description in histogramdd docs ( #158275 )
...
Fixes #124435
This updates the torch.histogramdd documentation to correctly state that bins are inclusive of their left edges, not exclusive as currently written. There was a previous PR addressing this but it was closed due to inactivity. This picks that up and applies the fix.
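A small sketch of the corrected wording (illustrative values, assuming the standard NumPy-style binning):
```python
import torch

x = torch.tensor([[0.0], [0.5], [1.0]])
hist, edges = torch.histogramdd(x, bins=[2], range=[0.0, 1.0])
# edges[0] == tensor([0.0, 0.5, 1.0]); 0.5 lands in the second bin because each
# bin includes its left edge, and the last bin also includes its right edge.
print(hist)  # tensor([1., 2.])
```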
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158275
Approved by: https://github.com/albanD
2025-07-15 16:25:01 +00:00
0f21fa84fb
Documentation Fix: torch.empty_like memory preservation ( #158050 )
...
updated docs for torch.empty_like to reflect view and dense memory behavior
Fixes #158022
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158050
Approved by: https://github.com/ngimel , https://github.com/cyyever
2025-07-14 06:02:54 +00:00
b4fc42ca80
Add torch.segment_reduce docs ( #154352 )
...
Fixes #153138
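For context, the op being documented is roughly used like this (a sketch; argument names assumed from the prototype API):
```python
import torch

data = torch.tensor([1.0, 2.0, 3.0, 4.0])
lengths = torch.tensor([2, 2])
torch.segment_reduce(data, "sum", lengths=lengths)  # tensor([3., 7.])
```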
## Test Result

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154352
Approved by: https://github.com/albanD
2025-07-11 06:16:38 +00:00
e172309880
Documentation Fix: Torch gather broadcasting ( #157920 )
...
updated torch gather docs to reflect proper broadcasting behavior for specific backends
Fixes #157425
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157920
Approved by: https://github.com/albanD
2025-07-10 19:08:51 +00:00
4cc8b60d1b
[BE][1/16] fix typos in torch/ ( #156311 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156311
Approved by: https://github.com/albanD
2025-07-09 11:02:22 +00:00
130d4973bd
Documentation update torch.clone #156644 ( #157007 )
...
updated torch clone docs to reflect implemented memory behavior
Fixes #156644
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157007
Approved by: https://github.com/malfet , https://github.com/svekars
Co-authored-by: Svetlana Karslioglu <svekars@meta.com >
2025-06-27 21:10:09 +00:00
dfef1e4408
Optimize dim description in torch.max ( #156153 )
...
Fixes #156071
## Test Result
(Before/after screenshots of the rendered docs omitted.)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156153
Approved by: https://github.com/albanD
2025-06-24 20:50:40 +00:00
53cd18f6b3
Update gradient behavior note in torch.amin and torch.amax ( #155071 )
...
Fixes #155048
The behavior of `min` and `max` was changed in #43519. The note about gradient behavior in the torch.amin and torch.amax docs is updated to reflect this change:
New note:
`amax, amin, max(dim), min(dim) evenly distributes gradient between equal values
when there are multiple input elements with the same minimum or maximum value.`
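The behavior the new note describes, as a short sketch:
```python
import torch

x = torch.tensor([1.0, 1.0, 0.0], requires_grad=True)
x.amax().backward()
print(x.grad)  # tensor([0.5000, 0.5000, 0.0000]) -- gradient split evenly between the tied maxima
```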
cc - @spzala @svekars @soulitzer @sekyondaMeta @AlannaBurke @ezyang @gqchen @nikitaved @Varal7 @xmfan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155071
Approved by: https://github.com/soulitzer
2025-06-15 16:09:31 +00:00
8817e5ac80
Render Example: and not Example:: in docs ( #153978 )
...
Everything here is a grep except the changes in tools/autograd/load_derivatives.py which I manually corrected.
The correct notation is:
```
Example::

    >>> ...
```
It is common and wrong to have:
```
Example::
>>> ...
```
In the wrong example, we get these pesky double colons:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153978
Approved by: https://github.com/soulitzer , https://github.com/malfet
2025-05-21 01:03:26 +00:00
27f7b65a69
[BE] Ensure generated stub files by gen_pyi are properly formatted ( #150730 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150730
Approved by: https://github.com/aorenste
2025-05-17 12:30:40 +00:00
7cb5c751c3
Fix the basic description of torch.min(), torch.max(), torch.all(), torch.any() ( #152658 )
...
Fixes #152176
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152658
Approved by: https://github.com/malfet
2025-05-08 22:59:14 +00:00
e2f9759bd0
Fix broken URLs ( #152237 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152237
Approved by: https://github.com/huydhn , https://github.com/malfet
2025-04-27 09:56:42 +00:00
0f9b02c839
[Easy][torch.Event] Fix and improve the docs of torch.Event ( #151411 )
...
**Changes:**
- add detailed function or class signature
- fix the wrong display of torch.Event.wait and torch.Event.record
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151411
Approved by: https://github.com/albanD
ghstack dependencies: #151404 , #151221
2025-04-26 13:52:38 +00:00
191b0237a6
Added to docs for out_dtype arg in torch gemms ( #151704 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151704
Approved by: https://github.com/bdhirsh
2025-04-21 20:09:17 +00:00
c4482565cc
Revert "[Easy][torch.Event] Fix and improve the docs of torch.Event ( #151411 )"
...
This reverts commit 1e1d0a4be63b354f762ee21bdccec03c1e5b371c.
Reverted https://github.com/pytorch/pytorch/pull/151411 on behalf of https://github.com/malfet due to breaking ROCm tests, see 92baeecbdd ([comment](https://github.com/pytorch/pytorch/pull/151221#issuecomment-2816883409))
2025-04-19 22:06:24 +00:00
1e1d0a4be6
[Easy][torch.Event] Fix and improve the docs of torch.Event ( #151411 )
...
**Changes:**
- add detailed function or class signature
- fix the wrong display of torch.Event.wait and torch.Event.record
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151411
Approved by: https://github.com/albanD
ghstack dependencies: #151226 , #151221
2025-04-19 12:21:02 +00:00
9a2624c712
Fix keepdim param optional description ( #151197 )
...
Fixes #151104
Fix optional description of `dim` and `keepdim`, except for `torch.quantile`, which was already fixed in #146485
## Test Result
(Before/after screenshots of the rendered docs omitted.)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151197
Approved by: https://github.com/mikaylagawarecki
2025-04-16 23:15:30 +00:00
4518b30680
Clarify that x and dx are mutually exclusive in torch.trapezoid doc ( #151190 )
...
This PR addresses [#151105](https://github.com/pytorch/pytorch/issues/151105) by stating that x and dx are mutually exclusive parameters in torch.trapezoid().
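A short sketch of the two mutually exclusive ways to describe the sample spacing:
```python
import torch

y = torch.tensor([1.0, 2.0, 3.0])

torch.trapezoid(y, dx=0.5)                           # uniform spacing between samples
torch.trapezoid(y, x=torch.tensor([0.0, 1.0, 3.0]))  # explicit sample points
# Use either x or dx, not both.
```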
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151190
Approved by: https://github.com/soulitzer
2025-04-15 21:42:05 +00:00
5a422150c3
Add torch.triu_indices, torch.tril_indices dtype description ( #150749 )
...
Fixes #150675
## Test Result

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150749
Approved by: https://github.com/bdhirsh
2025-04-09 15:03:24 +00:00
732f9d7435
Optimize torch.equal description ( #149618 )
...
Fixes #149222
## Test Result

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149618
Approved by: https://github.com/zou3519
2025-03-21 03:44:49 +00:00
1bdbf12672
Update as strided doc ( #149146 )
...
Make it clearer why it is not recommended to use it and when the resulting Tensor will have undefined behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149146
Approved by: https://github.com/gchanan , https://github.com/jbschlosser
2025-03-14 19:49:57 +00:00
75d29443e7
[Docs] update bucketize documentation ( #148400 )
...
Fixes #144504
Clarify the documentation for `torch.bucketize` by referencing the existing table. The current version includes a somewhat confusing explanation for the `right` kwarg, whereas the existing table is much clearer.
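The behavior the table summarizes, shown as a sketch:
```python
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
v = torch.tensor([3, 6, 9])

torch.bucketize(v, boundaries)              # tensor([1, 3, 4])  values equal to a boundary stay in the lower bucket
torch.bucketize(v, boundaries, right=True)  # tensor([2, 3, 5])  values equal to a boundary move to the upper bucket
```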
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148400
Approved by: https://github.com/benjaminglass1 , https://github.com/eellison , https://github.com/albanD
2025-03-06 22:07:52 +00:00
6a72aaadae
Fix torch.max optional args dim, keepdim description ( #147177 )
...
[`torch.max`](https://pytorch.org/docs/stable/generated/torch.max.html#torch.max ) optional args `dim`, `keepdim` not described in document, but users can ignore them.
```python
>>> import torch
>>> a = torch.randn(3,1,3)
>>> a.max()
tensor(1.9145)
>>> a.max(dim=1)
torch.return_types.max(
values=tensor([[ 1.1436, -0.0728, 1.3312],
[-0.4049, 0.1792, -1.2247],
[ 0.8767, -0.7888, 1.9145]]),
indices=tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0]]))
```
## Changes
- Add `optional` description for `dim`, `keepdim`
- Add example of using `dim`, `keepdim`
## Test Result
(Before/after screenshots of the rendered docs omitted.)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/147177
Approved by: https://github.com/colesbury
2025-02-20 08:18:09 +00:00
bae049b439
Update addr doc ( #146482 )
...
Fixes https://github.com/pytorch/pytorch/issues/146399
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146482
Approved by: https://github.com/janeyx99
2025-02-18 23:25:38 +00:00
80f146dedf
Update addbmm, addmm, addmv and baddbmm description ( #146689 )
...
Fixes #146611 , following #146482
## Test Result

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146689
Approved by: https://github.com/mikaylagawarecki
2025-02-13 01:30:50 +00:00
0c9fdd6cfb
[Docs] Fix description of input in torch.addbmm() ( #146664 )
...
Fixes #146613
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146664
Approved by: https://github.com/mikaylagawarecki
2025-02-11 01:22:09 +00:00
e8304f08fe
Fix torch.take_along_dim param type and default description ( #146474 )
...
## Changes
- Change type description to `LongTensor`, consistent with [`torch.take`](https://pytorch.org/docs/stable/generated/torch.take.html )
- Add `dim` param default value description
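A minimal sketch of the call (the index tensor must be a LongTensor):
```python
import torch

t = torch.tensor([[10, 30, 20]])
idx = torch.argsort(t, dim=1)        # LongTensor of indices
torch.take_along_dim(t, idx, dim=1)  # tensor([[10, 20, 30]])
```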
## Test Result
(Before/after screenshots of the rendered docs omitted.)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146474
Approved by: https://github.com/mikaylagawarecki
2025-02-10 01:19:30 +00:00
a3ca5c7f4e
remove incorrect warnings from min/max documentation ( #146725 )
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146725
Approved by: https://github.com/wdvr , https://github.com/malfet
2025-02-08 05:10:08 +00:00
a7c2d85c18
Add overloads to diagonal docs ( #144214 )
...
Fixes #126827. Refactored the doc to demonstrate when none of the optional values are passed in. Added another example so that all overloads of the function are covered.
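For reference, the kinds of calls the added examples cover (a sketch):
```python
import torch

x = torch.arange(9).reshape(3, 3)
torch.diagonal(x)            # main diagonal, no optional args: tensor([0, 4, 8])
torch.diagonal(x, offset=1)  # first super-diagonal: tensor([1, 5])
```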
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144214
Approved by: https://github.com/albanD
2025-01-31 15:53:59 +00:00
ad36f4f42c
Revert "Add generator parameter to rand*_like functions ( #136780 )"
...
This reverts commit c7b2f7dd142fc97c8ce4ad7ad591687cf295fcda.
Reverted https://github.com/pytorch/pytorch/pull/136780 on behalf of https://github.com/izaitsevfb due to internal regression ([comment](https://github.com/pytorch/pytorch/pull/136780#issuecomment-2613191933 ))
2025-01-24 19:00:21 +00:00
f2cfe8b59f
PEP585 update - mostly toplevels ( #145178 )
...
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145178
Approved by: https://github.com/bobrenjc93
2025-01-22 02:21:14 +00:00
c7b2f7dd14
Add generator parameter to rand*_like functions ( #136780 )
...
Fixes #128786
Fixes #101974
Fixes #27072
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136780
Approved by: https://github.com/Chillee , https://github.com/ezyang
2025-01-15 21:16:52 +00:00
6de110b862
Support with statement on torch.Stream ( #140138 )
...
# Motivation
We propose to support the Python with statement on `torch.Stream`. This benefits all accelerators when writing device-agnostic code. Device-specific streams will also be supported because they are generally derived from `torch.Stream`.
With this PR, we can do the following:
```python
s1 = torch.Stream()
# Set s1 to the current stream
torch.accelerator.set_stream(s1)
with torch.Stream() as s2:
    # Inside with statement, we set s2 to the current stream
    assert torch.accelerator.current_stream() == s2
# Here the current stream should be s1
assert torch.accelerator.current_stream() == s1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/140138
Approved by: https://github.com/albanD
2025-01-10 02:05:19 +00:00
e4a05dec0f
[BE][Ez]: Fix docs recommending inefficient tensor op order ( #144270 )
...
`detach().clone()` is faster than `.clone().detach()` since the gradients are not cloned. Let's update all the documentation and tests so that users do not use the inefficient op ordering.
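The recommended ordering, as a sketch:
```python
import torch

x = torch.randn(3, requires_grad=True)

y = x.detach().clone()  # preferred: detach first, so the clone is never tracked by autograd
z = x.clone().detach()  # works, but the clone op is recorded by autograd before being detached
```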
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144270
Approved by: https://github.com/awgu , https://github.com/XuehaiPan
2025-01-07 17:31:32 +00:00