5cedc5a0ff
[BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144552
Approved by: https://github.com/ezyang
2025-08-07 00:09:56 +00:00
dea7ad3371
PEP585 update - torch/testing (#145200)
...
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145200
Approved by: https://github.com/bobrenjc93
2025-01-20 22:42:42 +00:00
3b6b306b71
Migrate from Tuple -> tuple in torch/testing (#144256)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144256
Approved by: https://github.com/aorenste
2025-01-10 06:37:55 +00:00
f984b88718
Ensure noncontiguous tensor creation tests offsetting (#136396)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136396
Approved by: https://github.com/amjames, https://github.com/eellison
ghstack dependencies: #136055
2024-10-02 00:40:43 +00:00
30293319a8
[BE][Easy][19/19] enforce style for empty lines in import segments in torch/[o-z]*/ (#129771)
...
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.
You can review these PRs via:
```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129771
Approved by: https://github.com/justinchuby, https://github.com/janeyx99
2024-08-01 17:07:14 +00:00
67ef2683d9
[BE] wrap deprecated function/class with typing_extensions.deprecated (#127689)
...
Use `typing_extensions.deprecated` for deprecation annotation if possible. Otherwise, add `category=FutureWarning` to `warnings.warn("message")` if the category is missing.
Note that only warnings whose messages contain `[Dd]eprecat(ed|ion)` are updated in this PR.
Resolves #126888. This PR is split from PR #126898.
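A minimal sketch of the two patterns described above, assuming typing_extensions >= 4.5; the function names here are illustrative, not taken from this PR:
```python
import warnings

from typing_extensions import deprecated


# Decorator path: callers of old_fn get a FutureWarning at call time.
@deprecated("old_fn is deprecated, use new_fn instead", category=FutureWarning)
def old_fn() -> None: ...


def legacy_fn() -> None:
    # Fallback path when the decorator does not apply: make sure the warning
    # carries an explicit category.
    warnings.warn(
        "legacy_fn is deprecated, use new_fn instead",
        category=FutureWarning,
        stacklevel=2,
    )
```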
------
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127689
Approved by: https://github.com/Skylion007
2024-06-02 12:30:43 +00:00
033e733021
Revert "[BE] wrap deprecated function/class with typing_extensions.deprecated
( #126898 )"
...
This reverts commit 749a132fb0a8325cbad4734a563aa459ca611991.
Reverted https://github.com/pytorch/pytorch/pull/126898 on behalf of https://github.com/fbgheith due to switching typing-extensions=4.3.0 to 4.9.0 causes internal failure ([comment](https://github.com/pytorch/pytorch/pull/126898#issuecomment-2142884456))
2024-05-31 19:47:24 +00:00
749a132fb0
[BE] wrap deprecated function/class with typing_extensions.deprecated (#126898)
...
Use `typing_extensions.deprecated` for deprecation annotation if possible. Otherwise, add `category=FutureWarning` to `warnings.warn("message")` if the category is missing.
Note that only warnings whose messages contain `[Dd]eprecat(ed|ion)` are updated in this PR.
UPDATE: Use `FutureWarning` instead of `DeprecationWarning`.
Resolves #126888.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126898
Approved by: https://github.com/albanD
2024-05-29 12:09:27 +00:00
01abb5af21
additional support for float8_e4m3fnuz and _e5m2fnuz (#115214)
...
Follow-up to #107586.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115214
Approved by: https://github.com/peterbell10, https://github.com/malfet
2024-01-22 18:33:41 +00:00
b637fdc8b3
Revert "additional support for float8_e4m3fnuz and _e5m2fnuz ( #115214 )"
...
This reverts commit 74e13624998f2a4de29bce73a949d7f0339ec04e.
Reverted https://github.com/pytorch/pytorch/pull/115214 on behalf of https://github.com/PaliC due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/115214#issuecomment-1900815152))
2024-01-19 17:35:04 +00:00
74e1362499
additional support for float8_e4m3fnuz and _e5m2fnuz (#115214)
...
Follow-up to #107586.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115214
Approved by: https://github.com/peterbell10
2024-01-19 00:50:18 +00:00
2e983fcfd3
Support unsigned int for randint, item, equality, fill, iinfo, tensor (#116805)
...
These are some basic utilities that are often used for testing.
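A small smoke test of the utilities named in the title, assuming the unsigned integer dtypes (e.g. `torch.uint16`) are available as this change describes; illustrative only:
```python
import torch

t = torch.randint(0, 1000, (4,), dtype=torch.uint16)  # randint
t.fill_(7)                                             # fill
print(t[0].item())                                     # item -> 7
print(torch.eq(t, 7))                                  # equality
print(torch.iinfo(torch.uint16).max)                   # iinfo -> 65535
print(torch.tensor([1, 2, 3], dtype=torch.uint16))     # tensor
```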
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116805
Approved by: https://github.com/albanD
2024-01-10 02:17:23 +00:00
458e7d09fd
Add meta func for scaled mm (#112609)
...
# Summary
Adds a meta implementation for `_scaled_mm`, which is required for dynamic shapes.
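For context, a tiny illustration of what a meta kernel provides, using `torch.mm` as a stand-in rather than `_scaled_mm` itself (whose fp8 arguments are not shown here): shapes and dtypes propagate on the `meta` device without running any real computation, which is what dynamic-shape tracing needs.
```python
import torch

# Stand-in illustration (torch.mm, not _scaled_mm): meta tensors carry only
# metadata, so the "kernel" merely computes the output shape and dtype.
a = torch.empty(16, 32, device="meta", dtype=torch.float16)
b = torch.empty(32, 64, device="meta", dtype=torch.float16)
out = torch.mm(a, b)          # no data is touched
print(out.shape, out.device)  # torch.Size([16, 64]) meta
```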
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112609
Approved by: https://github.com/eellison, https://github.com/malfet
2023-11-03 03:44:22 +00:00
2e29172942
Revert "Add meta func for scaled mm ( #112609 )"
...
This reverts commit 75174c379712433af1ff810b36e34573b3d2587e.
Reverted https://github.com/pytorch/pytorch/pull/112609 on behalf of https://github.com/huydhn due to Sorry for reverting this change, but it is failing ROCm jobs 75174c3797 ([comment](https://github.com/pytorch/pytorch/pull/112609#issuecomment-1791704037))
2023-11-02 23:37:16 +00:00
75174c3797
Add meta func for scaled mm (#112609)
...
# Summary
Adds a meta implementation for `_scaled_mm`, which is required for dynamic shapes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112609
Approved by: https://github.com/eellison, https://github.com/malfet
2023-11-02 18:42:41 +00:00
129e03905d
disallow invalid value ranges in torch.testing.make_tensor (#96334)
...
Fixes #96179.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96334
Approved by: https://github.com/mruberry
2023-03-24 23:55:17 +00:00
47bfb192a7
deprecate low==high in torch.testing.make_tensor (#96333)
...
Addresses #96179.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96333
Approved by: https://github.com/mruberry
2023-03-24 23:55:17 +00:00
76fb9a1c7f
fix low and high in torch.testing.make_tensor for integral inputs (#96124)
...
Fixes #96178.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96124
Approved by: https://github.com/mruberry
2023-03-24 23:55:17 +00:00
9029361f24
honor low and high for torch.bool in torch.testing.make_tensor (#96332)
...
Fixes #96101.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96332
Approved by: https://github.com/mruberry
2023-03-24 23:55:17 +00:00
303eb37e38
QoL improvements for torch.testing.make_tensor (#96125)
...
Per title. The major ones:
- Enforce keyword-only parameters for `_modify_low_high`, which takes 7 parameters. The current call site (torch/testing/_creation.py, line 147 at 28aa2efd14) is just impossible to comprehend without multiple trips back and forth; see the sketch after this list.
- Improve the error messages by including the offending values in the message.
I'll highlight the smaller ones inline.
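A hedged sketch of the keyword-only style described in the first bullet; the real helper lives in torch/testing/_creation.py and its parameter list may differ:
```python
# Hypothetical signature for illustration; not the actual _modify_low_high.
def _modify_low_high(*, low, high, lowest_inclusive, highest_exclusive,
                     default_low, default_high, dtype):
    if low is not None and high is not None and low > high:
        # Error messages now include the offending values.
        raise ValueError(f"low must not be greater than high, but got {low} > {high}")
    # ... clamping / defaulting logic elided ...
    return low, high


# Call sites must now spell out every argument, which keeps them readable:
# _modify_low_high(low=0, high=None, lowest_inclusive=0, highest_exclusive=256,
#                  default_low=0, default_high=10, dtype=None)
```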
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96125
Approved by: https://github.com/mruberry
2023-03-24 23:55:17 +00:00
ad782ff7df
Enable xdoctest runner in CI for real this time (#83816)
...
Builds on #83317 and enables running the doctests. Just need to figure out what is causing the failures.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83816
Approved by: https://github.com/ezyang, https://github.com/malfet
2022-12-29 05:32:42 +00:00
4baa78bb1f
enable ufmt for torch/testing/*.py (#89525)
...
I've tried to soft-enforce this manually already, albeit with a line length of 120. This just adds it to the CI. Note that this only applies to `torch/testing/*.py` and thus everything under `torch/testing/_internal/**/*` is *not* affected.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89525
Approved by: https://github.com/kit1980
2022-12-01 11:22:48 +00:00
dbeacf1182
Fix cat striding in PrimTorch (#89332)
...
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89332
Approved by: https://github.com/ngimel
2022-11-20 04:05:33 +00:00
3ec71fce79
Improve make_tensor performance for float and complex types (#85473)
...
For floating types, `make_tensor` calls `rand` and then does a linear interpolation from `low` to `high`. This instead calls `uniform_(low, high)` to cut out the interpolation step.
For complex types, `make_tensor` does the `rand` + interpolation step twice and calls `torch.complex(real, imag)` at the end. This instead uses `view_as_real` and `uniform_(low, high)` to fuse it all into one operation.
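A minimal sketch of the approach described above, assuming an in-place `uniform_` fill is an acceptable replacement for the old rand-plus-interpolation path; this is not the exact PR code:
```python
import torch


def _uniform_fill(shape, dtype, device, low, high):
    t = torch.empty(shape, dtype=dtype, device=device)
    # For complex dtypes, fill real and imaginary parts in a single pass
    # through a real view; otherwise fill the tensor in place directly.
    target = torch.view_as_real(t) if t.is_complex() else t
    target.uniform_(low, high)
    return t


x = _uniform_fill((4096,), torch.complex64, "cpu", -9.0, 9.0)
```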
My benchmarks show significant speedups in all cases for float32 and
complex64.
| Device | dtype | Size | Master (us) | This PR (us) | Speedup |
|--------|-----------|-------|-------------|--------------|---------|
| CPU | float32 | 8 | 19.4 | 6.34 | 3.1 |
| | | 4096 | 36.8 | 21.3 | 1.7 |
| | | 2**24 | 167,000 | 80,500 | 2.1 |
| | complex32 | 8 | 37.0 | 7.57 | 4.9 |
| | | 4096 | 73.1 | 37.6 | 1.9 |
| | | 2**24 | 409,000 | 161,000 | 2.5 |
| CUDA | float32 | 8 | 40.4 | 11.7 | 3.5 |
| | | 4096 | 38.7 | 11.7 | 3.3 |
| | | 2**24 | 2,300 | 238 | 9.7 |
| | complex32 | 8 | 78.7 | 14 | 5.6 |
| | | 4096 | 82.7 | 13.8 | 6.0 |
| | | 2**24 | 5,520 | 489 | 11.3 |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85473
Approved by: https://github.com/mruberry
2022-10-05 17:05:20 +00:00
6db3539e70
Revert "Improve make_tensor performance for float and complex types ( #85473 )"
...
This reverts commit a76995e584b880910f0724be98eb21773e8ed6e9.
Reverted https://github.com/pytorch/pytorch/pull/85473 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, but it seems to cause a bunch of flaky tests in pull and periodic
2022-09-29 20:06:52 +00:00
a76995e584
Improve make_tensor performance for float and complex types (#85473)
...
For floating types, `make_tensor` calls `rand` and then does a linear interpolation from `low` to `high`. This instead calls `uniform_(low, high)` to cut out the interpolation step.
For complex types, `make_tensor` does the `rand` + interpolation step twice and calls `torch.complex(real, imag)` at the end. This instead uses `view_as_real` and `uniform_(low, high)` to fuse it all into one operation.
My benchmarks show significant speedups in all cases for float32 and
complex64.
| Device | dtype | Size | Master (us) | This PR (us) | Speedup |
|--------|-----------|-------|-------------|--------------|---------|
| CPU | float32 | 8 | 19.4 | 6.34 | 3.1 |
| | | 4096 | 36.8 | 21.3 | 1.7 |
| | | 2**24 | 167,000 | 80,500 | 2.1 |
| | complex32 | 8 | 37.0 | 7.57 | 4.9 |
| | | 4096 | 73.1 | 37.6 | 1.9 |
| | | 2**24 | 409,000 | 161,000 | 2.5 |
| CUDA | float32 | 8 | 40.4 | 11.7 | 3.5 |
| | | 4096 | 38.7 | 11.7 | 3.3 |
| | | 2**24 | 2,300 | 238 | 9.7 |
| | complex32 | 8 | 78.7 | 14 | 5.6 |
| | | 4096 | 82.7 | 13.8 | 6.0 |
| | | 2**24 | 5,520 | 489 | 11.3 |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85473
Approved by: https://github.com/mruberry
2022-09-29 11:46:09 +00:00
4618371da5
Integrate xdoctest - Rebased (#82797)
...
This is a new version of #15648 based on the latest master branch.
Unlike the previous PR where I fixed a lot of the doctests in addition to integrating xdoctest, I'm going to reduce the scope here. I'm simply going to integrate xdoctest, and then I'm going to mark all of the failing tests as "SKIP". This will let xdoctest run on the dashboards, provide some value, and still let the dashboards pass. I'll leave fixing the doctests themselves to another PR.
In my initial commit, I do the bare minimum to get something running with failing dashboards. The few tests that I marked as skip are causing segfaults. Running xdoctest results in 293 failed, 201 passed tests. The next commits will be to disable those tests. (unfortunately I don't have a tool that will insert the `#xdoctest: +SKIP` directive over every failing test, so I'm going to do this mostly manually.)
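For reference, a minimal illustration of the skip directive mentioned above (not a snippet from this PR); xdoctest honors inline directives placed inside docstring examples:
```python
def add(a, b):
    """
    Example:
        >>> # xdoctest: +SKIP
        >>> add(1, 2)
        3
    """
    return a + b
```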
Fixes https://github.com/pytorch/pytorch/issues/71105
@ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82797
Approved by: https://github.com/ezyang
2022-08-12 02:08:01 +00:00
8571007017
[chalf] div, eq, masked_fill, index_put (#77479)
...
Ref: https://github.com/pytorch/pytorch/issues/74537
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77479
Approved by: https://github.com/anjali411
2022-05-18 17:01:08 +00:00
3269729c68
[complex32] make_tensor
...
Update `make_tensor` so that it can generate a `complex32` tensor.
**Note**: This doesn't enable `complex32` tests in the OpInfo test suite but only updates `make_tensor` to generate it. Enabling `complex32` in the test suite will be done in later PRs.
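A minimal usage sketch, assuming the `complex32` support this commit adds:
```python
import torch
from torch.testing import make_tensor

t = make_tensor((2, 3), dtype=torch.complex32, device="cpu")
print(t.dtype)  # torch.complex32
```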
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74854
Approved by: https://github.com/pmeier, https://github.com/anjali411
2022-03-30 01:05:34 +00:00
0973c5a1cc
align signature of make_tensor with other creation ops (#72702)
...
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72702
Test Plan: Imported from OSS
Reviewed By: mrshenli
Differential Revision: D34457729
Pulled By: mruberry
fbshipit-source-id: 83d580c4201eef946dc9cf4b9e28a3d36be55609
(cherry picked from commit aa4cf20fbeb4b795595729b8ac2e6ba7707d8283)
2022-02-25 06:30:31 +00:00
a72a6365c9
disallow requires_grad=True in make_tensor for integral inputs (#67149)
...
Summary:
per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67149
Reviewed By: albanD
Differential Revision: D31928613
Pulled By: ngimel
fbshipit-source-id: 4491954c4fcd4a4e3121155d4451cc7370c27a0b
2021-10-26 16:19:28 -07:00
d37636901e
[Doc] make_tensor to torch.testing module (#63925)
...
Summary:
This PR aims to add `make_tensor` to the `torch.testing` module in PyTorch docs.
TODOs:
* [x] Add examples
cc: pmeier mruberry brianjo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63925
Reviewed By: ngimel
Differential Revision: D30633487
Pulled By: mruberry
fbshipit-source-id: 8e5a1f880c6ece5925b4039fee8122bd739538af
2021-08-30 12:25:40 -07:00