183 Commits

Author SHA1 Message Date
e925dfcc6b Enable all SIM rules except disabled ones (#164645)
`SIM` rules are useful for simplifying boolean expressions and enhancing code readability.
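For illustration, a minimal sketch of the kind of rewrite these rules suggest (using SIM103, "needless bool", as one representative rule; the function is a made-up example):

```python
# Flagged by ruff's SIM103 ("needless bool"): the if/else only forwards
# the truth value of the condition.
def is_adult(age: int) -> bool:
    if age >= 18:
        return True
    else:
        return False

# The simplification the rule suggests: return the condition directly.
def is_adult_simplified(age: int) -> bool:
    return age >= 18
```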

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164645
Approved by: https://github.com/ezyang, https://github.com/mlazos
2025-10-17 07:27:11 +00:00
228973df7f Fix channels-last dimension mapping in CUDA parallel_cat (#165023)
Fixes #164849
`dimension` was updated in place, so for more than one batch of channels-last tensors, the concat `dimension` for the second kernel launch was wrong.
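A hypothetical Python sketch of the bug pattern (the real code is the CUDA `parallel_cat` launcher; the names and mapping table here are illustrative):

```python
# NCHW logical dim -> its position in NHWC (channels-last) memory order.
NCHW_TO_NHWC = {0: 0, 1: 3, 2: 1, 3: 2}

def launch_buggy(num_launches: int, dimension: int) -> list[int]:
    used = []
    for _ in range(num_launches):
        dimension = NCHW_TO_NHWC[dimension]  # in-place update leaks into the next launch
        used.append(dimension)
    return used

def launch_fixed(num_launches: int, dimension: int) -> list[int]:
    used = []
    for _ in range(num_launches):
        used.append(NCHW_TO_NHWC[dimension])  # remap into a local each launch
    return used

print(launch_buggy(2, 1))  # [3, 2] -- second launch concatenates along the wrong dim
print(launch_fixed(2, 1))  # [3, 3]
```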

## Testing
- `python -m compileall test/test_tensor_creation_ops.py`

------
https://chatgpt.com/codex/tasks/task_e_68e708879b30832f89b10ae55faa68e8
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165023
Approved by: https://github.com/ezyang
2025-10-09 20:04:32 +00:00
5d7360bb03 Revert "Enable all SIM rules except disabled ones (#164645)"
This reverts commit 321e6026925f6b6e8a36e3a8b7c0295cd7541911.

Reverted https://github.com/pytorch/pytorch/pull/164645 on behalf of https://github.com/izaitsevfb due to causes lint failures ([comment](https://github.com/pytorch/pytorch/pull/164645#issuecomment-3369274351))
2025-10-05 19:32:21 +00:00
321e602692 Enable all SIM rules except disabled ones (#164645)
`SIM` rules are useful for simplifying boolean expressions and enhancing code readability.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164645
Approved by: https://github.com/ezyang
2025-10-05 07:38:25 +00:00
14f8d86136 Reland #161649, vectorize stored in cat for all dtypes (#162440)
Per title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162440
Approved by: https://github.com/Skylion007
2025-09-18 13:50:44 +00:00
a92773eeb1 Revert "Use vectorized stores for all dtypes in cat (#161649)"
This reverts commit 377033757ae5ca524ea842f1b0a5f446ed3d8fe0.

Reverted https://github.com/pytorch/pytorch/pull/161649 on behalf of https://github.com/ngimel due to reverted internally ([comment](https://github.com/pytorch/pytorch/pull/161649#issuecomment-3266963044))
2025-09-08 15:58:58 +00:00
bc505977fb torch.zeros bound checks for symint (#161976)
Fixes #161490

I added a bounds check for negative symints to create a better error message.
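A sketch of the user-facing effect (the exact message text is an assumption; the PR's point is that a negative size on the symint path now reports the offending value instead of failing deeper in the stack):

```python
import torch

try:
    torch.zeros(-1)
except RuntimeError as e:
    # e.g. "Trying to create tensor with negative dimension -1: [-1]"
    print(e)
```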

Pull Request resolved: https://github.com/pytorch/pytorch/pull/161976
Approved by: https://github.com/ezyang
2025-09-06 05:37:42 +00:00
377033757a Use vectorized stores for all dtypes in cat (#161649)
resurrecting #151818

Pull Request resolved: https://github.com/pytorch/pytorch/pull/161649
Approved by: https://github.com/Skylion007
2025-08-31 05:42:41 +00:00
e015de1969 Revert "Use vectorized stores for all dtypes (#161649)"
This reverts commit f0a517e333d6204f560d8061a4f70523060c93bf.

Reverted https://github.com/pytorch/pytorch/pull/161649 on behalf of https://github.com/ngimel due to buggy ([comment](https://github.com/pytorch/pytorch/pull/161649#issuecomment-3238895967))
2025-08-30 03:13:40 +00:00
f0a517e333 Use vectorized stores for all dtypes (#161649)
resurrecting #151818

Pull Request resolved: https://github.com/pytorch/pytorch/pull/161649
Approved by: https://github.com/Skylion007
2025-08-28 20:06:29 +00:00
fc0376e8b1 [BE][2/6] fix typos in test/ (test/test_*.py) (#157636)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157636
Approved by: https://github.com/yewentao256, https://github.com/mlazos
ghstack dependencies: #156311, #156609
2025-07-09 11:02:23 +00:00
034a7f6437 [BE] Raise better exception in torch.[con]cat[enate] (#155460)
By replacing `TORCH_CHECK` with `TORCH_CHECK_VALUE`

Also make redispatching from aliases even simpler, by just calling the respective original class.

Addresses feedback raised in https://github.com/pytorch/pytorch/pull/155383/files#r2133952368
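The visible effect on the Python side, sketched under the assumption that the build includes this change: the error surfaces as `ValueError` (what `TORCH_CHECK_VALUE` maps to) instead of the generic `RuntimeError` raised by `TORCH_CHECK`:

```python
import torch

try:
    torch.cat([])
except ValueError as e:  # was RuntimeError before TORCH_CHECK_VALUE
    print(e)
```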

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155460
Approved by: https://github.com/Skylion007, https://github.com/albanD
2025-06-09 22:18:00 +00:00
3d82a1dfb5 Add checks for empty tensor list (#155383)
Vibe-coded with Codex after collecting a backtrace; see https://chatgpt.com/s/cd_68438be8a1248191adbfa0a5f000e60b

Even though a check for an empty tensor list exists in `at::cat`, a crash might still happen while resolving a named dimension to a position by calling `dimname_to_position(tensors[0], dim)`; see the backtrace below:
```
(lldb) up
frame #1: 0x00000001101146dc libtorch_cpu.dylib`at::TensorBase::has_names(this=0x0000000000000000) const at TensorBase.h:559:10
   556 	  bool has_names() const {
   557 	    // If a user is using unnamed tensors, then we can short-circuit right here.
   558 	    // Otherwise, impl::has_names attempts to retrieve names.
-> 559 	    if (!impl_->has_named_tensor_meta()) {
   560 	      return false;
   561 	    }
   562 	    return impl::has_names(unsafeGetTensorImpl());
(lldb) up
frame #2: 0x00000001101144c4 libtorch_cpu.dylib`at::dimname_to_position(tensor=0x0000000000000000, dim=Dimname @ 0x000000016fdfe348) at NamedTensorUtils.cpp:23:3
   20  	int64_t dimname_to_position(const Tensor& tensor, Dimname dim) {
   21  	  TORCH_CHECK(dim.type() != NameType::WILDCARD,
   22  	      "Please look up dimensions by name, got: name = None.");
-> 23  	  TORCH_CHECK(tensor.has_names(),
   24  	      "Name ", dim, " not found in ", toDimnameRepr(tensor), ".");
   25  	  const auto names = tensor.names();
   26
```
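A minimal repro sketch of the crash path described above (assuming a named `dim` is what routes through `dimname_to_position`):

```python
import torch

# Before the fix, the named-dim lookup dereferenced tensors[0] on an empty
# list; after it, this raises a regular Python exception instead of crashing.
try:
    torch.cat([], dim="N")
except (RuntimeError, ValueError) as e:
    print(e)
```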

TODOs:
 - Maybe move the test from `test_tensor_creation.py` to OpInfo (not sure which one is more readable)
 - Replace `TORCH_CHECK` with `TORCH_CHECK_VALUE` and adjust unit tests

Fixes https://github.com/pytorch/pytorch/issues/155306
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155383
Approved by: https://github.com/cyyever, https://github.com/ezyang
ghstack dependencies: #155382
2025-06-08 18:53:19 +00:00
9bf6593e96 Fix docstring for torch.UntypedStorage.from_file (#155067)
Fixes #130629

Happy to revert the second commit if we think it's making the test too fragile for the future

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155067
Approved by: https://github.com/malfet
2025-06-05 14:30:49 +00:00
9d3ad82ca7 [dynamo] Remove all skipIfTorchDynamo in test_tensor_creation_ops.py (#154693)
Looks like they are no longer needed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154693
Approved by: https://github.com/Skylion007, https://github.com/zou3519
2025-06-02 20:14:35 +00:00
f9dc20c7a3 [dynamo] Fix syntax error in aot graph from kwarg-less torch.Tensor.[random_|uniform_] calls (#154645)
As title, fixes #151432, see more context in the issue discussion.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154645
Approved by: https://github.com/zou3519
2025-05-30 18:50:58 +00:00
dacdc9782b [Inductor] Add input value checking to randint meta function (#147191)
Fixes #147070

Adding value checking for the range to the meta function, similar to what is done in the CUDA/CPU aten op.
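For illustration, the eager-mode check the meta function now mirrors (a sketch; the exact message may differ):

```python
import torch

try:
    torch.randint(5, 5, (3,))  # empty range: low must be < high
except RuntimeError as e:
    print(e)  # e.g. "random_ expects 'from' to be less than 'to' ..."
```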

Test with
```
PYTORCH_TEST_WITH_DYNAMO=1 pytest test/test_tensor_creation_ops.py -k test_randint_inference
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/147191
Approved by: https://github.com/leslie-fang-intel, https://github.com/jansel
2025-02-25 02:18:16 +00:00
0acbf8039a [BE] Unskip some tensor creation tests on Mac (#146952)
Followup after https://github.com/pytorch/pytorch/pull/145367

One should never use skip, but rather xfail; otherwise one never knows when a test is finally fixed.

`test_float_to_int_conversion_finite` was fixed on macOS a while back (presumably around the time Intel builds were disabled), while `test_float_to_int_conversion_nonfinite` is fixed by https://github.com/pytorch/pytorch/pull/145367, which selects architecture-appropriate reference values for the Arm ISA.

Note that the result of casting a floating-point value to an integral type is undefined if the value is outside the integral type's dynamic range.
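A small demonstration of why the reference values must be architecture-specific (a sketch; the printed integers are intentionally unspecified):

```python
import torch

src = torch.tensor([float("inf"), float("-inf"), float("nan")])
# Undefined behavior: the integers produced here differ between x86 and Arm,
# which is why the test needs per-ISA reference values.
print(src.to(torch.int64))
```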

"Fixes" https://github.com/pytorch/pytorch/issues/38752

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146952
Approved by: https://github.com/atalman, https://github.com/seemethere
2025-02-12 01:59:15 +00:00
f59a56e56f [ARM] Fix test_float_to_int_conversion_nonfinite (#145367)
We have broken tests on AArch64 which are not enabled upstream; this PR fixes and enables those tests.

```
AssertionError: Tensor-likes are not equal!

Mismatched elements: 2 / 3 (66.7%)
Greatest absolute difference: 1 at index (1,)
Greatest relative difference: 1.0842021724855044e-19 at index (1,)

To execute this test, run the following from the base repo dir:
    python test/test_tensor_creation_ops.py TestTensorCreationCPU.test_float_to_int_conversion_nonfinite_cpu_int64

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145367
Approved by: https://github.com/malfet
2025-02-11 22:22:10 +00:00
d4171b724e Let tensor_a.new_tensor() be on tensor_a.device by default (#144958)
Fixes #144957
Closes #73838 cc @albanD @ezyang

Currently, `tensor_a.new_tensor()` will return an on-CPU tensor no matter where `tensor_a` is. This differs from the documentation and is a side effect of https://github.com/pytorch/pytorch/pull/41984.

See #144957 for how the current logic breaks dynamo.

This PR restores the documented behavior and adds tests for `new_tensor`.
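A sketch of the restored behavior (assumes a CUDA build; the device index is illustrative):

```python
import torch

a = torch.ones(2, device="cuda")
b = a.new_tensor([1.0, 2.0])
print(b.device)  # device(type='cuda', index=0), not cpu, after this PR
```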

Pull Request resolved: https://github.com/pytorch/pytorch/pull/144958
Approved by: https://github.com/ezyang
2025-01-24 22:12:31 +00:00
ad36f4f42c Revert "Add generator parameter to rand*_like functions (#136780)"
This reverts commit c7b2f7dd142fc97c8ce4ad7ad591687cf295fcda.

Reverted https://github.com/pytorch/pytorch/pull/136780 on behalf of https://github.com/izaitsevfb due to internal regression ([comment](https://github.com/pytorch/pytorch/pull/136780#issuecomment-2613191933))
2025-01-24 19:00:21 +00:00
0b17c09893 restore rng generation for fbcode (#144819)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/144819
Approved by: https://github.com/malfet, https://github.com/kit1980
2025-01-16 06:46:26 +00:00
d595b96059 Revert "restore rng generation for fbcode (#144819)"
This reverts commit 2bc18a905544f4e25cfbd354351418b36a0f5fc1.

Reverted https://github.com/pytorch/pytorch/pull/144819 on behalf of https://github.com/ngimel due to internal failure ([comment](https://github.com/pytorch/pytorch/pull/144819#issuecomment-2594298941))
2025-01-16 01:52:29 +00:00
c7b2f7dd14 Add generator parameter to rand*_like functions (#136780)
Fixes #128786
Fixes #101974
Fixes #27072
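Sketch of the usage this PR proposed (note the revert entry above; whether the parameter exists depends on your build):

```python
import torch

g = torch.Generator().manual_seed(0)
x = torch.empty(3, 4)
a = torch.rand_like(x, generator=g)                 # proposed kwarg
b = torch.rand_like(x, generator=g.manual_seed(0))  # reseed: same stream
print(torch.equal(a, b))  # True if the generator is honored
```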

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136780
Approved by: https://github.com/Chillee, https://github.com/ezyang
2025-01-15 21:16:52 +00:00
2bc18a9055 restore rng generation for fbcode (#144819)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/144819
Approved by: https://github.com/malfet, https://github.com/kit1980
2025-01-15 16:34:25 +00:00
d21738f24a Revert "Fix torch.normal ignores default_device (#144070)"
This reverts commit 184549b2d7e59acfc6e47d121e9ebb50648945b3.

Reverted https://github.com/pytorch/pytorch/pull/144070 on behalf of https://github.com/ezyang due to broken a specific use case ([comment](https://github.com/pytorch/pytorch/pull/144070#issuecomment-2590681953))
2025-01-14 17:41:58 +00:00
4143312e67 S390x ci periodic tests (#125401)
Periodically run the test suite for s390x

**Dependencies update**
Package z3-solver is updated from version 4.12.2.0 to version 4.12.6.0. This is a minor version update, so no functional change is expected.
The reason for the update is the build on s390x: PyPI doesn't provide binary builds of z3-solver for s390x for either version 4.12.2.0 or 4.12.6.0. Unfortunately, version 4.12.2.0 fails to build with the newer gcc used on the s390x builders, but those errors are fixed in version 4.12.6.0, so this minor version bump fixes the build on s390x.

```
# pip3 install z3-solver==4.12.2.0
...
      In file included from /tmp/pip-install-756iytc6/z3-solver_ce6f750b780b4146a9a7c01e52672071/core/src/util/region.cpp:53:
      /tmp/pip-install-756iytc6/z3-solver_ce6f750b780b4146a9a7c01e52672071/core/src/util/region.cpp: In member function ‘void* region::allocate(size_t)’:
      /tmp/pip-install-756iytc6/z3-solver_ce6f750b780b4146a9a7c01e52672071/core/src/util/tptr.h:29:62: error: ‘uintptr_t’ does not name a type
         29 | #define ALIGN(T, PTR) reinterpret_cast<T>(((reinterpret_cast<uintptr_t>(PTR) >> PTR_ALIGNMENT) + \
            |                                                              ^~~~~~~~~
      /tmp/pip-install-756iytc6/z3-solver_ce6f750b780b4146a9a7c01e52672071/core/src/util/region.cpp:82:22: note: in expansion of macro ‘ALIGN’
         82 |         m_curr_ptr = ALIGN(char *, new_curr_ptr);
            |                      ^~~~~
      /tmp/pip-install-756iytc6/z3-solver_ce6f750b780b4146a9a7c01e52672071/core/src/util/region.cpp:57:1: note: ‘uintptr_t’ is defined in header ‘<cstdint>’; did you forget to ‘#include <cstdint>’?
         56 | #include "util/page.h"
        +++ |+#include <cstdint>
         57 |
```

**Python paths update**
On AlmaLinux 8 s390x, old paths:
```
python -c 'from distutils.sysconfig import get_python_lib; print(get_python_lib())'
/usr/lib/python3.12/site-packages
```

Total result is `/usr/lib/python3.12/site-packages/torch;/usr/lib/python3.12/site-packages`

New paths:
```
python -c 'import site; print(";".join([x for x in site.getsitepackages()] + [x + "/torch" for x in site.getsitepackages()]))'
/usr/local/lib64/python3.12/site-packages;/usr/local/lib/python3.12/site-packages;/usr/lib64/python3.12/site-packages;/usr/lib/python3.12/site-packages;/usr/local/lib64/python3.12/site-packages/torch;/usr/local/lib/python3.12/site-packages/torch;/usr/lib64/python3.12/site-packages/torch;/usr/lib/python3.12/site-packages/torch
```

```
# python -c 'import torch ; print(torch)'
<module 'torch' from '/usr/local/lib64/python3.12/site-packages/torch/__init__.py'>
```

`pip3 install dist/*.whl` installs torch into `/usr/local/lib64/python3.12/site-packages`, and later it's not found by cmake with old paths:

```
CMake Error at CMakeLists.txt:9 (find_package):
  By not providing "FindTorch.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "Torch", but
  CMake did not find one.
```

https://github.com/pytorch/pytorch/actions/runs/10994060107/job/30521868178?pr=125401

**Builders availability**
Build took 60 minutes
Tests took: 150, 110, 65, 55, 115, 85, 50, 70, 105, 110 minutes (split into 10 shards)

60 + 150 + 110 + 65 + 55 + 115 + 85 + 50 + 70 + 105 + 110 = 975 minutes used. Let's double it. It would be 1950 minutes.

We have 20 machines * 24 hours = 20 * 24 * 60 = 20 * 1440 = 28800 minutes

We currently run 5 nightly binaries builds, each on average 90 minutes of build, 15 minutes of test, and 5 minutes of upload: 110 minutes total for each, 550 minutes total. Doubling would be 1100 minutes.

That leaves 28800 - 1100 = 27700 minutes total. Periodic tests would use 1950 minutes, leaving 25750 minutes.

Nightly binaries builds + nightly tests = 1100 + 1950 = 3050 minutes.

25750 / 3050 = 8.44, so we could do both 8 more times for additional CI runs for any reason, and that is with a pretty good safety margin.

**Skip test_tensorexpr**
On s390x, pytorch is built without llvm.
Even if it were built with llvm, llvm currently doesn't support the features used on s390x, and the test fails with errors like:
```
JIT session error: Unsupported target machine architecture in ELF object pytorch-jitted-objectbuffer
unknown file: Failure
C++ exception with description "valOrErr INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/torch/csrc/jit/tensorexpr/llvm_jit.h":34, please report a bug to PyTorch. Unexpected failure in LLVM JIT: Failed to materialize symbols: { (main, { func }) }
```
**Disable cpp/static_runtime_test on s390x**

Quantization is not fully supported on s390x in pytorch yet.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125401
Approved by: https://github.com/malfet

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-01-10 18:21:07 +00:00
184549b2d7 Fix torch.normal ignores default_device (#144070)
Fixes #122886

1. Enable `torch.normal` to work with `DeviceContext` so it picks up the default device set via `set_default_device`.
2. Add a hint to the `set_default_device` doc, suggesting the `torch.Tensor.to` method for moving a tensor to the desired device explicitly.

**Test Result**
1. **Doc Preview**
![image](https://github.com/user-attachments/assets/eb69c334-be2b-4dc5-bdce-567da21e1635)

2. **Local Test**
```python
>>> import torch
>>> torch.normal(0.,1., (10,10)).device
device(type='cpu')
>>> torch.set_default_device('cuda')
>>> torch.normal(0.,1., (10,10)).device
device(type='cuda', index=0)
```

```bash
pytest test/test_tensor_creation_ops.py
```

![image](https://github.com/user-attachments/assets/8b466b55-f162-4b83-8b20-71de2c1d0914)

```bash
lintrunner
```
![image](https://github.com/user-attachments/assets/5b269c50-da57-47ed-8500-4edf2c2295e4)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/144070
Approved by: https://github.com/ezyang
2025-01-10 08:19:55 +00:00
ab1f627aa4 fix randint distribution for large max (#143787)
Fixes #ISSUE_NUMBER
Similar to #143682, for large maximum values we were sampling integers via `%`, which doesn't produce a uniform distribution. Here we limit the max skew to approx 1% (random32 is used for max values `<= 2**32 / 128`).
This comes with a significant perf penalty, especially for CUDA, but it's a pretty bad bug, so we'll have to figure out what can be done to improve it.
`torch.compile` has always produced correct results for this, and its performance is also significantly better than current eager (eager is ~660 GB/s on H100, torch.compile 1200 GB/s), so we have to figure out why torch.compile is better.
`__launch_bounds__` slightly regresses perf, so perhaps we can figure out how to specify the bounds better, but it's only 20-30 GB/s, so the big difference is still unexplained.
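A pure-Python illustration of the modulo skew (not the kernel code; the counts are approximate):

```python
import random

def low_half_fraction(n_range: int, samples: int = 200_000) -> float:
    # Map a 32-bit word onto [0, n_range) via `%`, as the old kernel did.
    hits = sum(
        1
        for _ in range(samples)
        if random.getrandbits(32) % n_range < n_range // 2
    )
    return hits / samples

# For n_range = 3 * 2**30 (which does not divide 2**32), the low half of the
# range is hit ~62.5% of the time instead of 50%.
print(low_half_fraction(3 * 2**30))
```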

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143787
Approved by: https://github.com/eqy
2025-01-08 18:51:48 +00:00
094ca3154d Fix torch._refs.tensor error with empty list (#143461)
Fixes #143216

**Test Result**

**Before**

```python
>>> import torch
>>> torch._refs.tensor([])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/zong/code/pytorch/torch/_refs/__init__.py", line 6614, in tensor
    new_tensor = _internal_new_from_data(
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zong/code/pytorch/torch/_refs/__init__.py", line 6596, in _internal_new_from_data
    tensor = _recursive_build(inferred_scalar_type, data)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zong/code/pytorch/torch/_refs/__init__.py", line 6545, in _recursive_build
    return torch.stack([_recursive_build(scalarType, item) for item in seq])
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: stack expects a non-empty TensorList

```

**After**

```python
>>> torch._refs.tensor([])
tensor([])
>>> torch._refs.tensor([], device='cuda')
tensor([], device='cuda:0')
```

```bash
$ pytest test/test_tensor_creation_ops.py -k test_refs_tensor
```

![image](https://github.com/user-attachments/assets/5be4c17a-bea6-4b7b-bec1-b4fcb417a8cd)

```bash
$ lintrunner
```
![image](https://github.com/user-attachments/assets/e8f88f41-78ac-4337-b53f-2e524de2bec0)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143461
Approved by: https://github.com/ezyang, https://github.com/soulitzer
2025-01-08 01:29:00 +00:00
2e42be0595 Use random64 in Fischer-Yates algorithm for large N (#143682)
Fixes bug in randperm https://nbsanity.com/static/a4774194938414dedcec7d6e99727d31/Shuffling_20in_20torch_20vs_20numpy-public.html
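A sketch of the shuffle with the wide-draw fix (pure Python; the actual change is in the C++/CUDA randperm path):

```python
import random

def fisher_yates(n: int) -> list[int]:
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        # Drawing j from a 64-bit word keeps the modulo bias negligible even
        # for very large n; a 32-bit draw is visibly biased there.
        j = random.getrandbits(64) % (i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

print(fisher_yates(10))
```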

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143682
Approved by: https://github.com/eqy, https://github.com/albanD, https://github.com/malfet
2025-01-07 03:48:56 +00:00
df458be4e5 [4/N] Apply py39 ruff and pyupgrade fixes (#143257)
`torch/fx/passes/annotate_getitem_nodes.py` was changed to support the new type-hinting annotations.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143257
Approved by: https://github.com/justinchuby, https://github.com/albanD
2025-01-04 10:47:51 +00:00
3571476739 Revert "fix randint distribution for large max (#143787)"
This reverts commit 8059d56ec364feb554f3fb90012a0fc2d2104e7f.

Reverted https://github.com/pytorch/pytorch/pull/143787 on behalf of https://github.com/wdvr due to failing internal tests, to be fixed first ([comment](https://github.com/pytorch/pytorch/pull/143787#issuecomment-2563493323))
2024-12-27 09:16:36 +00:00
f6801ba4b3 Revert "Use random64 in Fischer-Yates algorithm for large N (#143682)"
This reverts commit 7013be0094e8d3ded2ba2f948082f98d63e622bb.

Reverted https://github.com/pytorch/pytorch/pull/143682 on behalf of https://github.com/wdvr due to failing Meta internal tests that need to be updated ([comment](https://github.com/pytorch/pytorch/pull/143682#issuecomment-2563487675))
2024-12-27 09:09:33 +00:00
8059d56ec3 fix randint distribution for large max (#143787)
Fixes #ISSUE_NUMBER
Similar to #143682, for large maximum values we were sampling integers via `%`, which doesn't produce a uniform distribution. Here we limit the max skew to approx 1% (random32 is used for max values `<= 2**32 / 128`).
This comes with a significant perf penalty, especially for CUDA, but it's a pretty bad bug, so we'll have to figure out what can be done to improve it.
`torch.compile` has always produced correct results for this, and its performance is also significantly better than current eager (eager is ~660 GB/s on H100, torch.compile 1200 GB/s), so we have to figure out why torch.compile is better.
`__launch_bounds__` slightly regresses perf, so perhaps we can figure out how to specify the bounds better, but it's only 20-30 GB/s, so the big difference is still unexplained.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143787
Approved by: https://github.com/eqy
2024-12-26 23:54:03 +00:00
7013be0094 Use random64 in Fischer-Yates algorithm for large N (#143682)
Fixes bug in randperm https://nbsanity.com/static/a4774194938414dedcec7d6e99727d31/Shuffling_20in_20torch_20vs_20numpy-public.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143682
Approved by: https://github.com/eqy, https://github.com/albanD
2024-12-25 01:19:19 +00:00
01d60bcf32 [Easy] Fix todo by enable tests for cuda (#143637)
Fix a TODO in the `test_tensor_creation_ops.py` file:

```python
# TODO: update to work on CUDA, too
```

**Test Result**

```bash
$ pytest test/test_tensor_creation_ops.py
```

![image](https://github.com/user-attachments/assets/ef829541-668e-446d-a9ab-b26b9d73085f)

```bash
$ lintrunner
```
![image](https://github.com/user-attachments/assets/d6a46eee-1f60-48e6-898a-a8d9620eb54a)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143637
Approved by: https://github.com/albanD
2024-12-24 03:47:43 +00:00
d8c8ba2440 Fix unused Python variables in test/[e-z]* (#136964)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136964
Approved by: https://github.com/justinchuby, https://github.com/albanD
2024-12-18 23:02:30 +00:00
e6083016b3 fix test_float_to_int_conversion_nonfinite for NumPy 2 (#138131)
Related to #107302

We saw `test_float_to_int_conversion_nonfinite` fail as we upgraded to NumPy 2.

It is caused by the undefined behavior of `numpy` casting `inf`, `-inf` and `nan` from `np.float32` to other dtypes.
The test uses NumPy as the reference for the ground truth (see lines 1013-1015).
However, these behaviors are undefined in NumPy.
If you do `np.array([float("inf")]).astype(np.uint8, casting="safe")`, it results in an error `TypeError: Cannot cast array data from dtype('float64') to dtype('uint8') according to the rule 'safe'`.
The undefined behaviors are always subject to change.

This PR addresses the issue by passing concrete values as the ground-truth references.
In the future, even if NumPy changes its behavior, the test will remain stable.
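The NumPy behavior the commit quotes, as a runnable snippet:

```python
import numpy as np

# Under "safe" casting NumPy refuses the conversion outright, which is the
# commit's evidence that the unchecked nonfinite cast is undefined behavior.
try:
    np.array([float("inf")]).astype(np.uint8, casting="safe")
except TypeError as e:
    print(e)
```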

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138131
Approved by: https://github.com/drisspg
2024-11-14 04:19:19 +00:00
92fdea8a39 remove skips due to https://github.com/pytorch/torchdynamo/issues/1991 (#138133)
Closes https://github.com/pytorch/pytorch/issues/93479. A bunch of other dynamo-wrapped tests also exhibit "torch.* returned non-Tensor output unimplemented", making the issue seem less relevant to me. Some tests are marked as xfail as they fail for other reasons.

If these tests are indeed important, we should create a new issue to track them.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138133
Approved by: https://github.com/ezyang
2024-10-17 17:42:46 +00:00
9c084cccfd [Pytorch][ATEN] Enable FP8 concatenate (#138046)
Summary: Float8 is becoming an increasingly popular datatype now that it is well supported on GPUs. This diff enables FP8 to work with `torch.cat`. This is pretty straightforward, since memory operations don't vary based on the input dtype, but it can be quite helpful for FP8-based models.
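A sketch of the enabled usage (assumes a build where this change and the float8 dtypes are available):

```python
import torch

a = torch.randn(2, 3).to(torch.float8_e4m3fn)
b = torch.randn(2, 3).to(torch.float8_e4m3fn)
out = torch.cat([a, b], dim=0)
print(out.shape, out.dtype)  # torch.Size([4, 3]) torch.float8_e4m3fn
```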

Test Plan:
```
buck2 run mode/opt -c fbcode.enable_gpu_sections=true -c fbcode.platform=platform010 -c fbcode.nvcc_arch=h100a -c fbcode.platform010_cuda_version=12 //caffe2/test:tensor_creation -- -r test_cat_all_dtypes_and_devices
```

Differential Revision: D64443965

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138046
Approved by: https://github.com/eqy, https://github.com/qchip, https://github.com/jianyuh
2024-10-17 04:58:54 +00:00
a063a82c8b [redo] Fp8 support for item() with cuda, index_select, and fill_ cpu (#137341)
Summary:

Redo of https://github.com/pytorch/pytorch/pull/128780, easier to copy-paste.

Test Plan: CI

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137341
Approved by: https://github.com/eqy
2024-10-07 00:58:51 +00:00
b181b58857 Fix Storage.filename to not track the filename when storage was mmap-ed with MAP_PRIVATE (#128725)
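A sketch of the fixed behavior (assuming the documented `from_file(filename, shared, nbytes)` signature):

```python
import os
import tempfile

import torch

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * 16)
    path = f.name

shared = torch.UntypedStorage.from_file(path, shared=True, nbytes=16)    # MAP_SHARED
private = torch.UntypedStorage.from_file(path, shared=False, nbytes=16)  # MAP_PRIVATE
print(shared.filename)   # the backing file's path
print(private.filename)  # None: private mappings no longer track the filename
os.unlink(path)
```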
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128725
Approved by: https://github.com/albanD
2024-06-17 18:55:47 +00:00
5359af0c7e [dynamo] wrap GraphModule exceptions in dynamo-wrapped tests (#126341)
Better approach to https://github.com/pytorch/pytorch/pull/126197 to catch issues like https://github.com/pytorch/pytorch/issues/125568.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126341
Approved by: https://github.com/anijain2305, https://github.com/jansel
2024-05-29 05:18:04 +00:00
5a1216bb2e [BE]: Update ruff to 0.4.1 (#124549)
Update ruff to 0.4.1.
This version fixes a lot of false negatives/false positives, is 20-40% faster, and has various other bug fixes.

Below is a before-and-after table showing the execution time of ruff lint and ruff format in milliseconds, courtesy of https://astral.sh/blog/ruff-v0.4.0:

| Repository                                         | Linter (v0.3) | Linter (v0.4) | Formatter (v0.3) | Formatter (v0.4) |
|----------------------------------------------------|---------------|---------------|------------------|------------------|
| [pytorch/pytorch](https://github.com/pytorch/pytorch) | 328.7         | 251.8         | 351.1            | 274.9            |

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124549
Approved by: https://github.com/ezyang
2024-04-21 14:06:23 +00:00
af27bc443b fix typo in 4 files (#123529)
fix typo: `information` has no plural.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123529
Approved by: https://github.com/albanD
2024-04-09 23:37:35 +00:00
8a5a377190 Move doc links to point to main (#121823)
The previous links were pointing to an outdated branch

Command: `find . -type f -exec sed -i "s:docs/master:docs/main:g" {} +`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121823
Approved by: https://github.com/albanD, https://github.com/malfet
2024-03-15 19:49:37 +00:00
56718cab8d Unskip test_complex_type_conversions (#118694)
Resolve a TODO and unskip test_complex_type_conversions, as real and imag have been implemented for complex.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118694
Approved by: https://github.com/huydhn
2024-01-31 08:04:15 +00:00
8467de4e97 Fix kaiser_window for lower precision data types on CPU (#117345)
Fixes #117230.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117345
Approved by: https://github.com/jgong5, https://github.com/soumith
2024-01-26 03:26:12 +00:00
2200118f59 Enable some uint{16,32,64} tests that are working (#116809)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116809
Approved by: https://github.com/albanD
2024-01-15 02:25:21 +00:00