Commit Graph

26 Commits

425ea90f95 [testing] Add test owner labels for some cuda? tests (#163296)
I am trying to give some test files better owner labels than `module: unknown`. I am not sure about them, but they seem pretty reasonable.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163296
Approved by: https://github.com/eqy, https://github.com/msaroufim
2025-09-26 18:26:56 +00:00
e2f9759bd0 Fix broken URLs (#152237)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152237
Approved by: https://github.com/huydhn, https://github.com/malfet
2025-04-27 09:56:42 +00:00
d8c8ba2440 Fix unused Python variables in test/[e-z]* (#136964)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136964
Approved by: https://github.com/justinchuby, https://github.com/albanD
2024-12-18 23:02:30 +00:00
ba48cf6535 [BE][Easy][6/19] enforce style for empty lines in import segments in test/ (#129757)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129757
Approved by: https://github.com/ezyang
2024-07-17 06:42:37 +00:00
a5f816df18 Add more dtypes to __cuda_array_interface__ (#129621)
`__cuda_array_interface__` was missing some unsigned integer dtypes as well as BF16.

numba doesn't support BF16, so tests for that dtype are skipped.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129621
Approved by: https://github.com/lezcano
2024-07-09 10:47:19 +00:00
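The `typestr` codes added by this PR follow the numpy convention (byte order, kind character, itemsize in bytes). The mapping below is an illustrative sketch, not the authoritative table from the PR; bfloat16 is omitted because it has no standard numpy typestr, which is also why the numba tests for it are skipped.

```python
# Illustrative dtype-name -> CUDA Array Interface "typestr" table.
# Format: byte order ('<' little-endian, '|' not applicable),
# kind ('u' unsigned int, 'f' float), itemsize in bytes.
TYPESTR = {
    "uint8": "|u1",
    "uint16": "<u2",  # unsigned dtypes covered by this PR (assumed)
    "uint32": "<u4",
    "uint64": "<u8",
    "float16": "<f2",
    "float32": "<f4",
}

def itemsize(typestr):
    """The itemsize in bytes is encoded after the kind character."""
    return int(typestr[2:])

print(itemsize(TYPESTR["uint64"]))  # 8
```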
8979412442 Enable ufmt format on test files (#126845)
Fixes some files in  #123062

Run lintrunner on files:

test/test_nnapi.py,
test/test_numba_integration.py,
test/test_numpy_interop.py,
test/test_openmp.py,
test/test_optim.py

```bash
$ lintrunner -a --take UFMT --all-files
ok No lint issues.
Successfully applied all patches.
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126845
Approved by: https://github.com/ezyang
2024-05-28 01:42:07 +00:00
c4d24b5b7f special-case cuda array interface of zero size (#121458)
Fixes #98133
retry of #98134
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121458
Approved by: https://github.com/bdice, https://github.com/ptrblck, https://github.com/mikaylagawarecki
2024-03-18 15:21:38 +00:00
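A consumer-side sketch of the zero-size special case: per the interface convention, a zero-element array may legitimately expose a null data pointer, so a consumer must check the element count before touching the pointer. The helper and the plain-dict interface below are hypothetical stand-ins for illustration.

```python
def read_interface(iface):
    """Return the element count of an exported array, special-casing
    zero-size arrays whose data pointer may legitimately be 0."""
    n = 1
    for dim in iface["shape"]:
        n *= dim
    ptr, _read_only = iface["data"]
    if n == 0:
        # Zero-size array: nothing to read, pointer may be null.
        return 0
    assert ptr != 0, "non-empty array must expose a valid device pointer"
    return n

# A zero-size export with a null pointer must not raise.
empty = {"shape": (0, 3), "typestr": "<f4", "data": (0, False), "version": 2}
print(read_interface(empty))  # 0
```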
675acfc1f4 Remove unwanted comma (#71193)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/70611

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71193

Reviewed By: ngimel

Differential Revision: D33542841

Pulled By: mruberry

fbshipit-source-id: 0f2f1218c056aea7ecf86ba4036cfb10df6e8614
2022-01-13 19:09:05 -08:00
6259601c8a Set test owners for tests with unknown owners (#67552)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67552

Reviewed By: jbschlosser

Differential Revision: D32028248

Pulled By: janeyx99

fbshipit-source-id: a006f7026288b7126dba58b31cac28e10ce0fed6
2021-10-29 12:42:01 -07:00
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
b6bbb41fd8 Temporary disable TestNumbaIntegration.test_from_cuda_array_interface* (#54430)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54430

see https://github.com/pytorch/pytorch/issues/54429

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D27232636

Pulled By: pbelevich

fbshipit-source-id: 15fb69828a23cb6f3c173a7863bd55bf4973f408
2021-03-22 09:17:28 -07:00
ec6d29d6fa Drop unused imports from test (#49973)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49973

From
```
./python/libcst/libcst codemod remove_unused_imports.RemoveUnusedImportsWithGlean --no-format caffe2/
```

Test Plan: Standard sandcastle tests

Reviewed By: xush6528

Differential Revision: D25727350

fbshipit-source-id: 237ec4edd85788de920663719173ebec7ddbae1c
2021-01-07 12:09:38 -08:00
1c616c5ab7 Add complex tensor dtypes for the __cuda_array_interface__ spec (#42918)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42860

The `__cuda_array_interface__` tensor specification is missing the appropriate datatypes for the newly merged complex64 and complex128 tensors. This PR addresses this issue by casting:

* `torch.complex64` to 'c8'
* `torch.complex128` to 'c16'

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42918

Reviewed By: izdeby

Differential Revision: D23130219

Pulled By: anjali411

fbshipit-source-id: 5f8ee8446a71cad2f28811afdeae3a263a31ad11
2020-08-14 10:26:23 -07:00
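The complex typestr codes follow from the numpy convention: kind `c` plus the total itemsize in bytes (real and imaginary parts combined). A minimal sketch, assuming little-endian byte order:

```python
def complex_typestr(itemsize_bytes):
    """numpy-style typestr for a complex dtype: '<' byte order,
    kind 'c', then total bytes (real + imaginary parts)."""
    return "<c{}".format(itemsize_bytes)

print(complex_typestr(8))   # '<c8'  : complex64, two float32 components
print(complex_typestr(16))  # '<c16' : complex128, two float64 components
```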
e75fb4356b Remove (most) Python 2 support from Python code (#35615)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35615

Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well (though using side-by-side view and ignoring
whitespace change might be helpful).

Test Plan: CI

Differential Revision: D20842886

Pulled By: dreiss

fbshipit-source-id: 8cad4e87c45895e7ce3938a88e61157a79504aed
2020-04-22 09:23:14 -07:00
f050b16dd9 Move pytorch distributed tests to separate folder for contbuild. (#30445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30445

Create distributed and rpc directories under caffe/test for better management
of unit tests.

Differential Revision: D18702786

fbshipit-source-id: e9daeed0cfb846ef68806f6decfcb57c0e0e3606
2020-01-22 21:16:59 -08:00
f326045b37 Fix typos, via a Levenshtein-type corrector (#31523)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos, with https://github.com/bwignall/typochecker to help automate the checking.

Uses an updated version of the tool used in https://github.com/pytorch/pytorch/pull/30606 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31523

Differential Revision: D19216749

Pulled By: mrshenli

fbshipit-source-id: 7fd489cb9a77cd7e4950c1046f925d57524960ea
2020-01-17 16:03:19 -08:00
b38901aa15 Test reading __cuda_array_interface__ inferred strides. (#31451)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31451

The PR that fixed this, https://github.com/pytorch/pytorch/pull/24947, didn't add a test.

Fixes: https://github.com/pytorch/pytorch/issues/31443

Test Plan: Imported from OSS

Differential Revision: D19170020

Pulled By: gchanan

fbshipit-source-id: bdbf09989ac8a61b1b70bb1ddee103caa8ef435b
2019-12-20 08:21:39 -08:00
1d7b40f1c4 Fix reading __cuda_array_interface__ without strides (#24947)
Summary:
When converting a contiguous CuPy ndarray to a Tensor via `__cuda_array_interface__`, an error occurs due to incorrect handling of default strides. This PR fixes this problem, making `torch.tensor(cupy_ndarray)` work for contiguous inputs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24947

Differential Revision: D18838986

Pulled By: ezyang

fbshipit-source-id: 2d827578f54ea22836037fe9ea8735b99f2efb42
2019-12-06 07:36:27 -08:00
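When an exporter omits `strides` (or sets it to `None`), the consumer must infer row-major, C-contiguous strides from the shape and itemsize. A self-contained sketch of that inference:

```python
def default_strides(shape, itemsize):
    """Row-major (C-contiguous) strides in bytes, as assumed when an
    exporter omits 'strides' from __cuda_array_interface__."""
    strides = []
    acc = itemsize
    for dim in reversed(shape):
        strides.append(acc)
        acc *= dim
    return tuple(reversed(strides))

# A (2, 3, 4) float32 array: innermost stride is 4 bytes,
# then 4*4=16, then 16*3=48.
print(default_strides((2, 3, 4), 4))  # (48, 16, 4)
```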
f583f2e657 Fixed test_numba_integration (#25017)
Summary:
The semantics of _auto-convert GPU arrays that support the `__cuda_array_interface__` protocol_ have changed a bit.

It used to throw an exception when using `torch.as_tensor(...,device=D)` where `D` is a CUDA device not used in `__cuda_array_interface__`. Now, this is supported and will result in an implicit copy.

I do not know what has changed, but `from_blob()` now supports input and output devices that differ.
I have updated the tests to reflect this, which fixes https://github.com/pytorch/pytorch/issues/24968
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25017

Differential Revision: D16986240

Pulled By: soumith

fbshipit-source-id: e6f7e2472365f924ca155ce006c8a9213f0743a7
2019-08-23 08:58:08 -07:00
ee15ad1bd6 "CharTensor" numpy conversion is supported now (#21458)
Summary:
Fixed #21269 by removing the expected `ValueError` when converting a tensor to a Numpy `int8` array in the Numba interoperability test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21458

Differential Revision: D15696363

Pulled By: ezyang

fbshipit-source-id: f4ee9910173aab0b90a757e75c35925b026d1cc4
2019-06-06 10:06:41 -07:00
5d8879cf6d Auto-convert GPU arrays that support the __cuda_array_interface__ protocol (#20584)
Summary:
This PR implements auto-conversion of GPU arrays that support the `__cuda_array_interface__` protocol (fixes #15601).

If an object exposes the `__cuda_array_interface__` attribute, `torch.as_tensor()` and `torch.tensor()` will use the exposed device memory.

#### Zero-copy
When using `torch.as_tensor(...,device=D)` where `D` is the same device as the one used in `__cuda_array_interface__`.

#### Implicit copy
When using `torch.as_tensor(...,device=D)` where `D` is the CPU or another non-CUDA device.

#### Explicit copy
When using `torch.tensor()`.

#### Exception
When using `torch.as_tensor(...,device=D)` where `D` is a CUDA device not used in `__cuda_array_interface__`.

#### Lifetime
`torch.as_tensor(obj)` grabs a reference to `obj` so that the lifetime of `obj` exceeds that of the tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20584

Differential Revision: D15435610

Pulled By: ezyang

fbshipit-source-id: c423776ba2f2c073b902e0a0ce272d54e9005286
2019-05-21 14:06:46 -07:00
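The four cases above (zero-copy, implicit copy, explicit copy, exception) can be summarized as a small decision helper. This is a hypothetical sketch with devices as plain strings, not PyTorch's actual dispatch logic; note that a later PR (#25017) relaxed the exception case into an implicit copy.

```python
def conversion_kind(export_device, requested):
    """Sketch of torch.as_tensor() semantics for a __cuda_array_interface__
    object exported on `export_device` (hypothetical helper; device names
    are plain strings like 'cuda:0' or 'cpu')."""
    if requested is None or requested == export_device:
        return "zero-copy"      # same CUDA device: share the device memory
    if not requested.startswith("cuda"):
        return "implicit copy"  # CPU or other non-CUDA target
    return "error"              # different CUDA device (behavior at this PR)

print(conversion_kind("cuda:0", "cuda:0"))  # zero-copy
print(conversion_kind("cuda:0", "cpu"))     # implicit copy
print(conversion_kind("cuda:0", "cuda:1"))  # error
```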
1c3494daf0 Fix Windows test CI
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17954

Differential Revision: D14437473

Pulled By: soumith

fbshipit-source-id: f0d79ff0c5d735f822be3f42bbca91c1928dacaf
2019-03-13 09:22:46 -07:00
46162ccdb9 Autograd indices/values and sparse_coo ctor (#13001)
Summary:
Reopen of #11253 after fixing bug in index_select
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13001

Differential Revision: D10514987

Pulled By: SsnL

fbshipit-source-id: 399a83a1d3246877a3523baf99aaf1ce8066f33f
2018-10-24 10:00:22 -07:00
f4944f0f8a Rename test/common.py to test/common_utils.py (#12794)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12794

common.py is used in base_module for almost all tests in test/. The
name of this file is so common that it can easily conflict with other dependencies
if they happen to have another common.py in the base module. Rename the file to
avoid conflict.

Reviewed By: orionr

Differential Revision: D10438204

fbshipit-source-id: 6a996c14980722330be0a9fd3a54c20af4b3d380
2018-10-17 23:04:29 -07:00
7a1b668283 Implement Tensor.__cuda_array_interface__. (#11984)
Summary:
_Implements pytorch/pytorch#11914, cc: ezyang_

Implements `__cuda_array_interface__` for non-sparse cuda tensors,
providing compatibility with numba (and other cuda projects...).

Adds `numba` installation to the `xenial-cuda9` jenkins test environments via direct installation in `.jenkins/pytorch/test.sh` and numba-oriented test suite in `test/test_numba_integration.py`.

See interface reference at:
https://numba.pydata.org/numba-doc/latest/cuda/cuda_array_interface.html
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11984

Differential Revision: D10361430

Pulled By: ezyang

fbshipit-source-id: 6e7742a7ae4e8d5f534afd794ab6f54f67808b63
2018-10-12 13:41:05 -07:00
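The attribute the commit above adds is just a property returning a plain dict, with keys defined by the numba interface spec linked in the message. A minimal mock (the device pointer below is a fake placeholder, and the exact key set tracks the spec version):

```python
# Minimal mock of the attribute a non-sparse CUDA tensor exposes,
# following the numba CUDA Array Interface spec.
class FakeCudaTensor:
    @property
    def __cuda_array_interface__(self):
        return {
            "shape": (2, 3),
            "typestr": "<f4",             # little-endian float32
            "data": (0xDEADBEEF, False),  # (device pointer, read-only flag)
            "strides": None,              # None => C-contiguous layout
            "version": 2,
        }

iface = FakeCudaTensor().__cuda_array_interface__
print(sorted(iface))  # ['data', 'shape', 'strides', 'typestr', 'version']
```

Consumers such as numba only look for this attribute; they never need a torch import, which is what makes the zero-copy interop possible.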