183 Commits

Author SHA1 Message Date
0d777a808c Make test_randperm work with meta device (#56976)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56976

Band-aid fix for #54282

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D28020401

Pulled By: ezyang

fbshipit-source-id: 50546d5275eade408d65e9c883999fb3b65ff55a
2021-04-27 07:26:58 -07:00
3e006fc57e Adding hsplit, vsplit and dsplit methods (#53536)
Summary:
Fixes #{issue number}
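
For illustration, a minimal usage sketch of the new methods (not taken from this PR; it assumes the behavior mirrors the NumPy equivalents numpy.hsplit/vsplit/dsplit):
```
import torch

t = torch.arange(16.0).reshape(4, 4)
top, bottom = torch.vsplit(t, 2)   # split along dim 0 -> two (2, 4) tensors
left, right = torch.hsplit(t, 2)   # split along dim 1 -> two (4, 2) tensors

d = torch.arange(8.0).reshape(2, 2, 2)
front, back = torch.dsplit(d, 2)   # split along dim 2 -> two (2, 2, 1) tensors
```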

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53536

Reviewed By: albanD

Differential Revision: D27938880

Pulled By: iramazanli

fbshipit-source-id: f741119517783ec2bafa296622ee518b587dd127
2021-04-26 09:39:09 -07:00
93bf0ae6fc Remove legacy constructor calls from pytorch codebase. (#54142)
Summary:
Follow up from https://github.com/pytorch/pytorch/issues/53889
Related to https://github.com/pytorch/pytorch/issues/47112

Removing every occurrence of the legacy constructor call present in PyTorch at:
- _docs_
- _benchmarks_
- _test_
- _caffe2_
- _CONTRIBUTING.md_
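
For reference, a hedged sketch of the kind of replacement this cleanup performs (the actual call sites are in the locations listed above; these lines are only illustrative):
```
import torch

# legacy constructor calls            ->  preferred factory functions
# x = torch.FloatTensor([1., 2.])
x = torch.tensor([1., 2.], dtype=torch.float32)
# y = torch.Tensor(2, 3)
y = torch.empty(2, 3)
# z = torch.LongTensor(5)
z = torch.empty(5, dtype=torch.long)
```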

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54142

Reviewed By: ngimel

Differential Revision: D27699450

Pulled By: mruberry

fbshipit-source-id: 530aa3f5746cc8bc1407d5d51b2bbd8075e30546
2021-04-11 15:45:17 -07:00
5584332180 Wrap cub in its own namespace (#55292)
Summary:
Tentative fix for https://github.com/pytorch/pytorch/issues/55027.
Wraps the cub include in its own namespace so that static variables used by cub and thrust don't conflict if they end up in different libraries when torch is built with BUILD_SPLIT_CUDA. cub variables end up in their own namespace while thrust variables stay unwrapped, so they don't clash.
This also allows extensions to use cub without wrapping it (thrust will still be problematic). The way to let extensions use thrust is to stop using thrust in pytorch completely.
Since including cub and including thrust can no longer coexist in the same translation unit, nonzero had to be moved to its own file and its reliance on thrust functions removed; nonzero now uses cub only.
Also, we cannot selectively include just some of the cub headers; we are forced to include `cub/cub.cuh`, which is not great.
Caffe2 ops using cub are not touched (there are too many), so mixing caffe2 and torch can still trigger the same bug. Since we are moving towards disabling c2 ops, this should be fine.
Still, even with this change the compiler (correctly) warns about redefinition of `CUB_NS_PREFIX`, because including `ATen/ATen.h` transitively includes `thrust/complex.h`, which in turn includes the original (empty) definition of `CUB_NS_PREFIX`. We can probably just ignore this warning. Here's an example warning:
```
In file included from /data/users/ngimel/pytorch/aten/src/ATen/native/cuda/Nonzero.cu:9:
/data/users/ngimel/pytorch/aten/src/ATen/cuda/CubUtils.cuh:4: warning: "CUB_NS_PREFIX" redefined
 #define CUB_NS_PREFIX namespace at{ namespace native{

In file included from /home/ngimel/local/cuda/include/thrust/system/cuda/config.h:76,
                 from /home/ngimel/local/cuda/include/thrust/system/cuda/detail/execution_policy.h:33,
                 from /home/ngimel/local/cuda/include/thrust/iterator/detail/device_system_tag.h:23,
                 from /home/ngimel/local/cuda/include/thrust/iterator/iterator_traits.h:111,
                 from /home/ngimel/local/cuda/include/thrust/detail/type_traits/pointer_traits.h:23,
                 from /home/ngimel/local/cuda/include/thrust/type_traits/is_contiguous_iterator.h:27,
                 from /home/ngimel/local/cuda/include/thrust/type_traits/is_trivially_relocatable.h:19,
                 from /home/ngimel/local/cuda/include/thrust/detail/complex/complex.inl:20,
                 from /home/ngimel/local/cuda/include/thrust/complex.h:1031,
                 from /data/users/ngimel/pytorch/c10/util/complex.h:9,
                 from /data/users/ngimel/pytorch/c10/core/ScalarType.h:4,
                 from /data/users/ngimel/pytorch/c10/core/Scalar.h:10,
                 from /data/users/ngimel/pytorch/build/aten/src/ATen/core/TensorBody.h:8,
                 from /data/users/ngimel/pytorch/aten/src/ATen/Tensor.h:3,
                 from /data/users/ngimel/pytorch/aten/src/ATen/Context.h:4,
                 from /data/users/ngimel/pytorch/aten/src/ATen/ATen.h:9,
                 from /data/users/ngimel/pytorch/aten/src/ATen/native/cuda/Nonzero.cu:1:
/home/ngimel/local/cuda/include/cub/util_namespace.cuh:43: note: this is the location of the previous definition
 #define CUB_NS_PREFIX

```
We will need a lint rule to prevent people from including `cub/cub.cuh` directly, because that would make https://github.com/pytorch/pytorch/issues/55027 reappear for some sequence of operations (and would lead to errors with cub code in extensions).
Also, for this to work reliably we'll need to make sure that everything calling cub ends up in only one of libtorch_cuda_cu or libtorch_cuda_cpp; otherwise even the namespace won't help (the same symbols would still exist in both libraries).

Update: libtorch_cuda_cpp and libtorch_cuda_cu still contain the same symbols, which means there exists a sequence of operations that will cause the cache bug to reappear, so this alone is not a solution; we need to adjust the file lists for BUILD_SPLIT_CUDA:
```
(pytorch) [ngimel@ ~/local/pytorch/build/lib] nm libtorch_cuda_cu.so | grep PerDeviceAttributeCache | c++filt
000000000c6bf808 u guard variable for at::native::cub::GetPerDeviceAttributeCache<at::native::cub::PtxVersionCacheTag>()::cache
000000000c600830 u guard variable for cub::GetPerDeviceAttributeCache<cub::PtxVersionCacheTag>()::cache
00000000018625e0 t at::native::cub::PerDeviceAttributeCache::DevicePayload at::native::cub::PerDeviceAttributeCache::operator()<at::native::cub::PtxVersion(int&)::{lambda(int&)#1}>(at::native::cub::PtxVersion(int&)::{lambda(int&)#1}&&, int)
00000000009ce630 t cub::PerDeviceAttributeCache::DevicePayload cub::PerDeviceAttributeCache::operator()<cub::PtxVersion(int&)::{lambda(int&)#1}>(cub::PtxVersion(int&)::{lambda(int&)#1}&&, int)
000000000c6bf820 u at::native::cub::GetPerDeviceAttributeCache<at::native::cub::PtxVersionCacheTag>()::cache
000000000c600840 u cub::GetPerDeviceAttributeCache<cub::PtxVersionCacheTag>()::cache
(pytorch) [ngimel@ ~/local/pytorch/build/lib] nm libtorch_cuda_cpp.so | grep PerDeviceAttributeCache | c++filt
0000000000ad2d98 u guard variable for at::native::cub::GetPerDeviceAttributeCache<at::native::cub::PtxVersionCacheTag>()::cache
0000000000ad2da0 u at::native::cub::GetPerDeviceAttributeCache<at::native::cub::PtxVersionCacheTag>()::cache
```
Update 2:
Moved TensorFactories.cu to the torch_cuda_cu sources (see the change to caffe2/CMakeLists.txt), so now cub-related symbols are only in libtorch_cuda_cu. We'd need a test for that; any suggestions on how best to test it?
cc zasdfgbnm malfet

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55292

Reviewed By: anjali411

Differential Revision: D27576442

Pulled By: ngimel

fbshipit-source-id: 1ef29503a342bb214794d34a42a47052092a66c1
2021-04-05 23:21:05 -07:00
6c235ef267 Allow std=0 in torch.normal, and error if std<0 (#51317)
Summary:
Part of https://github.com/pytorch/pytorch/issues/49998
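
A hedged illustration of the behavior change in the title (not taken from the PR):
```
import torch

mean = torch.zeros(3)
print(torch.normal(mean, std=0.0))   # std == 0 is now allowed; samples equal the mean
# torch.normal(mean, std=-1.0)       # std < 0 is now expected to raise an error
```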

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51317

Reviewed By: bdhirsh

Differential Revision: D27253939

Pulled By: mruberry

fbshipit-source-id: af7a72c3d91549b1a88b73849b6973e7619dc50b
2021-03-31 21:06:07 -07:00
c2f41b6b84 Add meta device to generic device testing framework, skip NotImplementedError (#53682)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53682

With this, under the meta device, 101 tests passed and 16953 skipped.
It ain't much, but it's a start.
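
For context, a minimal illustration of the meta device itself (not from this PR): meta tensors carry shape/dtype metadata but no storage, and ops without a meta implementation raise NotImplementedError, hence the skips.
```
import torch

x = torch.empty(2, 3, device='meta')
print(x.shape, x.dtype, x.device)   # torch.Size([2, 3]) torch.float32 meta
print(x.is_meta)                    # True
```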

Some various bits and bobs:
- NotImplementedError suppression at test level is implemented
  in the same way as CUDA memory leak check, i.e., by wrapping
  test methods and monkeypatching them back in.
- I had to reimplement assertRaises/assertRaisesRegex from scratch to
  ignore NotImplementedError when _ignore_not_implemented_error is True.
  The implementation relies on a small amount of private API that hasn't
  changed since 2010.
- expectedAlertNondeterministic doesn't really work so I skipped them
  all; there's probably a way to do it better

I tested this using `pytest --disable-warnings --tb=native -k meta --sw
test/*.py` and a pile of extra patches to make collection actually work
(lol).

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26955539

Pulled By: ezyang

fbshipit-source-id: ac21c8734562497fdcca3b614a28010bc4c03d74
2021-03-14 20:41:19 -07:00
fe38027fc3 [fix] torch.cat : cross-device check for out and input tensors (#53004)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/52044 (`stack` dispatches to `cat`)

Because of the way the dispatcher works, this case currently arises only in the CUDA kernel (the CPU kernel is chosen when all inputs and `out` are on the CPU). That is why the check is added only on the CUDA side.
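
A hedged sketch of the case the new check catches (illustrative; requires a CUDA device):
```
import torch

if torch.cuda.is_available():
    a = torch.randn(2, device='cuda')
    b = torch.randn(2, device='cuda')
    out = torch.empty(4)              # CPU `out` for CUDA inputs
    try:
        torch.cat((a, b), out=out)    # now expected to raise instead of misbehaving
    except RuntimeError as e:
        print('cross-device cat rejected:', e)
```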

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53004

Reviewed By: albanD

Differential Revision: D27003956

Pulled By: mruberry

fbshipit-source-id: 818ea0f76153f4fa281740f30705e5ef018413f6
2021-03-12 12:51:11 -08:00
54a2498919 Modify tests to use assertWarnsOnceRegex instead of maybeWarnsRegex (#52387)
Summary:
Related to https://github.com/pytorch/pytorch/issues/50006

Follow-on for https://github.com/pytorch/pytorch/issues/48560 to ensure TORCH_WARN_ONCE warnings are caught. Most of this is straightforward find-and-replace, but I did find one place where the TORCH_WARN_ONCE warning was not wrapped in a Python warning.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52387

Reviewed By: albanD

Differential Revision: D26773387

Pulled By: mruberry

fbshipit-source-id: 5be7efbc8ab4a32ec8437c9c45f3b6c3c328f5dd
2021-03-08 03:32:14 -08:00
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
e9d7137072 fixes #38775 #38779: complex support for linspace and logspace (#38875)
Summary:
Closes https://github.com/pytorch/pytorch/issues/38775, Closes https://github.com/pytorch/pytorch/issues/38779

TO-DO:
* [x] Add Tests

Quansight Tracking : q-38775, q-38779
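
A hedged sketch of the newly supported dtypes (the exact overloads covered are per the PR and its tests):
```
import torch

print(torch.linspace(0, 1, steps=5, dtype=torch.complex64))
print(torch.logspace(0, 2, steps=3, dtype=torch.complex64))
```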

Pull Request resolved: https://github.com/pytorch/pytorch/pull/38875

Reviewed By: malfet

Differential Revision: D26628530

Pulled By: anjali411

fbshipit-source-id: ca4259b9f6725c4a4350f944465327169d12122e
2021-03-05 08:37:55 -08:00
316f0b89c3 [testing] Port torch.{repeat, tile} tests to use OpInfo machinery (#50199)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50013

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50199

Reviewed By: ngimel

Differential Revision: D25949791

Pulled By: mruberry

fbshipit-source-id: 10eaf2d749fac8c08847f50461e72ad1c75c61e3
2021-01-19 06:02:27 -08:00
3df5f9c3b2 Revert D25843351: [pytorch][PR] Clarify, make consistent, and test the behavior of logspace when dtype is integral
Test Plan: revert-hammer

Differential Revision:
D25843351 (0ae0fac1bb)

Original commit changeset: 45237574d04c

fbshipit-source-id: fb5343d509b277158b14d1b61e10433793889842
2021-01-15 18:47:37 -08:00
0ae0fac1bb Clarify, make consistent, and test the behavior of logspace when dtype is integral (#47647)
Summary:
The torch.logspace documentation doesn't explain how integer dtypes are handled.
This adds some clarification and tests for when the dtype is integral.

The CUDA implementation is also updated to be consistent with the CPU implementation.
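
A hedged illustration of the behavior being documented: with an integral dtype the values are still computed in floating point and then cast, so truncation can occur.
```
import torch

print(torch.logspace(0, 3, steps=4, dtype=torch.long))   # nominally 1, 10, 100, 1000
```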

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47647

Reviewed By: gchanan

Differential Revision: D25843351

Pulled By: walterddr

fbshipit-source-id: 45237574d04c56992c18766667ff1ed71be77ac3
2021-01-15 12:31:20 -08:00
36ddb00240 [fix] torch.cat: Don't resize out if it is already of the correct size. (#49937)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49878
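
A hedged illustration of the `out=` case this touches: when `out` already has the correct size, `torch.cat` should use it in place rather than resizing it.
```
import torch

a, b = torch.ones(2), torch.zeros(2)
out = torch.empty(4)                 # already the correct size
res = torch.cat((a, b), out=out)
print(res is out)                    # True -- the preallocated tensor is reused
```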

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49937

Reviewed By: mruberry

Differential Revision: D25851564

Pulled By: ngimel

fbshipit-source-id: 9a78922642d5bace70d887a88fa9e92d88038120
2021-01-08 18:10:49 -08:00
58c13cf685 Back out "Revert D25375885: [pytorch][PR] Reenable some BF16 tests on CUDA"
Summary: Revert D25397144 69829f3fff4d4a2d1a71bb52e90d3c7f16b27fa3

Test Plan: Revert Hammer

Reviewed By: janeyx99

Differential Revision: D25397572

fbshipit-source-id: 625ca2a32e4558ae4582a15697b6e1cc57cc1573
2020-12-08 07:52:59 -08:00
39445f718c Revert D25375885: [pytorch][PR] Reenable some BF16 tests on CUDA
Test Plan: revert-hammer

Differential Revision:
D25375885 (e3893b867f)

Original commit changeset: 2e19fe725ae9

fbshipit-source-id: 69829f3fff4d4a2d1a71bb52e90d3c7f16b27fa3
2020-12-08 07:05:33 -08:00
e3893b867f Reenable some BF16 tests on CUDA (#48805)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48805

Reviewed By: agolynski

Differential Revision: D25375885

Pulled By: ailzhang

fbshipit-source-id: 2e19fe725ae9450bd1a2bc4e2d308c59b9f94fac
2020-12-07 16:16:07 -08:00
36c87f1243 Refactors test_torch.py to be fewer than 10k lines (#47356)
Summary:
Creates multiple new test suites to reduce the number of tests in test_torch.py, consistent with previously created suites like test_unary_ufuncs.py and test_linalg.py.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47356

Reviewed By: ngimel

Differential Revision: D25202268

Pulled By: mruberry

fbshipit-source-id: 75fde3ca76545d1b32b86d432a5cb7a5ba8f5bb6
2020-11-28 20:11:40 -08:00
4ab2055857 Re-enable only cuda tests wrongly disabled before (#48429)
Summary:
Closes https://github.com/pytorch/pytorch/issues/46536

Re-enable only cuda tests wrongly disabled in https://github.com/pytorch/pytorch/pull/45332

See discussions https://github.com/pytorch/pytorch/issues/46536#issuecomment-721386038 and https://github.com/pytorch/pytorch/pull/45332#issuecomment-721350987

~~See also https://github.com/pytorch/pytorch/pull/47237 and https://github.com/pytorch/pytorch/pull/47642~~

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48429

Reviewed By: ngimel

Differential Revision: D25176368

Pulled By: mruberry

fbshipit-source-id: 3822f5a45e58c0e387624e70ea272d16218901a9
2020-11-25 13:26:35 -08:00
2907447c97 Spurious numpy writable warning (#47271)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/47160

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47271

Reviewed By: ailzhang

Differential Revision: D24855889

Pulled By: mruberry

fbshipit-source-id: beaf232b115872f20fb0292e995a876cdc429868
2020-11-12 00:14:56 -08:00
59aca02224 Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad) (#47225)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47225

Summary
-------
This PR implements Tensor.new_empty_strided. Many of our torch.* factory
functions have a corresponding new_* method (e.g., torch.empty and
torch.new_empty), but there is no corresponding method to
torch.empty_strided. This PR adds one.
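
A minimal usage sketch based on the signature in the title:
```
import torch

x = torch.randn(2, 3, dtype=torch.float64)
y = x.new_empty_strided((2, 3), (3, 1))   # same dtype/device as x; data is uninitialized
print(y.size(), y.stride(), y.dtype)      # torch.Size([2, 3]) (3, 1) torch.float64
```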

Motivation
----------
The real motivation behind this is for vmap to be able to work through
CopySlices. CopySlices shows up a lot in double backwards because a lot
of view functions have backward formulas that perform view+inplace.

e0fd590ec9/torch/csrc/autograd/functions/tensor.cpp (L78-L106)

To support vmap through CopySlices, the approach in this stack is to:
- add `Tensor.new_empty_strided` and replace `empty_strided` in
CopySlices with that so that we can propagate batch information.
- Make some slight modifications to AsStridedBackward (and add
as_strided batching rule)

Please let me know if it would be better if I squashed everything related to
supporting vmap over CopySlices together into a single big PR.

Test Plan
---------
- New tests.

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D24741688

Pulled By: zou3519

fbshipit-source-id: b688047d2eb3f92998896373b2e9d87caf2c4c39
2020-11-09 08:31:01 -08:00
e4bc785dd5 randperm: add torch check to ensure generator device = tensor device (#47022)
Summary:
**BC-breaking Note:**

This PR disallows passing in a generator of a different device than the tensor being created during `randperm` execution. For example, the following code, which used to work, no longer works.
```
> torch.randperm(3, device='cuda', generator=torch.Generator(device='cpu'))
tensor([0, 1, 2], device='cuda:0')
```
It now errors:
```
> torch.randperm(3, device='cuda', generator=torch.Generator(device='cpu'))
RuntimeError: Expected a 'cuda:0' generator device but found 'cpu'
```

**PR Summary:**

Fixes https://github.com/pytorch/pytorch/issues/44714

Also added + ran tests to ensure this functionality.

Disclaimer: More work needs to be done with regard to small CUDA tensors when a generator is specified; see the issue thread for more details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47022

Reviewed By: samestep

Differential Revision: D24608237

Pulled By: janeyx99

fbshipit-source-id: b83c47219c7816d93f938f7ce86dc8857513961b
2020-11-04 08:29:31 -08:00
774b638eb6 Change largeCUDATensorTest to largeTensorTest+onlyCUDA; add a buffer to large cuda tensor test (#45332)
Summary:
Effectively, `largeCUDATensorTest` = `largeTensorTest` + `onlyCUDA`.

A user hit an OOM with `largeCUDATensorTest('16GB')` on a 16GB V100. The decorator checks the total memory of a GPU device, but in most cases we can't allocate all of the memory a GPU has. So it is beneficial to have a buffer on the `largeTensorTest` check for CUDA; I added a 10% buffer.
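
A hedged sketch of the replacement pattern (the test/class names are made up; `largeTensorTest` is defined as referenced below, and `onlyCUDA` is assumed to come from the same common_device_type module):
```
import torch
from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.common_device_type import (
    instantiate_device_type_tests, largeTensorTest, onlyCUDA)

class TestLargeAlloc(TestCase):
    @onlyCUDA                  # replaces the "CUDA" half of largeCUDATensorTest
    @largeTensorTest('16GB')   # skips unless ~16GB (plus the new buffer) is available
    def test_big_alloc(self, device):
        t = torch.empty(2 ** 30, dtype=torch.int8, device=device)
        self.assertEqual(t.numel(), 2 ** 30)

instantiate_device_type_tests(TestLargeAlloc, globals())

if __name__ == '__main__':
    run_tests()
```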

Definition of `largeTensorTest`

d22dd80128/torch/testing/_internal/common_device_type.py (L560-L578)

`_has_sufficient_memory`

d22dd80128/torch/testing/_internal/common_device_type.py (L535-L557)

`largeCUDATensorTest`

d22dd80128/torch/testing/_internal/common_device_type.py (L526-L532)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45332

Reviewed By: ngimel

Differential Revision: D24698690

Pulled By: mruberry

fbshipit-source-id: a77544478e45ce271f6639ea04e87700574ae307
2020-11-03 11:43:49 -08:00
c5ae875179 Add bfloat support for torch.randn and torch.norm (#47143)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47143

Reviewed By: pbelevich

Differential Revision: D24664407

Pulled By: malfet

fbshipit-source-id: c63ff1cbb812751aba4c56e64e6ee1008cfc2d7f
2020-11-02 11:49:21 -08:00
604e1b301a Fix negative column numbers for the torch.eye (#46841)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/46757

Error out on negative column numbers and add corresponding tests in `test/test_tensor_creation_ops.py`.
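
A hedged illustration of the new check (not taken from the PR):
```
import torch

print(torch.eye(3, 2))   # a non-negative column count is fine
# torch.eye(3, -1)       # a negative column count is now expected to raise
```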

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46841

Reviewed By: VitalyFedyunin

Differential Revision: D24593839

Pulled By: ngimel

fbshipit-source-id: b8988207911453de7811cf3ceb43747192cd689d
2020-10-28 22:29:25 -07:00
a9a9d0b181 Rocm skip test cases (#45782)
Summary:
Skip the following test cases for ROCm (when PYTORCH_TEST_WITH_ROCM=1):
- test_reference_numerics_tan_cuda_float64 (__main__.TestUnaryUfuncsCUDA)
- test_addmv_cuda_float16 (__main__.TestTorchDeviceTypeCUDA)
- test_logspace_cuda_float64 (__main__.TestTensorCreationCUDA)
- test_gloo_backend_2gpu_module (__main__.DistributedDataParallelTest)
jeffdaily
pruthvistony

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45782

Reviewed By: VitalyFedyunin

Differential Revision: D24115581

Pulled By: xw285cornell

fbshipit-source-id: 4043a9fa19e242301b5007813c15b6b3873889c5
2020-10-05 15:12:25 -07:00
e255a4e1fd Enable bfloat16 random kernels on Windows (#44918)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/33793

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44918

Reviewed By: pbelevich

Differential Revision: D23777548

Pulled By: ngimel

fbshipit-source-id: 9cf13166d7deba17bc72e402b82ed0afe347cb9b
2020-09-18 15:55:32 -07:00
a153eafab7 Let logspace support bfloat16 on both CPU and CUDA (#44675)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44675

Reviewed By: ngimel

Differential Revision: D23710801

Pulled By: mruberry

fbshipit-source-id: 12d8e56f41bb635b500e89aaaf5df86a1795eb72
2020-09-17 14:13:55 -07:00
b61d3d8be8 Implement torch.kaiser_window (#44271)
Summary:
Related to https://github.com/pytorch/pytorch/issues/38349
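
A minimal usage sketch of the new window function (hedged; the parameters assumed here are the window length, a periodic flag, and beta):
```
import torch

w = torch.kaiser_window(10, periodic=True, beta=12.0)
print(w.shape)   # torch.Size([10])
```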

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44271

Reviewed By: ngimel

Differential Revision: D23727972

Pulled By: mruberry

fbshipit-source-id: b4c931b2eb3a536231ad6d6c3cb66e52a13286ac
2020-09-16 20:41:31 -07:00
7e91728f68 Deprecates calling linspace and logspace without setting steps explicitly (#43860)
Summary:
**BC-breaking note**

This change is BC-breaking for C++ callers of linspace and logspace if they were providing a steps argument that could not be converted to an optional.

**PR note**

This PR deprecates calling linspace and logspace without setting steps explicitly by:

- updating the documentation to warn that not setting steps is deprecated
- warning (once) when linspace and logspace are called without steps being specified

A test for this behavior is added to test_tensor_creation_ops. The warning only appears once per process, however, so the test would pass even if no warning were thrown. Ideally there would be a mechanism to force all warnings, including those from TORCH_WARN_ONCE, to trigger.
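
A hedged before/after sketch of the calls in question:
```
import torch

x = torch.linspace(0, 1, steps=100)   # steps given explicitly: no warning
# y = torch.linspace(0, 1)            # steps omitted: still works (defaults to 100),
#                                     # but now emits a deprecation warning once
```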

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43860

Reviewed By: izdeby

Differential Revision: D23498980

Pulled By: mruberry

fbshipit-source-id: c48d7a58896714d184cb6ff2a48e964243fafc90
2020-09-13 06:09:19 -07:00
69fbc705d8 Remaining changes of #43578 (#43921)
Summary:
Not all of https://github.com/pytorch/pytorch/issues/43578 was merged; this PR is the remaining part.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43921

Reviewed By: ailzhang

Differential Revision: D23438504

Pulled By: mruberry

fbshipit-source-id: 9c5e26346dfc423b7a440b8a986420a27349090f
2020-08-31 20:42:07 -07:00
7680d87a76 Let linspace support bfloat16 and complex dtypes (#43578)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/43578

Reviewed By: malfet

Differential Revision: D23413690

Pulled By: mruberry

fbshipit-source-id: 8c24f7b054269e1317fe53d26d523fea4decb164
2020-08-31 14:54:22 -07:00
4dc8f3be8c Creates test_tensor_creation_ops.py test suite (#43104)
Summary:
As part of our continued refactoring of test_torch.py, this takes tests for tensor creation ops like torch.eye, torch.randint, and torch.ones_like and puts them in test_tensor_creation_ops.py. There are three test classes in the new test suite: TestTensorCreation, TestRandomTensorCreation, TestLikeTensorCreation. TestViewOps and tests for construction of tensors from NumPy arrays have been left in test_torch.py. These might be refactored separately into test_view_ops.py and test_numpy_interop.py in the future.

Most of the tests ported from test_torch.py were left as is or received a signature change to make them nominally "device generic." Future work will need to review test coverage and update the tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43104

Reviewed By: ngimel

Differential Revision: D23280358

Pulled By: mruberry

fbshipit-source-id: 469325dd1a734509dd478cc7fe0413e276ffb192
2020-08-22 23:18:54 -07:00