71ce3448d9
Fix torch.inverse when MAGMA is not available
...
Fixes #2156
2017-07-21 15:57:43 -04:00
82143487b3
Add CUDA support for arange
...
Also enables CUDA for range
2017-07-19 15:48:20 -04:00
a45ad7cfba
Advanced Indexing Part 1 -- Purely Integer Array Indexing
2017-06-22 17:21:50 -04:00
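For context, a minimal sketch of what purely integer-array indexing enables (the tensor values and shapes here are illustrative):

    import torch

    x = torch.arange(0, 12).view(3, 4)   # 3x4 tensor

    # Index with integer arrays: selects elements (0, 1) and (2, 3)
    rows = torch.LongTensor([0, 2])
    cols = torch.LongTensor([1, 3])
    print(x[rows, cols])                 # 1-D tensor holding x[0][1] and x[2][3]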
5b81746767
Simplify python warning settings and cleanup tests.
2017-06-11 05:37:59 -04:00
69287250d1
Add a broadcast parameter to copy_, and use it in the library in cases where non-broadcasting calls are exposed by the tests.
2017-06-11 05:37:59 -04:00
5af46cb352
Add broadcasting support for matmul.
2017-06-11 05:37:59 -04:00
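As a sketch of the new behavior, the batch dimensions of matmul operands now broadcast against each other (shapes are illustrative):

    import torch

    a = torch.randn(5, 1, 3, 4)   # batch dims (5, 1), matrix dims (3, 4)
    b = torch.randn(2, 4, 6)      # batch dim  (2,),  matrix dims (4, 6)

    # Batch dims broadcast: (5, 1) with (2,) -> (5, 2)
    c = torch.matmul(a, b)
    print(c.size())               # torch.Size([5, 2, 3, 6])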
a36f95fe26
Add broadcast support for the fused-matmul functions: addmm, addbmm, addr, addmv, baddbmm.
2017-06-11 05:37:59 -04:00
85d838a028
Add broadcasting tests covering the following: 1) CPU tensor out-of-place functions, 2) CPU tensor in-place functions, 3) GPU tensor out-of-place functions, 4) GPU tensor in-place functions, 5) torch. functions, 6) fallback semantics (use pointwise nElem matching rather than broadcasting)
2017-06-11 05:37:59 -04:00
ba690d5607
Add support for NVTX functions. ( #1748 )
2017-06-10 18:26:58 +02:00
5f1a16a018
Make torch.manual_seed also seed CUDA devices ( #1762 )
2017-06-10 12:37:21 +02:00
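A minimal sketch of the new behavior, assuming a CUDA-enabled build: one call now seeds the CPU and all CUDA generators, where CUDA previously had to be seeded separately:

    import torch

    torch.manual_seed(42)               # now also seeds all CUDA devices

    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(42)  # the explicit call previously required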
7b578dd68e
Add scatterAdd
2017-05-25 16:49:48 -04:00
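A short sketch, assuming the Python binding is exposed as scatter_add_: it accumulates src values into out at the positions named by index:

    import torch

    out = torch.zeros(5)
    index = torch.LongTensor([0, 1, 1, 3])
    src = torch.FloatTensor([1, 2, 3, 4])

    # out[index[i]] += src[i] along dim 0
    out.scatter_add_(0, index, src)
    print(out)   # 1, 5, 0, 4, 0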
33b3968660
add larger tests for qr
2017-05-08 16:58:54 -07:00
f273377d19
add device asserts in scatter/gather kernels
2017-05-03 11:12:26 -04:00
77035d151e
make topk test unique
2017-04-28 07:30:25 -04:00
01a35dcace
Fix coalesced CUDA collectives for nonhomogeneous lists
2017-04-11 14:48:54 -07:00
b16a352a3b
Fix remainder and cremainder for integer types
2017-04-07 17:17:44 -07:00
f0c7124420
Add support for negative dimension arguments in all functions
2017-04-06 16:37:00 -07:00
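For illustration, dim=-1 now names the last dimension wherever a dimension argument is accepted (shapes are illustrative):

    import torch

    x = torch.randn(2, 3, 4)

    y = x.sum(-1)                  # reduce over the last dimension
    z = torch.cat([x, x], -1)      # concatenate along the last dimension
    print(z.size())                # torch.Size([2, 3, 8])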
91c4ba7980
Add torch.arange and deprecate torch.range
2017-04-03 10:38:58 -04:00
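A minimal sketch of the difference: torch.range includes its endpoint, while torch.arange follows Python/NumPy half-open semantics:

    import torch

    print(torch.range(0, 4))    # 0, 1, 2, 3, 4 -- endpoint included (deprecated)
    print(torch.arange(0, 4))   # 0, 1, 2, 3    -- half-open interval [0, 4)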
bb353ccc17
Add batch triangular factorization and solves, add IntegerTensor to cwrap ( #903 )
2017-03-23 15:06:00 -04:00
e50a1f19b3
Use streams in scatter to overlap copy with compute
2017-03-14 22:46:07 +01:00
7ad948ffa9
fix tests so they do not sys.exit(); also fix a fatal error during THC initialization
2017-03-01 17:37:04 -05:00
b190f1b5bc
Add another pinned memory test.
...
Checks that pinned memory freed on a different GPU from which it was
allocated isn't re-used too soon.
2017-03-01 12:22:31 +01:00
61bd5a0643
[Lint] Address F811
2017-02-27 19:33:00 -05:00
4c474a9939
Improve prodall CUDA test
2017-02-20 23:28:31 -08:00
a1534cc37d
Fix auto-gpu in cat
2017-02-14 21:28:50 +01:00
712686ce91
Add cat, contiguous, squeeze, and unsqueeze to THPP
...
Use unsqueeze and view from TH/THC
2017-02-11 17:49:31 +01:00
e7c1e6a8e3
[pep8] Fix most lint automatically with autopep8
...
Here's the command I used to invoke autopep8 (in parallel!):
git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i
Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.
Also configures flake8 to match pep8's behavior.
Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
a1fa995044
Fixes and improvements ( #593 )
...
* Fix error in ELU backward
* Add --seed flag for tests
* Add test for BatchNorm eval
* Fix autograd.backward docs
* Support cc flags in cuDNN search
* Fix IndexSelect backward formula
2017-01-25 22:21:49 -05:00
d951d5b1cd
Fix tensor.cuda(0) when on non-zero device. ( #472 )
2017-01-18 01:08:37 -05:00
f91bb96071
Remove cmin, cmax and cinv
2017-01-16 19:07:37 -05:00
b07358b329
renaming test to avoid dot in test name
2016-12-27 13:34:09 -08:00
2aea8077f9
renaming test to avoid dot in test name
2016-12-27 13:17:04 -08:00
f45d75ed22
make the CUDA-aware tests back off if CUDA is not available
2016-12-24 15:36:00 -05:00
93ed476e7d
adding LAPACK double bindings, adding fmod and remainder
2016-12-22 17:36:47 -08:00
59b9eeff49
Expose gather and equals for CUDA tensors
2016-12-19 20:35:08 -05:00
20fffc8bb7
Fix torch.is_tensor for half tensors ( #322 )
...
Fixes #311
2016-12-19 15:27:47 +01:00
0d7d29fa57
Enable caching allocator for CUDA pinned memory ( #275 )
...
Also add binding for CUDA "sleep" kernel
2016-12-02 01:33:56 -05:00
88d9fdec2e
Add torch.cuda.set_device
2016-12-01 23:14:41 +01:00
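A small usage sketch, assuming a machine with at least two GPUs:

    import torch

    if torch.cuda.is_available() and torch.cuda.device_count() > 1:
        torch.cuda.set_device(1)        # make GPU 1 the current device
        x = torch.cuda.FloatTensor(3)   # allocated on GPU 1
        print(x.get_device())           # 1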
6322cf3234
Allow device=None in Tensor constructor
...
Setting device=None is the same as not specifying the device (use the
current active device).
2016-12-01 20:09:19 +01:00
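A sketch of the behavior described above, assuming a CUDA build and the CUDA tensor constructor's device keyword:

    import torch

    if torch.cuda.is_available():
        a = torch.cuda.FloatTensor(3, device=None)  # uses the current device
        b = torch.cuda.FloatTensor(3)               # identical behavior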
103e70ccc5
adding cuda types for tensor methods ( #194 )
2016-11-02 10:25:58 -04:00
f2d7e94948
Use torch.Size for Tensor sizes and tuple for strides
...
See issue #20
The torch.Size class is a tuple subclass that distinguishes sizes from
other tuples, so that torch.Tensor(size) is interpreted as a size instead
of as data.
2016-10-28 19:37:09 +02:00
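A minimal sketch of the distinction this enables:

    import torch

    t = torch.Tensor(torch.Size([2, 3]))  # interpreted as a size: a 2x3 tensor
    u = torch.Tensor([2, 3])              # interpreted as data: a 1-D tensor

    print(isinstance(t.size(), tuple))    # True -- torch.Size subclasses tuple
    print(t.stride())                     # strides stay a plain tuple, e.g. (3, 1)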
19f2f1a9d3
Buffer values when constructing a CUDA tensor from a sequence
2016-10-24 22:30:11 +02:00
79ead42ade
Add CUDA Stream and Event API ( #133 )
2016-10-18 12:15:57 -04:00
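A short usage sketch of the new API, assuming a CUDA build:

    import torch

    if torch.cuda.is_available():
        stream = torch.cuda.Stream()
        with torch.cuda.stream(stream):       # queue work on this stream
            x = torch.cuda.FloatTensor(1000).normal_()
            y = x * 2
        event = torch.cuda.Event()
        event.record(stream)                  # mark a point in the stream
        event.synchronize()                   # block until the queued work is done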
ee14cf9438
Add support for pinned memory: ( #127 )
...
torch.Storage/Tensor.pin_memory()
torch.Storage/Tensor.is_pinned()
2016-10-15 18:38:26 -04:00
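A brief sketch of the two new calls: pin_memory() returns a copy in page-locked host memory, which makes host-to-device transfers faster and overlappable:

    import torch

    t = torch.FloatTensor(1024).zero_()
    p = t.pin_memory()        # copy into page-locked (pinned) host memory

    print(t.is_pinned())      # False -- the original stays pageable
    print(p.is_pinned())      # True

    if torch.cuda.is_available():
        g = p.cuda()          # pinned sources transfer faster to the GPU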
3d6ebde756
qr and ormqr tests and bugfix
2016-10-14 03:10:16 -04:00
0c9670ddf0
Allow remapping storages at load time and serialize data in little endian order
2016-10-04 12:54:55 -07:00
3f7ab95890
Finish implementation of PRNG-related functions
2016-09-29 11:33:25 -07:00
3eac7164f4
Add data parallel functions to nn
2016-09-27 15:45:45 -07:00
1ed488da4f
Make custom precision of CUDA tests work in inplace mode as well
2016-09-25 12:26:00 -07:00
5030d76acf
Reduce precision of CUDA blas tests
2016-09-23 21:10:28 -07:00