Commit Graph

173 Commits

dd6d04ddf2 doc: Normalize all true/false in docstrings to `True|False` (#3593)
* doc: Normalize all true/false in docstrings to ``True|False``

This makes them more apparent in the documentation.

* doc: fix flake8
2017-11-09 08:12:29 -05:00
fa5efab669 comments and case where not all sparse (#3370) 2017-11-01 06:05:17 -04:00
01be4d6b20 sparse broadcast_coalesce and reduce_add_coalesced 2017-10-28 18:52:35 -04:00
3a0aee71f3 fix sparse tensor .cpu() 2017-10-28 18:52:35 -04:00
46a868dab7 [Ready] Limit docs line length (#1900)
* some docs are ready

* docs

* docs

* fix some more

* fix some more
2017-07-10 10:24:54 -04:00
bb3779efe8 Add broadcasting to masked_select. 2017-06-24 09:45:21 -04:00
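The broadcasting behavior this commit added to `masked_select` can be illustrated with a small sketch (written against the modern torch API, which may differ from the signatures at the time of the commit):

```python
import torch

x = torch.arange(6).reshape(2, 3)          # shape (2, 3): [[0, 1, 2], [3, 4, 5]]
mask = torch.tensor([True, False, True])   # shape (3,), broadcast against (2, 3)

# The 1-D mask is broadcast across both rows before selection,
# so columns 0 and 2 are picked from each row.
selected = torch.masked_select(x, mask)
# selected is tensor([0, 2, 3, 5])
```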
12813b88f6 Add DistributedDataParallel 2017-06-12 22:00:22 -04:00
ddf6328990 Document type function returns type with no args (#1719) 2017-06-05 11:54:55 -04:00
743e4894d2 Prefix values/indices/sparse_mask/nnz with underscore (#1457)
As discussed in #1441.

I also added some docs giving clear guidance about coalescing
in sparse tensors.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-05-03 11:14:10 -04:00
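The coalescing guidance referenced in the commit above can be sketched with the public sparse API (modern torch names are an assumption here, since the commit predates today's interface):

```python
import torch

# The coordinate (0, 0) appears twice in the index list, so this
# tensor is uncoalesced: duplicate entries have not been merged.
indices = torch.tensor([[0, 0, 1],
                        [0, 0, 2]])
values = torch.tensor([1.0, 2.0, 3.0])
s = torch.sparse_coo_tensor(indices, values, (2, 3))

# Coalescing sums duplicate entries and sorts the indices.
c = s.coalesce()
# c now holds two entries: (0, 0) -> 3.0 and (1, 2) -> 3.0
```

The underscore-prefixed accessors mentioned in the commit (`_values`, `_indices`, `_nnz`) signal that they expose raw, possibly-uncoalesced storage; the safe pattern is to call `.coalesce()` first.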
01d84c5f9d revert sparse cuda index type change 2017-04-18 12:46:54 -07:00
88b42324e7 spcadd, sparseMask, cadd, csub, cmul + tests 2017-04-18 12:46:54 -07:00
c4d1318662 Fix map_location in torch.load (#1006) 2017-03-15 16:54:19 -04:00
f17cfe4293 sparse tensor operations (#735) 2017-03-03 18:37:03 +01:00
6464e69e21 Docs for torch.Storage (#475) 2017-01-18 03:22:30 -05:00
d951d5b1cd Fix tensor.cuda(0) when on non-zero device. (#472) 2017-01-18 01:08:37 -05:00
0325e2f646 Major autograd refactor
Improves autograd performance by more than 2x and fixes a couple
of bugs. All core functions have been moved to C.
2016-10-13 17:17:49 -07:00
2bc9da4f5e Support "device" keyword argument (#79)
Adds the optional "device" keyword argument to Tensor and Storage
constructors and .new methods.
2016-10-01 19:32:55 -04:00
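The "device" keyword described in the commit above survives in today's factory functions; a minimal sketch (modern API, not the 2016 constructor signature):

```python
import torch

# Explicitly place a tensor via the device keyword.
t = torch.zeros(2, 3, device="cpu")

# The .new_* factory methods also accept device, mirroring the
# ".new methods" mentioned in the commit message.
u = t.new_zeros(4, device="cpu")
```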
e034f258e3 Fix ffi utils in Python 2.7 2016-10-01 15:37:05 -07:00
11b38a6895 Add more functions to autograd 2016-09-30 16:37:07 -04:00
cb5d4e836f Lazy load CUDA and THNN modules (#64) 2016-09-28 19:29:53 -04:00
3eac7164f4 Add data parallel functions to nn 2016-09-27 15:45:45 -07:00
8fdec15a55 Codemod to remove camel case method naming 2016-09-20 08:40:28 -07:00
da5bb373e6 Type conversions now use auto gpu 2016-09-15 18:48:27 -07:00