442 Commits

bf603299b6 Restore torch.mm behavior for sparse variables (#5077)
torch.mm(sparse, dense) -> dense works for tensors. This PR makes it work for variables as well.

I renamed mm to _mm in Declarations.cwrap and wrote a native mm function that wraps _mm for the dense case and addmm for the sparse case.
2018-02-07 15:42:29 -05:00
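The behavior this commit restores, torch.mm(sparse, dense) -> dense, can be sketched in pure Python for a 2-D COO layout. The function name and the indices/values representation below are illustrative only, not PyTorch's actual internals:

```python
def coo_mm(indices, values, shape, dense):
    """Multiply a sparse COO matrix (given as (row, col) index pairs and
    matching values) by a dense matrix (list of rows); return dense."""
    rows, cols = shape[0], len(dense[0])
    out = [[0.0] * cols for _ in range(rows)]
    for (r, c), v in zip(indices, values):
        # Each nonzero (r, c) contributes v * (row c of the dense matrix)
        # to row r of the result.
        for j in range(cols):
            out[r][j] += v * dense[c][j]
    return out

# The 2x2 sparse matrix [[0, 2], [3, 0]] times the identity.
result = coo_mm([(0, 1), (1, 0)], [2.0, 3.0], (2, 2),
                [[1.0, 0.0], [0.0, 1.0]])
```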
ba61eee074 Expose sparse variable addmm, addmm_ (#5016)
sspaddmm and mm for sparse tensors are to come in another PR; they're a little more involved.
2018-02-05 11:40:53 -05:00
a69110c0d7 Add size checks for sparse tensor constructor (#4113)
* Add size checks for sparse tensor constructor

* Fix tests

* Free max_indices
2018-02-01 22:08:20 -05:00
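The kind of validation the constructor gains here can be sketched in pure Python; the function name and error messages below are illustrative, not the actual PyTorch checks:

```python
def check_sparse_sizes(indices, size):
    """Reject COO indices that fall outside the declared tensor size."""
    for idx in indices:
        if len(idx) != len(size):
            raise ValueError("index dimensionality does not match size")
        for d, (i, bound) in enumerate(zip(idx, size)):
            if not 0 <= i < bound:
                raise ValueError(
                    f"index {i} out of bounds for dimension {d} (size {bound})")

# In bounds for a 3x2 tensor: passes silently.
check_sparse_sizes([(0, 1), (2, 0)], (3, 2))
```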
5e72d7af13 Remove setting coalesce to 0 in sparse transpose_ (#4707)
* Remove setting coalesce to 0 in sparse transpose_

* Remove setting coalesced to 0 in THCSTensor transpose_

* Add test for transpose's coalesce invariant
2018-01-23 21:57:12 -05:00
bc11511cda Restore sparse variable transpose_() and t_() (#4779)
* Restore sparse variable transpose_() and t_()

* Add dimension wrapping to transpose_, t_

* Don't expose sparse_raw_resize_ to python
2018-01-23 21:32:40 -05:00
e83546b686 Restore sparse variable _dimI() and _dimV() (#4785) 2018-01-23 21:13:03 -05:00
14033df3cb Fix resize_as_ on Variables containing SparseTensors (#4745)
Fix resize_as_ on Variables containing SparseTensors

Also enable Tensor::tensor(...) on sparse types
2018-01-22 14:33:42 -05:00
b7752efc1b Restore sparse variable methods for: (#4780)
- _nnz
- coalesce
- to_dense
- is_coalesced
2018-01-22 13:48:51 -05:00
a5440717ae Restores some sparse variable methods (#4687)
* Restores some sparse variable methods:
- transpose
- t
- zeros
- zeros_like
- sub
- sub_
- div
- div_
- mul
- mul_

* Restore sparse variable pow()
2018-01-22 10:24:39 -05:00
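For the elementwise methods restored above, the semantics on sparse inputs can be sketched with a dict-based COO representation (purely illustrative, not the real implementation): for example, subtraction produces a result whose sparsity pattern is the union of the two input patterns.

```python
def coo_sub(a, b):
    """a - b for two COO tensors given as {index: value} dicts; the
    result's pattern is the union of the two input patterns."""
    out = dict(a)
    for idx, v in b.items():
        out[idx] = out.get(idx, 0.0) - v
    return out
```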
de28e754b2 Make Variable.is_sparse an attribute (#4308)
This matches Tensor.is_sparse, which makes it easier to replace Tensor
with Variable.
2017-12-22 12:46:28 -05:00
c813ce3787 Implement Variable._sparse_mask (#4124)
* Implement Variable._sparse_mask

* Use SparseTensor as the dynamic_type
2017-12-15 17:25:20 -05:00
51ca3a1a48 Make sparse test also check that coalesce status of tensors makes sense. (#3171)
This adds more heavy sanity checking when we run to_dense(); in particular,
we make sure that if a tensor claims to be coalesced, it truly is coalesced, and if
it is not, that its coalesced version converts via to_dense() to the same thing.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-11-28 09:55:56 -05:00
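The invariant this test enforces, that an uncoalesced tensor and its coalesced form must densify to the same result, can be sketched in pure Python (names and layout illustrative only): duplicate indices accumulate in to_dense(), and coalesce() sums them up front.

```python
def to_dense(indices, values, shape):
    """Materialize a (possibly uncoalesced) 2-D COO tensor; duplicate
    indices accumulate into the same dense entry."""
    out = [[0.0] * shape[1] for _ in range(shape[0])]
    for (r, c), v in zip(indices, values):
        out[r][c] += v
    return out

def coalesce(indices, values):
    """Sum duplicate indices and return the entries in sorted order."""
    acc = {}
    for idx, v in zip(indices, values):
        acc[idx] = acc.get(idx, 0.0) + v
    keys = sorted(acc)
    return keys, [acc[k] for k in keys]
```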
8cd0df020c make sparse (new) functions conform to the invariant that storage is not NULL (#3381) 2017-10-30 18:55:26 -04:00
4f33b136d8 add tests for the previously failing coalesce case 2017-10-28 18:52:35 -04:00
9107110d3a Add sparseTensor.new wrapper bindings (#3329) 2017-10-28 16:34:08 +02:00
bdeee47d33 Add zero, zeros_like, _dimI and _dimV for sparse tensors (#3271) 2017-10-26 18:28:04 +02:00
9ec9acc0cd Fix bug with 'coalesced' calculation in 'cadd'. (#3162)
Apparently, the algorithm only guarantees the output is coalesced if
the inputs are coalesced.

I'm planning to do another PR that does much more stringent correctness
testing for the 'coalesced' bit shortly, but y'all should merge
this one first.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-18 23:20:56 +02:00
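The guarantee described above, that the output is coalesced only if the inputs are, is the natural property of a merge-based add: merging two sorted, duplicate-free entry lists yields a sorted, duplicate-free result, but duplicates *within* one uncoalesced input pass through uncombined. A pure-Python sketch (illustrative, not the real algorithm):

```python
def coo_add(a, b):
    """Merge-add two COO tensors given as sorted lists of
    (index, value) pairs.  If both inputs are coalesced (sorted,
    no duplicate indices), the output is too."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        (ia, va), (ib, vb) = a[i], b[j]
        if ia == ib:
            out.append((ia, va + vb))
            i += 1
            j += 1
        elif ia < ib:
            out.append((ia, va))
            i += 1
        else:
            out.append((ib, vb))
            j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out
```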
3977ee3520 Support device on sparse tensor constructor, assert values/indices on same device.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-06-13 16:30:35 -04:00
c0e7bda3f1 Enforce storage is not NULL invariant for sparse tensors.
Fixes #1783.

There is an undocumented invariant in PyTorch that we should
try to avoid having storage == NULL as much as possible (even
though Torch supports it.)  This commit properly documents the
invariant, and fixes a bug in sparse where the invariant was
not respected.  This means that sparse tensors now correctly
remember which GPU they are associated with.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-06-13 16:30:35 -04:00
7bee03fe1e Do NOT clone indices/values passed to sparse tensor by default.
Fixes #1782.

The default operation should be cheap: the user can always choose to
explicitly make a copy on the way in.  Note that this is a
BACKWARDS COMPATIBILITY BREAKING change.  However, we DO create
a new tensor wrapper (so we are not affected by subsequent
size changes, etc.)

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-06-13 16:30:34 -04:00
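The aliasing consequence of not cloning by default can be sketched in pure Python (the class and field names are illustrative only): because the constructor references the caller's buffers instead of copying them, later in-place edits are visible through the tensor.

```python
class SparseRef:
    """Sketch of the post-change constructor semantics: indices and
    values are referenced, not cloned."""
    def __init__(self, indices, values):
        self.indices = indices   # shared with the caller, not copied
        self.values = values

vals = [1.0, 2.0]
t = SparseRef([(0, 0), (1, 1)], vals)
vals[0] = 5.0  # the caller mutates the buffer in place...
# ...and the tensor observes the change through its shared reference.
```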
5d6e593c67 Test clone preserves uncoalescedness if it wasn't coalesced.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-06-13 16:30:19 -04:00
2f967a204c Sparse tensor clone() preserves coalescedness.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-06-13 16:30:19 -04:00
80c0a8776b Fix #1447: sparse_mask doesn't make sense with uncoalesced tensors (#1458)
* Make sparseMask error if mask is uncoalesced.

Fixes #1447.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Add test for sparse adagrad.

Previously, the sparse codepath was not exercised at all; this commit
adds a very simple test case "sparse Rosenbrock"; the idea is to do
Rosenbrock but then knock out one of the dimensions so that the
tensor is sparse.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-05-03 17:53:45 -04:00
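The sparse-mask operation and the new error case can be sketched in pure Python (illustrative only): the result takes the mask's sparsity pattern with values gathered from the dense input, and a mask with duplicate indices is rejected because the projection would be ambiguous.

```python
def sparse_mask(dense, mask_indices):
    """Project a dense 2-D matrix onto a sparse pattern: keep only the
    entries at the mask's (coalesced) indices."""
    if len(set(mask_indices)) != len(mask_indices):
        # Mirrors the fix above: error out on an uncoalesced mask.
        raise ValueError("mask must be coalesced")
    return mask_indices, [dense[r][c] for (r, c) in mask_indices]
```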
743e4894d2 Prefix values/indices/sparse_mask/nnz with underscore (#1457)
As discussed in #1441.

I also added some docs giving clear guidance about how to coalescing
in sparse tensors.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-05-03 11:14:10 -04:00
e9953c4595 A number of post-merge fixes for test_sparse (#1444)
* Simplify _gen_sparse

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Randomly generate an uncoalesced tensor and test with it.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Simpler implementation of cpu_only suggested by @apaszke

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Better implementation of randn, suggested by @soumith

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Lint fix.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Fix CUDA type error.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-05-03 08:43:03 -04:00
dca208b525 Refactor test_sparse to reduce boilerplate. (#1421)
* Refactor test_sparse to reduce boilerplate.

Instead of manually creating a helper function, threading an is_cuda
parameter around, and creating a test method for CUDA and non-CUDA
variants, we take a different approach:

- There are now some new member variables initialized in setUp which
  control aspects of how we carry out the test; at the moment,
  it's just whether or not we are using CUDA.  This means
  you don't have to pass is_cuda around, or do a conditional to
  get the triplet of constructors you need.

  I'll note that I am not a big fan of member variables in test
  objects, but these are (intended to be) immutable so I think
  it should be OK.

- Instead of manually defining test_foo and test_foo_cuda, we now
  have a new TestCudaSparse class which overrides setUp (from above)
  to swap in the CUDA implementation.  Way less boilerplate, and NO
  metaprogramming needed.

  If you need to opt out of CUDA testing, there is a new cpu_only
  decorator you can use.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-05-01 21:52:58 -04:00
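The refactor described above follows a standard unittest pattern; a minimal sketch with illustrative names (not the actual test code):

```python
import unittest

class TestSparse(unittest.TestCase):
    def setUp(self):
        # Immutable per-class configuration; subclasses override setUp
        # instead of threading an is_cuda flag through every helper.
        self.is_cuda = False

    def test_device_flag(self):
        self.assertIn(self.is_cuda, (False, True))

class TestCudaSparse(TestSparse):
    def setUp(self):
        super().setUp()
        self.is_cuda = True  # every inherited test now runs in CUDA mode
```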
45020a74cd remove inplace pow and fix contiguous -> coalesce (#1398) 2017-04-28 18:26:29 -04:00
f75ab857b8 Add safeCoalesce() to tests 2017-04-28 17:11:05 -04:00
f2903332c7 Make coalesce() out of place 2017-04-28 17:11:05 -04:00
4f09461d24 Rename sparse tensor contiguous() to coalesce() 2017-04-28 17:11:05 -04:00
bafb2e5cc2 Implement sparse pow. (#1387) 2017-04-28 23:06:09 +02:00
701e63107f speed improvements, fix tests 2017-04-18 12:46:54 -07:00
655c22569e CPU hspmm + more efficient reorder 2017-04-18 12:46:54 -07:00
cd3bbc9dfd more operations and optimizations (hspmm, reorder, ...) 2017-04-18 12:46:54 -07:00
01d84c5f9d revert sparse cuda index type change 2017-04-18 12:46:54 -07:00
88b42324e7 spcadd, sparseMask, cadd, csub, cmul + tests 2017-04-18 12:46:54 -07:00
ec260fe8e9 add test for dsmm 2017-04-18 12:46:54 -07:00
328b416068 THCS contiguous + to_dense 2017-04-18 12:46:54 -07:00
f17cfe4293 sparse tensor operations (#735) 2017-03-03 18:37:03 +01:00
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
a1fa995044 Fixes and improvements (#593)
* Fix error in ELU backward

* Add --seed flag for tests

* Add test for BatchNorm eval

* Fix autograd.backward docs

* Support cc flags in cuDNN search

* Fix IndexSelect backward formula
2017-01-25 22:21:49 -05:00
59d66e6963 Sparse Library (#333) 2017-01-05 00:43:41 +01:00