torch.mm(sparse, dense) -> dense works for tensors. This PR makes it work for Variables as well.
I renamed mm to _mm in Declarations.cwrap and wrote a native mm function that wraps _mm for the dense case and addmm for the sparse case.
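For illustration, here is roughly what this enables at the Python level
(shapes and values below are made up, and the Variable wrapping reflects
the autograd API of this era):

    import torch
    from torch.autograd import Variable

    # sparse 2x3 matrix with three nonzero entries
    i = torch.LongTensor([[0, 1, 1],
                          [2, 0, 2]])
    v = torch.FloatTensor([3, 4, 5])
    s = torch.sparse.FloatTensor(i, v, torch.Size([2, 3]))
    d = torch.randn(3, 4)

    out = torch.mm(s, d)                          # dense 2x4: worked before
    out_var = torch.mm(Variable(s), Variable(d))  # now works for Variables too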
* Remove setting coalesced to 0 in sparse transpose_
* Remove setting coalesced to 0 in THCSTensor transpose_
* Add test for transpose's coalesce invariant
This adds more heavy sanity checking when we run to_dense(); in particular,
we make sure that if a tensor claims to be coalesced, it truly is coalesced,
and if it is not, that its coalesced version produces the same to_dense()
result.
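As a rough sketch of what the check looks like (the helper name is made
up, assuming the Python-level coalesce()/is_coalesced()/_nnz() accessors):

    def _check_to_dense(s):
        dense = s.to_dense()
        c = s.coalesce()
        # densifying must give the same answer before and after coalescing
        assert c.to_dense().equal(dense)
        if s.is_coalesced():
            # a tensor claiming to be coalesced must already have unique
            # indices, so coalescing should not shrink the nonzero count
            assert c._nnz() == s._nnz()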
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Apparently, the algorithm only guarantees the output is coalesced if
the inputs are coalesced.
I'm planning to do another PR that does much more stringent correctness
testing for the 'coalesced' bit shortly, but y'all should merge
this one first.
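In the meantime, a caller who needs a coalesced result can force the
issue explicitly; illustratively (assuming sparse addition is the op in
question):

    z = x.coalesce() + y.coalesce()  # coalesced inputs => coalesced output
    z = (x + y).coalesce()           # or just coalesce the output itself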
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Fixes #1783.
There is an undocumented invariant in PyTorch that we should
try to avoid having storage == NULL as much as possible (even
though Torch supports it). This commit properly documents the
invariant, and fixes a bug in sparse where the invariant was
not respected. This means that sparse tensors now correctly
remember which GPU they are associated with.
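A quick way to see the user-visible effect (a hedged sketch; the
assertion assumes a machine with at least two GPUs):

    import torch

    # create an empty sparse tensor on a non-default GPU; even though it
    # allocates no storage yet, it should remember its device
    with torch.cuda.device(1):
        s = torch.cuda.sparse.FloatTensor(2, 3)
    assert s.get_device() == 1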
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Fixes #1782.
The default operation should be cheap: the user can always choose to
explicitly make a copy on the way in. Note that this is a
BACKWARDS COMPATIBILITY BREAKING change. However, we DO create
a new tensor wrapper (so we are not affected by subsequent
size changes, etc.)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Make sparseMask error if mask is uncoalesced.
Fixes #1447.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Add test for sparse adagrad.
Previously, the sparse codepath was not exercised at all; this commit
adds a very simple test case, "sparse Rosenbrock": the idea is to run
Rosenbrock as usual but knock out one of the dimensions so that the
gradient tensor is sparse.
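A minimal sketch of that idea (constants and the gradient formula are
illustrative, not the exact test code):

    import torch
    from torch.autograd import Variable

    def drosenbrock(t):
        # dense gradient of the Rosenbrock function at (x, y)
        x, y = t[0], t[1]
        return torch.DoubleTensor([-400 * x * (y - x ** 2) - 2 * (1 - x),
                                   200 * (y - x ** 2)])

    params = Variable(torch.DoubleTensor([1.5, 1.5]), requires_grad=True)
    optimizer = torch.optim.Adagrad([params], lr=1e-2)
    for _ in range(2000):
        optimizer.zero_grad()
        grad = drosenbrock(params.data)
        # knock out dimension 1: only index 0 survives, so Adagrad
        # receives a genuinely sparse gradient
        i = torch.LongTensor([[0]])
        v = grad[:1]
        params.grad = Variable(
            torch.sparse.DoubleTensor(i, v, torch.Size([2])))
        optimizer.step()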
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
As discussed in #1441.
I also added some docs giving clear guidance about coalescing
in sparse tensors.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Simplify _gen_sparse
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Randomly generate an uncoalesced tensor and test with it.
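For reference, an uncoalesced tensor is easy to construct by hand by
repeating a coordinate (illustrative):

    i = torch.LongTensor([[0, 0],
                          [1, 1]])     # the same coordinate (0, 1) twice
    v = torch.FloatTensor([2, 3])
    s = torch.sparse.FloatTensor(i, v, torch.Size([2, 2]))
    assert not s.is_coalesced()
    assert s.to_dense()[0][1] == 5     # duplicates are summed on densify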
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Simpler implementation of cpu_only, suggested by @apaszke
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Better implementation of randn, suggested by @soumith
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Lint fix.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Fix CUDA type error.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Refactor test_sparse to reduce boilerplate.
Instead of manually creating a helper function, threading an is_cuda
parameter around, and creating separate test methods for the CUDA and
non-CUDA variants, we take a different approach:
- There are now new member variables initialized in setUp which
control how we carry out the test; at the moment, it's just
whether or not we are using CUDA. This means you don't have to
pass is_cuda around, or do a conditional to get the triplet of
constructors you need.
I'll note that I am not a big fan of member variables in test
objects, but these are (intended to be) immutable so I think
it should be OK.
- Instead of manually defining test_foo and test_foo_cuda, we now
have a new TestCudaSparse class which overrides setUp (from above)
to swap in the CUDA implementation. Way less boilerplate, and NO
metaprogramming needed.
If you need to opt out of CUDA testing, there is a new cpu_only
decorator you can use.
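In code, the pattern looks roughly like this (member and class names
follow the description above; the exact details are a sketch):

    import unittest
    import torch

    class TestSparse(unittest.TestCase):
        def setUp(self):
            # immutable test configuration; swapped out by the CUDA subclass
            self.is_cuda = False
            self.IndexTensor = torch.LongTensor
            self.ValueTensor = torch.DoubleTensor
            self.SparseTensor = torch.sparse.DoubleTensor

    class TestCudaSparse(TestSparse):
        def setUp(self):
            self.is_cuda = True
            self.IndexTensor = torch.cuda.LongTensor
            self.ValueTensor = torch.cuda.DoubleTensor
            self.SparseTensor = torch.cuda.sparse.DoubleTensor

    def cpu_only(fn):
        # skip a test when running in the CUDA-flavored subclass
        def wrapper(self, *args, **kwargs):
            if self.is_cuda:
                raise unittest.SkipTest('CPU-only test')
            return fn(self, *args, **kwargs)
        return wrapper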
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Here's the command I used to invoke autopep8 (in parallel!):
git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i
Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.
Also configures flake8 to match pep8's behavior.
Also configures TravisCI to check the whole project for lint.
* Fix error in ELU backward
* Add --seed flag for tests
* Add test for BatchNorm eval
* Fix autograd.backward docs
* Support cc flags in cuDNN search
* Fix IndexSelect backward formula