Commit Graph

362 Commits

SHA1 Message Date
d8b2e5d091 Add Python-only default init expression; implement stft and the hann/hamming/bartlett window functions. (#4095)
* implement stft

* addressed comments; implemented window functions; added support for Python-only default initialization
2017-12-18 12:28:23 -05:00
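
The stft and window functions from this commit are still public API. A minimal sketch against today's torch signatures, which have drifted since 2017 (return_complex, for instance, came later):

```python
import torch

# Window functions added alongside stft; hamming_window and
# bartlett_window follow the same pattern.
window = torch.hann_window(400)

signal = torch.randn(1, 4096)     # one 4096-sample signal
spec = torch.stft(
    signal,
    n_fft=400,
    hop_length=160,
    window=window,
    return_complex=True,          # required by recent versions
)
print(spec.shape)                 # torch.Size([1, 201, 26])
```
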
c813ce3787 Implement Variable._sparse_mask (#4124)
* Implement Variable._sparse_mask

* Use SparseTensor as the dynamic_type
2017-12-15 17:25:20 -05:00
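
_sparse_mask was the internal spelling; the operation survives publicly as Tensor.sparse_mask, which keeps only the dense entries at the mask's nonzero positions. A minimal sketch:

```python
import torch

# Keep only the entries of `dense` at the positions present in the mask.
dense = torch.arange(9.0).reshape(3, 3)
indices = torch.tensor([[0, 2],        # row indices
                        [1, 0]])       # column indices
mask = torch.sparse_coo_tensor(indices, torch.ones(2), (3, 3)).coalesce()

masked = dense.sparse_mask(mask)       # sparse result with values [1., 6.]
print(masked.to_dense())
```
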
aeb7a3668d Implement Variable.new (#4080) 2017-12-11 15:45:43 -05:00
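
Tensor.new (Variable.new at the time) builds a tensor with the same dtype and device as the source; new_zeros / new_empty are the preferred modern spellings. A quick sketch:

```python
import torch

x = torch.randn(3, 3, dtype=torch.float64)
y = x.new(2, 2)           # uninitialized, same dtype/device as x (legacy API)
z = x.new_zeros(2, 2)     # modern equivalent with defined contents
print(y.dtype, z.dtype)   # torch.float64 torch.float64
```
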
c681b03d37 Add determinant function on variable; Add backward on svd (#3816)
* determinant on variable

* svd bwd
2017-12-01 13:22:46 -05:00
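
Both pieces in one differentiable pipeline, sketched against the current API (torch.svd has since been superseded by torch.linalg.svd):

```python
import torch

a = torch.randn(4, 4, requires_grad=True)

det = torch.det(a)             # determinant of a differentiable tensor
u, s, v = torch.svd(a)         # SVD now has a backward pass

(det + s.sum()).backward()     # gradients flow through both ops
print(a.grad.shape)            # torch.Size([4, 4])
```
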
65e0d5bad8 Fix void* wrapping in autograd codegen
Also, add assertions here and there to make sure bad things
never happen again.
2017-11-24 13:33:13 +01:00
9cb8b43778 Split off in-place NN functions (#3683)
For example, this splits threshold into threshold(), which is now
never in-place, and threshold_(), which is always in-place.

This simplifies the in-place vs. non-in-place logic in
gen_variable_type.py, which was bug-prone.
2017-11-14 12:59:06 -05:00
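
The resulting convention, sketched with torch.nn.functional: the trailing underscore alone decides in-place behavior.

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-1.0, 0.5, 2.0])

y = F.threshold(x, 1.0, 0.0)   # never in-place; x is untouched
F.threshold_(x, 1.0, 0.0)      # always in-place; x is modified

print(y)                       # tensor([0., 0., 2.])
print(x)                       # tensor([0., 0., 2.])
```
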
5aa5b572e4 update build so that all of TH* is in libATen 2017-11-02 19:53:36 -04:00
53fe804322 Make ONNX work with the new C++ autograd world.
The general strategy is there is a new module, torch.onnx.symbolic, which
contains a function for every ATen method name with the ONNX translation.
While implementing this, I took the opportunity to expunge all references
of 'g' from the public API; instead, it is managed by a global variable in
torch.onnx which tracks the "current graph".

Other changes:

- If you pass a Tensor to op as an argument, it will now automatically be
  converted into a Constant ONNX node.  This removes the need to
  implement ONNX

- Rename value to other, wherever there is both a Scalar and Tensor overload.
  This way, keyword dispatch can work uniformly in both cases.

- Deleted any autograd Function classes that both had a symbolic and were ported
  to the new C++ autograd implementation.  There may still be some straggling
  classes that didn't have a symbolic.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-20 15:38:01 -04:00
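
A hedged sketch of the pattern this commit describes: a function in torch.onnx.symbolic named after the ATen method, emitting its ONNX translation. The names and the registration mechanism have changed repeatedly since 2017, so treat this as shape, not current API.

```python
# Hypothetical symbolics in the style of torch.onnx.symbolic circa 2017:
# one function per ATen method name, translating it to ONNX nodes.
# `g` is the graph context the commit message mentions.

def add(g, self, other):
    # aten::add maps directly onto a single ONNX Add node.
    return g.op("Add", self, other)

def tanh(g, self):
    return g.op("Tanh", self)
```
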
f1f64c8d07 Generate autograd functions for NN / more refactors (#3136)
Generate autograd functions for NN and implement more derivatives in derivatives.yaml

A big refactor of gen_variable_type.py
2017-10-19 15:03:26 -04:00
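
The generated NN derivatives can be verified numerically like any other; a minimal sketch with gradcheck, using elu as a stand-in for the NN functions covered by the generated backwards:

```python
import torch
import torch.nn.functional as F
from torch.autograd import gradcheck

# gradcheck compares the analytic (generated) backward against a
# finite-difference estimate; double precision keeps the check tight.
x = torch.randn(5, dtype=torch.double, requires_grad=True)
assert gradcheck(F.elu, (x,))
```
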
f29bcab67e Use Declarations.yaml to generate Python bindings 2017-10-07 00:41:29 -04:00
558d26a69e Fix argument indices 2017-10-07 00:41:29 -04:00
dcb8d0f088 Refactor out Python binding generation from gen_variable_type.py
- Also includes some prep work for binding NN functions
2017-10-07 00:41:29 -04:00