Commit Graph

80 Commits

SHA1 Message Date
ac8f56656d Adapt ONNX Slice op changes (#3316) 2017-10-28 00:03:29 -04:00
869bdeb936 Symbolic implementation of Index supporting tuple of slices. (#3294) 2017-10-27 02:39:38 +05:30
9989bb1a43 Export index constants as long, not int (onnx-caffe2 needs it.) (#3274)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-25 09:50:33 +02:00
4b1e85d266 Remove split/chunk python autograd. 2017-10-24 19:33:37 -04:00
5691b0b8d2 Fix the Slice changes in ONNX (#3216) 2017-10-24 14:12:54 -04:00
53fe804322 Make ONNX work with new C++ autograd world.
The general strategy is there is a new module, torch.onnx.symbolic, which
contains a function for every ATen method name with the ONNX translation.
While implementing this, I took the opportunity to expunge all references
of 'g' from the public API; instead, it is managed by a global variable in
torch.onnx which tracks the "current graph".

Other changes:

- If you pass a Tensor to op as an argument, it will now automatically be
  converted into a Constant ONNX node.  This lets us remove needing to
  implement ONNX

- Rename value to other, wherever there is both a Scalar and Tensor overload.
  This way, keyword dispatch can work uniformly in both cases.

- Deleted any autograd Function classes that both had a symbolic and were ported
  to the new C++ autograd implementation.  There may still be some straggling
  classes that didn't have symbolic.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-20 15:38:01 -04:00
fce3ed19e5 Change device_id to device in python land (#3133)
* change device_id to device in python land

* cuda/random.py
2017-10-17 00:54:26 +02:00
e4701e63f6 Fix exporting Reshape with single torch.Size argument 2017-10-02 23:29:49 +02:00
ad414908d7 Advanced Indexing with variables for autograd (#2590) 2017-09-20 14:50:07 -04:00
b66d90c84f Add a pass to remove all non-standard ONNX nodes before export (#225) 2017-09-19 10:53:32 -04:00
28828e033f Make certain functions traceable 2017-09-19 10:53:32 -04:00
462f95ed6d fix bug in autograd type() for non-default GPU input 2017-09-13 15:33:37 -04:00
3c61b59fd4 codemod primspec -> symbol, PrimSpec -> Symbolic 2017-09-06 13:45:39 -04:00
6d8d5bab4c Codemod Toffee -> ONNX, toffee -> onnx. Change file names to match 2017-09-06 13:45:39 -04:00
4fc54af010 Code review comments.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
f1e4de9a63 Add primspec for Sub, Index, Chunk, and Embedding 2017-09-05 17:48:55 -04:00
394ff072eb Update to latest ToffeeIR operator schema.
- Conv no longer supports bias, so we create an explicit broadcasted
  addition afterwards.  There is one minor problem, however, which is that
  ConvTranspose in Caffe2 has mandatory bias.  So there's a hack.
  See Note [Caffe2ConvTranspose] for the details.
- Squeeze: dims -> axes
- Transpose: axes -> perm
- Reshape lost its extra output (yay!)

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
52e693022a helper methods appendNewNode and NewNode for python Graph API
uses suffixes to disambiguate attribute types
2017-09-05 17:48:55 -04:00
5c82aefa24 Fix bug in Transpose export.
This is a case of two wrongs making a right.  There was a pair of
related bugs:

- We incorrectly translated Transpose as if it were a Permute;
  but Torch transpose actually is a *swap* between dimensions.

- Why didn't we ever notice it?  In all of our tests, a transpose
  was *solely* done to get a weight matrix into the correct form.
  But Caffe2's FC operator *implicitly* does a transpose on
  the weight matrix.

This commit fixes both of these problems.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
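The distinction behind this fix is easy to state in code. Torch's `transpose(dim0, dim1)` swaps exactly two axes, while ONNX's `Transpose` op takes a full permutation in its `perm` attribute; translating the former as if it were the latter is the bug described above. A sketch (the helper name is hypothetical):

```python
def torch_transpose_to_onnx_perm(ndim, dim0, dim1):
    """Build the full ONNX `perm` list equivalent to a Torch-style
    transpose(dim0, dim1), which swaps exactly those two axes."""
    perm = list(range(ndim))          # start from the identity permutation
    perm[dim0], perm[dim1] = perm[dim1], perm[dim0]  # swap just two entries
    return perm

# For a 3-D tensor, swapping dims 0 and 2 must give [2, 1, 0];
# a naive "permute" translation would pass the dims through unchanged.
print(torch_transpose_to_onnx_perm(3, 0, 2))  # [2, 1, 0]
```

The masking effect mentioned in the commit also makes sense here: for a 2-D weight matrix the swap and the only interesting permutation coincide, and Caffe2's FC transposing the weight implicitly hid the wrong result in the tests.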
b5833551f3 Documentation, and inplace support.
This adds the PyTorch API user documentation for Toffee.
To make the example work, I also converted all "inplace"
ops to export out-of-place in Toffee.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
57eb8bd288 Frontend refactor, and some documentation.
- BC BREAKING: export now also takes a mandatory file-ish argument, specifying
  the file to export the protobuf to.  I rewrote the tests to use BytesIO to
  get out the string so they could parse it again.

- BC BREAKING: export no longer returns the tensors that were computed.  To
  get these, use the internal _export function.

- Multiple inputs to models are now supported by passing a tuple to input.
  (Old API of a single Variable still works.)

- Keyword arguments to models are now supported via kwargs keyword arg.

- Renamed embed_params to export_params, and it now defaults to True.

- Toffee tests now live in their own test_toffee.py file.  I had to
  rename a pile of expect files for this.

- Removed defunct torch.toffee imports from autograd to solve module import
  cycle.

- Helper function _with_file_like to abstract over opening file-ish arguments,
  taken from torch.save()

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
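The BytesIO pattern the rewritten tests use can be sketched like this. The `export` function below is a stand-in so the pattern is runnable here; it only mimics the file-ish signature this commit introduces, not the real exporter.

```python
import io

def export(model, args, f, export_params=True):
    """Stand-in for the exporter described above: takes a model, an
    input tuple, and a mandatory file-ish argument to write to."""
    f.write(b"serialized-protobuf")  # hypothetical payload

# Tests export into an in-memory buffer instead of a real file,
# then read the bytes back out for parsing.
buf = io.BytesIO()
export(None, (None,), buf)
proto_bytes = buf.getvalue()
```

Any object with a `write` method works, which is what the `_with_file_like` helper borrowed from `torch.save()` abstracts over: paths are opened, file-like objects are used as-is.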
1f77d482d5 Don't insert Transpose if it is no-op.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
1e0171f436 Super resolution network (#148) 2017-09-05 17:48:55 -04:00
dc6378d891 merge fixes for Squeeze and ConvTranspose 2017-09-05 17:48:55 -04:00
35bddb6b7e pr feedback 2017-09-05 17:48:55 -04:00
c9f7f2eff4 Change pipeline for exporting to toffeeIR
previously:
  PythonOp/CppOp Graph -> ToffeeIR, primspecs worked with protobufs
now:
  PythonOp/CppOp --ToToffeeIR--> jit::Graph of in-memory ToffeeIR -> protobufs of ToffeeIR

This commit lets primspec functions work directly with JIT IR nodes,
which makes it possible to do a lot more in those functions.
2017-09-05 17:48:55 -04:00
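The before/after of this pipeline change can be sketched in a few lines. All names here are hypothetical illustrations of the shape of the change: tracing first builds an in-memory IR graph that primspecs can inspect and rewrite, and protobuf serialization becomes an isolated final pass instead of something primspecs touch directly.

```python
class Node:
    """Minimal stand-in for an in-memory JIT IR node."""
    def __init__(self, kind, inputs):
        self.kind = kind
        self.inputs = inputs

def trace_to_ir(trace):
    # Primspecs now operate on Node objects at this stage, so they can
    # look at neighbors, rewrite subgraphs, etc., before serialization.
    return [Node(op, ins) for op, ins in trace]

def ir_to_protobuf(graph):
    # Serialization is confined to this last step; nothing upstream
    # needs to know the protobuf layout.
    return [{"op_type": n.kind, "input": n.inputs} for n in graph]

graph = trace_to_ir([("Conv", ["x", "w"]), ("Relu", ["t0"])])
proto = ir_to_protobuf(graph)
```

Under the old pipeline the middle stage did not exist, so any graph-level rewriting had to be expressed as protobuf surgery inside each primspec.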
91dcf2938a Miscellaneous fixes needed to make caffe2 E2E 2017-09-05 17:48:55 -04:00
af90a780d1 primspec for avgpool + squeeze (#80) 2017-09-05 17:48:55 -04:00
0ca3ca302e test for primspec for concat (#77) 2017-09-05 17:48:55 -04:00
52e0816bed primspec for concat 2017-09-05 17:48:55 -04:00
0e5320e073 Lint
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
c0d0a99977 Alexnet back online.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
ee2ba279f2 Working Reshape op
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
ca98c659df Add tests that gradcheck grad sizes match input size and fix advanced indexing
case that fails check.
2017-08-02 17:49:02 -04:00
8946502348 Accept all kinds of arguments in Variable.expand 2017-07-20 01:45:57 -04:00
f417cb062b Fix repeat backward to handle unsqueezed dims 2017-07-20 01:45:57 -04:00
bc032be13e Implement negative dimensions and double backwards cumprod. 2017-06-27 18:44:14 -04:00
e5857c5f1c Implement Gather double backwards. 2017-06-24 09:45:21 -04:00
7da77c4255 Add ScatterAdd autograd function. 2017-06-24 09:45:21 -04:00
656cb1c31a Implement and test double backwards for IndexCopy. 2017-06-24 09:45:21 -04:00
4ab4938cf0 Fix and test single backwards IndexCopy. 2017-06-24 09:45:21 -04:00
1324c4b081 Implement double backwards for masked_scatter. 2017-06-24 09:45:21 -04:00
bb3779efe8 Add broadcasting to masked_select. 2017-06-24 09:45:21 -04:00
a45ad7cfba Advanced Indexing Part 1 -- Purely Integer Array Indexing 2017-06-22 17:21:50 -04:00
a836f8f56f Use and document saved_variables for double backwards. 2017-06-22 11:46:24 -04:00
1572173ca7 Implement double backwards for Sort, Topk. 2017-06-21 00:24:13 -04:00
e16ceef76a Implement Scatter double backwards. 2017-06-21 00:24:13 -04:00
b79ff11aca Implement IndexAdd, IndexFill, IndexSelect, MaskedSelect double backwards. 2017-06-21 00:24:13 -04:00
50c0912a75 Implemented masked_fill double backwards. 2017-06-21 00:24:13 -04:00
82ef292f00 Add gradgradchecks for various autograd Functions and support Unfold double backwards. 2017-06-19 18:19:16 -04:00