Commit Graph

10 Commits

Author SHA1 Message Date
643ca5def2 Replace c10::guts::stuff with std::stuff (#30915)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30915

Since we now have C++14, we don't need these c10::guts helpers anymore.
ghstack-source-id: 95777609

Test Plan: waitforsandcastle

Differential Revision: D18869639

fbshipit-source-id: 97716f932297c64c6e814410ac47b444c33d4e2e
2019-12-16 13:57:19 -08:00
9e81616343 Merge Tensor and Variable types. (#28287)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28287

This PR eliminates the static distinction between
Tensor and Variable. Every Variable is a Tensor; there is no need to `static_cast`
or call the Variable constructor.

To do this, I need Tensor to have API parity with Variable. I have already
moved most of the methods I don't want in Tensor off Variable.
These implementations are all placed in Tensor.cpp.

One API difference is that all Variable methods are now const, so we no longer
have faux const-correctness (see https://github.com/zdevito/ATen/issues/27 for
the back story).

This diff is BC breaking in a few ways:
- Because torch::autograd::Variable is now just an alias of at::Tensor, ADL for
  `torch::autograd` functions no longer works, you have to explicitly qualify
  them with `torch::autograd` (examples: `torch/nn/parallel/data_parallel.h`)
- Because Variable and Tensor are now the same type, code which assumes that
  they are different types (e.g., for the purposes of templating, or enable_if checks)
  will not work until you delete the (now) redundant overload/specialization.
  (examples: `torch/nn/modules/container/any.h`, `torch/csrc/utils/pybind.h`)

Some other notes:
- I'm not sure what was going on with the old template implementation of `extract_vars`,
  but I couldn't get the SFINAE version to work. Replacing it with an overload-based
  version made it work.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18571426

Pulled By: ezyang

fbshipit-source-id: 2ea8151e5f1d8512cdebf1345399642e68b707b8
2019-11-21 09:26:39 -08:00
da6b8a905a Use c10::to_string in more places (#28605)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28605

This was added because std::to_string isn't available in libstdc++
on Android.  Use it in more places to get the PyTorch Android
build working with libstdc++.

Test Plan: Internal android build.

Reviewed By: jerryzh168

Differential Revision: D18099520

fbshipit-source-id: 17a2b617c2d21deadd0fdac1db849823637981fc
2019-10-24 15:52:05 -07:00
9bdcc499d1 Delete a few cases where we directly use Backend/TensorTypeId. (#25467)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25467

Use Layout/Device more directly in these cases.
ghstack-source-id: 89289651

Test Plan: sandcastle and ossci

Differential Revision: D17131883

fbshipit-source-id: ab3c6d1c879b7f26f20a2378364c852dc37508fc
2019-08-30 13:00:20 -07:00
mal
6b656565ab Hooks for C++ API (#24393)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24393

Adds the ability to register a hook on a variable, similar to the Python autograd API. `register_hook` takes a function as argument and creates a `CppFunctionPreHook`, similar to `PyFunctionPreHook`.
It returns the index of the hook, which can be passed to `remove_hook` to disable the hook.

Test Plan: Added tests.

Differential Revision: D16861722

fbshipit-source-id: d08047f932e38c7bde04283a18b2d0311c8ad604
2019-08-16 12:44:20 -07:00
mal
81ba2df554 Allow forward functions with single output to return Variable (#23803)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23803

Custom `forward()` can return a `Variable` in the case of a single output, instead of returning a `variable_list` of size 1.

Test Plan: Modified tests involving single output forward functions.

Reviewed By: ezyang

Differential Revision: D16673857

Pulled By: ezyang

fbshipit-source-id: c96d9473b48ad99e6736a68d334b333a917498b7
2019-08-09 11:10:14 -07:00
mal
3fa2df7c9a Support custom autograd functions in C++ (#23572)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23572

### **(The stack from #23020 was moved into this PR)**

Adding API for custom autograd operations, with user defined forward and backward, [like in python](https://pytorch.org/docs/stable/notes/extending.html#extending-torch-autograd).

The custom operation should be a subclass of `Function`, with static `forward` and `backward` functions. `forward()` can accept any arguments, similar to the Python API, and `backward()` should accept a `variable_list` as an argument.

Both `forward()` and `backward()` accept an `AutogradContext*`, which can be used to share data between them.
Variables can be saved in the context using `save_for_backward()`, and other data can be saved in the `saved_data` map as `<std::string, at::IValue>` pairs. Variables saved in forward can be accessed with `get_saved_variables()`.

Example usage:
```cpp
class MyFunction : public Function<MyFunction> {
 public:
  static variable_list forward(AutogradContext *ctx, int n, Variable var) {
    // Save data for backward in context
    ctx->saved_data["n"] = n;
    return {var};
  }

  static variable_list backward(AutogradContext *ctx, variable_list grad_output) {
    // Use data saved in forward
    auto n = ctx->saved_data["n"].toInt();
    return {grad_output[0] * n};
  }
};
```
Then, it can be used with:
```cpp
Variable x;
MyFunction::apply(6, x);
```

Also, `AutogradContext` has methods to mark outputs as non-differentiable and mark inputs as dirty, similar to the Python API in `torch/autograd/function.py`.

Test Plan: Added tests for the custom autograd function API based on test_autograd.py. Currently only the tests for the basic functionality have been added. More tests will be added later.

Differential Revision: D16583428

fbshipit-source-id: 0bd42f19ce37bcd99d3080d16195ad74d40d0413
2019-07-31 11:30:48 -07:00
mal
e7a9b0d62f Rename torch::autograd::Function to torch::autograd::Node
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23269

Test Plan: Imported from OSS

Differential Revision: D16454878

fbshipit-source-id: b1e840fc2d3901955280d141e5ad6efd5e9d66af
2019-07-23 20:52:22 -07:00
mal
44493a623e Pass variable_list of inputs to _wrap_outputs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23037

Test Plan: Imported from OSS

Differential Revision: D16380071

fbshipit-source-id: ae3333c02ef8a3c09b95bec7b8e92ce649553615
2019-07-19 12:31:23 -07:00
mal
58e20638f7 Refactoring _wrap_outputs to remove python dependence.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22631

Test Plan:
test suite

Imported from OSS

Differential Revision: D16185040

fbshipit-source-id: 9b83749f6c9cd05d13f54a3bb4801e263293252b
2019-07-10 12:12:16 -07:00