Use at::Tensor based autograd Variable (#2676)

Variable is now a subclass of at::Tensor backed by a VariableImpl* pImpl. The implementation of the ATen functions is defined in the auto-generated VariableType.h/cpp file.

Currently, only functions that fall through to the base type, such as sizes() and isCuda(), are implemented. Differentiable ops like add() and mul() will be added in a subsequent PR.
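The description above can be sketched in miniature: a Variable is a Tensor whose pImpl happens to be a VariableImpl carrying autograd metadata, so non-differentiable queries like sizes() fall through to the base implementation unchanged. This is a hypothetical, simplified sketch of that layering, not the actual ATen headers; all names here mirror the commit message but the bodies are illustrative only.

```cpp
#include <cstdint>
#include <vector>

// Base implementation object; in ATen this would be at::TensorImpl.
struct TensorImpl {
  std::vector<int64_t> sizes_;
  virtual ~TensorImpl() = default;
};

// Tensor is a thin handle over a pImpl; queries dispatch through it.
struct Tensor {
  TensorImpl* pImpl = nullptr;
  std::vector<int64_t> sizes() const { return pImpl->sizes_; }
};

// VariableImpl extends the base impl with autograd metadata.
struct VariableImpl : TensorImpl {
  int output_nr = 0;  // which output of the producing Function this is
};

// Variable is a subclass of Tensor backed by a VariableImpl*; calls like
// sizes() "fall through" to the Tensor base class untouched.
struct Variable : Tensor {
  VariableImpl* impl() const { return static_cast<VariableImpl*>(pImpl); }
};
```

The key property is that Variable adds no new dispatch for fall-through functions: sizes() on a Variable runs exactly the Tensor code path.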
Sam Gross
2017-09-12 11:36:01 -04:00
committed by GitHub
parent 820143f4af
commit 1290e586fb
48 changed files with 1217 additions and 551 deletions


@@ -128,7 +128,7 @@ PyObject* THPCppFunction_register_hook_dict(PyObject* self, PyObject* _var)
auto var = (THPVariable*)_var;
auto& fn = *((THPCppFunction*)self)->cdata;
fn.pre_hooks.push_back(std::make_shared<PyFunctionPreHook>(
-      var->backward_hooks, var->cdata->output_nr));
+      var->backward_hooks, var->cdata.output_nr()));
Py_RETURN_NONE;
}
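The one-line change in this hunk reflects cdata shifting from a pointer with a public field (`var->cdata->output_nr`) to a value member with an accessor method (`var->cdata.output_nr()`). A hypothetical sketch of that API shift (the struct bodies are illustrative assumptions, not the actual THPVariable definitions):

```cpp
// Before the change: cdata was a raw pointer to an object
// exposing output_nr as a public data member.
struct OldVariable {
  int output_nr = 0;
};
struct OldTHPVariable {
  OldVariable* cdata;  // accessed as var->cdata->output_nr
};

// After the change: cdata is held by value and output_nr is
// read through an accessor method on the Variable itself.
struct NewVariable {
  int output_nr() const { return output_nr_; }
  int output_nr_ = 0;
};
struct NewTHPVariable {
  NewVariable cdata;  // accessed as var->cdata.output_nr()
};
```

Holding the Variable by value inside the Python wrapper removes one level of indirection and lets the new at::Tensor-based Variable manage its own VariableImpl* internally.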