Merge Tensor and Variable. (#28620)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28620

All Tensors are Variables now; they just happen to have requires_grad=False. Tensors ALWAYS have `VariableTensorId` in their type set.
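
A minimal sketch of what the merge means for user-visible behavior (illustrative only, not code from this patch; compare the removed tests in the diff below):

    #include <torch/torch.h>

    int main() {
      // Before the merge, autograd accessors on an at::-constructed tensor threw
      // "<method> is not implemented for Tensor". After the merge, the same tensor
      // is a Variable that simply reports requires_grad=false.
      auto x = at::tensor({5});
      TORCH_CHECK(!x.requires_grad());
      TORCH_CHECK(x.is_leaf());
      return 0;
    }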

When constructing this patch, I had to make decisions about what I would fix in this patch, and what I would leave for follow up PRs. Here is the cleanup that happens in this patch:

- The `is_variable` property is removed from TensorOptions. I removed this immediately because, unlike Tensor::is_variable, TensorOptions::is_variable doesn't respect our VariableTensorId thread-local state. This means there were a bunch of places where TensorOptions::is_variable was false, which is obviously bogus in a world where tensor and variable are merged. Instead of keeping the method as a function that always returns true, I opted to remove it entirely (it's not public API). All places where we set `is_variable` are deleted.
  - Knock-on effect: there is no longer a separate DeprecatedTypeProperties for the variable and non-variable versions of a type.
  - Knock-on effect: instead of asserting on TensorOptions::is_variable, we now just test `at::impl::variable_is_excluded()` (see the sketch after this list).
- There is now only one copy of the cuDNN RNN dropout cache, not two (I'm not sure why we had two to begin with)
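
To make the second knock-on effect above concrete, a hedged sketch of the new-style check (the helper and the umbrella include are illustrative assumptions; only `at::impl::variable_is_excluded()` itself is named by this patch):

    #include <ATen/ATen.h>

    // Illustrative helper, not from the patch: where we previously asserted
    // !options.is_variable() on a TensorOptions, we now consult the
    // thread-local dispatch state instead.
    void assert_variable_type_excluded() {
      TORCH_INTERNAL_ASSERT(at::impl::variable_is_excluded());
    }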

Some cleanup that doesn't happen in this patch:
- Eliminating unnecessary uses of `make_variable`
- Eliminating `Tensor::is_variable`

The most subtle part of this patch is retaining tracing behavior: the fact that everything is a Variable means that more code gets routed to VariableType than before, which can change traces. I identified two places, both factory paths, where we didn't appropriately turn off VariableType:

- `torch.tensor` must turn off VariableType before invoking `at::empty` to construct the tensor, as it subsequently does direct data access.
- `tensor_slow` (invoked when you pass a Python scalar to a tensor argument) must turn off VariableType before calling `scalar_to_tensor`, so the scalar gets traced as a constant rather than as a call to `scalar_to_tensor`.
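
As a rough sketch of the pattern in both cases, assuming the `at::AutoNonVariableTypeMode` RAII guard from that era of the codebase (the guard name and the helper below are my illustration, not the patch's actual call sites):

    #include <torch/torch.h>

    // Illustrative only: build the backing tensor with VariableType dispatch
    // disabled so the raw at::empty call and the subsequent direct data writes
    // do not show up in traces; the result is still an ordinary (Variable) tensor.
    at::Tensor new_from_data_sketch(at::IntArrayRef sizes, const at::TensorOptions& options) {
      at::AutoNonVariableTypeMode non_var_guard;  // assumed RAII guard
      auto t = at::empty(sizes, options);
      // ... fill t via t.data_ptr() with the user-provided data here ...
      return t;
    }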

Honestly, these are all giant hacks, and should be replaced with a more specialized guard that just toggles tracing.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D18171156

Pulled By: ezyang

fbshipit-source-id: 5b6a045beba37492647e350190f495114e86504d
Commit: 25261a4776 (parent: 215ac1065a)
Author: Edward Yang, 2019-11-04 14:54:23 -08:00, committed by Facebook Github Bot
40 changed files with 214 additions and 293 deletions

@@ -693,9 +693,6 @@ TEST(TensorTest, DataPtr) {
 TEST(TensorTest, Data) {
   const auto tensor = torch::rand({3, 3});
   ASSERT_TRUE(torch::equal(tensor, tensor.data()));
-
-  const auto tensor2 = at::rand({3, 3});
-  ASSERT_THROW(tensor2.data(), c10::Error);
 }
 
 TEST(TensorTest, BackwardAndGrad) {
@@ -703,11 +700,6 @@ TEST(TensorTest, BackwardAndGrad) {
   auto y = x * x;
   y.backward();
   ASSERT_EQ(x.grad().item<float>(), 10.0);
-
-  x = at::tensor({5});
-  y = x * x;
-  ASSERT_THROWS_WITH(y.backward(), "backward is not implemented for Tensor");
-  ASSERT_THROWS_WITH(x.grad(), "grad is not implemented for Tensor");
 }
 
 TEST(TensorTest, BackwardCreatesOnesGrad) {
@@ -729,12 +721,6 @@ TEST(TensorTest, IsLeaf) {
   auto y = x * x;
   ASSERT_TRUE(x.is_leaf());
   ASSERT_FALSE(y.is_leaf());
-
-  x = at::tensor({5});
-  y = x * x;
-  const auto message = "is_leaf is not implemented for Tensor";
-  ASSERT_THROWS_WITH(y.is_leaf(), message);
-  ASSERT_THROWS_WITH(x.is_leaf(), message);
 }
 
 TEST(TensorTest, OutputNr) {
@@ -742,12 +728,6 @@ TEST(TensorTest, OutputNr) {
   auto y = x * x;
   ASSERT_EQ(x.output_nr(), 0);
   ASSERT_EQ(y.output_nr(), 0);
-
-  x = at::tensor({5});
-  y = x * x;
-  const auto message = "output_nr is not implemented for Tensor";
-  ASSERT_THROWS_WITH(y.output_nr(), message);
-  ASSERT_THROWS_WITH(x.output_nr(), message);
 }
 
 TEST(TensorTest, Version) {
@@ -757,14 +737,6 @@ TEST(TensorTest, Version) {
   ASSERT_EQ(x._version(), 1);
   x.add_(1);
   ASSERT_EQ(x._version(), 2);
-
-  x = at::ones(3);
-  const auto message = "version is not implemented for Tensor";
-  ASSERT_THROWS_WITH(x._version(), message);
-  x.mul_(2);
-  ASSERT_THROWS_WITH(x._version(), message);
-  x.add_(1);
-  ASSERT_THROWS_WITH(x._version(), message);
 }
 
 TEST(TensorTest, Detach) {
@@ -774,12 +746,6 @@ TEST(TensorTest, Detach) {
   ASSERT_FALSE(y.is_leaf());
   ASSERT_TRUE(y_detached.is_leaf());
   ASSERT_FALSE(y_detached.requires_grad());
-
-  x = at::tensor({5}, at::TensorOptions().requires_grad(false));
-  y = x * x;
-  const auto message = "detach is not implemented for Tensor";
-  ASSERT_THROWS_WITH(x.detach(), message);
-  ASSERT_THROWS_WITH(y.detach(), message);
 }
 
 TEST(TensorTest, DetachInplace) {
@@ -790,12 +756,6 @@ TEST(TensorTest, DetachInplace) {
   ASSERT_FALSE(y.requires_grad());
   ASSERT_TRUE(y_detached.is_leaf());
   ASSERT_FALSE(y_detached.requires_grad());
-
-  x = at::tensor({5}, at::TensorOptions().requires_grad(false));
-  y = x * x;
-  const auto message = "detach_ is not implemented for Tensor";
-  ASSERT_THROWS_WITH(x.detach_(), message);
-  ASSERT_THROWS_WITH(y.detach_(), message);
 }
 
 TEST(TensorTest, SetData) {
@@ -807,10 +767,6 @@ TEST(TensorTest, SetData) {
   x.set_data(y);
   ASSERT_TRUE(torch::equal(x, y));
   ASSERT_EQ(x.data_ptr<float>(), y.data_ptr<float>());
-
-  x = at::tensor({5});
-  y = at::tensor({5});
-  ASSERT_THROWS_WITH(x.set_data(y), "set_data is not implemented for Tensor");
 }
 
 TEST(TensorTest, RequiresGradInplace) {
@@ -828,11 +784,4 @@ TEST(TensorTest, RequiresGradInplace) {
   const auto int_tensor = torch::tensor({5}, at::TensorOptions().dtype(torch::kInt));
   ASSERT_THROWS_WITH(int_tensor.requires_grad_(true),
     "Only Tensors of floating point dtype can require gradients");
-
-  x = at::tensor({5}, at::TensorOptions().requires_grad(false));
-  y = x * x;
-  ASSERT_THROWS_WITH(x.requires_grad_(false),
-    "requires_grad_ is not implemented for Tensor");
-  ASSERT_THROWS_WITH(y.requires_grad_(false),
-    "requires_grad_ is not implemented for Tensor");
 }