165 Commits

67d64ea910 Fix binary op name inference to happen before shape checks (#25563)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25563

Before, for binary ops, name inference occurred after shape checks. This
defeats the purpose of names, because the names are supposed to tell
the user, for example, that their tensors are misaligned or that they
are adding incompatible tensors.

This PR changes TensorIterator so that names are computed before shape checks and
propagated after the binary ops are finished. In order to support this,
this PR makes the following changes:
- adds a `names_` field to TensorIterator, similar to `shape_`. This is
necessary to hold the output names, that are computed in
`compute_names`, until they are used in `propagate_names_to_outputs()`.
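The ordering fix can be sketched in pure Python (hypothetical helper; the real logic lives in C++ inside TensorIterator): compute the unified output names from the input names first, so a name mismatch surfaces as a name error rather than a bare size error.

```python
def unify_names(a_names, b_names):
    """Unify two name lists from the right, as binary-op name
    inference does; None acts as a wildcard. Raises on mismatch."""
    result = list(a_names) if len(a_names) >= len(b_names) else list(b_names)
    for i in range(1, min(len(a_names), len(b_names)) + 1):
        a, b = a_names[-i], b_names[-i]
        if a is None:
            result[-i] = b
        elif b is None or a == b:
            result[-i] = a
        else:
            raise RuntimeError(
                f"dims {a_names} and {b_names} do not match: "
                f"{a!r} vs {b!r}")
    return result

# The name check runs before any shape check, so the user sees the
# name mismatch first:
unify_names(['N', 'C'], ['N', 'C'])      # -> ['N', 'C']
unify_names([None, 'C'], ['N', None])    # -> ['N', 'C']
```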

Test Plan: Imported from OSS

Differential Revision: D17158869

Pulled By: zou3519

fbshipit-source-id: 0caa90f7a93e4d9bdb2549cd330cc3abd2258868
2019-09-03 18:49:09 -07:00
9922e09436 Name inference rule for torch.cat (#25568)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25568

Test Plan
- new test [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D17159069

Pulled By: zou3519

fbshipit-source-id: fbc185ea5865b128508451096b742ac18e467670
2019-09-03 18:43:10 -07:00
a6ba4f64ac Name inference for masked_fill_ / masked_fill
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25567

Test Plan: - new tests [namedtensor ci]

Differential Revision: D17159070

Pulled By: zou3519

fbshipit-source-id: d177a0847fc592b6b15e3ae59fcea847d4975e12
2019-09-03 17:45:14 -07:00
2aef60660f Name inference rule for masked select (#25566)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25566

masked_select returns a tensor with None names. However, it broadcasts
its inputs, so we need to check that they are broadcastable.

Test Plan: - new tests [namedtensor ci]

Differential Revision: D17159071

Pulled By: zou3519

fbshipit-source-id: ad201f3f73bc54163ede1ba3d906d2409ebef475
2019-09-03 17:45:09 -07:00
938e740241 Name inference rule for mean, std, var, std_mean, var_mean (#25431)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25431

I put the name propagation logic in a central place, `make_reduction`,
that creates a TensorIterator for the reduction. This lets us implement
name inference rules for mean, std, var, std_mean, and var_mean.
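The shared rule these reductions follow can be sketched in pure Python (hypothetical helper, not the actual `make_reduction` code): the names of the reduced dimensions are dropped from the output.

```python
def reduction_output_names(names, dims):
    """Sketch of reduction name inference (mean, std, var, std_mean,
    var_mean): names of the reduced dimensions are removed."""
    dims = set(dims)
    return [n for i, n in enumerate(names) if i not in dims]

reduction_output_names(['N', 'C', 'H', 'W'], [2, 3])  # -> ['N', 'C']
reduction_output_names(['N', 'C', 'H', 'W'], [1])     # -> ['N', 'H', 'W']
```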

Test Plan
- new tests [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D17123577

Pulled By: zou3519

fbshipit-source-id: 2d47080a40da0c4bcabbb3df71ffa8fbeb7a14c6
2019-09-03 11:54:13 -07:00
2513ca66ca Add guards for using named tensor with serialization and multiprocessing (#25345)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25345

Test Plan
- New tests [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D17101486

Pulled By: zou3519

fbshipit-source-id: 58e803b042056ee6abab8551517f74078f2b81d5
2019-08-29 14:10:33 -07:00
0bb69f6071 Add guard for named tensors in the JIT (#25344)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25344

Test Plan
- [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D17101487

Pulled By: zou3519

fbshipit-source-id: d6170a809dfd98e6a4dba8450433c439962991cc
2019-08-29 14:10:28 -07:00
6f5fe96c80 Implement name inference for torch.matmul (#25177)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25177

Test Plan
- new tests [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D17051452

Pulled By: zou3519

fbshipit-source-id: 7259cdb7ba7f480035528cf3c60ef6d051e42db5
2019-08-28 13:51:04 -07:00
d2719b549d Implement name inference for torch.bmm (#25123)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25123

The approach is different for CPU and CUDA. In particular:
- in CPU, I added a name inference rule to bmm_out
- in CUDA, bmm calls THCTensor_(baddbmm) so I added a name inference
rule to that.

When one calls baddbmm on CPU or CUDA, it'll error out with NYI due to
`named_guard: True` on it in native_functions.yaml. I'm not planning on
implementing baddbmm soon because it's a little tricky to add to CPU,
and bmm is a more commonly used function.

Test Plan
- new tests [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D16998073

Pulled By: zou3519

fbshipit-source-id: 8dc01898964318717911f28eebd6cdfffc7dfcf2
2019-08-28 13:51:00 -07:00
2f4f6c2563 Implement name inference for torch.dot (#24474)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24474

torch.dot is a little weird. It ignores the names of its inputs to be
consistent with the rest of our matrix multiplication functions.

I've written the implementation using a helper function that is also
used by other matrix multiplication functions so that it is easy to
change the behavior.

Test Plan
- new tests [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D16915802

Pulled By: zou3519

fbshipit-source-id: 628a6de1935357022cc92f4d23222736a70bb070
2019-08-27 06:49:27 -07:00
088201f95d Implement name inference for addmv, addmv_, mv (#24471)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24471

mv(Tensor[M, N], Tensor[O]) ignores the names of N and O and returns a
tensor with names [M].
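As a pure-Python sketch (hypothetical helper), the mv rule amounts to: the contracted dimensions' names are ignored and only the matrix's first dim name survives.

```python
def mv_output_names(mat_names, vec_names):
    """Sketch of the mv name rule: mv(Tensor[M, N], Tensor[O]) contracts
    N against O, ignoring their names, and returns names [M]."""
    return [mat_names[0]]

mv_output_names(['M', 'N'], ['O'])  # -> ['M']
```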

Test Plan: - new tests [namedtensor ci]

Differential Revision: D16915805

Pulled By: zou3519

fbshipit-source-id: d7d47903f249f85ef3be8a188d51993834bf5f55
2019-08-26 15:03:26 -07:00
78fa8a8ad0 Implement name inference for expand (#24469)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24469

tensor.expand(*sizes) returns a tensor with names equal to tensor.names
plus unnamed padding in the beginning dimensions.

For example, Tensor[H, W].expand(10, 2, 128, 128) -> Tensor[None, None,
H, W].
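This padding rule can be sketched in pure Python (hypothetical helper name):

```python
def expand_output_names(names, new_ndim):
    """Sketch of expand's name rule: the output keeps the input's names,
    padded with None (unnamed) for the new leading dimensions."""
    return [None] * (new_ndim - len(names)) + list(names)

expand_output_names(['H', 'W'], 4)  # -> [None, None, 'H', 'W']
```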

Test Plan: - new tests [namedtensor ci]

Differential Revision: D16915804

Pulled By: zou3519

fbshipit-source-id: 77ac97f42e9959d7f6d358c5286e3dc27488e33d
2019-08-26 15:03:22 -07:00
0156d02b59 Implement name inference for mm, addmm (#24306)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24306

Featuring:
- a new way of writing name inference tests. At some point I'll migrate
the older tests over.
- The out= variants aren't implemented. This is because they are a
little weird: the output gets resized, but I haven't thought through
what semantics that should have.

Test Plan: - new tests [namedtensor ci]

Differential Revision: D16915801

Pulled By: zou3519

fbshipit-source-id: 29ae2ee414c7d98e042965458c5dccef7ddbd4dd
2019-08-26 12:20:26 -07:00
6195aee2c6 Fix binary op name inference between unnamed and named tensors. (#24921)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24921

Let `unnamed = torch.randn(1, 1, 1)` and `named = torch.randn(1, 1,
names=('N', 'C'))`.

Previously, there was a bug where `unnamed + named` would error out.
This happened because `unify_from_right(unnamed.opt_names(),
named.opt_names())` would return `named.names()`, which was propagated
to the output tensor. However, the output tensor has dim 3, while
`named.names()` only has 2 elements, so the code would throw an error.

The solution implemented in this PR is to stop trying to do premature
optimization. If none of the inputs to an operation have names, then
don't run name inference at all. However, if any inputs do, then
materialize the names and run name inference.

It's possible to make this more efficient for the case where some inputs
are named and some aren't, but we should benchmark these cases first
and determine whether the extra efficiency is necessary.
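The fix described above can be sketched in pure Python (hypothetical helper; the real code is C++): skip name inference entirely when no input is named, and otherwise materialize missing names as all-None lists of the right length before unifying.

```python
def binary_op_output_names(a_names, a_dim, b_names, b_dim):
    """Sketch: if neither input is named, skip name inference;
    otherwise materialize missing names as [None] * dim so the
    unified result has as many names as the output has dims."""
    if a_names is None and b_names is None:
        return None  # fast path: no name inference at all
    a = list(a_names) if a_names is not None else [None] * a_dim
    b = list(b_names) if b_names is not None else [None] * b_dim
    # unify from the right; None is a wildcard
    out = list(a if len(a) >= len(b) else b)
    for i in range(1, min(len(a), len(b)) + 1):
        if a[-i] is None:
            out[-i] = b[-i]
        elif b[-i] is None or a[-i] == b[-i]:
            out[-i] = a[-i]
        else:
            raise RuntimeError(f"names {a} and {b} do not match")
    return out

# unnamed (dim 3) + named (dim 2) now yields 3 output names:
binary_op_output_names(None, 3, ['N', 'C'], 2)  # -> [None, 'N', 'C']
```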

Test Plan: - new tests [namedtensor ci]

Differential Revision: D16930710

Pulled By: zou3519

fbshipit-source-id: 0de73c803c8b0f9a1c2d80684b9a47cccba91cbc
2019-08-26 12:20:22 -07:00
867d8af20f Fix FIXME_default_names by storing static list of 64 none names (#24885)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24885

Store a static pre-allocated vector of names. When one calls
`default_names`, it gives a const reference to some amount of these
names.

Also make the maximum number of dimensions we support for named tensors
clearer. Right now it is 64, but that number is easy to change. 64
follows an internal PyTorch maximum number of dimensions:
TensorIterator reduce ops have a limit of 64 dims.

Test Plan: - new tests [namedtensor ci]

Differential Revision: D16915803

Pulled By: zou3519

fbshipit-source-id: 931741b199456f8976882b82f25ab5af6dcd108b
2019-08-23 14:32:07 -07:00
3a59a9b36c Implement name inference for t(), transpose(...) (#24941)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24941

Test Plan
- [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D16930707

Pulled By: zou3519

fbshipit-source-id: 833a2bfd27f3bb3b7bc4327ac62a1d02ec526127
2019-08-23 09:01:53 -07:00
a77cb2ccd1 Revert D16915800: Implement name inference for t(), transpose(...)
Differential Revision:
D16915800

Original commit changeset: d8e5beff3daa

fbshipit-source-id: f8b966fdc485d8250ae74d8bbbda157b45c2d1a0
2019-08-20 14:07:06 -07:00
acf3b76bf0 Implement name inference for t(), transpose(...) (#24203)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24203

Test Plan
- [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D16915800

Pulled By: zou3519

fbshipit-source-id: d8e5beff3daa7e5fd5bfed5b02d8089cac300de8
2019-08-20 13:46:47 -07:00
4bfd33ed36 Name inference for softmax, log_softmax and Dimname overloads. (#24087)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24087

Added name inference rules for softmax and log_softmax.

Added the overloads for Dimname dim to softmax and log_softmax.

Test Plan: - [namedtensor ci]

Differential Revision: D16763391

Pulled By: zou3519

fbshipit-source-id: 676a14666d42441eb7d3c9babef7461c7b78d290
2019-08-14 12:19:27 -07:00
5cb8a7b396 Fix out= function semantics for named tensors. (#24028)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24028

Previously, torch.abs(tensor, out=out) would ignore the names of the
`out` tensor and overwrite them with the names of `tensor`.

This patch changes the behavior to the following:
1) If `out` does not have names, then overwrite them with `tensor.names`.
2) If `out` does have names, then check that `out.names` equals
`tensor.names`.
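The two cases above can be sketched in pure Python (hypothetical helper name; the real propagation is in C++):

```python
def propagate_names_to_out(out_names, result_names):
    """Sketch of the out= rule: an unnamed `out` adopts the computed
    names; a named `out` must already match them exactly."""
    if all(n is None for n in out_names):
        return list(result_names)
    if list(out_names) != list(result_names):
        raise RuntimeError(
            f"out names {out_names} do not match computed names {result_names}")
    return list(out_names)

propagate_names_to_out([None, None], ['N', 'C'])  # -> ['N', 'C']
propagate_names_to_out(['N', 'C'], ['N', 'C'])    # -> ['N', 'C']
```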

This patch also includes the following clean ups:
- renamed `default_names` to `FIXME_default_names` because it is
inefficient and needs to be fixed.
- Renamed impl::internal_get_names / impl::internal_has_names to
impl::get_names / impl::set_names. Devs should feel free to use them, so
I removed the internal_ prefix.
- Moved internal_set_names to NamedTensor.{h, cpp}. These functions
still have the internal_ prefix because their use requires caution.

Test Plan: - [namedtensor ci]

Differential Revision: D16763387

Pulled By: zou3519

fbshipit-source-id: 57dcc7c759246def0db2746d1dca8eddd5e90049
2019-08-14 12:19:23 -07:00
f996f8d61d Update tensor.view_names / tensor.names_ API (#23973)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23973

Without loss of generality, I describe the API for `tensor.view_names`.
`tensor.names_` has an analogous API.

`tensor.view_names(*names)` returns a view on the tensor with named dims `names`.
`names` must have length `tensor.dim()`, unless it contains '*' (known
as the "glob"), which is expanded greedily to match the corresponding
names from `tensor.names`.

For example,
```
>>> x = torch.empty(2, 3, 5, 7, names=('N', 'C', 'H', 'W'))
>>> x.view_names('*', 'height', 'width').names
('N', 'C', 'height', 'width')

>>> x.view_names('batch', '*', 'width').names
('batch', 'C', 'H', 'width')
```

tensor.view_names(**rename_map) returns a view on tensor that has
renamed dims as specified in the mapping `rename_map`.

For example,
```
>>> x = torch.empty(2, 3, 5, 7, names=('N', 'C', 'H', 'W'))
>>> x.view_names(W='width', H='height').names
('N', 'C', 'height', 'width')
```
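The two call styles above can be sketched in pure Python (a hypothetical helper operating on name lists, not the real implementation):

```python
def view_names(names, *positional, **rename_map):
    """Sketch of the two view_names call styles: positional names with
    a greedy '*' glob, or keyword renames (old_name=new_name)."""
    if rename_map:
        return [rename_map.get(n, n) for n in names]
    new = list(positional)
    if '*' not in new:
        assert len(new) == len(names)
        return new
    i = new.index('*')
    # expand the glob to cover the dims not claimed on either side
    n_glob = len(names) - (len(new) - 1)
    return new[:i] + list(names[i:i + n_glob]) + new[i + 1:]

names = ['N', 'C', 'H', 'W']
view_names(names, '*', 'height', 'width')   # -> ['N', 'C', 'height', 'width']
view_names(names, 'batch', '*', 'width')    # -> ['batch', 'C', 'H', 'width']
view_names(names, W='width', H='height')    # -> ['N', 'C', 'height', 'width']
```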

These are different(!!!) from the C++ API, which only allows the
following:
- tensor.view_names(optional<DimnameList>)

C++ API parity for named tensors is not important right now; I am
punting that to the future.

Test Plan: - [namedtensor ci]

Differential Revision: D16710916

Pulled By: zou3519

fbshipit-source-id: 7cb8056c0fb4c97b04c3a2d1dd0f737e0a67ce34
2019-08-14 09:40:35 -07:00
2fcdb3a1f3 Rename set_names -> view_names, set_names_ -> names_ (#23962)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23962

This change should make the semantics clearer.

`tensor.names_(names)` sets tensor.names to be `names`.

`tensor.view_names(names)` returns a view of the tensor with names
`names`.

Test Plan
- [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D16710915

Pulled By: zou3519

fbshipit-source-id: c82fa9812624d03c86f7be84b0a460e3c047aaa0
2019-08-14 09:40:31 -07:00
7030f2c623 Implement tensor.align_to(names), torch.align_tensors(*tensors) (#23804)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23804

`output = tensor.align_to(names)` returns a view of `tensor` such that
`output.names = names`. Dimensions with the same names in `tensor` and
`output` have the same sizes; dimensions with new names have size 1.

The following must be true for this operation to succeed:
1) tensor.names must be a subsequence (not necessarily contiguous) of `names`
2) Aligning tensor.names to names must not change the absolute position from the
   right of any unnamed dimension.

In practice, these constraints mean that aligning cannot transpose
names.

Some examples:
- Tensor[C].align_to(C) -> Tensor[C]
- Tensor[N].align_to([N, C]) -> Tensor[N, C]
- Tensor[H, W].align_to([N, H, W, C]) -> Tensor[N, H, W, C]
- Tensor[None].align_to([N, None]) -> Tensor[N, None]
- Tensor[N].align_to([N, None, None]) -> Tensor[N, None, None]

Examples of error cases:
- Tensor[W, H].align_to([N, H, W, C]) -> Error (not a subsequence)
- Tensor[None, H].align_to([None, H, W]) -> Error (would change the
absolute position from the right of a None dimension)

`torch.align_tensors(*tensors)` aligns the named dimensions of each
tensor according to the alignment rules so that they can be used in an
operation. More concretely, it aligns each tensor to the
longest names among the names of the tensors in `tensors`.

This allows users to emulate "broadcasting by names", which is one of
the things named tensors tries to enable. Here is an example:

```
imgs: Tensor[N, C, H, W]
scale: Tensor[N]

// Doesn't work because broadcasting is by position, not by name, by default
imgs * scale

// Does work
imgs, scale = torch.align_tensors(imgs, scale)
imgs * scale
```
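The subsequence constraint can be sketched in pure Python (hypothetical helper; it only models the named-dim rule, not the unnamed-dim position rule or the actual view creation):

```python
def align_positions(names, target):
    """Sketch of align_to's subsequence check: each name in `names`
    must appear, in order, in `target`; new names get size-1 slots.
    Returns the positions of the original dims in the aligned output."""
    pos, j = [], 0
    for name in names:
        while j < len(target) and target[j] != name:
            j += 1
        if j == len(target):
            raise RuntimeError(f"{names} is not a subsequence of {target}")
        pos.append(j)
        j += 1
    return pos

align_positions(['N'], ['N', 'C'])                 # -> [0]
align_positions(['H', 'W'], ['N', 'H', 'W', 'C'])  # -> [1, 2]
# align_positions(['W', 'H'], ['N', 'H', 'W', 'C']) raises: aligning
# cannot transpose names.
```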

Future:
- Consider allowing broadcasting by names by default.

Test Plan:
- The diff looks pretty large but more than half of it is testing.
- new tests [namedtensor ci]

Differential Revision: D16657927

Pulled By: zou3519

fbshipit-source-id: e2f958bf5146c8ee3b694aba57d21b08e928a4e6
2019-08-14 09:40:27 -07:00
eabfca3577 Named inference for contiguous(), bernoulli variants, and dropout. (#24109)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24109

See title.

Test Plan: - New tests [namedtensor ci]

Differential Revision: D16763389

Pulled By: zou3519

fbshipit-source-id: ea14af0fe812d04ca7127a080e56c273b21c30bc
2019-08-14 06:19:28 -07:00
ad42c7d0f3 Implement name inference rule for empty_like, clone (#24108)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24108

`torch.empty_like(tensor)` and `tensor.clone()` both propagate names to
the output tensor.

As a part of this change, I fixed the empty(..., names=) overload to
include the `memory_format` argument in the normal `empty` declaration
in native_functions.yaml.

Test Plan: - [namedtensor ci]

Differential Revision: D16763392

Pulled By: zou3519

fbshipit-source-id: c7b2bc058d26a515a5fd8deef22c2acb290c8816
2019-08-14 06:19:24 -07:00
65fa0233c5 Add names argument to ones, rand, randn, zeros, full; fix empty (#24107)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24107

In the short term, we implement this by having overloads for each of
these functions. In the long term, the plan is to move DimnameList to
TensorOptions so that we do not have to duplicate work.

Also fixes the implementation of empty. If there are no names, we should
just return an unnamed tensor instead of telling the user we don't
support their backend/layout.

Test Plan: - [namedtensor ci]

Differential Revision: D16763393

Pulled By: zou3519

fbshipit-source-id: 7324a6b157187d4f74abc5459052f3323a417412
2019-08-14 06:19:21 -07:00
98a3b3d565 Add name propagation for at::alias, add tensor.set_names (#24202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24202

tensor.set_names(names) is the out-of-place variant of
tensor.set_names_(names). This naming is probably confusing so I am
taking any and all suggestions.

Test Plan: - run tests [namedtensor ci]

Differential Revision: D16773014

Pulled By: zou3519

fbshipit-source-id: 61024303c1a34db631cc4cb2c53757345e40d72c
2019-08-13 17:01:18 -07:00
75db368031 Revert D16763388: Add name propagation for at::alias, add tensor.set_names
Differential Revision:
D16763388

Original commit changeset: 4b2fb3acc051

fbshipit-source-id: 5be35bdcc2e7c71378af9e34be19305bdd4ba0d1
2019-08-12 13:42:43 -07:00
6772f537f0 Revert D16763390: Improve test_namedtensor.py with named tensor equality check
Differential Revision:
D16763390

Original commit changeset: 170e27ebc4d7

fbshipit-source-id: dbabe837793d8db6493a221b91e43a065baece75
2019-08-12 13:42:39 -07:00
90f3f9d9aa Improve test_namedtensor.py with named tensor equality check (#24106)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24106

Test Plan
- Code reading. assertTensorDataAndNamesEqual isn't used in this commit
but it'll be used in future commits.
- [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D16763390

Pulled By: zou3519

fbshipit-source-id: 170e27ebc4d79aca939c5d101489b20faedc6133
2019-08-12 12:45:00 -07:00
1108fa1acb Add name propagation for at::alias, add tensor.set_names (#24105)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24105

tensor.set_names(names) is the out-of-place variant of
tensor.set_names_(names). This naming is probably confusing so I am
taking any and all suggestions.

Test Plan: - run tests [namedtensor ci]

Differential Revision: D16763388

Pulled By: zou3519

fbshipit-source-id: 4b2fb3acc0514515e7ca805dbc5c3d4a9bd96317
2019-08-12 12:44:56 -07:00
0bba302da5 Revert D16621830: Add name propagation for at::alias, add tensor.set_names
Differential Revision:
D16621830

Original commit changeset: f8a3837d3a37

fbshipit-source-id: 801ab858a0741d98b0b9d56763fa70a9010fe75e
2019-08-09 10:55:18 -07:00
71352fbd9a Revert D16667816: Improve test_namedtensor.py with named tensor equality check
Differential Revision:
D16667816

Original commit changeset: 66519cd5d17b

fbshipit-source-id: 51a26cdfb5624695a492d3ac93fb7a402c44e11a
2019-08-09 10:55:14 -07:00
de97b12dbd Revert D16647820: Add names argument to ones, rand, randn, zeros, full
Differential Revision:
D16647820

Original commit changeset: c6c53c5f26a8

fbshipit-source-id: a341c6eda49f5dd2e1712b65e61fef99791f0668
2019-08-09 10:55:10 -07:00
177a5c3f41 Revert D16647821: Implement name inference rule for empty_like, clone
Differential Revision:
D16647821

Original commit changeset: 43b261f3456b

fbshipit-source-id: 03caecd6898efd292b4f5c5b7254f7d31d502d6a
2019-08-09 10:55:06 -07:00
521484eaec Revert D16657926: Named inference for contiguous(), bernoulli variants, and dropout.
Differential Revision:
D16657926

Original commit changeset: 8cd46765b1c7

fbshipit-source-id: fce2202dd101cfc3153f279a0a4651c9b735e044
2019-08-09 10:32:48 -07:00
4dd2908dd6 Named inference for contiguous(), bernoulli variants, and dropout. (#23808)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23808

See title.

Test Plan: - New tests [namedtensor ci]

Differential Revision: D16657926

Pulled By: zou3519

fbshipit-source-id: 8cd46765b1c791b73448ddf4585dae56d635364d
2019-08-09 09:17:47 -07:00
16b6466e5e Implement name inference rule for empty_like, clone (#23746)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23746

`torch.empty_like(tensor)` and `tensor.clone()` both propagate names to
the output tensor.

As a part of this change, I fixed the empty(..., names=) overload to
include the `memory_format` argument in the normal `empty` declaration
in native_functions.yaml.

Test Plan: - [namedtensor ci]

Differential Revision: D16647821

Pulled By: zou3519

fbshipit-source-id: 43b261f3456b6bf5fca7b6313e659b259a2ba66d
2019-08-09 09:17:43 -07:00
11cff2981b Add names argument to ones, rand, randn, zeros, full (#23743)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23743

In the short term, we implement this by having overloads for each of
these functions. In the long term, the plan is to move DimnameList to
TensorOptions so that we do not have to duplicate work.

Test Plan: - [namedtensor ci]

Differential Revision: D16647820

Pulled By: zou3519

fbshipit-source-id: c6c53c5f26a86b730cbc4d4eb69907ac0e08fc65
2019-08-09 09:17:39 -07:00
5fbe824398 Improve test_namedtensor.py with named tensor equality check (#23801)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23801

Test Plan
- Code reading. assertTensorDataAndNamesEqual isn't used in this commit
but it'll be used in future commits.
- [namedtensor ci]

gh-metadata: pytorch pytorch 23801 gh/zou3519/90/head

Test Plan: Imported from OSS

Differential Revision: D16667816

Pulled By: zou3519

fbshipit-source-id: 66519cd5d17bda4c4304a1bc6e2a03ae59d49e39
2019-08-09 09:17:35 -07:00
78f3b883f0 Add name propagation for at::alias, add tensor.set_names (#23624)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23624

tensor.set_names(names) is the out-of-place variant of
tensor.set_names_(names). This naming is probably confusing so I am
taking any and all suggestions.

Test Plan:
- run tests [namedtensor ci]

gh-metadata: pytorch pytorch 23624 gh/zou3519/86/head

Differential Revision: D16621830

Pulled By: zou3519

fbshipit-source-id: f8a3837d3a370b41210e938369348dcbb4aee53a
2019-08-09 09:17:31 -07:00
57fc793650 Add names to repr for named tensors
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23316

Test Plan:
- [namedtensor ci]

gh-metadata: pytorch pytorch 23316 gh/zou3519/80/head

Imported from OSS

Differential Revision: D16494415

Pulled By: zou3519

fbshipit-source-id: e483f57bdb0610d0eadbe70d673e20dc3d3f9502
2019-08-02 11:37:29 -07:00
8e466b7e21 Add torch._C._BUILD_NAMEDTENSOR() (#23623)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23623

This is a quick, not-user-facing check for if pytorch was built with BUILD_NAMEDTENSOR=1.

Test Plan:
- run tests [namedtensor ci]

gh-metadata: pytorch pytorch 23623 gh/zou3519/85/head

Differential Revision: D16621829

Pulled By: zou3519

fbshipit-source-id: d7e1161dc176bab2c1f953265722daeba1e63102
2019-08-02 11:37:25 -07:00
08f7f27c6a Fix named tensor build by enabling tensor.is_pinned and removing support for clone() (#23597)
Summary:
`is_pinned` was moved to native_functions.yaml, disabling it for named
tensors. This PR re-enables its usage for named tensors.

I wrote a named inference rule for torch.clone(), but something happened
to it. Disable it for now so we can get the namedtensor ci to be green.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23597

Test Plan: - run tests [namedtensor ci]

Differential Revision: D16581771

Pulled By: zou3519

fbshipit-source-id: 498018cdc55e269bec80634b8c0a63ba5c72914b
2019-07-31 11:48:40 -07:00
c5482e33e9 Rename tensor.is_named to has_named, expose has_named to python.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23315

Test Plan:
- [namedtensor ci]

gh-metadata: pytorch pytorch 23315 gh/zou3519/79/head

Imported from OSS

Differential Revision: D16494414

Pulled By: zou3519

fbshipit-source-id: d2d6beb45db9288e5df707b68b6046d783ca9f97
2019-07-31 07:14:07 -07:00
725e41e955 Enable named tensors for arithmetic, clone, and tensor conversion ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23237

Test Plan: Imported from OSS

Differential Revision: D16494416

Pulled By: zou3519

fbshipit-source-id: 29bc390797c99088d50a2b59c3e2402a93562e2c
2019-07-31 07:14:04 -07:00
437a8b3eed Named inference rule for copy_
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23229

Test Plan: Imported from OSS

Differential Revision: D16494413

Pulled By: zou3519

fbshipit-source-id: 4acb85e5a4ad09bf5f7cbb84cc8d4ceac0cd9967
2019-07-30 07:17:34 -07:00
505fa83b2f Implement named inference rule for mul
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23193

Test Plan:
- [namedtensor ci]

gh-metadata: pytorch pytorch 23193 gh/zou3519/75/head

Imported from OSS

Differential Revision: D16494401

Pulled By: zou3519

fbshipit-source-id: 0e2395d7de39158ec51feed5da0389715ec52600
2019-07-29 09:58:18 -07:00
0dcb8755c8 Implement tensor.set_names_, tensor.names setter
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23172

Test Plan:
- [namedtensor ci]

gh-metadata: pytorch pytorch 23172 gh/zou3519/74/head

Imported from OSS

Differential Revision: D16494364

Pulled By: zou3519

fbshipit-source-id: 8d0e26b33346d4eadba30b2e76610f6d7be7c373
2019-07-26 08:50:49 -07:00
c8a50a26d2 Named inference rule for torch.prod
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23106

Test Plan:
- [namedtensor ci]

Imported from OSS

Differential Revision: D16419175

Pulled By: zou3519

fbshipit-source-id: beb9ef838525c1ea7d7839cb9b8d68028fb4917f
2019-07-26 08:50:45 -07:00