Summary:
BC-breaking NOTE:
In PyTorch 1.6, calls to torch.full with bool or integral fill values must set the dtype or out keyword arguments. In prior versions of PyTorch these fill values would return float tensors by default, but in PyTorch 1.7 they will return a bool or long tensor, respectively. The documentation for torch.full has been updated to reflect this.
PR NOTE:
This PR causes torch.full to throw a runtime error when it would have inferred a float dtype from a boolean or integer fill value. A versioned symbol for torch.full is added to preserve the behavior of already serialized TorchScript programs. Existing tests for the deprecated behavior have been updated to reflect that it is now unsupported, and a couple of new tests have been added to validate the versioned symbol behavior. The documentation of torch.full has also been updated to reflect this change.
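A minimal sketch of the resulting eager-mode behavior (illustrative values):
```python
import torch

torch.full((2, 3), 7, dtype=torch.long)     # OK: explicit long dtype
torch.full((2, 3), True, dtype=torch.bool)  # OK: explicit bool dtype
torch.full((2, 3), 1.5)                     # OK: float fill values still infer a float dtype
# torch.full((2, 3), 7)                     # RuntimeError: a float dtype would have been inferred
```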
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40364
Differential Revision: D22176640
Pulled By: mruberry
fbshipit-source-id: b20158ebbcb4f6bf269d05a688bcf4f6c853a965
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38739
Instead of codegenning the named tensor support checks into
CPUType/CUDAType, we add a new dispatch key that is put into a
tensor's dispatch key set whenever the tensor has names. By default,
the fallback implementation for this key says that named tensors are
not supported, but when an operator does support them, we register a
fallthrough which lets dispatch continue through to the true backend
implementation.
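Roughly, the user-visible contract this preserves looks like the following sketch (which ops are named-tensor-aware is determined by their registrations, not shown here):
```python
import torch

t = torch.randn(2, 3, names=('N', 'C'))
t.abs()  # a named-tensor-aware op: the Named key falls through to the backend kernel
# An op without named tensor support instead hits the fallback registered for the
# Named dispatch key and raises a RuntimeError saying named tensors are not supported.
```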
There are a bunch of small pieces which are necessary to make this
happen:
- NameMode now also excludes DispatchKey::Named from the dispatch set
- To avoid bad error messages, we add a teensy special case to
the dispatcher for named_not_supported_kernel: if the boxed kernel
we need to invoke from an unboxed call site is this kernel, and we
don't support boxing, but the kernel is known not to need boxing,
we just pass in nullptr for the stack.
The special case here is very nice: it doesn't affect the fast
path and only gets exercised when things are not supported.
- I need to add support for per operator fallthrough registration.
This is done similarly to how we support fallthrough fallback,
by just keeping track if the registered kernel for an operator
is a fallthrough.
It is possible we could go even further down this path, and move
the named tensor logic itself into this key. I leave this
up to future work.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Differential Revision: D21662643
Pulled By: ezyang
fbshipit-source-id: 5bc6ae14a1f600189bd8bf865f74dd1700d932f7
Summary:
This updates assertEqual and assertEqual-like functions to require that either both or neither of atol and rtol be specified. This should improve clarity around handling precision in the test suite, and it allows us to remove the legacy positional atol argument from assertEqual. In addition, the "message" kwarg is replaced with a kwarg-only "msg" argument whose name is consistent with unittest's assertEqual argument.
In the future we could make "msg" an optional third positional argument to be more consistent with unittest's assertEqual, but for now requiring it to be passed as a keyword keeps call sites clear, and we can easily update the signature later to make "msg" an optional positional argument, too.
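A hedged sketch of the updated calling convention (assuming the usual common_utils imports; tolerances are illustrative):
```python
import torch
from torch.testing._internal.common_utils import TestCase, run_tests

class TestTolerances(TestCase):
    def test_assert_equal_kwargs(self):
        actual = torch.tensor([1.0, 2.0])
        expected = torch.tensor([1.0, 2.0 + 1e-6])
        self.assertEqual(actual, expected)                        # neither atol nor rtol: defaults
        self.assertEqual(actual, expected, atol=1e-5, rtol=1e-3)  # both specified
        self.assertEqual(actual, expected, atol=1e-5, rtol=1e-3, msg="forward pass mismatch")
        # Passing exactly one of atol/rtol is now an error.

if __name__ == '__main__':
    run_tests()
```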
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38872
Differential Revision: D21740237
Pulled By: mruberry
fbshipit-source-id: acbc027aa1d7877a49664d94db9a5fff91a07042
Summary:
This updates assertEqual and assertEqual-like functions to require that either both or neither of atol and rtol be specified. This should improve clarity around handling precision in the test suite, and it allows us to remove the legacy positional atol argument from assertEqual. In addition, the "message" kwarg is replaced with a kwarg-only "msg" argument whose name is consistent with unittest's assertEqual argument.
In the future we could make "msg" an optional third positional argument to be more consistent with unittest's assertEqual, but for now requiring it to be passed as a keyword keeps call sites clear, and we can easily update the signature later to make "msg" an optional positional argument, too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38872
Differential Revision: D21717199
Pulled By: mruberry
fbshipit-source-id: 9feb856f94eee911b44f6c7140a1d07c1b026d3a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35624
Python 2 has reached end-of-life and is no longer supported by PyTorch.
This test case is valid syntax in Python 3.
Test Plan: CI
Differential Revision: D20842877
Pulled By: dreiss
fbshipit-source-id: 856e72171496aa1d517f2f27a8a5066462cf4f76
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35615
Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well (though using side-by-side view and ignoring
whitespace changes might be helpful).
Test Plan: CI
Differential Revision: D20842886
Pulled By: dreiss
fbshipit-source-id: 8cad4e87c45895e7ce3938a88e61157a79504aed
Summary:
See NumPy's division documentation here: https://numpy.org/doc/1.18/reference/generated/numpy.divide.html#numpy.divide.
True division is the same as PyTorch's default division except when both inputs are integer or bool tensors. In the latter case the inputs are (conceptually) cast to the default floating type before the division is performed.
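A minimal illustration of the intended semantics (assuming the operator is exposed as torch.true_divide):
```python
import torch

a = torch.tensor([5, 3])                 # int64
b = torch.tensor([2, 2])                 # int64
torch.true_divide(a, b)                  # tensor([2.5000, 1.5000]): cast to the default float dtype
torch.true_divide(a.float(), b.float())  # same result; identical to default division for floats
```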
The function is implemented for dense and sparse tensors and supports exporting to ONNX from PyTorch's eager mode or JIT traces. The function is inherently incompatible with exporting to ONNX via JIT script, and is another datapoint suggesting we should deprecate exporting scripted graphs to ONNX.
Tests are added for the type promotion, named tensor, and ONNX export behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34236
Reviewed By: houseroad
Differential Revision: D20334087
Pulled By: mruberry
fbshipit-source-id: 83d00d886f46f713215d7d9e02ffd043164c57f1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30445
Create distributed and rpc directories under caffe/test for better management
of unit tests.
Differential Revision: D18702786
fbshipit-source-id: e9daeed0cfb846ef68806f6decfcb57c0e0e3606
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31116
Changelist:
- remove BUILD_NAMEDTENSOR macro
- remove torch._C._BUILD_NAMEDTENSOR
- remove all python behavior that relies on torch._C._BUILD_NAMEDTENSOR
Future:
- In the next diff, I will remove all usages of
ATen/core/EnableNamedTensor.h since that header doesn't do anything
anymore
- After that, we'll be done with the BUILD_NAMEDTENSOR removal.
Test Plan: - run CI
Differential Revision: D18934951
Pulled By: zou3519
fbshipit-source-id: 0a0df0f1f0470d0a01c495579333a2835aac9f5d
Summary:
Adds `torch.floor_divide` following NumPy's `floor_divide` API. I only implemented the out-of-place version; I can add the in-place version if requested.
Also fixes https://github.com/pytorch/pytorch/issues/27512
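A minimal sketch of the out-of-place usage:
```python
import torch

a = torch.tensor([7, 9])
b = torch.tensor([2, 3])
torch.floor_divide(a, b)  # tensor([3, 3])
```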
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30493
Differential Revision: D18896211
Pulled By: eellison
fbshipit-source-id: ee401c96ab23a62fc114ed3bb9791b8ec150ecbd
Summary:
This relands 8bbafa0b32d2899ef6101172d62c6049427c977b with the CI failure it caused (an incorrect return type of the lambdas in the CUDA kernels) fixed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30521
Differential Revision: D18770151
Pulled By: ailzhang
fbshipit-source-id: 02f0fe1d5718c34d24da6dbb5884ee8b247ce39a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30193
Featuring:
- Added a NoNamesGuard::reset() function that sets NamesMode back to
what it was before the guard. This makes it so that we don't have to
create a new context to run code in an unnamed way.
- Added a diagonal(Tensor, *, Dimname outdim, Dimname dim1, Dimname dim2, int64_t offset=0)
overload. All of the non-tensor arguments are keyword only for
readability purposes; something like `tensor.diagonal("A", "B", "C")`
would be really confusing.
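A rough usage sketch of the new overload (the names here are illustrative):
```python
import torch

x = torch.randn(3, 3, names=('A', 'B'))
# Non-tensor arguments are keyword-only; 'C' is the name given to the new output dimension.
x.diagonal(outdim='C', dim1='A', dim2='B')
```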
Test Plan: - Added new tests
Differential Revision: D18638363
Pulled By: zou3519
fbshipit-source-id: ea37b52a19535f84a69be38e95e569e88f307381
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29322
torch.equal checks if two tensors are equal in both size and values. For
named tensors, it also checks that the names are exactly equal. There is
an argument to be made for alternative semantics (check that the names
*match*), but for an API that is called "equal" I would expect it to
check equality on names as well.
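For example, a sketch of the resulting semantics:
```python
import torch

a = torch.ones(2, 2, names=('N', 'C'))
b = torch.ones(2, 2, names=('N', 'C'))
c = torch.ones(2, 2, names=('N', None))
torch.equal(a, b)  # True: sizes, values, and names all match exactly
torch.equal(a, c)  # False: names differ even though sizes and values are equal
```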
Test Plan: - new tests
Differential Revision: D18453387
Pulled By: zou3519
fbshipit-source-id: d52bde4e3fdd7f331eef097a3b31d35c89c78049
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29407
Fixes https://github.com/pytorch/pytorch/issues/27753.
The bug was that tensors with random values print subtly differently
from run to run. This causes the "names=" tag to appear in slightly
different places; sometimes it is on the same line as the data,
sometimes it is on a different line.
For this test, we wanted to know the following:
- printing a big named tensor's repr doesn't crash
- a big named tensor's repr shows the names
This PR changes the test to check those two things.
Test Plan: - run existing tests
Differential Revision: D18428657
Pulled By: zou3519
fbshipit-source-id: 6bcf247ffba010520878a175e766a496028f87d9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29199
Previously, we called `native::mean_cpu_gpu` inside `mean(Tensor, Dimname)`;
`native::mean_cpu_gpu` is not supported by autograd. This PR replaces
`native::mean_cpu_gpu` with `at::mean(Tensor, int)` so that the dimname
overload can piggyback off of autograd support for `at::mean(Tensor,
int)`.
Also added tests (those didn't exist before) for autograd support for
named tensor reduction functions.
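A minimal sketch of the behavior the new tests cover (assuming the standard named tensor factory kwargs):
```python
import torch

x = torch.randn(2, 3, names=('N', 'C'), requires_grad=True)
out = x.mean('C').sum()
out.backward()  # previously the Dimname overload bypassed autograd; now it routes through at::mean(Tensor, int)
assert x.grad is not None
```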
Test Plan: - `python test/test_namedtensor.py -v`
Differential Revision: D18334617
Pulled By: zou3519
fbshipit-source-id: 1714eb3fd93714fe860f208831e8d910f01c1c78
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29129
cdist(x1, x2) does the following:
- assume x1, x2 are 2-dimensional. Then x1, x2 are each considered to be
a list of vectors.
- The operation returns a matrix that is the pairwise distance between
each vector in x1 and each vector in x2. The matrix has first dimension
size equal to the number of vectors in x1 and second dimension size equal
to the number of vectors in x2.
- cdist also supports arbitrary left-hand broadcastable batch
dimensions. In this case, x1 and x2 are each considered to be a batch
of a list of vectors.
The above leads to the following name inference rule for cdist:
- In the 2D case, propagate x1.names[-2] and x2.names[-2] (because
the final result has size (x1.size[-2], x2.size[-2])).
- In the ND case, unify all the batch dimensions together to produce the
output batch dimensions and then apply the rule for the 2D case.
Furthermore, I moved all of the name checking in the implementation to
occur before name inference because name inference assumes that the
shapes are valid.
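A rough illustration of the 2D rule (the names 'A', 'B', 'F' are illustrative):
```python
import torch

x1 = torch.randn(5, 3, names=('A', 'F'))  # 5 vectors of length 3
x2 = torch.randn(7, 3, names=('B', 'F'))  # 7 vectors of length 3
torch.cdist(x1, x2).names                 # ('A', 'B'): x1.names[-2] and x2.names[-2] propagate
```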
Test Plan: - new test: `pytest test/test_namedtensor.py -v -k "cdist"`
Differential Revision: D18311867
Pulled By: zou3519
fbshipit-source-id: 713d7cdda93c8fe92e7f1bd7f7c5c6e20a8138e3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28981
This PR adds support for calling those functions on named tensors. The
implementation is not the nicest: in the future we have plans to merge
names into TensorOptions, at which point we don't need the extra
branches that check if the tensor has names. Right now, however, these
functions are very useful to have (in particular, ones_like is used by
autograd to generate gradients).
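A minimal sketch using ones_like, the function called out above (assuming the *_like factories propagate the input's names):
```python
import torch

x = torch.randn(2, 3, names=('N', 'C'))
torch.ones_like(x).names  # ('N', 'C'): the new tensor carries the input's names
```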
Test Plan: - Added tests for each of these
Differential Revision: D18270937
Pulled By: zou3519
fbshipit-source-id: 720739ff0474449a960b81728345a4250becbfc3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28904
Motivation
============
Before this PR, a core problem with writing name inference rules was
that each rule needed to handle misalignment by themselves. A misaligned
name occurs when we are matching None with a non-None name, but the
non-None name already exists in the first tensor.
For example, `A` is misaligned in `Tensor[A, None] + Tensor[None, A]`.
Each op handled this in a custom way:
- align_from_right (used by broadcasting) handles misalignment
- compute_matmul_outnames checks for misalignment across batch and
feature dimensions.
We can actually codify "misalignment" into something more rigorous by
folding it into the definition of `match` and eliminate special handling
of "misalignment". That is what this PR attempts to do.
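Concretely, the misaligned case from above looks like this in eager mode (a sketch):
```python
import torch

a = torch.randn(2, 3, names=('A', None))
b = torch.randn(2, 3, names=(None, 'A'))
# a + b  raises an error: the None in a.names[-1] cannot be refined to 'A'
# because 'A' already appears at a.names[0], i.e. the names are misaligned.
```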
Approach
============
Definition: Two names in two tensors *match* if they are equal, or if at
least one of them is a wildcard that can be *refined* to the other name.
With this new definition, to check if two names match, we need to know
about the names list that each name came from to determine if a wildcard
can successfully be *refined* to the other name.
For example, consider the following:
```
tensor: Tensor[A, None]
other: Tensor[None, A]
```
when unifying `tensor.names[-1]` with `other.names[-1]`, we see that
`tensor.names[-1]` is None and `other.names[-1]` is A. Then we check to
see if `tensor.names[-1]` can be refined to `A`; it can't be refined if
there is already an `A` in `tensor.names`.
Enter `TensorNames`.
A TensorName represents a Dimname associated with some DimnameList
(that came from a Tensor).
`TensorNames` is a list of such TensorName objects with some helper
functions attached.
One can perform the following operations:
- unify two `TensorName` objects
- unify two `TensorNames` objects with right alignment.
Plan
============
This PR changes `compute_matmul_outnames` to use `TensorNames` to
demonstrate how they make writing name inference rules easier. In the
future I'll convert other name inference rules to use `TensorNames` as
well.
Test Plan
- run all tests
Test Plan: Imported from OSS
Differential Revision: D18270666
Pulled By: zou3519
fbshipit-source-id: 3ec96cc957747eb4cfe4ea17fd02ef3d8828a20c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28975
TensorIterator supports propagating names, so we just needed to enable
them with supports_named_tensor: True.
Test Plan:
- really basic tests to test that each variant (outplace, inplace, out=)
supports named tensors.
Differential Revision: D18252421
Pulled By: zou3519
fbshipit-source-id: ea7fb59dcf8c708b6e45d03b9c2ba27fa6b6ce98
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27760
There's nothing special about the named tensor tests that requires that
they be run in their own CI job. In this PR we delete the
TEST_NAMEDTENSOR flag that hides named tensor tests from regular jobs.
In the future, we'll delete the named tensor CI job so that we do not
duplicate signals.
Test Plan: - wait for CI
Differential Revision: D17882262
Pulled By: zou3519
fbshipit-source-id: f90c71cb939e53b8ea23f7e2ab95a5c41b8be0e3
Summary:
One fewer legacy decorator cluttering the test suite.
Functions relying on this decorator were updated or, in the case of test_sparse, the test suite was put back to using double as its default dtype.
Note: this PR is blocked on https://github.com/pytorch/pytorch/issues/27599.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27628
Differential Revision: D17896254
Pulled By: mruberry
fbshipit-source-id: 13d460301f50ef4af7a660372432108164c0de1f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27308
Currently, `tensor.align_to(*names)` has the restriction that the
`tensor` must be fully named. This doesn't need to be the case: when
using Ellipsis, we "expand the ellipsis to all unmentioned dimensions,
in the order in which they appear in the original tensor".
For example, consider `tensor: Tensor[None, None, C]`.
`tensor.align_to(C, None, None)` is ambiguous because the user might
have wanted to switch the order of the None dimensions and there is no
way to specify that using this API. However, `tensor.align_to('C', ...)`
isn't ambiguous: we can select the two unnamed dimensions in the order
in which they appear.
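A minimal sketch of the now-allowed call:
```python
import torch

t = torch.randn(2, 3, 4, names=(None, None, 'C'))
t.align_to('C', ...).names  # ('C', None, None): unnamed dims keep their original order
```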
To actually implement this, we write a brand-new `align_to(names,
ellipsis_idx)` function in c++ that is separate from the regular
`align_to(names)` implementation. Ideally we would support "..." as a
special name in c++ and combine the two implementations; we'll need to
support "..." in c++ in the future but that requires a bit of extra work.
In this PR, Python processes the ellipsis and then calls the correct
overload.
Test Plan: - run tests
Differential Revision: D17745179
Pulled By: zou3519
fbshipit-source-id: 9fed06d224215cfb7efecd8c002604baab3c45e6
Summary:
This PR stops common_utils.py from setting the default tensor type when it's imported. See issue https://github.com/pytorch/pytorch/issues/27355. This is a frequent source of confusion for test writers.
Many tests relied on this setting (whether they knew it or not), and this PR also updates the test suite to pass without common_utils.py setting the default tensor type. Some larger test files now set the default floating dtype themselves, however. These test files are:
- test_autograd.py
- test_distributions.py
- test_jit.py
- test_nn.py
This is still a significant improvement over today, however. First, these files set the default floating dtype much more explicitly than having it set implicitly by importing common_utils. Second, the rest of the test suite no longer sets this globally. Third, this PR is a springboard to updating those tests, too. In particular, as tests are made generic they can be moved away from relying on this global setting.
Notable technical changes in this PR are:
- Significant updates to test_torch.py to make it pass without setting the default floating dtype globally.
- The default_floating_dtype decorator is now defined in common_utils; a couple versions of this decorator were previously defined in test files.
- test_torch-specific parts of common_utils were refactored into test_torch.
- tensor creation methods in common_utils were updated to accept an optional dtype and device.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27444
Differential Revision: D17795235
Pulled By: mruberry
fbshipit-source-id: 7f77271c0c836e69f183ad9057a2c4b29f09d2e1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26815
This PR adds named tensor support for:
- any, all, `bitwise_not(_)`, cumprod, cumsum, `logical_not`
In addition, it adds smoke tests for a variety of tensor attributes and
fns:
- is_shared, is_signed
- retain_grad, register_hook
Test Plan: - [namedtensor ci]
Differential Revision: D17575905
Pulled By: zou3519
fbshipit-source-id: 37bfa327e68112c5bf0f6bf1f467a527f50fa1c4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26563
This adds name inference rules for pre-existing logsumexp, mode,
kthvalue, and median ops. Also adds overloads so that they can take
`Dimname` dimensions.
There are a lot of min/max overloads. This PR adds name inference to
the following overloads for (both) min and max:
- min(Tensor, int dim)
- min(Tensor, Dimname dim)
- min(Tensor) (full reduction)
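For instance, a sketch of the dim-reduction case:
```python
import torch

x = torch.randn(2, 3, names=('N', 'C'))
values, indices = x.max(dim='C')
values.names   # ('N',): the reduced 'C' dimension is removed from the output names
indices.names  # ('N',)
```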
Test Plan: - new tests and [namedtensor ci]
Differential Revision: D17557050
Pulled By: zou3519
fbshipit-source-id: a099a0ef04ad90d021a38a0668fc44902e1c7171
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26543
Also adds a test for logical_xor (it already had named tensor support
but there was no test)
Test Plan: - [namedtensor ci]
Differential Revision: D17501403
Pulled By: zou3519
fbshipit-source-id: 49be15580be9fb520e25a8020164e5a599d22d40
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26558
Previously, name inference was called after dimensions were wrapped.
This PR makes it so that name inference always wraps dimensions so that
it can be called anywhere. Ideally we would only wrap dimensions once,
but many of our operators wrap dimensions in weird places.
Wrapping dimensions in name inference is pretty inexpensive and only
happens for named tensors (name inference does not run on unnamed
tensors.)
Test Plan: - [namedtensor ci]
Differential Revision: D17557049
Pulled By: zou3519
fbshipit-source-id: 68c5636489e233dbf2588ab6ad4e379a6fe4c8ba
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26636
This PR defines a lot of dimname overloads so that when named tensor
support is added for those operators, we will not have to modify the
autogenerated TensorMethods.h, thereby avoiding potential merge
conflicts in the future.
Overloads were added for the following:
- all
- any
- argmax
- argmin
- cumsum
- cumprod
- index_copy
- kthvalue
- mode
- permute
- squeeze
- index_add
- index_fill
- scatter
- scatter_add
- index_select
- gather
- sort
- argsort
Test Plan: - [namedtensor ci]
Differential Revision: D17522984
Pulled By: zou3519
fbshipit-source-id: eca6dea819ba4e4e43b71b700d5cf09176f00061
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26648
Previously:
- `Tensor.align_to(*names)` only worked on fully named tensors. In addition, the
desired ordering `names` could not contain any None-names.
- `Tensor.align_to(*names)` accepted `...`, but expanded it based on
position. i.e., in `tensor.align_to('N', ..., 'C', 'H')`, `...` expanded
to `*tensor.names[1:-2]`. This is wildly incorrect: see the following
concrete example.
```
tensor = tensor.refine_names('N', 'C', 'H', 'W')
tensor.align_to('W', ...) # ... expands to 'C', 'H', 'W'
```
This PR changes it so that `...` in `tensor.align_to` grabs all
unmentioned dimensions from `tensor`, in the order that they appear.
`align_to` is the only function that takes ellipsis that requires this
change. This is because all other functions (e.g. `refine_names`) require their
list of names to work in a positional manner, but `align_to` lets the
user reorder dimensions.
This does not add very much overhead to `align_to`, as shown in the
following benchmark. However, in the future, we should resolve to make
these operations faster; align_to should be as fast as view but isn't,
most likely due to Python overhead.
```
[ins] In [2]: import torch
...: named = torch.randn(3, 3, 3, 3, names=('N', 'C', 'H', 'W'))
...: unnamed = torch.randn(3, 3, 3, 3)
...: %timeit unnamed[:]
...: %timeit unnamed.view(-1)
...: %timeit named.align_to(...)
...: %timeit named.align_to('N', 'C', 'H', 'W')
31 µs ± 126 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
43.8 µs ± 146 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
69.6 µs ± 142 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
66.1 µs ± 1.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
Test Plan:
- new tests [namedtensor ci]
Differential Revision: D17528207
Pulled By: zou3519
fbshipit-source-id: 4efc70329f84058c245202d0b267d0bc5ce42069
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26548
This makes the naming more consistent with PyTorch's API. The original
concern was that `tensor.rename` might make the operation seem like it
is in-place. However, we have many "verb" APIs: `tensor.add(other)`, for
example, doesn't add other to tensor in-place, but `tensor.add_(other)`
does.
`tensor.rename_` does exactly the same thing as `tensor.rename`, but
in-place.
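For example (a sketch using the positional form):
```python
import torch

t = torch.randn(2, 3, names=('N', 'C'))
t2 = t.rename('N', 'channels')  # out-of-place: returns a tensor with new names, t is unchanged
t.rename_('N', 'channels')      # in-place: renames t itself
```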
Test Plan: - [namedtensor ci]
Differential Revision: D17502021
Pulled By: zou3519
fbshipit-source-id: 6a5b93136a820075013cd1e30fb8fc6b9d77d7d9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25582
There are a lot of min/max overloads. This PR adds name inference to
the following overloads for (both) min and max:
- min(Tensor, int dim)
- min(Tensor, Dimname dim)
- min(Tensor) (full reduction)
Test Plan: - new tests [namedtensor ci]
Differential Revision: D17521607
Pulled By: zou3519
fbshipit-source-id: 303e3cef22916dbc9da6a092d4f23e39e74c39e4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26541
`torch.pow` already supports named tensors; every one of its constituent
codepaths propagates names:
- TensorIterator propagates names
- resize_as_ and fill_ propagate names (exponent == 0 or base == 1)
- resize_as_ and copy_ propagate names (exponent == 1)
This PR adds `supports_named_tensor = True` to the pow overloads,
enabling `pow` to take named tensors.
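For example (a sketch):
```python
import torch

base = torch.randn(2, 3, names=('N', 'C'))
exp = torch.randn(2, 3, names=('N', 'C'))
base.pow(2).names    # ('N', 'C'): TensorIterator propagates names
base.pow(exp).names  # ('N', 'C')
```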
Test Plan: - [namedtensor ci]
Differential Revision: D17501402
Pulled By: zou3519
fbshipit-source-id: 07ee91d685e55dd58bbbb3a3fc9e185de8bb7515
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26493
resize_ and resize_as_ are low level functions that are not meant to be
used as a part of the regular PyTorch user's routine. However, they are
used to implement a lot of our operations: `out=` functionality is
implemented by resizing an output to be the correct size.
To keep in line with already implemented `out=` functionality, we do the
following:
- resize_as_(self, other) propagates names according to `out=` functionality.
This means that if self doesn't have names, then we propagate
other.names. If self does have names, they must be equal to other.names.
In addition, resize_ cannot resize a named tensor to anything but the same size.
Test Plan: - [namedtensor ci]
Differential Revision: D17501404
Pulled By: zou3519
fbshipit-source-id: e396e7fba55e1419355933925226d02dccb03012
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26479
This PR doesn't delete the code for tagged names yet because it takes some effort to
determine what to delete. I will send a followup PR fully deleting
tagged names, but this PR disables their creation.
Test Plan: - [namedtensor ci]
Differential Revision: D17484758
Pulled By: zou3519
fbshipit-source-id: 451409e36eac98ffee1b98884d0f675bb5d46c9d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26352
"named_guard: P" is the same as "supports_named_tensor: !P".
Also changed the error message to be more understandable to users.
Test Plan:
- `TEST_NAMEDTENSOR=1 pytest test/test_namedtensor.py -v`
- [namedtensor ci]
Differential Revision: D17426234
Pulled By: zou3519
fbshipit-source-id: 4cab780e6e29e184e79cdd3690f41df9ebb2ecb5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26350
Python 3 lets us use `...` to perform indexing. Semantically, `...`
means "the rest of the unspecified dimensions". For example, while
indexing, one can do (for 5D `tensor`) `tensor[0, 0, ..., 0]` and
the `...` is expanded into `tensor[0, 0, :, :, 0]`.
Previously, we were using '*' to represent a similar behavior in names.
For example, `tensor.refine_names` supports things like the following:
```
x = torch.randn(2, 3, 4, 5, 6)
x_out = x.refine_names('*', 'H', 'W') # refine only the last two dimensions
```
This PR changes it so that named tensor API functions recognize `'...'`
(in Python 2 and Python 3) and `...` (in Python 3 exclusively) instead
of `'*'`.
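The example above, rewritten with the new spelling (a sketch):
```python
import torch

x = torch.randn(2, 3, 4, 5, 6)
x_out = x.refine_names(..., 'H', 'W')    # Python 3: literal Ellipsis
x_out = x.refine_names('...', 'H', 'W')  # string form, accepted in Python 2 and Python 3
```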
Test Plan: - [namedtensor ci]
Differential Revision: D17424666
Pulled By: zou3519
fbshipit-source-id: 003182879fd38ced3fea051217572a457cdaf7cf