4 Commits

Author SHA1 Message Date
cyy
9a0c217a0a [9/N] Fixes clang-tidy warnings in c10/util/*.h (#116185)
Continued work to clean headers in c10/util.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116185
Approved by: https://github.com/Skylion007
2023-12-22 09:35:44 +00:00
02da9437b0 Store SymInt out of line (#84390)
swolchok reported that in non-tracing usage of Tensor we are wasting a lot
of time on is_symbolic() tests, e.g., when destructing SymInts.  This
is a regression for no good reason, because we never actually have
SymInts in those cases.  This PR moves the SymInts stored on Tensor
out of line, into a separate ExtraMeta struct, which is only
allocated when we make a Tensor store symbolic sizes/strides.

To avoid adding another word to TensorImpl, I take over the named tensor
metadata field.  This makes named tensors require a double indirection
and use more space, but that's OK since we're going to delete this
feature soon anyway.

I restore regular int64_t storage on Tensor.  This entailed reverting
https://github.com/pytorch/pytorch/pull/82467; there are no other
substantive changes to SizesAndStrides, so a close review is not
necessary.

I don't bother optimizing sizes and strides in ExtraMeta the same
way the stock tensor storage is optimized.  I add a SymDimVector alias.  I make
the SymInt UNCHECKED constructor public, as it is a useful optimization
in situations where the int is known to be positive.

I thought about storing the SymInts on the Python object instead.
However, because we can allocate symbolic shape tensors directly
from C++, we cannot guarantee that there is a PyInterpreter for
a Tensor. So we do it this way instead; it's also faster since you
don't have to take out the GIL to do accesses.
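
A minimal, standalone sketch of the out-of-line storage pattern described
above; the struct and field names mirror the commit message, but the types
are simplified stand-ins, not the real TensorImpl or c10::SymInt:

```cpp
// Simplified sketch: symbolic sizes/strides live in a separately allocated
// ExtraMeta, so tensors with plain int64_t shapes never touch the symbolic path.
#include <cstdint>
#include <memory>
#include <vector>

struct SymIntSketch {  // stand-in for c10::SymInt
  int64_t value = 0;
};

struct ExtraMeta {
  // Allocated only when a tensor actually stores symbolic sizes/strides.
  std::vector<SymIntSketch> sym_sizes;
  std::vector<SymIntSketch> sym_strides;
};

class TensorImplSketch {
 public:
  // Fast path: regular int64_t storage, no ExtraMeta allocation and no
  // is_symbolic() checks when the tensor is destroyed.
  void set_sizes(std::vector<int64_t> sizes) { sizes_ = std::move(sizes); }

  // Slow path: the first symbolic assignment allocates ExtraMeta out of line.
  void set_sym_sizes(std::vector<SymIntSketch> sizes) {
    if (!extra_meta_) {
      extra_meta_ = std::make_unique<ExtraMeta>();
    }
    extra_meta_->sym_sizes = std::move(sizes);
  }

  bool has_symbolic_sizes() const { return extra_meta_ != nullptr; }

 private:
  std::vector<int64_t> sizes_;             // regular int64_t storage restored
  std::unique_ptr<ExtraMeta> extra_meta_;  // one pointer; null in the common case
};
```

In the actual change the ExtraMeta pointer reuses the named tensor metadata
slot, so TensorImpl itself does not grow by a word.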

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84390
Approved by: https://github.com/swolchok, https://github.com/Krovatkin
2022-09-06 20:24:39 +00:00
82504985d5 [PyTorch][easy] Tie DimVector inline size to SizesAndStrides inline size
These should be kept the same, right?
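
A hedged sketch of the idea, using illustrative names (kInlineDims and the
*Sketch types) rather than the actual c10 definitions: both containers take
their inline capacity from one constant, so copying sizes out of
SizesAndStrides into a DimVector never spills to the heap when the original
fit inline.

```cpp
// Illustrative only: one shared constant feeds both inline capacities.
#include <array>
#include <cstddef>
#include <cstdint>

constexpr size_t kInlineDims = 5;  // single source of truth for the inline size

// Stand-in for DimVector, i.e. a SmallVector<int64_t, kInlineDims>.
struct DimVectorSketch {
  std::array<int64_t, kInlineDims> inline_storage{};
  size_t len = 0;
  // ...heap spill path omitted...
};

// SizesAndStrides packs sizes and strides into one inline buffer,
// so it holds 2 * kInlineDims elements before going out of line.
struct SizesAndStridesSketch {
  std::array<int64_t, 2 * kInlineDims> inline_storage{};
  size_t rank = 0;
  // ...out-of-line path omitted...
};
```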

Differential Revision: [D37006473](https://our.internmc.facebook.com/intern/diff/D37006473/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79128

Approved by: https://github.com/ezyang
2022-06-09 16:32:53 +00:00
ecde870d4e Move ATen/core/DimVector.h to c10/util/DimVector.h.
This PR moves `ATen/core/DimVector.h`, as suggested in:
https://github.com/pytorch/pytorch/pull/76812#discussion_r866875924

The changes can be summarized as:

- Changing includes from `ATen/core/DimVector.h` to `c10/util/DimVector.h`
- Re-declaring both the type and the constant size in the `at` namespace (sketched below)
- Making `c10::contiguous_strides` return a `DimVector`
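
A minimal sketch of that re-declaration pattern, with `std::vector` standing
in for `c10::SmallVector` and a simplified `contiguous_strides`; the real
definitions live in the headers named above:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// --- c10/util/DimVector.h (simplified) ---
namespace c10 {
constexpr size_t kDimVectorStaticSize = 5;  // inline capacity of the real SmallVector
using DimVector = std::vector<int64_t>;     // really SmallVector<int64_t, kDimVectorStaticSize>

// contiguous_strides now returns a DimVector rather than a std::vector.
inline DimVector contiguous_strides(const DimVector& sizes) {
  DimVector strides(sizes.size());
  int64_t stride = 1;
  for (size_t i = sizes.size(); i-- > 0;) {
    strides[i] = stride;
    // Treat size-0 dims as 1 for stride purposes (common convention).
    stride *= std::max<int64_t>(sizes[i], 1);
  }
  return strides;
}
} // namespace c10

// --- ATen/core/DimVector.h (simplified) ---
// Re-declare the type and the constant in the `at` namespace so existing
// ATen code keeps compiling without include changes.
namespace at {
using c10::DimVector;
constexpr auto kDimVectorStaticSize = c10::kDimVectorStaticSize;
} // namespace at
```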

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77045

Approved by: https://github.com/peterbell10, https://github.com/ezyang
2022-05-11 01:30:56 +00:00