Consistent compute numel/contiguous strategy with SymInts (#85858)

Previously, our handling of contiguity was inconsistent in the following ways:

- is_strides_like 2d/3d and is_non_overlapping_and_dense were always computed
  from sizes_and_strides_, even if you had symbolic ints
- Furthermore, even if you set a custom policy for strides, these quantities
  were not overridable by subclasses
- We didn't even store these fields on ExtraMeta
- We duplicated the implementation of compute_contiguous (plain, channels
  last, channels last 3d)
- We inconsistently called refresh_numel()/refresh_contiguous() in some
  places, and recomputed the quantities by hand in others

This refactor establishes a consistent strategy for all of the boolean fields
and for numel computation.  After this refactor:

- All layout boolean fields are interposable via the strides policy
  and can be overridden from Python; you will never access a garbage field
  (see the sketch after this list)
- All layout boolean fields are on ExtraMeta
- You can always call refresh_numel()/refresh_contiguous(), whether or not
  your Tensor is contiguous
- The numel/layout boolean fields are always populated consistently with
  the sizes/strides fields (either on the Tensor or on ExtraMeta), even if
  you have a custom policy
- There is only one implementation of the actual computation logic
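As an illustration, here is a minimal sketch (not part of this commit, using
only the TensorImpl accessors that appear in the diff below) of querying the
layout booleans, which now all route through the same, possibly overridden,
strides policy:

```cpp
#include <ATen/ATen.h>

// Minimal sketch: both queries below are now answered consistently with the
// tensor's sizes/strides (or a subclass's custom strides policy), rather
// than reading stale cached fields.
bool dense_channels_last(const at::Tensor& t) {
  auto* impl = t.unsafeGetTensorImpl();
  return impl->is_strides_like(at::MemoryFormat::ChannelsLast) &&
         impl->is_non_overlapping_and_dense();
}
```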

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D39907696](https://our.internmc.facebook.com/intern/diff/D39907696)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85858
Approved by: https://github.com/albanD

The change touched 18 files (+617, −192); one representative hunk registers
the new layout-query operators with the JIT:

```diff
@@ -561,6 +561,34 @@ static const std::vector<OperatorGeneratorArgs> opGenArgs{
           pack(stack, result);
         },
         aliasAnalysisFromSchema()),
+    OperatorGeneratorArgs(
+        TORCH_SELECTIVE_SCHEMA(
+            "aten::is_contiguous.memory_format(Tensor self, MemoryFormat memory_format) -> bool"),
+        [](Stack& stack) {
+          auto memory_format = pop(stack).toMemoryFormat();
+          auto t = pop(stack).toTensor();
+          push(stack, t.is_contiguous(memory_format));
+        },
+        aliasAnalysisFromSchema()),
+    OperatorGeneratorArgs(
+        // NB: intentionally suffixed with extra _format to prevent tests for
+        // "_like" suffix from triggering on this
+        TORCH_SELECTIVE_SCHEMA(
+            "aten::is_strides_like_format(Tensor self, MemoryFormat memory_format) -> bool"),
+        [](Stack& stack) {
+          auto memory_format = pop(stack).toMemoryFormat();
+          auto t = pop(stack).toTensor();
+          push(stack, t.unsafeGetTensorImpl()->is_strides_like(memory_format));
+        },
+        aliasAnalysisFromSchema()),
+    OperatorGeneratorArgs(
+        TORCH_SELECTIVE_SCHEMA(
+            "aten::is_non_overlapping_and_dense(Tensor self) -> bool"),
+        [](Stack& stack) {
+          auto t = pop(stack).toTensor();
+          push(stack, t.unsafeGetTensorImpl()->is_non_overlapping_and_dense());
+        },
+        aliasAnalysisFromSchema()),
     // these ops are generic over the list element type.
     // CREATING GENERIC_LIST_OPS
     OperatorGeneratorArgs(
```
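For reference, a minimal eager-mode sketch (assuming a standard libtorch
setup; not part of this diff) that exercises the same queries these operators
expose to TorchScript:

```cpp
#include <ATen/ATen.h>
#include <iostream>

int main() {
  // A channels-last tensor is not contiguous in the default sense, but it
  // is contiguous under the channels_last memory format.
  at::Tensor t =
      at::empty({2, 3, 4, 5}).contiguous(at::MemoryFormat::ChannelsLast);
  std::cout << t.is_contiguous() << "\n";                                // 0
  std::cout << t.is_contiguous(at::MemoryFormat::ChannelsLast) << "\n";  // 1
  auto* impl = t.unsafeGetTensorImpl();
  std::cout << impl->is_strides_like(at::MemoryFormat::ChannelsLast)
            << "\n";                                                     // 1
  std::cout << impl->is_non_overlapping_and_dense() << "\n";             // 1
}
```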