Files
pytorch/functorch/dim/_dim_entry.py
Edward Yang 97eb7a281d torchdim Python port (#160236)
The big semantic change (and the reason for this port) is that we no longer monkeypatch Tensor with torchdim's special methods. The new algorithm for handling dispatch is that we first land in `__torch_function__` and check whether a special FCD (first-class dim) implementation needs to be dispatched to first; if there is none, we fall back to the standard level strategy.

Because there are no longer C-binding equivalents of the classes, we've condensed _C.Dim and Dim together, and similarly for Tensor. This resulted in some bugs, as the Python API is sometimes different from the C API. I've attempted to disambiguate these, but there may still be mistakes (many early bugs were due to this problem). Dim and DimEntry are especially painful, as Dim must abide by Tensor equality semantics but uses pointer equality in C (DimEntry doesn't have this problem). Another subtle C/Python difference is that we no longer get implicit conversions from Dim to DimEntry; this also caused some bugs.
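The equality pitfall can be shown without torch. `FakeDim` below is a made-up stand-in for Dim: like a Tensor, its `__eq__` returns an elementwise result object rather than a plain bool, so identity (`is`) is the only reliable comparison:

```python
class FakeDim:
    def __eq__(self, other):
        # Tensor-style __eq__: returns an elementwise result object,
        # not a bool -- and the object is always truthy!
        return object()

    # Defining __eq__ would otherwise make instances unhashable
    __hash__ = object.__hash__


a, b = FakeDim(), FakeDim()
# '==' is truthy even for two distinct dims, which would corrupt
# container operations like list.index; identity gives the intended answer
assert bool(a == b)
assert a is not b
```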

Much of the mechanical porting work was done by Claude Code. I have a separate PR that deletes functorch._C, but it was useful to have dim.cpp to point Claude at, so I haven't done that in this PR. From a reviewing perspective, I need to re-check that I didn't forget to port anything; one noticeably missing "small" thing is patched_dim_method. I am still in the process of carefully doing a side-by-side review of the ports; "simplifications" from Claude Code were also a major source of bugs.

There are two major feature gaps in the implementation:

- DelayedTensor and dot handling are not implemented yet. This should be reasonably easy; it just needs to be done. However, for the purposes of sharded propagation it is actually better not to reconstruct matmuls.
- Splitting dimensions with an index like `[x, y]` doesn't work. The problem is that `__getitem__` interprets this as advanced indexing and sends the list to torch.tensor to turn into a tensor, so it never becomes eligible for `__torch_function__`. I think I might need to hard-code a special case for this or something?
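The special case contemplated in the last bullet might look roughly like the torch-free sketch below: intercept a bare list of Dims in `__getitem__` before it is handed to the advanced-indexing path. All names here (`DimLike`, `TensorLike`, `_split_dim`, `_advanced_index`) are hypothetical illustrations, not the PR's code:

```python
class DimLike:
    """Stand-in for a first-class Dim."""


class TensorLike:
    def __getitem__(self, index):
        # Special case: a bare list whose entries are all Dims means
        # "split this dimension", not advanced indexing.
        if (
            isinstance(index, list)
            and index
            and all(isinstance(i, DimLike) for i in index)
        ):
            return self._split_dim(index)
        return self._advanced_index(index)

    def _split_dim(self, dims):
        # Placeholder: would reshape one dimension into len(dims) dims
        return ("split", tuple(dims))

    def _advanced_index(self, index):
        # Placeholder for the normal indexing path
        return ("index", index)


x, y = DimLike(), DimLike()
t = TensorLike()
assert t[[x, y]] == ("split", (x, y))
assert t[0] == ("index", 0)
```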

Signed-off-by: Edward Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160236
Approved by: https://github.com/zdevito, https://github.com/albanD
2025-09-21 03:01:04 +00:00


from __future__ import annotations

from typing import TYPE_CHECKING, Union

if TYPE_CHECKING:
    from collections.abc import Sequence

    from . import Dim
import torch  # noqa: TC002


# NB: The old code represented the dimension a level was from as a negative
# number, so we follow this convention even though it shouldn't be necessary now
class DimEntry:
    # The dimension this is from on the rhs (a negative int), or a first-class Dim
    data: Union[Dim, int]

    def __init__(self, data: Union[Dim, int, None] = None) -> None:
from . import Dim
if type(data) is int:
assert data < 0
elif data is None:
data = 0
else:
assert isinstance(data, Dim)
        self.data = data

    def __eq__(self, other: object) -> bool:
if not isinstance(other, DimEntry):
return False
# Use 'is' for Dim objects to avoid triggering __torch_function__
# Use '==' only for positional (int) comparisons
if self.is_positional() and other.is_positional():
# Both are positional (ints)
return self.data == other.data
elif not self.is_positional() and not other.is_positional():
# Both are Dim objects - use 'is' to avoid __eq__
return self.data is other.data
else:
# One is positional, one is Dim - they can't be equal
return False

    def is_positional(self) -> bool:
        return type(self.data) is int and self.data < 0

    def is_none(self) -> bool:
        # Use isinstance to check for a Dim object; avoid triggering __torch_function__
        from . import Dim

        if isinstance(self.data, Dim):
            # A Dim object can't be "none" (which is represented by 0)
            return False
        else:
            # This is an int
            return self.data == 0

    def position(self) -> int:
        assert isinstance(self.data, int)
        return self.data

    def dim(self) -> Dim:
        assert not isinstance(self.data, int)
        return self.data

    def __repr__(self) -> str:
        return repr(self.data)


def ndim_of_levels(levels: Sequence[DimEntry]) -> int:
    """Count the positional (non-first-class) levels in a level list."""
    r = 0
    for l in levels:
        if l.is_positional():
            r += 1
    return r


def _match_levels(
tensor: torch.Tensor,
from_levels: list[DimEntry],
to_levels: list[DimEntry],
drop_levels: bool = False,
) -> torch.Tensor:
"""
Reshape a tensor to match target levels using as_strided.
Args:
tensor: Input tensor to reshape
from_levels: Current levels of the tensor
to_levels: Target levels to match
drop_levels: If True, missing dimensions are assumed to have stride 0
Returns:
Reshaped tensor
"""
if from_levels == to_levels:
return tensor
sizes = tensor.size()
strides = tensor.stride()
if not drop_levels:
assert len(from_levels) <= len(to_levels), (
"Cannot expand dimensions without drop_levels"
)
new_sizes = []
new_strides = []
for level in to_levels:
# Find index of this level in from_levels
try:
idx = from_levels.index(level)
except ValueError:
# Level not found in from_levels
if level.is_positional():
new_sizes.append(1)
else:
new_sizes.append(level.dim().size)
new_strides.append(0)
else:
new_sizes.append(sizes[idx])
new_strides.append(strides[idx])
return tensor.as_strided(new_sizes, new_strides, tensor.storage_offset())
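To make the size/stride computation in `_match_levels` concrete, here is a standalone sketch of the same arithmetic using plain ints as positional levels (no torch, no Dim). `match_sizes_strides` is a made-up helper; the real function also handles first-class dims, whose size comes from `level.dim().size` rather than 1:

```python
def match_sizes_strides(sizes, strides, from_levels, to_levels):
    """Compute the as_strided arguments for rearranging levels.

    Levels are plain ints here; in the real code they are DimEntry objects.
    """
    new_sizes, new_strides = [], []
    for level in to_levels:
        if level in from_levels:
            # Level already present: carry its size and stride over
            idx = from_levels.index(level)
            new_sizes.append(sizes[idx])
            new_strides.append(strides[idx])
        else:
            # Missing level: materialize it as a broadcast dim (stride 0)
            new_sizes.append(1)
            new_strides.append(0)
    return new_sizes, new_strides


# A contiguous (3, 4) tensor with levels [-2, -1], rearranged to
# [-1, -2] (a transpose): sizes swap, and so do the strides.
assert match_sizes_strides([3, 4], [4, 1], [-2, -1], [-1, -2]) == ([4, 3], [1, 4])
```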