Mirror of https://github.com/pytorch/pytorch.git, synced 2025-10-21 05:34:18 +08:00
Handle unbacked SymInt sized outputs in AOTAutograd (#113159)
Thanks aakhundov for constructing the test case. This PR was constructed by running the failing test case and then fixing problems until we got all the way to the end. There are a few distinct fixes:

* AOTAutograd performs equality tests on tensor metadata to determine whether a metadata mutation has occurred. If we test i0 vs i1, we should report that these are NOT equal, since evidently we have somehow resized the tensor from i0 to i1 (even if, on a particular run, it is possible that i0 == i1).
* There's a sketchy fix for `test_aot_autograd_exhaustive_matmul_cpu_float32`, where we check whether the output shape equals the tangent shape. Unfortunately, the same `definitely_true` treatment does not work here; it still fails on the example. I piled an extra sketchy fix on top of it, where I just try my best to avoid doing the view. Maybe we should have some sort of logging here.
* The partitioner needs to get out a size for unbacked SymInts when partitioning. I just feed it a heuristic value in this case, similar to how we've been dealing with this in Inductor.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113159
Approved by: https://github.com/aakhundov, https://github.com/bdhirsh
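The first fix above hinges on three-valued reasoning about symbolic sizes: two distinct unbacked symbols i0 and i1 *might* coincide at runtime, but since equality cannot be proven, mutation detection must treat them as unequal. A minimal sketch of that logic, using a toy symbolic-int model (names like `SymInt`, `definitely_equal`, and `metadata_mutated` here are illustrative assumptions, not PyTorch's actual API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SymInt:
    """Toy stand-in for an unbacked symbolic size, e.g. i0 or i1."""
    name: str

def definitely_equal(a, b) -> bool:
    """Return True only when a == b is provable.

    Two distinct unbacked symbols might be equal on a particular run,
    but we cannot prove it, so they are not *definitely* equal.
    """
    if isinstance(a, SymInt) or isinstance(b, SymInt):
        return isinstance(a, SymInt) and isinstance(b, SymInt) and a.name == b.name
    return a == b

def metadata_mutated(old_shape, new_shape) -> bool:
    """Report a metadata mutation unless every dim is definitely equal."""
    return not (
        len(old_shape) == len(new_shape)
        and all(definitely_equal(x, y) for x, y in zip(old_shape, new_shape))
    )

i0, i1 = SymInt("i0"), SymInt("i1")
print(metadata_mutated((i0,), (i0,)))    # same symbol: no mutation
print(metadata_mutated((i0,), (i1,)))    # i0 vs i1: must assume mutation
print(metadata_mutated((2, 3), (2, 3)))  # concrete and equal: no mutation
```

The key design choice, mirroring the PR's description, is that the check is conservative in the "mutation occurred" direction: an unprovable equality is reported as a mutation rather than silently assumed away.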
committed by: PyTorch MergeBot
parent: aa376e31fd
commit: 1f3fa13f0a
@@ -592,7 +592,6 @@ class TestPythonDispatch(TestCase):
$0: f32[1] = input('x')
$1: f32[1] = torch._ops.aten.mul.Tensor($0, $0)
$2: f32[1] = input('grad_y')
True = torch._ops.aten.is_same_size.default($1, $2)
$3: f32[1] = torch._ops.aten.mul.Tensor($2, $0)
$4: f32[1] = torch._ops.aten.mul.Tensor($2, $0)
$5: f32[1] = torch._ops.aten.add.Tensor($4, $3)''')

@@ -852,7 +851,6 @@ $0: f32[1] = input('x')
$1: f32[1] = input('x.grad')
$2: f32[1] = torch._ops.aten.pow.Tensor_Scalar($0, 2)
$3: f32[1] = input('grad_output')
True = torch._ops.aten.is_same_size.default($2, $3)
$4: f32[1] = torch._ops.aten.mul.Tensor($3, 2)
$5: f32[1] = torch._ops.aten.mul.Tensor($4, $0)
$6: f32[1] = torch._ops.aten.add_.Tensor($1, $5)''')