make aotdispatcher opinfo tests keep input mutations in graph (#165327)

This stack is going to turn off functionalization and turn on the default partitioner in our OpInfo tests, so I'm separating out a few changes that land before functionalization is turned off:

(1) run our tests with input mutations allowed inside the graph

(2) run our tests with the default partitioner

(3) run with functionalization off

(4) (later) make the tests properly test for bitwise equivalence
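Item (1) above concerns whether an input mutation stays in the compiled graph or gets functionalized away. A minimal, PyTorch-free sketch of the distinction (the function names here are hypothetical illustrations, not code from this PR): functionalization removes the in-place update from the graph body and replays it as an epilogue copy back into the input, whereas keeping the mutation in the graph leaves the in-place update where it was.

```python
# Hypothetical sketch, not the PyTorch implementation: contrast a graph
# that keeps the input mutation with a "functionalized" graph where the
# mutation is lifted out and replayed as an epilogue copy-back.

def graph_with_mutation(buf):
    # The mutation stays in the graph: the input is updated in place.
    buf[0] += 1
    return buf[0] * 2

def graph_functionalized(buf):
    # The graph body is purely functional: compute on a fresh value.
    new_val = buf[0] + 1
    out = new_val * 2
    # Epilogue: the runtime copies the new value back into the input,
    # so callers still observe the mutation.
    buf[0] = new_val
    return out

a, b = [3], [3]
r1 = graph_with_mutation(a)
r2 = graph_functionalized(b)
# Both strategies must agree on the output and the observed input state.
assert (r1, a[0]) == (r2, b[0])
```

Both variants are observationally equivalent to the caller; the difference is only in whether the mutation appears inside the traced graph, which is what the `keep_inference_input_mutations=True` flag in the diff below controls for these tests.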

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165327
Approved by: https://github.com/ezyang
This commit is contained in:
Brian Hirsh
2025-10-14 07:53:21 -07:00
committed by PyTorch MergeBot
parent 89298ada83
commit d2e1dbc8f2


@@ -64,7 +64,13 @@ def aot_autograd_check(
         return func(*c_args, **c_kwargs)
     compiled_f = compiled_function(
-        func_no_tensors, nop, nop, dynamic=dynamic, partition_fn=min_cut_rematerialization_partition)
+        func_no_tensors,
+        nop,
+        nop,
+        dynamic=dynamic,
+        partition_fn=min_cut_rematerialization_partition,
+        keep_inference_input_mutations=True
+    )
     out = wrapper_set_seed(func_no_tensors, args)
     if check_gradients == "auto":