Need to revert due to internal hangs: S437700
This reverts commit b6c1490cc02316ffe85e5ae74651d80f0158ba64.
Revert "[dynamo] implement IteratorVariable and polyfill fallbacks for enumerate (#131725)"
This reverts commit 2576dbbc35d66e8e9ed6cb12216ccc424cb87ec3.
Revert "[dynamo] add itertools repeat/count bytecode reconstruction (#131716)"
This reverts commit 35b4de32fafc5ad024c20ef1275711bffc557ae9.
Revert "[dynamo] add lazy IteratorVariable implementations for map and zip (#131413)"
This reverts commit 7d282d87550787d8269593093519c2ad7c5032cd.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132528
Approved by: https://github.com/ZainRizvi
Fixes #130087
This patch provides a built-in `id()` implementation for `TensorVariable` when `id()` is called on tensors such as module parameters. Calling `id()` on intermediate tensors is not supported.
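A minimal sketch of the supported case, assuming an eager backend; the module and function names here are hypothetical, not from the PR:
```python
import torch
import torch.nn as nn

mod = nn.Linear(2, 2)  # hypothetical module whose parameter identity we test

@torch.compile(backend="eager", fullgraph=True)
def f(x, p):
    # id() on a module parameter is traceable under this patch; id() on an
    # intermediate tensor (e.g. id(x + 1)) remains unsupported.
    if id(p) == id(mod.weight):
        return x + 1
    return x - 1

print(f(torch.randn(2, 2), mod.weight))
```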
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130100
Approved by: https://github.com/anijain2305
Fixes https://github.com/pytorch/pytorch/issues/130750.
Repro of lazy/eager `map` discrepancy without `islice`:
```python
def fn(a, b):
    y = 1

    def f(x):
        nonlocal y
        y += 1
        return x

    # With a lazy map, zip stops after consuming [a, b], so f runs only
    # twice (y == 3); eagerly unpacking the map runs f four times (y == 5).
    l = list(zip([a, b], map(f, [1, 2, 3, 4])))
    return a + y
```
The major change is that we implement `MapVariable` and `ZipVariable` based on `IteratorVariable`. Before, `map` and `zip` were being traced by immediately unpacking the result as a `TupleVariable`, which is wrong in cases such as the example above.
`MapVariable`s are not allowed to be unpacked, while `ZipVariable`s can only be unpacked if all of their iterables can also be unpacked.
We also add new `[has_]force_unpack_var_sequence` methods to `VariableTracker` for the case where it is safe to unpack the entire sequence lazily, e.g., when building a list from a map (i.e. `list(map(f, ...))`).
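A minimal plain-Python sketch of the lazy design (the class and variable names are illustrative, not Dynamo's actual `ZipVariable`):
```python
class LazyZip:
    """Mimics a lazy zip: pulls from its iterables only on demand."""

    def __init__(self, *iterables):
        self.iters = [iter(it) for it in iterables]

    def __iter__(self):
        return self

    def __next__(self):
        result = []
        for it in self.iters:
            # StopIteration from the shortest iterable propagates, so a
            # trailing map is only partially consumed, matching eager Python.
            result.append(next(it))
        return tuple(result)

calls = []
lz = LazyZip([10, 20], map(lambda x: calls.append(x) or x, [1, 2, 3, 4]))
assert list(lz) == [(10, 1), (20, 2)]
assert calls == [1, 2]  # the side-effecting map ran twice, not four times
```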
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131413
Approved by: https://github.com/anijain2305
Before this PR, `callable(nn_module)` caused a graph break:
```python
import torch
import torch.nn as nn

class M(nn.Module):
    def forward(self, x):
        return x.sin()

def f(m):
    return callable(m)

res = torch.compile(f, fullgraph=True)(M())
```
Traceback (most recent call last):
File "/data/users/yidi/pytorch/t.py", line 17, in <module>
out = torch.compile(f, backend="eager", fullgraph=True)(M())
File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 414, in _fn
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 1077, in catch_errors
return callback(frame, cache_entry, hooks, frame_state, skip=1)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 456, in _convert_frame_assert
return _compile(
File "/data/users/yidi/pytorch/torch/_utils_internal.py", line 74, in wrapper_function
return function(*args, **kwargs)
File "/home/yidi/.conda/envs/pytorch/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 799, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/yidi/pytorch/torch/_dynamo/utils.py", line 210, in time_wrapper
r = func(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 618, in compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/yidi/pytorch/torch/_dynamo/bytecode_transformation.py", line 1167, in transform_code_object
transformations(instructions, code_options)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 177, in _fn
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 564, in transform
tracer.run()
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 2244, in run
super().run()
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 886, in run
while self.step():
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 801, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 496, in wrapper
return inner_fn(self, inst)
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 1255, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 739, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/data/users/yidi/pytorch/torch/_dynamo/variables/builtin.py", line 948, in call_function
return handler(tx, args, kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/variables/builtin.py", line 711, in <lambda>
return lambda tx, args, kwargs: obj.call_function(
File "/data/users/yidi/pytorch/torch/_dynamo/variables/builtin.py", line 948, in call_function
return handler(tx, args, kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/variables/builtin.py", line 835, in builtin_dipatch
unimplemented(error_msg)
File "/data/users/yidi/pytorch/torch/_dynamo/exc.py", line 216, in unimplemented
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: builtin: callable [<class 'torch._dynamo.variables.nn_module.NNModuleVariable'>] False
```
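With this PR, the snippet above compiles without a graph break and `res` is `True`, since `nn.Module` instances are callable.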
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127026
Approved by: https://github.com/jansel
We guard on key order:
1) When a key is a non-constant object
2) When we actually need key order, e.g., for `.values()`, `.items()`, etc.
For dicts/OrderedDicts that do not require key-order guarding, we just rely on the usual `GuardManager + DictGetItemGuardAccessor`. This is faster than going through the `list(d.keys())`-based design for OrderedDicts.
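A hypothetical illustration of the two regimes (assuming an eager backend; the function names are mine, not from the PR):
```python
import torch

def by_key(d, x):
    # Plain key lookup: a DictGetItemGuardAccessor-style guard suffices;
    # no key-order guard is needed.
    return x + d["scale"]

def by_items(d, x):
    # Iterating .items() makes the result depend on key order, so Dynamo
    # must also guard on the dict's key order.
    for _, v in d.items():
        x = x + v
    return x

d = {"scale": torch.ones(2), "shift": torch.zeros(2)}
torch.compile(by_key, backend="eager")(d, torch.ones(2))
torch.compile(by_items, backend="eager")(d, torch.ones(2))
```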
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124779
Approved by: https://github.com/jansel
This removes the duplicate handling of comparison ops between symbolic_convert and builtin and refactors the handling to use the binop infrastructure. This change regresses overheads a bit, but that is fixed in the next PR.
The new test skips are variants of `type(e) is np.ndarray`, which previously fell back to eager.
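A guess at the shape of the newly-skipped pattern (illustrative only; the actual tests live in the PyTorch suite):
```python
import numpy as np
import torch

def f(e):
    # An identity check on the value's type, now routed through the shared
    # binop handling rather than a dedicated comparison path.
    if type(e) is np.ndarray:
        return torch.from_numpy(e) + 1
    return e + 1

torch.compile(f, backend="eager")(np.ones(3))
```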
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122043
Approved by: https://github.com/anijain2305
ghstack dependencies: #122039