Commit Graph

158 Commits

7eecbf8a30 Remove unnecessary skipIfTorchDynamo from test_jit_fuser_te (#118728)
And add some expected failures.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118728
Approved by: https://github.com/bdhirsh
2024-02-12 20:55:29 +00:00
72d9a38118 add get_function to TorchInGraphFunctionVariable (#119314)
partially address https://github.com/pytorch/pytorch/issues/118785

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119314
Approved by: https://github.com/yanboliang, https://github.com/anijain2305
2024-02-12 16:35:34 +00:00
2c91e13afc Add lowerings to special functions (#119187)
As in the title.

In addition, the PR introduces infrastructure for lowerings of pointwise functions that have both cpp and triton implementations available.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119187
Approved by: https://github.com/peterbell10
2024-02-11 16:35:40 +00:00
e1c1b8c2b2 [dynamo] Improve support for backwards hooks (#119525)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119525
Approved by: https://github.com/yanboliang, https://github.com/anijain2305
2024-02-10 01:14:03 +00:00
25a0fa6d13 Revert "[dynamo] Improve support for backwards hooks (#119525)"
This reverts commit b1f4b2a63c038f0090886d7d213825f39c283ea5.

Reverted https://github.com/pytorch/pytorch/pull/119525 on behalf of https://github.com/clee2000 due to broke test_autograd.py::TestAutograd::test_post_accumulate_grad_hook_gets_cleaned_up on dynamo https://github.com/pytorch/pytorch/actions/runs/7847212828/job/21416215820 b1f4b2a63c.  The failure exists on the PR as well, but got masked by the other test.  Putting this as no signal? ([comment](https://github.com/pytorch/pytorch/pull/119525#issuecomment-1936447169))
2024-02-09 18:58:55 +00:00
b1f4b2a63c [dynamo] Improve support for backwards hooks (#119525)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119525
Approved by: https://github.com/yanboliang
2024-02-09 17:02:40 +00:00
0f478d9d61 [Dynamo][15/N] Merge allow_in_graph/inline/skip trace rules check into trace_rule.lookup (#118971)
Finally we have this PR to merge allow_in_graph/inline/skip trace rules into ```trace_rules.lookup_inner```, where we can define and look up trace rules at both the function level and the file level. Going forward, this is the central place where we define and consult Dynamo trace rules for any function.
* ```trace_rules.lookup``` is the API that can return allow_in_graph, inline, or skip.
* ```skipfiles.check``` is the API that can return inline or skip, since we have multiple places that only do the inline/skip check.
  *  I'll move ```skipfiles.check``` to ```trace_rules.check``` as one of the follow-ups.
* Both functions consult ```trace_rules.lookup_inner``` to get the tracing rule.

To avoid a single big PR, I left a few items as the follow-ups:
* Remove ```skipfiles.py``` and merge the code into ```trace_rules.py```.
* We do a double check in ```symbolic_convert.check_inlineable```; I'll refactor and simplify it. We should only do the inline/skip check before generating ```SkipFilesVariable``` and ```UserFunctionVariable```.
* Rename ```SkipFilesVariable``` to ```SkipFunctionVariable```, since we only handle functions.
* The inline/skip reasons are not logged in some cases, since the new lookup framework doesn't always return inline/skip reasons. I'll refactor logging to record the inline/skip reason in the next step.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118971
Approved by: https://github.com/jansel
2024-02-07 05:15:39 +00:00
c814d8e5c2 Fix handling random() calls encountered inside inlined code. (#119218)
Fix https://github.com/pytorch/pytorch/issues/118787

In the compiled function, calls to random() are replaced with a single call
to a function that generates all the random variables.
The random calls encountered during compilation used to be tracked in a variable
stored inside the instruction translator, so when there were nested translators, the tracked
calls got lost when the inner instruction translator popped out.

This diff fixes that by moving the tracked calls to the output graph, which is shared across translators that are generating the same function.

More details about the issue and why this solution is picked are in the github issue above.
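A toy sketch of the shape of the fix (class and method names are illustrative, not Dynamo's actual code): the list of seen random() calls lives on an output-graph object shared by nested translators, rather than on each translator.

```python
# Illustrative sketch, not Dynamo's real classes: tracked random() calls
# live on a shared output graph, so nothing is lost when a nested
# (inlining) translator finishes.
class OutputGraph:
    def __init__(self):
        self.random_calls = []

class InstructionTranslator:
    def __init__(self, output):
        # All translators for the same compiled function share `output`.
        self.output = output

    def track_random_call(self, name):
        self.output.random_calls.append(name)

out = OutputGraph()
outer = InstructionTranslator(out)
inner = InstructionTranslator(out)  # translator for an inlined frame
outer.track_random_call("random")
inner.track_random_call("uniform")
# The inner translator's calls survive after it pops out.
assert out.random_calls == ["random", "uniform"]
```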

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119218
Approved by: https://github.com/jansel, https://github.com/anijain2305
2024-02-06 23:48:21 +00:00
5e78c4b0f4 [dynamo] Functools partial reconstruct (#118583)
Replaces #117721

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118583
Approved by: https://github.com/yanboliang
ghstack dependencies: #118901, #118616
2024-02-06 23:42:43 +00:00
62cc1053d8 [dynamo] Fix missing guards in FunctoolsPartialVariable (#118616)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118616
Approved by: https://github.com/yanboliang
ghstack dependencies: #118901
2024-02-06 23:42:43 +00:00
ec31d11580 [dynamo] Skip dynamo when inside a functorch context (#118901)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118901
Approved by: https://github.com/zou3519
2024-02-06 20:22:24 +00:00
4dc53f777b Fix dynamo failure w/ astype (#117952)
The torch "fake" ndarray had some mismatches vs numpy.ndarray which caused test_sparse_to_sparse_compressed to fail under dynamo.

This also fixes (because the test now hits it) a problem where unpacking a sequence with the incorrect number of args would assert in dynamo instead of graph breaking (because it would throw an exception). Added a unit test for this condition.

Fixed:
- torch._numpy._ndarray.astype() (actually used by the test)
- torch._numpy._ndarray.put() (drive-by discovery)
- torch._numpy._ndarray.view() (drive-by discovery)

(burndown item 7)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117952
Approved by: https://github.com/yanboliang
ghstack dependencies: #117951
2024-02-03 08:10:15 +00:00
c6c851102f Fix test_compressed_layout_conversions_coverage to check BSC format (#117951)
test_compressed_layout_conversions_coverage verifies torch's conversions between different memory layouts using numpy as a reference. Since numpy doesn't support the BSC format, the test previously just skipped it. Instead, fake BSC by using a transposed BSR format.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117951
Approved by: https://github.com/zou3519
2024-02-03 08:10:15 +00:00
30d3ff1659 Inline gradcheck functions since they don't have C bindings (#119047)
Gradcheck functions are in python, so they shouldn't be in `torch_c_binding_in_graph_functions`
fixes #118792

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119047
Approved by: https://github.com/yanboliang, https://github.com/zou3519
2024-02-03 02:46:39 +00:00
3b41793412 Purge redundant module init tests (#119028)
Fixes #118784

This test file is old and redundant; coverage is maintained in `test_modules.py` via the `test_factory_kwargs` set of tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119028
Approved by: https://github.com/zou3519
2024-02-02 20:17:00 +00:00
bd8c91efc0 Remove some now-succeeding tests from dynamo_test_failures.py (#118928)
Test Plan:
- wait for CI
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118928
Approved by: https://github.com/aorenste, https://github.com/anijain2305, https://github.com/yanboliang
2024-02-02 19:49:26 +00:00
eb2bdfae88 Make variables in dict LazyTrackers (not lazily guarded yet) and avoid using DICT_KEYS guard (#117625)
Make variables in dict lazy and remove DICT_KEYS guard.

We build the keys of a dict depth-first and we rely on the guards of
each element in the dict to create the correct guards. This allows us to
remove the rather buggy DICT_KEYS guard and make the guard lazy.
The guards are not completely lazy yet, as we instantiate them in
`_HashableTracker._eq_impl` but it should be possible to make them
truly lazy.

Also, adding new types to the supported types within keys should be less
error prone.

This is marginally less efficient when we graph break, but in turn we
should graph break much less. It also makes the dicts code easier to maintain
(removes `is_hashable_python_var`).
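A toy illustration of the per-key idea (names invented for illustration): each key's tracker contributes its own guard when it is realized, instead of one eager DICT_KEYS guard over all keys at once.

```python
# Toy sketch: per-key lazy guards instead of a single eager DICT_KEYS
# guard. A guard is only installed when a tracker is realized.
class LazyTracker:
    def __init__(self, value, guards):
        self.value = value
        self._guards = guards
        self.realized = False

    def realize(self):
        if not self.realized:
            self._guards.append(f"guard({self.value!r})")
            self.realized = True
        return self.value

guards = []
d = {k: LazyTracker(v, guards) for k, v in {"a": 1, "b": 2}.items()}
d["a"].realize()
assert guards == ["guard(1)"]  # only the touched entry is guarded
```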

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117625
Approved by: https://github.com/jansel, https://github.com/peterbell10, https://github.com/anijain2305
ghstack dependencies: #117982, #118098, #117983
2024-02-02 14:38:08 +00:00
a16df1d85f [Dynamo] graph break on isinstance calls if we don't know the type (#118778)
If we can't figure out the python type of a VariableTracker, then the
isinstance call should graph break (instead of raising an error).
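The shape of the change, as a minimal sketch (the exception class stands in for Dynamo's graph-break mechanism):

```python
# Minimal sketch: an unknown python type triggers a graph break
# (modeled here as an exception) rather than a hard internal error.
class Unsupported(Exception):
    """Stand-in for Dynamo's graph-break signal."""

def check_isinstance(python_type, cls):
    if python_type is None:  # type of the VariableTracker is unknown
        raise Unsupported("isinstance() on value of unknown type")
    return issubclass(python_type, cls)

assert check_isinstance(bool, int) is True
try:
    check_isinstance(None, int)
except Unsupported:
    pass  # graph break instead of an internal error
```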

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118778
Approved by: https://github.com/ydwu4
ghstack dependencies: #118768
2024-02-01 23:18:10 +00:00
318e6ff40e Fix __name__ on a reconstructed NestedUserFunctionVariable (#118768)
```
def f():
    def g():
        return ()

    print(g.__name__)

f()
```

The script above should print `g` (with or without torch.compile),
but prints `f.<locals>.g` with torch.compile.

The problem looks like we use the co_qualname when reconstructing the
NestedUserFunctionVariable. I switched this over to use the co_name.
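For reference, the attributes involved can be checked in plain Python (no torch needed):

```python
# co_name / __name__ give the bare name; __qualname__ (backed by
# co_qualname on 3.11+) includes the enclosing scope.
def f():
    def g():
        return ()
    return g

g = f()
assert g.__name__ == "g"                  # what reconstruction should use
assert g.__qualname__ == "f.<locals>.g"   # what the bug produced instead
```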

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118768
Approved by: https://github.com/yanboliang, https://github.com/jansel
2024-02-01 18:59:01 +00:00
81b55f58ce Matmul decide should_fold using has_out instead of grad_mode (#118617)
Fixes https://github.com/pytorch/pytorch/issues/118548

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118617
Approved by: https://github.com/lezcano
2024-01-31 18:34:16 +00:00
126c1621ce Add Support for CausalBias to torch compile (#116071)
Fixes #115363

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116071
Approved by: https://github.com/mlazos
2024-01-30 02:22:48 +00:00
6591741183 [dynamo] support inference_mode with no arguments (#118427)
Before this PR, we got an error for the following code:
```python
def k(x):
    with torch.inference_mode():
        x = x + 1
        return x

torch.compile(k, backend="eager", fullgraph=True)(x)
```
error message:
```
Traceback (most recent call last):
....
    return InferenceModeVariable.create(tx, args[0].as_python_constant())
torch._dynamo.exc.InternalTorchDynamoError: list index out of range
```

This PR supports the case where torch.inference_mode is not given any argument (i.e., mode defaults to True).
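A minimal sketch of the shape of the fix (function and return value are illustrative, not Dynamo's actual code):

```python
# Sketch: read the optional mode argument defensively instead of
# indexing args[0] unconditionally, which raised IndexError for
# torch.inference_mode() called with no arguments.
def create_inference_mode_variable(args):
    mode = args[0] if args else True  # matches inference_mode's default
    return ("InferenceModeVariable", mode)

assert create_inference_mode_variable([]) == ("InferenceModeVariable", True)
assert create_inference_mode_variable([False]) == ("InferenceModeVariable", False)
```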

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118427
Approved by: https://github.com/yanboliang, https://github.com/jansel
2024-01-29 20:20:26 +00:00
41dfdde9f5 Handle some numpy functions with out arguments correctly in dynamo (#118248)
Dynamo creates Tensors when tracing through numpy ufuncs like np.sin, np.minimum, etc., so np functions generally return Tensors when run under `torch.compile`. However, when normalizing `out` arguments, we currently require that the input is an ndarray. This causes assertion errors when running torch.compile on any numpy function with an out argument:
```
    def test_numpy_ufunc_out(self):
        @torch.compile(backend="eager")
        def foo():
            x = np.arange(5)
            out = np.empty((x.shape[0], x.shape[0]))
            res_out = np.sin(x, out=out)
            assert res_out is out
        foo()
```
Failure with stack trace: https://gist.github.com/jamesjwu/68e217638d735678b3de968584dba23f

Instead, we can wrap tensors in an ndarray in normalize_outarray to handle the case correctly. Fixing this resolves ~220 tests under dynamo_test_failures, but also exposes a followup bug.

In the presence of a graph break, ndarrays don't preserve their id, which can affect assertions and `is` checks between numpy arrays:
```
    def test_x_and_out_broadcast(self, ufunc):
        x = self.get_x(ufunc)
        out = np.empty((x.shape[0], x.shape[0]))

        x_b = np.broadcast_to(x, out.shape)
        # ufunc is just np.sin here
        res_out = ufunc(x, out=out)
        res_bcast = ufunc(x_b)
        # passes
        assert res_out is out
        graph_break()
        # fails
        assert res_out is out
```
Regular tensors preserve their id because Dynamo caches their example tensor values across a graph break. However, with ndarrays, we only store their converted tensor values, and construct new ndarrays around those values:
eebe7e1d37/torch/_dynamo/variables/builder.py (L1083)
Added a test with expected failure to showcase this — we can then fix that issue separately.
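The identity issue can be sketched without torch (the classes here are stand-ins for the cached example tensor and the ndarray wrapper):

```python
# Stand-ins: Dynamo caches the example tensor across a graph break, but
# rebuilds a fresh ndarray wrapper around it, so `is` checks on the
# ndarray fail even though the underlying tensor is the same object.
class Tensor:
    pass

class NdArrayWrapper:
    def __init__(self, tensor):
        self.tensor = tensor

cached = Tensor()
before_break = NdArrayWrapper(cached)
after_break = NdArrayWrapper(cached)  # reconstructed after the break
assert before_break.tensor is after_break.tensor  # tensor id preserved
assert before_break is not after_break            # ndarray id is not
```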

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118248
Approved by: https://github.com/lezcano
2024-01-29 09:09:21 +00:00
ca1d70632d [14/N][Dynamo] Make trace_rules.lookup only handle function + callable type (#118366)
Step by step changes to unblock #118264

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118366
Approved by: https://github.com/angelayi
2024-01-27 23:02:44 +00:00
40c08795b0 [JIT] python IR bindings: consolidate tests, add short docs in OVERVIEW.md (#118319)
Document the existence of the python IR bindings, add quick comments about them, and consolidate the tests in one file to serve as examples for users.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118319
Approved by: https://github.com/eellison
2024-01-27 03:11:51 +00:00
9bce208dfb Replace follow_imports = silent with normal (#118414)
This is a lot of files changed! Don't panic! Here's how it works:

* Previously, we set `follow_imports = silent` for our mypy.ini configuration. Per https://mypy.readthedocs.io/en/stable/running_mypy.html#follow-imports, what this does is whenever we have an import to a module which is not listed as a file to be typechecked in mypy, we typecheck it as normal but suppress all errors that occurred in that file.
* When mypy is run inside lintrunner, the list of files is precisely the files covered by the glob in lintrunner.toml, but with files in excludes excluded.
* The top-level directive `# mypy: ignore-errors` instructs mypy to typecheck the file as normal, but ignore all errors.
* Therefore, it should be equivalent to set `follow_imports = normal`, if we put `# mypy: ignore-errors` on all files that were previously excluded from the file list.
* Having done this, we can remove the exclude list from .lintrunner.toml, since excluding a file from typechecking is baked into the files themselves.
* torch/_dynamo and torch/_inductor were previously in the exclude list, because they were covered by MYPYINDUCTOR. It is not OK to mark these as `# mypy: ignore-errors` as this will impede typechecking on the alternate configuration. So they are temporarily being checked twice, but I am suppressing the errors in these files as the configurations are not quite the same. I plan to unify the configurations so this is only a temporary state.
* There were some straggler type errors after these changes somehow, so I fixed them as needed. There weren't that many.

In the future, to start type checking a file, just remove the ignore-errors directive from the top of the file.

The codemod was done with this script authored by GPT-4:

```
import glob

exclude_patterns = [
    ...
]

for pattern in exclude_patterns:
    for filepath in glob.glob(pattern, recursive=True):
        if filepath.endswith('.py'):
            with open(filepath, 'r+') as f:
                content = f.read()
                f.seek(0, 0)
                f.write('# mypy: ignore-errors\n\n' + content)
```

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118414
Approved by: https://github.com/thiagocrepaldi, https://github.com/albanD
2024-01-27 02:44:11 +00:00
b256b7b348 Add way to actually delete a torch.library.Library object (#118318)
Relying on object lifetimes in Python is a bad idea due to reference
cycles. Previously, when a torch.library.Library object got destroyed,
it cleared all the registrations associated with it, but it was unclear
when it actually got destroyed due to the existence of refcycles.

This PR:
- adds torch::Library::clear(), which deterministically releases all of
  the RAII registration handles of the torch::Library object
- adds a new `torch.library._scoped_library` context manager, which creates
  a library and cleans it up at the end of the scope using the previous item.
  All tests (unless they already handle library lifetimes) should use
  this new API
- Rewrites some flaky tests to use `_scoped_library`.

In the future we'll probably migrate all of our torch.library tests to
use `_scoped_library`, but that's kind of annoying because we have
multiple thousands of LOC

I'm hoping this will deflake those tests; we'll see.
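A rough sketch of the pattern in pure Python (class and function names are illustrative stand-ins, not torch's internals):

```python
from contextlib import contextmanager

class Library:
    """Stand-in for torch::Library with a deterministic clear()."""
    def __init__(self, ns):
        self.ns = ns
        self.registrations = []

    def define(self, schema):
        self.registrations.append(schema)

    def clear(self):
        # Release registrations now, instead of waiting for __del__,
        # which refcycles can delay indefinitely.
        self.registrations.clear()

@contextmanager
def scoped_library(ns):
    lib = Library(ns)
    try:
        yield lib
    finally:
        lib.clear()

with scoped_library("mylib") as lib:
    lib.define("myop(Tensor x) -> Tensor")
    assert len(lib.registrations) == 1
assert len(lib.registrations) == 0  # cleaned up at end of scope
```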
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118318
Approved by: https://github.com/albanD
2024-01-26 22:30:51 +00:00
f7f7283ec7 Skip test_none_names_refcount under Dynamo-wrapped CI (#118309)
Fixes https://github.com/pytorch/pytorch/issues/117716
Dynamo does some things that modify the refcount. Skipping this test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118309
Approved by: https://github.com/ydwu4, https://github.com/yanboliang, https://github.com/albanD
ghstack dependencies: #118152
2024-01-25 22:21:22 +00:00
4e29f01bf2 Remove sdp_kernel and replace with sdpa_kernel in attention namespace (#114689)
# Summary
Simplification of Backend Selection

This PR deprecates the `torch.backends.cuda.sdp_kernel` context manager and replaces it with a new context manager, `torch.nn.attention.sdpa_kernel`, which also changes the API.

With `sdp_kernel`, one would specify the backend choice by disabling the kernels they did not want to run. The purpose of this context manager was only to be a debugging tool: "turn off the math backend" and see if you can run one of the fused implementations.

Problems:
- This pattern makes sense if the majority of users don't care to know anything about the backends that can be run. However, if users are seeking out this context manager, then they are explicitly trying to run a specific backend.
- This is not scalable. We are working on adding the cudnn backend, and this API makes it so that more implementations will need to be turned off if the user wants to explicitly run a given backend.
- Discoverability of the current context manager. It is somewhat unintuitive that this backend manager lives in backends/cuda/__init__ when it now also controls the CPU fused kernel behavior. I think centralizing it in the attention namespace will be helpful.

Other concerns:
- Typically, backends (kernels) for operators are entirely hidden from users and are implementation details of the framework. We have already exposed this to users, albeit not by default and with beta warnings. Does making backend choices even more explicit lead to problems when we potentially want to remove existing backends (perhaps input shapes will get covered by newer backends)?

A nice side effect is that, now that we aren't using the `BACKEND_MAP` in test_transformers, many, many dynamo failures are passing for CPU tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114689
Approved by: https://github.com/cpuhrsch
2024-01-24 22:28:04 +00:00
880f9bb57e Remove xfails for consistently succeeding tests (#118152)
Fixes https://github.com/pytorch/pytorch/issues/117786, https://github.com/pytorch/pytorch/issues/117785
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118152
Approved by: https://github.com/yanboliang
2024-01-24 15:47:55 +00:00
c0732c8d5e [Dynamo] Add complex to literal constant (#117819)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117819
Approved by: https://github.com/zou3519
2024-01-23 23:46:46 +00:00
9ebaa27922 Fix types.MethodDescriptorType related bug in dynamo (#118041)
Methods of type `types.MethodDescriptorType` were failing because the `tensor.method()` to `method(tensor)` conversion was dropping the tensor and just calling `method`.
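The failure mode can be demonstrated with a standard-library method descriptor:

```python
import types

# list.append is a C-implemented, unbound method descriptor.
append = list.append
assert isinstance(append, types.MethodDescriptorType)

lst = [1, 2]
# Correct conversion of `lst.append(3)` keeps the receiver:
append(lst, 3)
assert lst == [1, 2, 3]
# The bug amounted to calling `append(3)` without the receiver,
# which raises a TypeError for descriptors like this.
```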

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118041
Approved by: https://github.com/yanboliang
ghstack dependencies: #118000
2024-01-23 16:11:38 +00:00
fed45aee54 Replace invoking self.value if there is a user defined init, avoiding arbitrary code execution (#117818)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117818
Approved by: https://github.com/ezyang
2024-01-23 03:14:58 +00:00
80cf0ce153 Enhance torch.vmap support from inside torch.compile (#116050)
This work rewrites vmap support in torch.compile by inlining most of
the frames into the existing FX graph. It also unlocks PyTorch support
for features that were previously missing, such as keyword args.

Fixes: https://github.com/pytorch/pytorch/issues/114306

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116050
Approved by: https://github.com/zou3519
2024-01-22 17:53:45 +00:00
2f4456a73e Remove xfail on test_make_weak_keyed_dict_from_weak_keyed_dict (#117848)
Based on the logs, this test has been consistently passing, so we remove
the xfail.

Fixes https://github.com/pytorch/pytorch/issues/116765
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117848
Approved by: https://github.com/Skylion007
ghstack dependencies: #117765
2024-01-19 18:05:30 +00:00
17c5f69852 Run test_jit with PYTORCH_TEST_WITH_DYNAMO=1 in CI (#117765)
Gets rid of all the single test excludes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117765
Approved by: https://github.com/voznesenskym
2024-01-19 13:42:41 +00:00
f302a0d380 Re-enable SGD (#117434)
Re-enables the SGD optimizer now that compile times are more reasonable. [Benchmark run](https://github.com/pytorch/pytorch/actions/runs/7511073761)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117434
Approved by: https://github.com/anijain2305, https://github.com/janeyx99
2024-01-19 04:28:50 +00:00
16ebfbbf07 All tests run with markDynamoStrictTest now (#117763)
Last test to remove from the denylist was dynamo/test_logging.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117763
Approved by: https://github.com/voznesenskym
ghstack dependencies: #117729, #117747, #117754, #117761
2024-01-18 19:42:41 +00:00
5278200507 Add some better docs for dynamo_test_failures.py (#117761)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117761
Approved by: https://github.com/voznesenskym
ghstack dependencies: #117729, #117747, #117754
2024-01-18 19:42:41 +00:00
07216721cf [codemod] markDynamoStrictTest batch 23 (#117754)
[codemod] markDynamoStrictTest test_custom_ops
[codemod] markDynamoStrictTest test_python_dispatch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117754
Approved by: https://github.com/voznesenskym
ghstack dependencies: #117729, #117747
2024-01-18 19:37:04 +00:00
5aa895e53e Don't run inductor tests in Dynamo shard (#117747)
In theory we could, but these get really slow once we turn on strict
mode, so we're not going to for now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117747
Approved by: https://github.com/bdhirsh
ghstack dependencies: #117729
2024-01-18 17:43:30 +00:00
db1a6eda9e [codemod] markDynamoStrictTest batch 22 (#117729)
[codemod] markDynamoStrictTest test_autograd
[codemod] markDynamoStrictTest test_ao_sparsity
[codemod] markDynamoStrictTest test_jit
[codemod] markDynamoStrictTest test_quantization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117729
Approved by: https://github.com/bdhirsh
2024-01-18 16:59:26 +00:00
6e4e81a9ef [dynamo] Extend LazyVariableTracker to tuples (#117426)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117426
Approved by: https://github.com/lezcano, https://github.com/jansel
2024-01-18 15:51:28 +00:00
b0084be114 Revert "Re-enable SGD (#117434)"
This reverts commit e7fac72be75a9fa7a31c6fc8062364fdfc4aaa3a.

Reverted https://github.com/pytorch/pytorch/pull/117434 on behalf of https://github.com/lezcano due to breaks test_profiler.py when run with dynamo ([comment](https://github.com/pytorch/pytorch/pull/117434#issuecomment-1898311961))
2024-01-18 11:37:36 +00:00
f4df0f061c Implement set in terms of dict (#110524)
This allows us to heavily simplify the implementation of set, which was
"quite unique". Now we represent a set as a dict where all its values
are None.
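A minimal sketch of the representation (not Dynamo's actual variable classes):

```python
class DictBackedSet:
    """Set semantics on top of a dict whose values are all None."""
    def __init__(self, items=()):
        # dict enforces hashable keys and preserves insertion order,
        # which matches what membership tracking needs.
        self._d = {item: None for item in items}

    def add(self, item):
        self._d[item] = None

    def discard(self, item):
        self._d.pop(item, None)

    def __contains__(self, item):
        return item in self._d

    def __len__(self):
        return len(self._d)

    def __iter__(self):
        return iter(self._d)

s = DictBackedSet([1, 2, 2, 3])
assert len(s) == 3 and 2 in s
s.discard(2)
assert 2 not in s
```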

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110524
Approved by: https://github.com/jansel
ghstack dependencies: #112252, #117630
2024-01-18 09:36:41 +00:00
bc85eb948f Break on unsupported keys for dicts / elements for sets (#117630)
As per title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117630
Approved by: https://github.com/jansel
ghstack dependencies: #112252
2024-01-18 09:35:46 +00:00
e7fac72be7 Re-enable SGD (#117434)
Re-enables the SGD optimizer now that compile times are more reasonable. [Benchmark run](https://github.com/pytorch/pytorch/actions/runs/7511073761)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117434
Approved by: https://github.com/anijain2305, https://github.com/janeyx99
2024-01-18 06:47:15 +00:00
cb2b98ad6b [codemod] markDynamoStrictTest batch 21 (#117609)
[codemod] markDynamoStrictTest test_torch
[codemod] markDynamoStrictTest test_ops_gradients
[codemod] markDynamoStrictTest test_ops
[codemod] markDynamoStrictTest test_modules
[codemod] markDynamoStrictTest test_ops_jit
[codemod] markDynamoStrictTest test_ops_fwd_gradients
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117609
Approved by: https://github.com/bdhirsh
ghstack dependencies: #117700, #117701, #117702
2024-01-18 02:49:26 +00:00
c64fd8b89c [codemod] markDynamoStrictTest batch 20 (#117702)
[codemod] markDynamoStrictTest test_tensorexpr_pybind
[codemod] markDynamoStrictTest test_tensorexpr
[codemod] markDynamoStrictTest test_jit_llga_fuser
[codemod] markDynamoStrictTest test_jit_fuser_te

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117702
Approved by: https://github.com/bdhirsh
ghstack dependencies: #117700, #117701
2024-01-18 00:30:22 +00:00
3770311093 [codemod] markDynamoStrictTest batch 19 (#117701)
[codemod] markDynamoStrictTest export/test_verifier
[codemod] markDynamoStrictTest export/test_upgrade
[codemod] markDynamoStrictTest export/test_unflatten
[codemod] markDynamoStrictTest export/test_serialize
[codemod] markDynamoStrictTest export/test_serdes
[codemod] markDynamoStrictTest export/test_retraceability
[codemod] markDynamoStrictTest export/test_passes
[codemod] markDynamoStrictTest export/test_pass_infra
[codemod] markDynamoStrictTest export/test_functionalized_assertions
[codemod] markDynamoStrictTest export/test_export_nonstrict
[codemod] markDynamoStrictTest export/test_export
[codemod] markDynamoStrictTest export/test_experimental
[codemod] markDynamoStrictTest export/test_db

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117701
Approved by: https://github.com/bdhirsh, https://github.com/malfet
ghstack dependencies: #117700
2024-01-18 00:30:22 +00:00