Don't run test/autograd/test_fallback.py in parallel (#106866)

Fixes https://github.com/pytorch/pytorch/issues/106754

This PR:
- moves test/autograd/test_fallback.py to test_autograd_fallback.py and
removes it from test_autograd.py (necessary for the next step)
- adds test_autograd_fallback.py to the parallel test blocklist.
- aside from some formatting changes that lintrunner insisted on, this
is a pure file move.

The problem is that we set a global option (the autograd fallback mode)
during these tests, which may cause the tests to interfere with each
other when they run concurrently.
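
For context, the tests toggle the mode through a save-and-restore helper
along these lines (a sketch; `_get_autograd_fallback_mode` and
`_set_autograd_fallback_mode` are private bindings, so treat the exact
names as an assumption about internals):

```python
import contextlib
import torch

@contextlib.contextmanager
def autograd_fallback_mode(mode):
    # Save the process-wide fallback mode, set the requested one for the
    # duration of the block, then restore the previous value.
    prev = torch._C._get_autograd_fallback_mode()
    try:
        torch._C._set_autograd_fallback_mode(mode)
        yield
    finally:
        torch._C._set_autograd_fallback_mode(prev)
```

Even with the restore in a `finally`, the flag is process-global, so two
tests toggling it concurrently can observe each other's settings; hence
the file must run serially.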

Test Plan:
- python test/run_test.py -i test_autograd_fallback

NOTE to diff train oncall:
- You'll also need to modify the test/autograd/test_fallback.py TARGET in
caffe2/test/TARGETS since we renamed the file.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106866
Approved by: https://github.com/soulitzer
Author: Richard Zou
Date: 2023-08-09 07:59:37 -07:00
Committed by: PyTorch MergeBot
Parent: 0b57581dec
Commit: b9ad7bc533
3 changed files with 47 additions and 20 deletions

@@ -318,6 +318,8 @@ RUN_PARALLEL_BLOCKLIST = [
"test_cuda_primary_ctx",
"test_cuda_trace",
"test_cuda_nvml_based_avail",
# temporarily sets a global config
"test_autograd_fallback",
] + FSDP_TEST
# Test files that should always be run serially with other test files,
@@ -373,6 +375,7 @@ ONNX_SERIAL_LIST = [
# A subset of our TEST list that validates PyTorch's ops, modules, and autograd function as expected
CORE_TEST_LIST = [
"test_autograd",
"test_autograd_fallback",
"test_modules",
"test_nn",
"test_ops",