[TD] Enable TD on distributed CPU (#150028)

Enable TD (target determination) on the distributed CPU config. I think the only reason it's not enabled is that I forgot to enable it.

Also get rid of some conditions that are now no-ops:
* ASAN uses the default shard
* the nogpu configs were moved to periodic
* there is no Windows CUDA testing anymore
The only config on pull and trunk that doesn't use TD is dynamo_wrapped, but I think it's fast enough to be OK for now; we can take another look after this lands.
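
For clarity, the effective default after this change (mirroring the diff below) boils down to the following sketch. The names mirror those in run_test.py; the values here are stand-ins so the snippet runs on its own:

```python
# Stand-in values; in run_test.py these come from
# torch.testing._internal.common_utils and the CI environment.
IS_CI = True
TEST_WITH_CROSSREF = False
TEST_CONFIG = "distributed"  # the newly TD-enabled distributed CPU config
pr_number = 12345            # stand-in for what get_pr_number() returns on a PR

# TD now defaults on in CI for the crossref, distributed, and default
# configs, but only when running on a PR.
enable_td = (
    IS_CI
    and (
        TEST_WITH_CROSSREF
        or TEST_CONFIG == "distributed"
        or TEST_CONFIG == "default"
    )
    and pr_number is not None
)
assert enable_td
```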
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150028
Approved by: https://github.com/ZainRizvi
Author: Catherine Lee
Date: 2025-03-28 17:19:11 +00:00
Committed by: PyTorch MergeBot
Parent: cf7447ae99
Commit: 85079e4380


@@ -30,7 +30,6 @@ from torch.testing._internal.common_utils import (
     get_report_path,
     IS_CI,
     IS_MACOS,
-    IS_WINDOWS,
     retry_shell,
     set_cwd,
     shell,
@@ -1377,11 +1376,7 @@ def parse_args():
         default=IS_CI
         and (
             TEST_WITH_CROSSREF
-            or TEST_WITH_ASAN
-            or (TEST_CONFIG == "distributed" and TEST_CUDA)
-            or (IS_WINDOWS and not TEST_CUDA)
-            or TEST_CONFIG == "nogpu_AVX512"
-            or TEST_CONFIG == "nogpu_NO_AVX2"
+            or TEST_CONFIG == "distributed"
             or TEST_CONFIG == "default"
         )
         and get_pr_number() is not None
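
For context, the expression in the hunk above is the `default` of the TD flag in `parse_args()`. Below is a minimal, self-contained sketch of that wiring; the flag name `--enable-td` and the help text are assumptions rather than quotes from the file, and the values are stand-ins:

```python
import argparse

# Stand-ins for values that run_test.py imports or derives from the CI env.
IS_CI = True
TEST_WITH_CROSSREF = False
TEST_CONFIG = "distributed"

def get_pr_number():
    # run_test.py derives this from the CI environment; a fixed value
    # keeps the sketch runnable. None would mean "not on a PR".
    return 12345

parser = argparse.ArgumentParser()
parser.add_argument(
    "--enable-td",  # assumed flag name
    action="store_true",
    default=IS_CI
    and (
        TEST_WITH_CROSSREF
        or TEST_CONFIG == "distributed"
        or TEST_CONFIG == "default"
    )
    and get_pr_number() is not None,
    help="Prune/reorder tests using target determination (assumed help text)",
)
print(parser.parse_args([]).enable_td)  # True under these stand-in values
```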