Mirror of https://github.com/pytorch/pytorch.git, synced 2025-10-20 21:14:14 +08:00
port distributed pipeline test files for Intel GPU (#159033)
In this PR we port all distributed pipeline test files, enabling Intel GPU while keeping the original code style as much as possible:
1. use instantiate_device_type_tests()
2. use torch.accelerator.current_accelerator() to determine the accelerator backend
3. replace requires_nccl() with requires_accelerator_dist_backend()
4. use get_default_backend_for_device() to get the backend
5. enable XPU for some test paths
6. add TEST_MULTIACCELERATOR to common_utils for all backends

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159033
Approved by: https://github.com/guangyey, https://github.com/d4l3k
Co-authored-by: Daisy Deng <daisy.deng@intel.com>
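As a rough illustration of item 4 above, the backend-for-device lookup can be sketched as a plain mapping. This is a hypothetical re-implementation for illustration only; the real get_default_backend_for_device() helper lives inside PyTorch's distributed module, and the exact mapping here is assumed from the commit description rather than copied from the source.

```python
def get_default_backend_for_device(device: str) -> str:
    """Sketch of the device-to-backend mapping described in the PR.

    Assumed mapping (not the actual PyTorch source):
    CUDA GPUs use nccl, Intel XPUs use xccl, and CPU falls back to gloo.
    """
    mapping = {"cuda": "nccl", "xpu": "xccl", "cpu": "gloo"}
    # Strip any device index, e.g. "cuda:0" -> "cuda".
    return mapping.get(device.split(":")[0], "gloo")
```

A helper like this lets the ported tests ask for the right distributed backend without hard-coding nccl, which is the point of replacing requires_nccl() in item 3.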
Committed by: PyTorch MergeBot
Parent: c8205cb354
Commit: 76a0609b6b
@@ -1422,6 +1422,7 @@ MACOS_VERSION = float('.'.join(platform.mac_ver()[0].split('.')[:2]) or -1)
 TEST_XPU = torch.xpu.is_available()
 TEST_HPU = True if (hasattr(torch, "hpu") and torch.hpu.is_available()) else False
 TEST_CUDA = torch.cuda.is_available()
+TEST_MULTIACCELERATOR = torch.accelerator.device_count() >= 2
 custom_device_mod = getattr(torch, torch._C._get_privateuse1_backend_name(), None)
 TEST_PRIVATEUSE1 = is_privateuse1_backend_available()
 TEST_PRIVATEUSE1_DEVICE_TYPE = torch._C._get_privateuse1_backend_name()
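To show how the TEST_MULTIACCELERATOR flag added in the diff might be consumed, here is a minimal sketch of a test gated on having two accelerators. The device_count() function below is a stand-in stub for torch.accelerator.device_count() so the example runs without any accelerator present; the class and test names are hypothetical.

```python
import unittest


def device_count() -> int:
    # Stand-in for torch.accelerator.device_count(); returns 1 to simulate
    # a single-accelerator (or CPU-only) machine.
    return 1


# Mirrors the flag added in common_utils: true only with 2+ accelerators.
TEST_MULTIACCELERATOR = device_count() >= 2


@unittest.skipUnless(TEST_MULTIACCELERATOR, "requires at least 2 accelerators")
class PipelineSmokeTest(unittest.TestCase):
    def test_runs(self):
        # Placeholder body; a real pipeline test would schedule stages
        # across the two devices.
        self.assertTrue(True)
```

Because the flag lives in common_utils, every backend (CUDA, XPU, HPU, ...) can share one skip condition instead of backend-specific multi-GPU checks.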