mirror of
https://github.com/pytorch/pytorch.git
synced 2025-10-20 21:14:14 +08:00
port 4 dynamo test files to Intel GPU (#157779)
As part of https://github.com/pytorch/pytorch/issues/114850, we are porting test cases to Intel GPU. Six Dynamo test files were ported in PRs [#156056](https://github.com/pytorch/pytorch/pull/156056) and [#156575](https://github.com/pytorch/pytorch/pull/156575). This PR ports 4 more Dynamo test files. We enabled Intel GPU with the following methods, keeping the original code style as much as possible:

- use instantiate_device_type_tests()
- use torch.accelerator.current_accelerator() to determine the accelerator backend
- add XPU support in decorators such as @requires_gpu
- enable XPU for some test paths
- add xfailIfXPU to skip XPU tests where there is a known bug

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157779
Approved by: https://github.com/guangyey, https://github.com/jansel
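The backend-selection step described above can be sketched as follows. This is a hedged illustration, not code from the PR: `torch.accelerator.current_accelerator()` is the documented API the PR relies on, but the helper name `get_test_device` is hypothetical, and the sketch falls back to `"cpu"` when torch is unavailable or too old to have `torch.accelerator`.

```python
def get_test_device() -> str:
    """Illustrative helper (not from the PR): pick the device type string
    for device-generic tests, falling back to "cpu"."""
    try:
        import torch
        # current_accelerator() returns a torch.device (e.g. cuda or xpu)
        # or None when no accelerator backend is available.
        acc = torch.accelerator.current_accelerator()
        return acc.type if acc is not None else "cpu"
    except (ImportError, AttributeError):
        # torch missing, or an older torch without the accelerator module
        return "cpu"
```

With a helper like this, a test file can stay backend-agnostic: the same test body runs on CUDA, XPU, or CPU depending on what the host machine provides.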
This commit is contained in:
commit 8088958793 (parent e1a20988f3), committed by PyTorch MergeBot
```diff
@@ -1957,6 +1957,14 @@ def runOnRocmArch(arch: tuple[str, ...]):
+def xfailIfS390X(func):
+    return unittest.expectedFailure(func) if IS_S390X else func
+
+
+def xfailIf(condition):
+    def wrapper(func):
+        if condition:
+            return unittest.expectedFailure(func)
+        else:
+            return func
+
+    return wrapper
+
+
 def skipIfXpu(func=None, *, msg="test doesn't currently work on the XPU stack"):
     def dec_fn(fn):
         reason = f"skipIfXpu: {msg}"
```