Added is_xla (#103100)

This change adds an `is_xla` property that mirrors the existing `is_cuda` and `is_cpu` properties. It is useful in situations such as https://github.com/pytorch/pytorch/pull/102858.

```
>>> import torch
>>> import torch_xla.core.xla_model as xm
>>> x = torch.tensor([1], device=xm.xla_device())
>>> x.is_xla
True
>>> x.is_cpu
False
>>> x = torch.tensor([1])
>>> x.is_cpu
True
>>> x.is_xla
False
```
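
As an illustration of the kind of device check this simplifies, here is a minimal sketch; the helper name `describe_device` is hypothetical and not part of this change:

```
import torch

def describe_device(x: torch.Tensor) -> str:
    # Equivalent to inspecting x.device.type directly, but written in the
    # same style as the existing is_cuda / is_cpu checks.
    if x.is_xla:
        return "xla"
    if x.is_cuda:
        return "cuda"
    if x.is_cpu:
        return "cpu"
    return x.device.type
```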

Attn: @albanD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103100
Approved by: https://github.com/albanD
Author: Muralidhar Andoorveedu
Date: 2023-06-22 23:31:00 +00:00
Committed by: PyTorch MergeBot
Commit: 4e204ff87b (parent: 49dc26435f)
5 changed files with 28 additions and 0 deletions

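The hunk below registers a `prim::is_xla` operator with the TorchScript runtime, so the property is also visible from scripted code. A minimal sketch of that usage (the function name is hypothetical, and this assumes a PyTorch build that already contains this change):

```
import torch

@torch.jit.script
def runs_on_xla(x: torch.Tensor) -> bool:
    # Inside TorchScript, x.is_xla dispatches to the prim::is_xla operator
    # registered below.
    return x.is_xla

print(runs_on_xla(torch.tensor([1])))  # False for a CPU tensor
```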

@@ -1149,6 +1149,14 @@ static const std::vector<OperatorGeneratorArgs> opGenArgs{
           push(stack, a.is_cpu());
         },
         aliasAnalysisFromSchema()),
+    OperatorGeneratorArgs(
+        TORCH_SELECTIVE_SCHEMA("prim::is_xla(Tensor a) -> bool"),
+        [](Stack& stack) {
+          at::Tensor a;
+          pop(stack, a);
+          push(stack, a.is_xla());
+        },
+        aliasAnalysisFromSchema()),
     OperatorGeneratorArgs(
         TORCH_SELECTIVE_SCHEMA("prim::is_xpu(Tensor a) -> bool"),
         [](Stack& stack) {