Support is_mtia attribute. (#108307) (#108310)

Summary:

FBGEMM uses `self.iter.is_cuda` to check whether a tensor lives on a CUDA device. This diff enables the analogous attribute, `self.iter.is_mtia`, for tensors with the MTIA device key.
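
As a rough illustration (a hypothetical `pick_kernel` helper, not FBGEMM's actual code), the new attribute lets Python-side code branch on the MTIA device the same way it already branches on CUDA:

    import torch

    def pick_kernel(t: torch.Tensor) -> str:
        # Device dispatch in the style the summary describes: t.is_cuda has
        # long been available; this change exposes t.is_mtia for MTIA tensors.
        if t.is_cuda:
            return "cuda_kernel"
        if getattr(t, "is_mtia", False):  # falls back to False on builds without MTIA support
            return "mtia_kernel"
        return "cpu_kernel"

    print(pick_kernel(torch.zeros(4)))  # prints "cpu_kernel" for a CPU tensor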

Test Plan: See diff D48693225

Reviewed By: jackm321

Differential Revision: D48809191

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108310
Approved by: https://github.com/albanD
Author: Jun Luo
Date: 2023-09-01 01:25:36 +00:00
Committed by: PyTorch MergeBot
Commit: 8289ad8e5e (parent d569e506ab)
6 changed files with 23 additions and 0 deletions


@@ -1190,6 +1190,14 @@ static const std::vector<OperatorGeneratorArgs> opGenArgs{
           push(stack, a.is_xla());
         },
         aliasAnalysisFromSchema()),
+    OperatorGeneratorArgs(
+        TORCH_SELECTIVE_SCHEMA("prim::is_mtia(Tensor a) -> bool"),
+        [](Stack& stack) {
+          at::Tensor a;
+          pop(stack, a);
+          push(stack, a.is_mtia());
+        },
+        aliasAnalysisFromSchema()),
     OperatorGeneratorArgs(
         TORCH_SELECTIVE_SCHEMA("prim::is_xpu(Tensor a) -> bool"),
         [](Stack& stack) {