Allow PrivateUse1 backends to not have Storage (#86557)
To unblock the DirectML backend, this change would be needed for 1.13 as well.

The DirectML backend creates tensors using the open registration pattern documented here:
https://pytorch.org/tutorials/advanced/extend_dispatcher.html
(registration example: https://github.com/bdhirsh/pytorch_open_registration_example)

However, DirectML tensors are opaque and do not have Storage. The DirectML TensorImpl derives from OpaqueTensorImpl, which does not have a storage. Because of this, various places in the code that expect storage to be present fail. We had made various changes in-tree to accommodate this:

a. def __deepcopy__(self, memo):
   https://github.com/pytorch/pytorch/blob/b5acba88959698d35cb548c78dd3fb151f85f28b/torch/_tensor.py#L119
   or self.device.type in ["lazy", "xla", "mps", "ort", "meta", "hpu", 'dml']
b. def _reduce_ex_internal(self, proto):
   https://github.com/pytorch/pytorch/blob/b5acba88959698d35cb548c78dd3fb151f85f28b/torch/_tensor.py#L275
   if self.device.type in ["xla", "ort", "hpu", "dml"]:
c. TensorIteratorBase::build has an unsupported list for tensors without storage.
   https://github.com/pytorch/pytorch/blob/b5acba88959698d35cb548c78dd3fb151f85f28b/aten/src/ATen/TensorIterator.cpp#L1497

Using the PrivateUse1 backend, similar exemptions need to be made to relax the requirements on Storage so that DirectML backend tensors can work.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86557
Approved by: https://github.com/bdhirsh, https://github.com/martinb35
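As a rough sketch of what these exemptions boil down to, the predicate below (a hypothetical helper written for this description, not PyTorch source) shows the hard-coded 'dml' entries from (a) and (b) replaced by a storage-based check on the generic PrivateUse1 device type, which reports itself as "privateuseone":

    # Hypothetical helper illustrating the generalized exemption: instead of
    # hard-coding 'dml', any PrivateUse1 backend whose tensors lack Storage
    # is exempted from the storage-dependent code paths.
    def needs_storage_free_path(device_type, storage, is_sparse=False):
        if is_sparse:
            return True
        if device_type in ["lazy", "xla", "mps", "ort", "meta", "hpu"]:
            return True
        # The new case: an opaque (storage-less) PrivateUse1 tensor.
        return storage is None and device_type == "privateuseone"

    assert needs_storage_free_path("privateuseone", None)          # DirectML-style opaque tensor
    assert not needs_storage_free_path("privateuseone", object())  # PrivateUse1 backend with Storage
    assert not needs_storage_free_path("cpu", object())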
commit f24d174fff (parent 61a5898675)
committed by PyTorch MergeBot
torch/_tensor.py
@@ -116,6 +116,7 @@ class Tensor(torch._C._TensorBase):
             if (
                 self.is_sparse
                 or self.device.type in ["lazy", "xla", "mps", "ort", "meta", "hpu"]
+                or (self.storage is None and self.device.type == "privateuseone")
                 or (type(self) is not Tensor and self.data_ptr() == 0)
             ):
                 new_tensor = self.clone()
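When the new guard fires, __deepcopy__ takes the clone() branch shown above, since there is no Storage to share or copy. A minimal sketch of that fallback, run on CPU as an analogy (the helper name is invented; a real "privateuseone" device requires an out-of-tree backend such as DirectML):

    import torch

    # Illustrative stand-in for the clone() branch: duplicate a tensor without
    # touching its Storage, as an opaque (storage-less) impl requires.
    def deepcopy_via_clone(t):
        new_tensor = t.clone().detach()            # device-side copy of the data
        new_tensor.requires_grad_(t.requires_grad)
        return new_tensor

    a = torch.arange(4.0, requires_grad=True)
    b = deepcopy_via_clone(a)
    assert torch.equal(a, b) and b.data_ptr() != a.data_ptr()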
torch/_tensor.py
@@ -271,7 +272,9 @@ class Tensor(torch._C._TensorBase):
         # 2. Python list is not a good fit due to performance reason.
         # `tolist()` converts every single element in the tensor into python objects
         # and serialize them one by one.
-        if self.device.type in ["xla", "ort", "hpu"]:
+        if self.device.type in ["xla", "ort", "hpu"] or (
+            self.storage is None and self.device.type == "privateuseone"
+        ):
             # Convert BFloat16 tesors to Float32 before conversion to numpy, as numpy doesn't
             # support BFloat16. The rebuild tensor from numpy takes in the original self.dtype,
             # this would reconstruct the BFloat16 tensor from numpy.
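This hunk routes storage-less PrivateUse1 tensors through the same numpy-based serialization fallback already used for xla/ort/hpu. A minimal sketch of that round trip (helper names invented; demonstrated on CPU, where the dtype handling is identical):

    import torch

    # numpy has no bfloat16 dtype, so serialize via float32 and record the
    # original dtype; the rebuild casts back, losslessly for bfloat16.
    def to_numpy_for_pickle(t):
        arr_t = t.cpu().float() if t.dtype == torch.bfloat16 else t.cpu()
        return arr_t.numpy(), t.dtype

    def rebuild_from_numpy(arr, dtype):
        return torch.from_numpy(arr).to(dtype)

    x = torch.randn(3, dtype=torch.bfloat16)
    arr, dtype = to_numpy_for_pickle(x)
    y = rebuild_from_numpy(arr, dtype)
    assert y.dtype == torch.bfloat16 and torch.equal(x, y)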