Mirror of https://github.com/pytorch/pytorch.git, synced 2025-11-03 15:35:04 +08:00
Refactor Device to not depend on Backend. (#10478)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/10478

- Removed the Backend constructor from Device, and fixed all use sites to use DeviceType::CPU instead of kCPU, or to use a new function backendToDeviceType to perform the conversion.
- Added a new method device_type() on Type; it gives you the underlying device type, e.g., CPU for SparseCPU.
- Added backward compatibility for kCPU/kCUDA uses by introducing a new special type which is implicitly convertible to both DeviceType and Backend. As long as you don't define a function that's overloaded on both DeviceType and Backend (but not on BackendOrDeviceType), the implicit conversions will ensure that uses of at::Device(at::kCPU) keep working.

We fixed use sites in the library, but did NOT fix sites in the test code, so that we can exercise this BC code.

Reviewed By: Yangqing
Differential Revision: D9301861
fbshipit-source-id: 9a9d88620500715c7b37e655b4fd761f6dd72716
This commit is contained in:
Committed by: Facebook Github Bot
Parent: f1420adfe3
Commit: 6bdbad93b9
@@ -255,7 +255,7 @@ static PyObject * THPVariable_cuda(PyObject* self, PyObject* args, PyObject* kwa
   auto& self_ = reinterpret_cast<THPVariable*>(self)->cdata;
   ParsedArgs<2> parsed_args;
   auto r = parser.parse(args, kwargs, parsed_args);
-  auto backend = self_.is_sparse() ? at::kSparseCUDA : at::kCUDA;
+  auto backend = self_.is_sparse() ? at::Backend::SparseCUDA : at::Backend::CUDA;
   auto& type = self_.type().toBackend(backend);
   auto device_obj = r.device(0);
   if (!r.isNone(0) && device_obj.is_cpu()) {