Refactor Device to not depend on Backend. (#10478)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10478

- Removed the Backend constructor from Device, and fixed all
  use-sites to either use DeviceType::CPU instead of kCPU, or
  call the new function backendToDeviceType to perform
  the conversion.
- New method device_type() on Type; it gives you the
  underlying device type, e.g., CPU for SparseCPU.
- Added backward compatibility for kCPU/kCUDA uses by
  introducing a new special type which is implicitly
  convertible to both DeviceType and Backend.  As long as
  you don't define a function that's overloaded on both
  DeviceType and Backend (but not on BackendOrDeviceType),
  the implicit conversions will ensure that uses
  of at::Device(at::kCPU) keep working. We fixed use-sites in
  the library, but deliberately did NOT fix sites in the test
  code, so that we can exercise this BC path.

Reviewed By: Yangqing

Differential Revision: D9301861

fbshipit-source-id: 9a9d88620500715c7b37e655b4fd761f6dd72716
This commit is contained in:
Edward Yang
2018-08-18 17:25:26 -07:00
committed by Facebook Github Bot
parent f1420adfe3
commit 6bdbad93b9
79 changed files with 290 additions and 202 deletions

@@ -255,7 +255,7 @@ static PyObject * THPVariable_cuda(PyObject* self, PyObject* args, PyObject* kwa
 auto& self_ = reinterpret_cast<THPVariable*>(self)->cdata;
 ParsedArgs<2> parsed_args;
 auto r = parser.parse(args, kwargs, parsed_args);
-auto backend = self_.is_sparse() ? at::kSparseCUDA : at::kCUDA;
+auto backend = self_.is_sparse() ? at::Backend::SparseCUDA : at::Backend::CUDA;
 auto& type = self_.type().toBackend(backend);
 auto device_obj = r.device(0);
 if (!r.isNone(0) && device_obj.is_cpu()) {