Update docs for torch.device (#156686)
# Motivation

Update the doc to make `torch.device`'s constructor officially support the following methods:

- A device string, which is a string representation of the device type and optionally the device ordinal.
- A device type and a device ordinal.
- A device ordinal, which is treated as the current accelerator type.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/156686
Approved by: https://github.com/albanD
commit 9b498d3bb2 (parent 3608737347), committed by PyTorch MergeBot
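As a quick illustration of the three construction methods listed in the description, here is a minimal sketch, assuming a recent PyTorch build where the `torch.accelerator` API exists and a CUDA device is present; the device type reported for the bare-ordinal form depends on which accelerator is current:

```python
import torch

# 1. A device string: the device type plus an optional device ordinal.
print(torch.device("cuda:0"))   # device(type='cuda', index=0)
print(torch.device("cpu"))      # device(type='cpu')

# 2. A device type and a device ordinal passed as separate arguments.
print(torch.device("cuda", 0))  # device(type='cuda', index=0)

# 3. A bare device ordinal, resolved against the current accelerator type.
#    Guarded, because this form raises a RuntimeError when no accelerator
#    is detected (torch.accelerator is assumed available, as in recent releases).
if torch.accelerator.is_available():
    print(torch.device(0))      # e.g. device(type='cuda', index=0)
```

The documentation diff below shows the corresponding wording changes.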
@@ -149,9 +149,13 @@ the result of :func:`torch.cuda.current_device()`.
 
 A :class:`torch.Tensor`'s device can be accessed via the :attr:`Tensor.device` property.
 
-A :class:`torch.device` can be constructed via a string or via a string and device ordinal
+A :class:`torch.device` can be constructed using:
 
-Via a string:
+* A device string, which is a string representation of the device type and optionally the device ordinal.
+* A device type and a device ordinal.
+* A device ordinal, where the current :ref:`accelerator<accelerators>` type will be used.
+
+Via a device string:
 ::
 
     >>> torch.device('cuda:0')
@@ -163,10 +167,10 @@ Via a string:
     >>> torch.device('mps')
     device(type='mps')
 
-    >>> torch.device('cuda') # current cuda device
+    >>> torch.device('cuda') # implicit index is the "current device index"
     device(type='cuda')
 
-Via a string and device ordinal:
+Via a device type and a device ordinal:
 
 ::
 
@@ -179,6 +183,24 @@ Via a string and device ordinal:
     >>> torch.device('cpu', 0)
     device(type='cpu', index=0)
 
+Via a device ordinal:
+
+.. note::
+    This method will raise a RuntimeError if no accelerator is currently detected.
+
+::
+
+    >>> torch.device(0) # the current accelerator is cuda
+    device(type='cuda', index=0)
+
+    >>> torch.device(1) # the current accelerator is xpu
+    device(type='xpu', index=1)
+
+    >>> torch.device(0) # no current accelerator detected
+    Traceback (most recent call last):
+      File "<stdin>", line 1, in <module>
+    RuntimeError: Cannot access accelerator device when none is available.
+
 The device object can also be used as a context manager to change the default
 device tensors are allocated on:
@@ -211,22 +233,13 @@ non-None device argument. To globally change the default device, see also
     >>> # You can substitute the torch.device with a string
     >>> torch.randn((2,3), device='cuda:1')
 
-.. note::
-    For legacy reasons, a device can be constructed via a single device ordinal, which is treated
-    as the current :ref:`accelerator<accelerators>` type.
-    This matches :meth:`Tensor.get_device`, which returns an ordinal for device
-    tensors and is not supported for cpu tensors.
-
-    >>> torch.device(1)
-    device(type='cuda', index=1)
-
 .. note::
     Methods which take a device will generally accept a (properly formatted) string
-    or (legacy) integer device ordinal, i.e. the following are all equivalent:
+    or an integer device ordinal, i.e. the following are all equivalent:
 
     >>> torch.randn((2,3), device=torch.device('cuda:1'))
    >>> torch.randn((2,3), device='cuda:1')
-    >>> torch.randn((2,3), device=1) # legacy
+    >>> torch.randn((2,3), device=1) # equivalent to 'cuda:1' if the current accelerator is cuda
 
 .. note::
     Tensors are never moved automatically between devices and require an explicit call from the user. Scalar Tensors (with tensor.dim()==0) are the only exception to this rule and they are automatically transferred from CPU to GPU when needed as this operation can be done "for free".
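The unchanged context around these hunks also covers using the device object as a context manager and passing a device to factory functions; a minimal sketch of both, assuming a machine where CUDA is the current accelerator and at least two GPUs are visible (so that `device=1` resolves to `'cuda:1'`):

```python
import torch

# A torch.device can act as a context manager: factory functions inside the
# block allocate on that device by default.
with torch.device("cuda:1"):
    a = torch.randn(2, 3)  # allocated on cuda:1

# Per the updated note, these device arguments are equivalent here.
b = torch.randn((2, 3), device=torch.device("cuda:1"))
c = torch.randn((2, 3), device="cuda:1")
d = torch.randn((2, 3), device=1)  # resolves to cuda:1 when CUDA is the current accelerator

print(a.device, b.device, c.device, d.device)  # all cuda:1
```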