# torch.accelerator
```{eval-rst}
.. automodule:: torch.accelerator
```

```{eval-rst}
.. currentmodule:: torch.accelerator
```

```{eval-rst}
.. autosummary::
    :toctree: generated
    :nosignatures:

    device_count
    is_available
    current_accelerator
    set_device_index
    set_device_idx
    current_device_index
    current_device_idx
    set_stream
    current_stream
    synchronize
    device_index
```
```{eval-rst}
.. automodule:: torch.accelerator.memory
```

```{eval-rst}
.. currentmodule:: torch.accelerator.memory
```

## Memory management

```{eval-rst}
.. autosummary::
    :toctree: generated
    :nosignatures:

    empty_cache
    max_memory_allocated
    max_memory_reserved
    memory_allocated
    memory_reserved
    memory_stats
    reset_accumulated_memory_stats
    reset_peak_memory_stats
```