Mirror of https://github.com/pytorch/pytorch.git (synced 2025-10-20 12:54:11 +08:00)
Include other accelerators in capturable docstr for optimizers (#149770)
Fixes #149722

@ILCSFNO is this better?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149770
Approved by: https://github.com/albanD
Committed by: PyTorch MergeBot
Parent: bd09d87fdb
Commit: dccc41581a
@@ -270,9 +270,10 @@ _fused_doc = r"""fused (bool, optional): whether the fused implementation is use
 implementation, pass False for either foreach or fused. """

 _capturable_doc = r"""capturable (bool, optional): whether this instance is safe to
-capture in a CUDA graph. Passing True can impair ungraphed performance,
-so if you don't intend to graph capture this instance, leave it False
-(default: False)"""
+capture in a graph, whether for CUDA graphs or for torch.compile support.
+Tensors are only capturable when on supported :ref:`accelerators<accelerators>`.
+Passing True can impair ungraphed performance, so if you don't intend to graph
+capture this instance, leave it False (default: False)"""

 _differentiable_doc = r"""differentiable (bool, optional): whether autograd should
 occur through the optimizer step in training. Otherwise, the step()
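For context on the flag this docstring describes, here is a minimal sketch of using an optimizer with capturable=True inside CUDA graph capture. It is illustrative only and not part of this commit: the model, shapes, learning rate, and warmup loop are assumptions, while torch.optim.Adam's capturable argument, torch.cuda.CUDAGraph, and torch.cuda.graph are existing PyTorch APIs.

import torch

# Hypothetical toy setup; any module/optimizer that supports `capturable` works.
model = torch.nn.Linear(64, 64, device="cuda")
loss_fn = torch.nn.MSELoss()
# capturable=True keeps the optimizer's state (e.g. the step count) on-device,
# which is what makes the step safe to record into a graph.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, capturable=True)

# Static input/target buffers: graph replay reuses fixed memory addresses.
x = torch.randn(32, 64, device="cuda")
y = torch.randn(32, 64, device="cuda")

# Warm up on a side stream before capture, as the CUDA graphs docs recommend.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        opt.zero_grad(set_to_none=True)
        loss_fn(model(x), y).backward()
        opt.step()
torch.cuda.current_stream().wait_stream(s)

# Capture one full training step (forward, backward, optimizer step).
g = torch.cuda.CUDAGraph()
opt.zero_grad(set_to_none=True)
with torch.cuda.graph(g):
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# Replay the captured step on new data by refilling the static buffers.
for _ in range(10):
    x.copy_(torch.randn(32, 64, device="cuda"))
    y.copy_(torch.randn(32, 64, device="cuda"))
    g.replay()

Each replay re-runs the captured kernels against the same memory, which is why new inputs are written in place with copy_() rather than by rebinding the tensors.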