Include other accelerators in capturable docstr for optimizers (#149770)

Fixes #149722

@ILCSFNO is this better?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149770
Approved by: https://github.com/albanD
This commit is contained in:
Jane Xu
2025-04-24 20:38:37 +00:00
committed by PyTorch MergeBot
parent bd09d87fdb
commit dccc41581a

@@ -270,9 +270,10 @@ _fused_doc = r"""fused (bool, optional): whether the fused implementation is used.
     implementation, pass False for either foreach or fused. """
 _capturable_doc = r"""capturable (bool, optional): whether this instance is safe to
-    capture in a CUDA graph. Passing True can impair ungraphed performance,
-    so if you don't intend to graph capture this instance, leave it False
-    (default: False)"""
+    capture in a graph, whether for CUDA graphs or for torch.compile support.
+    Tensors are only capturable when on supported :ref:`accelerators<accelerators>`.
+    Passing True can impair ungraphed performance, so if you don't intend to graph
+    capture this instance, leave it False (default: False)"""
 _differentiable_doc = r"""differentiable (bool, optional): whether autograd should
     occur through the optimizer step in training. Otherwise, the step()