Fix incorrect variable in autograd docs (#70884)
Summary: Fixes https://github.com/pytorch/pytorch/issues/68362.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70884
Reviewed By: mruberry
Differential Revision: D33463331
Pulled By: ngimel
fbshipit-source-id: 834ba9c450972710e0424cc92af222551f0b4a4a
committed by Facebook GitHub Bot
parent 22f5280433
commit 23f902f7e4
@@ -78,7 +78,7 @@ But that may not always be the case. For instance:
 Under the hood, to prevent reference cycles, PyTorch has *packed* the tensor
 upon saving and *unpacked* it into a different tensor for reading. Here, the
 tensor you get from accessing ``y.grad_fn._saved_result`` is a different tensor
-object than ``x`` (but they still share the same storage).
+object than ``y`` (but they still share the same storage).

 Whether a tensor will be packed into a different tensor object depends on
 whether it is an output of its own `grad_fn`, which is an implementation detail
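The corrected sentence can be checked directly. A minimal sketch, assuming an op like ``exp`` whose backward saves its own result (``_saved_result`` is an internal attribute shown in the docs snippet above, not public API):

```python
import torch

# Build a graph where the grad_fn saves the op's output (exp saves its result).
x = torch.randn(5, requires_grad=True)
y = x.exp()

# PyTorch packed the tensor on save and unpacks it on access, so the
# unpacked tensor is a *different* Python object than y ...
saved = y.grad_fn._saved_result
print(saved is y)                        # False: different tensor object
# ... but both views point at the same underlying storage.
print(saved.data_ptr() == y.data_ptr())  # True: shared storage
```

This also illustrates why the docs say ``y``, not ``x``: the backward of ``exp`` saves its output ``y``, and ``x`` never enters the comparison.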