Fix unexpected inference_mode interaction with torch.autograd.functional.jacobian (#130307)

Fixes #128264

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130307
Approved by: https://github.com/soulitzer
Tianyi Tao
2024-08-25 22:14:02 +00:00
committed by PyTorch MergeBot
parent dc1959e6a7
commit 7af38eb98b


@@ -240,9 +240,9 @@ This better runtime comes with a drawback: tensors created in inference mode
 will not be able to be used in computations to be recorded by autograd after
 exiting inference mode.
-Enable inference mode when you are performing computations that don't need
-to be recorded in the backward graph, AND you don't plan on using the tensors
-created in inference mode in any computation that is to be recorded by autograd later.
+Enable inference mode when you are performing computations that do not have
+interactions with autograd, AND you don't plan on using the tensors created
+in inference mode in any computation that is to be recorded by autograd later.
 It is recommended that you try out inference mode in the parts of your code
 that do not require autograd tracking (e.g., data processing and model evaluation).
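The constraint described in the doc text can be illustrated with a short sketch using the public `torch.inference_mode` context manager (the tensor names here are illustrative, not from the PR):

```python
import torch

# Tensors created under inference mode are "inference tensors":
# they skip autograd metadata and version-counter bookkeeping.
with torch.inference_mode():
    x = torch.ones(3)      # inference tensor
    y = x * 2              # fine: nothing is recorded for autograd

print(x.is_inference())    # True

# After exiting inference mode, using an inference tensor in a
# computation that autograd needs to record raises a RuntimeError.
w = torch.ones(3, requires_grad=True)
try:
    loss = (y * w).sum()   # autograd would have to save y for backward
    loss.backward()
except RuntimeError as err:
    print("RuntimeError:", err)
```

This is why the revised wording stresses that inference-mode tensors must not feed into any later autograd-recorded computation, such as a `torch.autograd.functional.jacobian` call.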