Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26477
- At inference time we need to turn off autograd mode and turn on no-variable mode, since we strip out these modules for the inference-only mobile build.
- Both flags are stored in thread-local variables, so we cannot simply set them to false globally.
- Add the "autograd/grad_mode.h" header to the all-in-one header 'torch/script.h' to reduce friction for iOS engineers who might need to do this manually in their projects.

P.S. I tried to hide AutoNonVariableTypeMode in codegen but figured it's not very trivial (e.g. there are manually written parts not covered by codegen). Might try it again later.

Test Plan:
- Integrate with the Android demo app to confirm inference runs correctly.

Differential Revision: D17484259
Pulled By: ljk53
fbshipit-source-id: 06887c8b527124aa0cc1530e8e14bb2361acef31
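Since this change exposes autograd/grad_mode.h through torch/script.h, a mobile client can set both thread-local flags with RAII guards around its inference call. The following is a minimal sketch, not code from this diff: the model path "model.pt", the input shape, and the main() wiring are placeholders, and it assumes torch::autograd::AutoGradMode and at::AutoNonVariableTypeMode are reachable through torch/script.h's transitive includes at this point in the codebase.

#include <torch/script.h>  // now also pulls in torch/csrc/autograd/grad_mode.h

#include <vector>

int main() {
  // Both flags are thread-local, so the guards must live on the thread
  // that runs inference; setting them once globally is not enough.
  torch::autograd::AutoGradMode guard(false);      // turn off autograd mode
  at::AutoNonVariableTypeMode non_var_mode(true);  // turn on no-variable mode

  // Placeholder model and input; the demo apps load their own models.
  torch::jit::script::Module module = torch::jit::load("model.pt");
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));
  at::Tensor output = module.forward(inputs).toTensor();
  return 0;
}

Both guards only need to be in scope before forward() runs; their destructors restore the previous thread-local state when they go out of scope.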
11 lines
305 B
C
#pragma once

#include <torch/csrc/api/include/torch/types.h>
#include <torch/csrc/autograd/generated/variable_factories.h>
#include <torch/csrc/autograd/grad_mode.h>
#include <torch/csrc/jit/custom_operator.h>
#include <torch/csrc/jit/import.h>
#include <torch/csrc/jit/pickle.h>

#include <ATen/ATen.h>