mirror of
https://github.com/pytorch/pytorch.git
synced 2025-10-21 05:34:18 +08:00
introduce INTERN_DISABLE_AUTOGRAD flag to create inference only library for mobile
Summary: This is the first of a series of changes to reduce build size by cutting autograd functions from the mobile build. When INTERN_DISABLE_AUTOGRAD is set:
* On the CMake side we exclude Functions.h/cpp, VariableType*.h/cpp, and VariableTypeManual.cpp from the build process. We still keep variable_factories.h, as we rely on it to create variables instead of tensors.
* In source code we gate a couple of autograd references (in autograd/variable.cpp) with C10_MOBILE (technically we should use a dedicated C macro, but its maintenance cost is higher than a CMake macro's, as we have several build systems to change).
* We pass the --disable-autograd flag to the codegen script, which stops generating Functions/VariableType code; for variable_factories.h it stops generating tracing code.

Edit: in this diff we keep Functions.h/cpp to avoid changing source code.

Why do we need this change if VariableType and autograd code are already not called with USE_STATIC_DISPATCH=ON for mobile? It reduces the static library size for the iOS build, where stripping size with the linker approach is relatively harder.

Why do we need to make an involved change to the codegen script? There isn't a global config system in codegen - autograd/env.py provides similar functionality, but it says not to add anything there.

Test Plan:
- will check CI;
- test mobile build in sample app;

Differential Revision: D17202733

Pulled By: ljk53

fbshipit-source-id: 5701c6639b39ce58aba9bf5489a08d30d1dcd299
This commit is contained in:
committed by: Facebook Github Bot
parent: 41cf5564fe
commit: 8485710143
@@ -23,7 +23,8 @@ def generate_code(ninja_global=None,
                   declarations_path=None,
                   nn_path=None,
                   install_dir=None,
-                  subset=None):
+                  subset=None,
+                  disable_autograd=False):
     # cwrap depends on pyyaml, so we can't import it earlier
     root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
     sys.path.insert(0, root)
@@ -41,7 +42,12 @@ def generate_code(ninja_global=None,
         gen_autograd_python(declarations_path or DECLARATIONS_PATH, autograd_gen_dir, 'tools/autograd')
 
     if subset == "libtorch" or not subset:
-        gen_autograd(declarations_path or DECLARATIONS_PATH, autograd_gen_dir, 'tools/autograd')
+        gen_autograd(
+            declarations_path or DECLARATIONS_PATH,
+            autograd_gen_dir,
+            'tools/autograd',
+            disable_autograd=disable_autograd,
+        )
         gen_jit_dispatch(declarations_path or DECLARATIONS_PATH, jit_gen_dir, 'tools/jit/templates')
 
 
@@ -55,6 +61,12 @@ def main():
         '--subset',
         help='Subset of source files to generate. Can be "libtorch" or "pybindings". Generates both when omitted.'
     )
+    parser.add_argument(
+        '--disable-autograd',
+        default=False,
+        action='store_true',
+        help='It can skip generating autograd related code when the flag is set',
+    )
     options = parser.parse_args()
     generate_code(
         options.ninja_global,
@@ -62,6 +74,7 @@ def main():
         options.nn_path,
         options.install_dir,
         options.subset,
+        options.disable_autograd,
     )
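The --disable-autograd flag added in the diff above can be exercised in isolation. This is a minimal sketch of the argument-parsing part only (the parser construction is a hypothetical standalone helper, not the script's actual main()); it shows that the flag is a store_true option defaulting to False, exactly as declared in the patch:

```python
import argparse

def make_parser():
    # Mirrors the argument added in this commit: a store_true flag,
    # defaulting to False, that tells codegen to skip autograd output.
    parser = argparse.ArgumentParser(description='codegen flag sketch')
    parser.add_argument(
        '--disable-autograd',
        default=False,
        action='store_true',
        help='It can skip generating autograd related code when the flag is set',
    )
    return parser

args = make_parser().parse_args(['--disable-autograd'])
print(args.disable_autograd)  # True
```

When the flag is omitted, `args.disable_autograd` stays False, so existing callers of generate_code() keep their current behavior by default.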