Summary:
Adjusts the type hints for optimize_for_mobile to be consistent with the default. Right now, calling optimize_for_mobile and passing only a script_module gives a type error complaining that preserved_methods can't be None.
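The fix boils down to marking the parameter as Optional so the annotation matches its None default. A minimal sketch of the pattern (hypothetical, simplified signature; the real function lives in torch.utils.mobile_optimizer and returns a module, not a list):

```python
from typing import List, Optional

# Hypothetical, simplified signature illustrating the type-hint fix:
# preserved_methods defaults to None, so its annotation must be
# Optional[List[str]], not List[str].
def optimize_for_mobile_sketch(script_module: object,
                               preserved_methods: Optional[List[str]] = None) -> List[str]:
    # Normalize the optional argument the way the real function would.
    preserved: List[str] = list(preserved_methods) if preserved_methods else []
    # ... run optimization passes, preserving `preserved` ...
    return preserved
```

With the Optional annotation, `optimize_for_mobile_sketch(module)` type-checks with no second argument.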
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59282
Test Plan:
Imported from GitHub, without a `Test Plan:` line.
Open source tests ran the lints. Internal CI should be enough here.
Reviewed By: jbschlosser
Differential Revision: D28838159
Pulled By: JacobSzwejbka
fbshipit-source-id: dd1e9aff00a759f71d32025d8c5b01e612c869a5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58344
Remove a helper function that's more trouble than it's worth.
ghstack-source-id: 129131889
Test Plan: ci and {P414950111}
Reviewed By: dhruvbird
Differential Revision: D28460607
fbshipit-source-id: 31bd6c1cc169785bb360e3113d258b612cad47fc
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57045
Went back and adjusted the previous optimizations so they are applied to every function, and cleaned up the API to match.
ghstack-source-id: 127214412
ghstack-source-id: 127536155
Test Plan: unit test
Reviewed By: kimishpatel
Differential Revision: D27950859
fbshipit-source-id: 214e83d5a19b452747fe223615815c10fa4aee58
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53314
Introduces an API for optimizing non-forward functions for mobile. As of this diff, every function you ask to optimize will be preserved, and those functions will be run through canonical optimization. The intention is to stack each further optimization in a separate diff, since they touch multiple files and reviewing them all at once would be a nightmare.
ghstack-source-id: 123909414
Test Plan:
torch.utils.mobile_optimizer.optimize_for_mobile(net, methods_to_optimize=["forward", "foo"]) runs fine
torch.utils.mobile_optimizer.optimize_for_mobile(net, methods_to_optimize={"foo"}) optimizes just foo if the model doesn't define forward, otherwise optimizes both foo and forward
torch.utils.mobile_optimizer.optimize_for_mobile(net, methods_to_optimize=["forward"]) runs fine
torch.utils.mobile_optimizer.optimize_for_mobile(net) runs fine if the model defines forward, throws otherwise
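The preservation rule exercised by the calls above can be sketched as a small pure-Python helper (hypothetical name and signature; the real logic lives inside optimize_for_mobile):

```python
from typing import Iterable, List, Optional

def resolve_methods_to_optimize(defined_methods: Iterable[str],
                                methods_to_optimize: Optional[Iterable[str]] = None) -> List[str]:
    # Every requested method is preserved, and forward is added
    # implicitly whenever the module defines it.
    defined = set(defined_methods)
    requested = set(methods_to_optimize or [])
    if "forward" in defined:
        requested.add("forward")
    if not requested:
        # No forward and nothing requested: nothing to optimize.
        raise AttributeError("module defines no forward and no methods were requested")
    missing = requested - defined
    if missing:
        raise AttributeError(f"methods not found on module: {sorted(missing)}")
    return sorted(requested)
```

For example, requesting {"foo"} on a module that also defines forward yields both methods, while requesting nothing on a module without forward throws.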
Reviewed By: kimishpatel
Differential Revision: D26618689
fbshipit-source-id: 5bff1fb3f3f6085c4a649a8128af9c10f0fa9400
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51884
It is now possible to bundle inputs for other functions without bundling them for forward. This is OK, so we need to account for it.
ghstack-source-id: 121266667
Test Plan: Manually bundle inputs for a function not named forward. Call optimize_for_mobile and make sure the functions are still there. {P173289878}
Reviewed By: iseeyuan
Differential Revision: D26304558
fbshipit-source-id: 79f82d9de59c70b76f34e01f3d691107bf40e7bc
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51496
A previous change made it possible for more functions to be generated when bundled inputs are attached. We want to preserve those here in optimize_for_mobile.
ghstack-source-id: 120862718
Test Plan:
Created a dummy model, augmented several methods with bundled inputs, called optimize_for_mobile, and verified the functions are still there.
Discovered a weird interaction between freeze_module and bundled inputs. If the user attaches their bundled inputs with something like

```
inputs = [<inputs>]
augment_many_model_functions_with_bundled_inputs(
    model,
    inputs={
        model.forward: inputs,
        model.foo: inputs,
    },
)
```

freeze_module within optimize_for_mobile will error out. Instead the user needs to do something like

```
inputs = [<inputs>]
inputs2 = [<inputs>]  # Nominally the same as the inputs above
augment_many_model_functions_with_bundled_inputs(
    model,
    inputs={
        model.forward: inputs,
        model.foo: inputs2,
    },
)
```
Reviewed By: dhruvbird
Differential Revision: D26005708
fbshipit-source-id: 3e908c0f7092a57da9039fbc395aee6bf9dd2b20
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51153
Enabled bundled inputs for all public functions that the user wants in a TorchScript module. An important caveat: you can't add bundled inputs to functions that were in the nn.Module but weren't caught in the scripting/tracing process that brought the model to TorchScript.
The old API is exactly the same: it still only works on forward, return types are the same, etc.
-----------New API-------------
Attachment of inputs:
***augment_model_with_bundled_inputs***: works the same as before, but adds the option to specify an info dictionary.
***augment_many_model_functions_with_bundled_inputs***: similar to the above, but lets the user specify a Dict[Callable, List[<inputs>]] (mapping function references to the bundled inputs for that function) to attach bundled inputs to many functions.
Consumption of inputs:
***get_all_bundled_inputs_for_<function_name>()***: works exactly like get_all_bundled_inputs, but can be used for functions other than forward if you know ahead of time what they are called and that they have bundled inputs.
***get_bundled_inputs_functions_and_info()***: this is easily the hackiest function. Returns a dictionary mapping each function name to metadata that includes the name of its get_all_bundled_inputs_for_<function_name> accessor. A user can then resolve and execute those accessors with something like

```
all_info = model.get_bundled_inputs_functions_and_info()
for func_name in all_info.keys():
    input_func_name = all_info[func_name]['get_inputs_function_name'][0]
    func_to_run = getattr(model, input_func_name)
```

The reason it's done this way is that TorchScript doesn't support the 'Any' type yet, meaning the bundled inputs can't be returned directly because they could be different types for each function. TorchScript also doesn't support Callable, so a function reference can't be returned directly either.
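This name-indirection workaround can be demonstrated end to end in plain Python. FakeModel below is a hypothetical stand-in for the scripted module; the real accessors are generated by the augmentation functions:

```python
# FakeModel stands in for a scripted module; only the two accessors
# described above are modeled.
class FakeModel:
    def get_all_bundled_inputs_for_forward(self):
        # Each entry is one bundled argument tuple for `forward`.
        return [(1, 2), (3, 4)]

    def get_bundled_inputs_functions_and_info(self):
        # Function names map to metadata containing the *name* of the
        # per-function accessor, because TorchScript cannot return a
        # callable or an Any-typed value directly.
        return {"forward": {"get_inputs_function_name":
                            ["get_all_bundled_inputs_for_forward"]}}

model = FakeModel()
all_info = model.get_bundled_inputs_functions_and_info()
collected = {}
for func_name, info in all_info.items():
    input_func_name = info["get_inputs_function_name"][0]
    # Resolve the accessor by name, then call it to get the inputs.
    collected[func_name] = getattr(model, input_func_name)()
```

The indirection costs one getattr per function but keeps every returned value a type TorchScript can express.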
ghstack-source-id: 120768561
Test Plan:
Got a model into TorchScript using the available methods that I'm aware of (tracing, scripting, the old scripting method). Not really sure how tracing brings in functions that aren't in the forward call path, though. Attached bundled inputs and info to them successfully. Changes to TorchTest.py on all but the last version of this diff (where it is removed for land) illustrate what I did to test.
Created and ran unit test
Reviewed By: dreiss
Differential Revision: D25931961
fbshipit-source-id: 36e87c9a585554a83a932e4dcf07d1f91a32f046
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49170
Added an extra step to **always** preserve the bundled inputs methods if they are present in the input module.
Also added a check that all the methods in `preserved_methods` exist. If not, we now throw an exception. This can hopefully stop hard-to-debug inputs from getting into downstream functions.
~~Add an optional argument `preserve_bundled_inputs_methods=False` to the `optimize_for_mobile` function. If set to be True, the function will now add three additional functions related with bundled inputs to be preserved: `get_all_bundled_inputs`, `get_num_bundled_inputs` and `run_on_bundled_input`.~~
Test Plan:
`buck test mode/dev //caffe2/test:mobile -- 'test_preserve_bundled_inputs_methods \(test_mobile_optimizer\.TestOptimizer\)'`
or
`buck test caffe2/test:mobile` to run some other related tests as well.
Reviewed By: dhruvbird
Differential Revision: D25463719
fbshipit-source-id: 6670dfd59bcaf54b56019c1a43db04b288481b6a
Summary:
Added an extra step to **always** preserve the bundled inputs methods if they are present in the input module.
Also added a check that all the methods in `preserved_methods` exist. If not, we now throw an exception. This can hopefully stop hard-to-debug inputs from getting into downstream functions.
~~Add an optional argument `preserve_bundled_inputs_methods=False` to the `optimize_for_mobile` function. If set to be True, the function will now add three additional functions related with bundled inputs to be preserved: `get_all_bundled_inputs`, `get_num_bundled_inputs` and `run_on_bundled_input`.~~
Test Plan:
`buck test mode/dev //caffe2/test:mobile -- 'test_preserve_bundled_inputs_methods \(test_mobile_optimizer\.TestOptimizer\)'`
or
`buck test caffe2/test:mobile` to run some other related tests as well.
Reviewed By: dhruvbird
Differential Revision: D25433268
fbshipit-source-id: 0bf9b4afe64b79ed1684a3db4c0baea40ed3cdd5
Summary:
By default the freeze_module pass, invoked from optimize_for_mobile,
preserves only the forward method. There is an option to specify a list of
methods to preserve during freeze_module. This PR exposes that option
in the optimize_for_mobile pass.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40629
Test Plan: python test/test_mobile_optimizer.py
Reviewed By: dreiss
Differential Revision: D22260972
Pulled By: kimishpatel
fbshipit-source-id: 452c653269da8bb865acfb58da2d28c23c66e326
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37462
Instead of running every optimization pass in the optimizeForMobile method,
introduce a whitelist optimizer dictionary as a second parameter. When it
is not passed, the method runs all optimization passes; otherwise the
method reads the dict and runs only the passes whose value is True.
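The dispatch described above can be sketched as follows (pass names, registry, and helper are hypothetical; the real passes run in C++ on the graph):

```python
from typing import Callable, Dict, List, Optional

# Hypothetical pass registry; each pass transforms the module and is
# represented here as an identity function on a plain dict.
ALL_PASSES: Dict[str, Callable[[dict], dict]] = {
    "insert_fold_prepack": lambda m: m,
    "remove_dropout": lambda m: m,
}

def run_mobile_passes(module: dict,
                      optimizer_whitelist: Optional[Dict[str, bool]] = None) -> List[str]:
    executed: List[str] = []
    for name, pass_fn in ALL_PASSES.items():
        # No dict given: run everything. Dict given: run only the
        # entries explicitly mapped to True.
        if optimizer_whitelist is None or optimizer_whitelist.get(name, False):
            module = pass_fn(module)
            executed.append(name)
    return executed
```

Omitting the dict keeps the old run-everything behavior, so existing callers are unaffected.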
ghstack-source-id: 106104503
Test Plan:
python test/test_mobile_optimizer.py
Imported from OSS
Differential Revision: D22096029
fbshipit-source-id: daa9370c0510930f4c032328b225df0bcf97880f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37046
ghstack-source-id: 102669259
Create a Python API entry point that generates mobile model lints. It takes a scripted module as its argument and returns a map of module lints.
The initial version is a placeholder that includes module bundled inputs as the first lint instance; more lints will be added in the future.
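The shape of the lint map can be sketched in plain Python (hypothetical helper and lint record; the real entry point inspects a scripted module):

```python
from typing import Any, Dict, List

def generate_module_lints_sketch(module: Any) -> List[Dict[str, str]]:
    # Each lint is a small record with a name and a human-readable message.
    lints: List[Dict[str, str]] = []
    # Placeholder first lint: flag modules without bundled inputs attached.
    if not hasattr(module, "get_all_bundled_inputs"):
        lints.append({"name": "BUNDLED_INPUT",
                      "message": "No bundled inputs were attached to this module."})
    return lints
```

A module with bundled inputs attached produces an empty lint list; a bare module triggers the placeholder lint.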
Test Plan: python test/test_optimizer.py
Reviewed By: dreiss
Differential Revision: D21164648
fbshipit-source-id: 9e8f4e19d74b5464a55cc73b9dc18f358c5947d6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36357
ghstack-source-id: 101907180
Create a Python API entry point that optimizes a mobile model. It takes a scripted module as its argument and returns an optimized scripted module. The initial optimization features include inserting and folding prepack ops.
Test Plan: python test/test_optimizer.py
Differential Revision: D20946076
fbshipit-source-id: 93cb4a5bb2371128f802d738eb26d0a4f3b2fe10