### Description
Since the major changes for `_TypedStorage` and `_UntypedStorage` are now complete, they are renamed to the public `TypedStorage` and `UntypedStorage`.
`TypedStorage._untyped()` is renamed to `TypedStorage.untyped()`.
Documentation for storages is improved as well.
### Issue
Fixes #82436
### Testing
N/A
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82438
Approved by: https://github.com/ezyang
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72237
Add a generic zip file reader/writer to `torch.package` in order to remove the dependency on `torch` for non-TorchScript/tensor-related usages of `package`. This also enables users to create classes derived from the zip file reader/writer classes to implement their own serialization/deserialization if desired for performance reasons.
https://www.internalfb.com/intern/diff/D35423079/ was reverted because this refactor changed where most of the implementation components of PackageExporter/PackageImporter (such as ModuleActionType_) come from.
This diff also changes the import paths of these components to point to the correct files, compared to D35423079.
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D35423079
Pulled By: PaliC
fbshipit-source-id: 31abc4364d5fd007911cfb67cf36ebfac5d786f4
(cherry picked from commit 023b0d1445e0b1e1bb7a03c660cd62eb9d26d2a6)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67499
Since https://github.com/pytorch/pytorch/pull/62030 landed, storages produced when loading from a pickle are of type TypedStorage. We weren't catching this in our deploy serialization, leading tensors to actually get pickled instead of their storages being shared across interpreters.
Since the result is still technically correct, this wasn't caught by any of our tests until someone tried to pass a really big tensor and started OOMing.
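The sharing mechanism can be illustrated with pickle's persistence hooks, the standard way to keep large buffers out of a pickle stream and hand them across by reference. This is a minimal pure-Python sketch; `FakeStorage`, `shared_storages`, and the pickler names are hypothetical stand-ins, not the actual deploy serialization code.

```python
import io
import pickle

class FakeStorage:
    """Hypothetical stand-in for a tensor storage shared across interpreters."""

    def __init__(self, values):
        self.values = values

# Simulates the process-wide table that lets interpreters share one storage.
shared_storages = {}

class StorageSharingPickler(pickle.Pickler):
    def persistent_id(self, obj):
        # Intercept storages: emit a small reference instead of the raw bytes.
        if isinstance(obj, FakeStorage):
            key = id(obj)
            shared_storages[key] = obj
            return ("storage", key)
        return None  # everything else is pickled normally

class StorageSharingUnpickler(pickle.Unpickler):
    def persistent_load(self, pid):
        tag, key = pid
        assert tag == "storage"
        return shared_storages[key]  # hand back the very same object
```

Dumping a dict containing a `FakeStorage` and loading it back yields the identical storage object, not a copy; an `isinstance` check that misses the storage type (the bug described above) would silently fall through to full pickling.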
ghstack-source-id: 141869521
Test Plan: added unit test
Reviewed By: shunting314
Differential Revision: D32004075
fbshipit-source-id: ef5a80cd3cb1dff0b6b4c1b6c95923e4faab7d50
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61680
This diff enables torch deploy for `fx.GraphModule` with non-torch dependencies. Here are the issues currently preventing this, which are fixed in this change:
- Pickle is used as an internal format to transmit objects between interpreters. It needs to serialize Python code, but to get the source code for imports from python_code.globals it needs access to the PackageImporter. Currently a regular `__reduce__` function is used, which has no notion of a custom importer.
- When deserializing pickled objects on an interpreter, we pass empty globals to exec, so it cannot resolve non-torch imports located in the package. We need to be able to point exec at our custom PackageImporter.
- Subclasses extending `fx.GraphModule` should be able to optionally provide their own Tracer (extending `fx.Tracer`).
As a solution, a new reducer (`__reduce_deploy__`) is introduced for the torch deploy workflow. The reducer is registered in _deploy.py (the entry point for the C++ torch deploy API) when saving an object to transmit it between interpreters. This allows us to pass a proper PackageImporter to each interpreter for pickling/unpickling `fx.GraphModule`. It also defines an API for passing a custom `fx.Tracer` when needed.
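The empty-globals problem in the second bullet can be shown in miniature: generated module source only loads correctly when exec is seeded with the importer's environment. Everything below (`scale_fn`, `importer_globals`, `load_generated`) is a hypothetical sketch, not the actual deploy code path.

```python
# Generated source, as fx might emit: it references a name that exists only
# inside the package environment, not in builtins.
src = "def forward(x):\n    return scale_fn(x) + 1"

# Hypothetical stand-in for the globals a PackageImporter would provide.
importer_globals = {"scale_fn": lambda x: 2 * x}

def load_generated(source, env):
    namespace = dict(env)    # seed exec with the importer's globals...
    exec(source, namespace)  # ...instead of an empty dict, so names resolve
    return namespace["forward"]

forward = load_generated(src, importer_globals)
```

With empty globals instead, calling the resulting `forward` would raise `NameError` on `scale_fn`.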
Test Plan:
Added UT to cover changes.
```
buck test //caffe2/torch/csrc/deploy:test_deploy
buck test caffe2/test:fx
```
Reviewed By: suo
Differential Revision: D29690088
fbshipit-source-id: 3a8dbe02d5d7e085534aa61b7773c86f0f8c19b0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53049
This makes our API symmetric: now we have `Importer`-aware Pickler and Unpickler implementations with similar interfaces.
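A minimal sketch of what such a symmetric pair can look like, assuming a toy `Importer` that maps module names to module objects; the class names mirror the idea, but the bodies here are hypothetical, not torch.package's implementation.

```python
import io
import pickle
import sys
import types

class Importer:
    """Toy importer: resolves module names from an explicit table."""

    def __init__(self, modules):
        self._modules = modules

    def import_module(self, name):
        return self._modules[name]

class PackagePickler(pickle.Pickler):
    """Pickler half of the symmetric pair; carries the importer it writes against."""

    def __init__(self, importer, file):
        super().__init__(file)
        self.importer = importer

class PackageUnpickler(pickle.Unpickler):
    """Unpickler half: resolves globals through the importer before sys.modules."""

    def __init__(self, importer, file):
        super().__init__(file)
        self.importer = importer

    def find_class(self, module, name):
        try:
            return getattr(self.importer.import_module(module), name)
        except KeyError:
            return super().find_class(module, name)

# Demo module standing in for code that lives inside a package archive.
pkg_module = types.ModuleType("fake_pkg")
exec("class Thing:\n    def __init__(self, v):\n        self.v = v", pkg_module.__dict__)
```

The round trip works even when `fake_pkg` is not importable at load time, because the unpickler resolves it through the importer rather than the regular import machinery.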
Test Plan: Imported from OSS
Reviewed By: Lilyjjo
Differential Revision: D26734593
Pulled By: suo
fbshipit-source-id: 3479437cf6b98e0d6a8aa4907c75f0c61d5495d4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53048
I am planning to make the custom picklers and unpicklers that we use into semi-public interfaces for `torch.rpc` to consume. These are some prefatory movements toward that.
Test Plan: Imported from OSS
Reviewed By: Lilyjjo
Differential Revision: D26734594
Pulled By: suo
fbshipit-source-id: 105ae1161d90f24efc7070a8d80c6ac3d2111bea
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51975
See comments in code.
Test Plan: Imported from OSS
Reviewed By: zdevito
Differential Revision: D26340592
Pulled By: suo
fbshipit-source-id: 61b16bafad15e19060710ad2d8487c776d672847
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52320
as title
Test Plan: Imported from OSS
Reviewed By: zdevito
Differential Revision: D26468416
Pulled By: suo
fbshipit-source-id: 890eecea76426918daff900402fbcbc149e48535
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51754
This API allows you to manage multiple python interpreters in a single
process to deploy PyTorch models packaged with torch.package.
torch/csrc/deploy/deploy.h contains the API definition
torch/csrc/deploy/test_deploy.cpp has some examples.
Notes:
* A mutex is added to PyTorchStreamReader to make it safe to use from multiple threads at once.
* USE_DEPLOY is only true for the special libtorch_deployinterpreter.so library; when enabled, we use a hash table to maintain the PyObject <> at::Tensor mapping rather than the internal pointer in Tensor, since more than one interpreter may have a reference to the tensor.
* serialization.py has some additional functions for creating pickle objects while keeping storages in memory, for use when transferring tensors between interpreters.
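The mutex note can be illustrated in miniature, with `threading.Lock` standing in for the C++ mutex; `ThreadSafeStreamReader` is a hypothetical name for the sketch, not PyTorchStreamReader itself.

```python
import io
import threading
import zipfile

class ThreadSafeStreamReader:
    """Serializes access to one shared zip archive handle, as the added mutex does."""

    def __init__(self, buffer):
        self._zf = zipfile.ZipFile(buffer, mode="r")
        self._lock = threading.Lock()

    def get_record(self, name):
        with self._lock:  # only one thread touches the underlying handle at a time
            return self._zf.read(name)

# Build a small archive in memory for demonstration.
_buf = io.BytesIO()
with zipfile.ZipFile(_buf, mode="w") as zf:
    zf.writestr("model/weights.bin", b"payload")
reader = ThreadSafeStreamReader(io.BytesIO(_buf.getvalue()))
```

Several threads can then call `get_record` concurrently without interleaving reads on the shared handle.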
Test Plan: Imported from OSS
Reviewed By: wconstab
Differential Revision: D26329468
Pulled By: zdevito
fbshipit-source-id: d75f4ebb9a27f1d911179d9996041bcb3ca04a07