pytorch/torch/multiprocessing/__init__.py
Edward Yang 173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import unused; flake8-2 will not.  For now, I just
noqa'd all these sites.

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00


"""
torch.multiprocessing is a wrapper around the native :mod:`multiprocessing`
module. It registers custom reducers, that use shared memory to provide shared
views on the same data in different processes. Once the tensor/storage is moved
to shared_memory (see :func:`~torch.Tensor.share_memory_`), it will be possible
to send it to other processes without making any copies.
The API is 100% compatible with the original module - it's enough to change
``import multiprocessing`` to ``import torch.multiprocessing`` to have all the
tensors sent through the queues or shared via other mechanisms, moved to shared
memory.
Because of the similarity of APIs we do not document most of this package
contents, and we recommend referring to very good docs of the original module.
"""
import torch
import sys
from .reductions import init_reductions
import multiprocessing

__all__ = ['set_sharing_strategy', 'get_sharing_strategy',
           'get_all_sharing_strategies']

from multiprocessing import *  # noqa: F401

__all__ += multiprocessing.__all__

# This call adds a Linux-specific prctl(2) wrapper function to this module.
# See https://github.com/pytorch/pytorch/pull/14391 for more information.
torch._C._multiprocessing_init()

if sys.version_info < (3, 3):
    """Override basic classes in Python 2.7 and early Python 3 (below 3.3) to
    use ForkingPickler for serialization. Python 3.3 and later already use
    ForkingPickler."""
    from .queue import Queue, SimpleQueue  # noqa: F401
    from .pool import Pool  # noqa: F401
"""Add helper function to spawn N processes and wait for completion of any of
them. This depends `mp.get_context` which was added in Python 3.4."""
from .spawn import spawn, SpawnContext # noqa: F401
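
# A short sketch of the ``spawn`` helper imported above (illustrative, not
# part of the module; ``run`` is a hypothetical worker). ``spawn`` starts
# ``nprocs`` processes, passes each its index as the first argument, and
# waits for them to finish:
#
#   import torch.multiprocessing as mp
#
#   def run(rank):
#       print('hello from process', rank)
#
#   if __name__ == '__main__':
#       mp.spawn(run, nprocs=4)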

if sys.platform == 'darwin' or sys.platform == 'win32':
    _sharing_strategy = 'file_system'
    _all_sharing_strategies = {'file_system'}
else:
    _sharing_strategy = 'file_descriptor'
    _all_sharing_strategies = {'file_descriptor', 'file_system'}

def set_sharing_strategy(new_strategy):
    """Sets the strategy for sharing CPU tensors.

    Arguments:
        new_strategy (str): Name of the selected strategy. Should be one of
            the values returned by :func:`get_all_sharing_strategies()`.
    """
    global _sharing_strategy
    assert new_strategy in _all_sharing_strategies
    _sharing_strategy = new_strategy


def get_sharing_strategy():
    """Returns the current strategy for sharing CPU tensors."""
    return _sharing_strategy


def get_all_sharing_strategies():
    """Returns the set of sharing strategies supported on the current system."""
    return _all_sharing_strategies
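
# Example of the strategy API defined above (illustrative; 'file_system' is
# supported on every platform, per _all_sharing_strategies):
#
#   import torch.multiprocessing as mp
#   mp.set_sharing_strategy('file_system')
#   assert mp.get_sharing_strategy() == 'file_system'
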
init_reductions()