Commit Graph

17348 Commits

00148825fc Run shellcheck on Jenkins scripts (#18874)
Summary:
closes #18873

Doesn't fail the build on warnings yet.
Also fix the most severe shellcheck warnings.
Limited to `.jenkins/pytorch/` at this time
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18874

Differential Revision: D14936165

Pulled By: kostmo

fbshipit-source-id: 1ee335695e54fe6c387ef0f6606ea7011dad0fd4
2019-04-15 12:48:52 -07:00
a0263ec047 Make DistributedDataParallel use new reducer (#18953)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18953

This removes Python side bucketing code from DistributedDataParallel
and replaces it with calls to the new C++ based bucketing and reducing
code. To confirm this is working well, we ran a test with both the
previous implementation and the new implementation, and confirmed they
are numerically equivalent.

Performance is improved by a couple percent or more, including the
single machine multiple GPU runs.

Closes #13273.

Reviewed By: mrshenli

Differential Revision: D14580911

fbshipit-source-id: 44e76f8b0b7e58dd6c91644e3df4660ca2ee4ae2
2019-04-15 12:44:38 -07:00
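For illustration, a minimal construction sketch (the reducer swap is internal and the user-facing API is unchanged; `bucket_cap_mb` sizes the gradient buckets; assumes a multi-process launcher supplies `rank`/`world_size`):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup(rank: int, world_size: int) -> DDP:
    # The new C++ reducer is used transparently once DDP is constructed.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    model = torch.nn.Linear(10, 10)
    return DDP(model, bucket_cap_mb=25)  # bucket size for gradient reduction
```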
6ed57e052d Fix the return value of ParseFromString (#19262)
Summary:
Fix the return value of ParseFromString.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19262

Differential Revision: D14937605

Pulled By: ezyang

fbshipit-source-id: 3f441086517186a075efb3d74f09160463b696b3
2019-04-15 12:39:29 -07:00
3403cb857b Modify Cholesky derivative (#19116)
Summary:
The derivative of the Cholesky decomposition was previously returned as a triangular matrix. Since the input to Cholesky is a symmetric matrix, its gradient should be symmetric as well.

Changelog:
- Modify the derivative of Cholesky from a triangular matrix to symmetric matrix
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19116

Differential Revision: D14935470

Pulled By: ezyang

fbshipit-source-id: 1c1c76b478c6b99e4e16624682842cb632e8e8b9
2019-04-15 12:16:55 -07:00
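A quick way to see the effect (a sketch using the era's `torch.cholesky`; newer releases spell it `torch.linalg.cholesky`):

```python
import torch

a = torch.randn(3, 3, dtype=torch.float64)
A = (a @ a.t() + 3 * torch.eye(3, dtype=torch.float64)).requires_grad_()
L = torch.cholesky(A)
L.sum().backward()
# With this change the gradient is symmetrized:
print(torch.allclose(A.grad, A.grad.t()))  # True
```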
991279dc7d produce diagram for caffe2 build matrix (#18517)
Summary:
This PR splits the configuration tree data from the logic used to construct the tree, for both `pytorch` and `caffe2` build configs.

Caffe2 configs are also now illustrated in a diagram.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18517

Differential Revision: D14936170

Pulled By: kostmo

fbshipit-source-id: 7b40a88512627377c5ea0f24765dabfef76ca279
2019-04-15 11:45:32 -07:00
7caad0ed33 Free all blocks with outstanding events on OOM-retry (#19222)
Summary:
The caching allocator tries to free all blocks on an out-of-memory
error. Previously, it did not free blocks that still had outstanding
stream uses. This change synchronizes on the outstanding events and
frees those blocks.

See #19219
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19222

Differential Revision: D14925071

Pulled By: colesbury

fbshipit-source-id: a2e9fe957ec11b00ea8e6c0468436c519667c558
2019-04-15 11:29:27 -07:00
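A toy model of the new retry path (hypothetical names; the real logic lives in the C++ CUDA caching allocator):

```python
class OOM(Exception):
    pass

class ToyCachingAllocator:
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.event_blocks = [64, 64]  # blocks with outstanding stream events

    def _raw_alloc(self, size: int) -> str:
        if size > self.capacity:
            raise OOM
        self.capacity -= size
        return f"block({size})"

    def alloc(self, size: int) -> str:
        try:
            return self._raw_alloc(size)
        except OOM:
            # New behavior: synchronize the outstanding events, then free
            # those blocks as well before retrying once.
            self.capacity += sum(self.event_blocks)
            self.event_blocks.clear()
            return self._raw_alloc(size)

print(ToyCachingAllocator(capacity=100).alloc(150))  # succeeds on retry
```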
86619b8ba6 Make sure that any of the future versions can load and execute older models. (#19174)
Summary:
Helps to test #18952
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19174

Differential Revision: D14899474

Pulled By: VitalyFedyunin

fbshipit-source-id: a4854ad44da28bd0f5115ca316e6078cbfe29d0d
2019-04-15 10:49:31 -07:00
68c4ebbeeb Sync fbcode/caffe2 and xplat/caffe2 (1) (#19218)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19218

Sync some contents between fbcode/caffe2 and xplat/caffe2 to move closer to a world where they are identical.

Reviewed By: dzhulgakov

Differential Revision: D14919916

fbshipit-source-id: 29c6b6d89ac556d58ae3cd02619aca88c79591c1
2019-04-13 21:45:52 -07:00
2060e44ec8 upgrade bazel version in CI [xla ci] (#19246)
Summary:
The latest TF requires upgrading the bazel version.
This PR should fix the XLA tests in CI.
[xla ci]
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19246

Differential Revision: D14929533

Pulled By: ailzhang

fbshipit-source-id: f6deb31428ed39f267d96bb9814d06f76641e73b
2019-04-13 20:16:37 -07:00
1c099fd5c9 Update docker images to use ROCm 2.3 (#19231)
Summary:
xw285cornell petrex iotamudelta

https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-clang7-rocmdeb-ubuntu16.04-trigger-test/24676/
https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-devtoolset7-rocmrpm-centos7.5-trigger-test/17679/
https://ci.pytorch.org/jenkins/job/pytorch-builds/job/py2-clang7-rocmdeb-ubuntu16.04-trigger/24652/
https://ci.pytorch.org/jenkins/job/pytorch-builds/job/py2-devtoolset7-rocmrpm-centos7.5-trigger/9943/
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19231

Differential Revision: D14928580

Pulled By: bddppq

fbshipit-source-id: 025b0affa6bcda6ee9f823dfc6c2cf8b92e71027
2019-04-13 13:11:26 -07:00
10bc789dff fix flake8 (#19243)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19243
ghimport-source-id: ae80aed3a5742df21afb6e55979686220a27cce7

Differential Revision: D14928670

Pulled By: zdevito

fbshipit-source-id: 20ec0d5c8d6f1c515beb55e2e63eddf3b2fc12dd
2019-04-13 10:04:39 -07:00
e958ceb5d7 Remove GraphExecutor's python bindings (#19141)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19141
ghimport-source-id: 796a41f5514d29959af052fcf5391a2834850a80

Reviewed By: jamesr66a

Differential Revision: D14888702

Pulled By: zdevito

fbshipit-source-id: c280145f08e7bc210434d1c99396a3257b626cf9
2019-04-13 08:42:24 -07:00
ddda563f22 Cleanup ScriptModule bindings (#19138)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19138
ghimport-source-id: 10f810f5e7551c1cb65fc4799744083bd7ffd1ee

Reviewed By: jamesr66a

Differential Revision: D14886945

Pulled By: zdevito

fbshipit-source-id: a5e5bb08694d03166a7516ec038656c2a02e7896
2019-04-13 08:42:21 -07:00
dcb5fd3613 get propagate_shape logic out of module.h (#19137)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19137
ghimport-source-id: 2394765f2d401e68ffdfa4c985bfab4cca2517f8

Reviewed By: jamesr66a

Differential Revision: D14885946

Pulled By: zdevito

fbshipit-source-id: daa2894ed9761107e9d273bb172840dc23ace072
2019-04-13 08:42:17 -07:00
1827ca4c35 Make debug subgraph inlining thread local (#19136)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19136
ghimport-source-id: 3a24ab36aa753ce5cce7bba3467bdbe88e5c7f60

Reviewed By: jamesr66a

Differential Revision: D14885051

Pulled By: zdevito

fbshipit-source-id: b39c6ceef73ad9caefcbf8f40dd1b9132bba03c2
2019-04-13 08:42:14 -07:00
c38c7b0ec5 Support Kwargs in C++ Function/Method calls (#19086)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19086
ghimport-source-id: 7790a5cc6e32f6f72e92add0b9f76dfa49ad9859

Reviewed By: jamesr66a

Differential Revision: D14875729

Pulled By: zdevito

fbshipit-source-id: ad1e4542381d9c33722155459e794f1ba4660dbb
2019-04-13 08:42:11 -07:00
d8669a2c7e Enable working ROCm tests (#19169)
Summary:
Enable multi-GPU tests that work with ROCm 2.2. They have been run three times on CI to ensure stability.

While there, remove skipIfRocm annotations for tests that depend on MAGMA. They still skip, but now for the correct reason (no MAGMA), which improves our diagnostics.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19169

Differential Revision: D14924812

Pulled By: bddppq

fbshipit-source-id: 8b88f58bba58a08ddcd439e899a0abc6198fef64
2019-04-12 21:51:10 -07:00
ca02558d40 import warnings in torch.hub & fix master CI travis (#19181)
Summary:
fix missing import in #18758
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19181

Differential Revision: D14908198

Pulled By: ailzhang

fbshipit-source-id: 31e0dc4a27521103a1b93f72511ae1b64a36117f
2019-04-12 21:35:31 -07:00
bba96db2a5 fix lint errors in gen.py (#19221)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19221

As titled.

Reviewed By: colesbury

Differential Revision: D14923858

fbshipit-source-id: 4793d7794172d401455c5ce72dfc27dddad515d4
2019-04-12 18:26:38 -07:00
b1539412db Add pass registration mechanism (#18587)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18587
ghimport-source-id: 80d753f7046a2a719e0c076684f44fa2059a0921

Differential Revision: D14901227

Pulled By: bwasti

fbshipit-source-id: 56511d0313419b63945a36b80e9ea51abdef2bd4
2019-04-12 15:32:00 -07:00
a3d3008e73 JIT Layernorm fusion (#18266)
Summary:
Partially fuse layer_norm by decomposing it into the batchnorm kernel that computes the stats, then fusing the affine operations that follow the reduce operations. This is similar to the batchnorm fusion that apaszke did; it also only works in inference mode for now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18266

Differential Revision: D14879877

Pulled By: wanchaol

fbshipit-source-id: 0197d8f2a17ec438d3e53f4c411d759c1ae81efe
2019-04-12 14:38:31 -07:00
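For reference, the decomposition being fused (a sketch, not the fuser's code): layer_norm is a stats/normalization step followed by a pointwise affine step.

```python
import torch

x = torch.randn(4, 8)
gamma, beta, eps = torch.randn(8), torch.randn(8), 1e-5

mean = x.mean(dim=-1, keepdim=True)
var = x.var(dim=-1, unbiased=False, keepdim=True)        # reduce/stats part
out = (x - mean) / torch.sqrt(var + eps) * gamma + beta  # fusible affine part

ref = torch.nn.functional.layer_norm(x, (8,), gamma, beta, eps)
print(torch.allclose(out, ref, atol=1e-6))  # True
```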
0e435afc3c Add more debugging helper to net transformer (#19176)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19176

Add some amenities for debugging.

Reviewed By: llyfacebook

Differential Revision: D14901740

fbshipit-source-id: 2c4018fdbf7e3aba2a754b6b4103a72893c229c2
2019-04-12 14:28:37 -07:00
1c836e7bb9 Add Quantized Backend (#18546)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18546

We'll expose all combinations of the various ways of quantization in the top-level dispatch key; that is, we have AffineCPUTensor, PerChannelAffineCUDATensor, etc.

QTensor methods added:
- is_quantized()
- item()

Differential Revision: D14637671

fbshipit-source-id: 346bc6ef404a570f0efd34e8793056ad3c7855f5
2019-04-12 12:55:49 -07:00
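A hedged usage example with the later-stabilized Python API (names at the time of this commit may differ slightly):

```python
import torch

x = torch.tensor([0.0, 0.5, 1.0])
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
print(q.is_quantized)  # True
print(q.int_repr())    # raw uint8 codes: tensor([ 0,  5, 10], dtype=torch.uint8)
print(q.dequantize())  # back to float: tensor([0.0000, 0.5000, 1.0000])
```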
3f7ddd269c Step 2: Rename _unique_dim2_temporary_will_remove_soon to unique_dim (#18649)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18649
ghimport-source-id: 3411d240a6af5fe299a889667964730184e30645

Differential Revision: D14888292

Pulled By: VitalyFedyunin

fbshipit-source-id: 80da83c264598f74ab8decb165da4a1ce2b352bb
2019-04-12 12:41:20 -07:00
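Usage sketch of the renamed op via its public entry point:

```python
import torch

x = torch.tensor([[1, 2], [1, 2], [3, 4]])
# Deduplicate along dim 0 (whole rows) rather than over flattened elements:
print(torch.unique(x, dim=0))  # tensor([[1, 2], [3, 4]])
```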
bd55abb463 Fix onnx ints (#19102)
Summary:
If JIT constant propagation doesn't work, we have to handle the ListConstructor in the symbolic.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19102

Reviewed By: zrphercule

Differential Revision: D14875588

Pulled By: houseroad

fbshipit-source-id: d25c847d224d2d32db50aae1751100080e115022
2019-04-12 12:01:14 -07:00
c480798a1c use C10_REGISTER for GELU op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19090

Reviewed By: BIT-silence

Differential Revision: D14864737

fbshipit-source-id: 8debd53171f7068726f0ab777a13ca46becbfbdf
2019-04-12 11:41:04 -07:00
79db4e9c10 Fix tabs lint. (#19196)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19196
ghimport-source-id: c10b1b19b087d7650e1614f008a9c2db21dfec2f

Differential Revision: D14913428

Pulled By: ezyang

fbshipit-source-id: 815b919d8e4516d0e5d89ebbdc4dff6d1d08da47
2019-04-12 11:22:05 -07:00
65ae897ae8 Pin nvidia-container-runtime version (#19195)
Summary:
This PR fixes the CI error:
```
nvidia-docker2 : Depends: nvidia-container-runtime (= 2.0.0+docker18.09.4-1) but 2.0.0+docker18.09.5-1 is to be installed
E: Unable to correct problems, you have held broken packages.
Exited with code 100
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19195

Differential Revision: D14913104

Pulled By: yf225

fbshipit-source-id: d151205f5ffe9cac7320ded3c25baa7e051c3623
2019-04-12 10:00:40 -07:00
deda88e0aa One more fix for #18790
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19187

Differential Revision: D14913100

Pulled By: ezyang

fbshipit-source-id: bf147747f933a2c9a35f3ff00bf6b83a4f29286c
2019-04-12 09:29:15 -07:00
7e73783c6f Fix promoteTypes for QInt types (#19182)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19182

This is a bug discovered by zafartahirov: right now, if one of the tensors is a QInt
type, we return undefined, but we actually want to allow ops that accept
tensors of the same QInt type to work.

Reviewed By: zafartahirov

Differential Revision: D14909172

fbshipit-source-id: 492fd6403da8c56e180efe9d632a3b7fc879aecf
2019-04-11 19:43:18 -07:00
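Illustration of the intended rule (the quantized line reflects this fix; mixing *different* quantized dtypes remains unsupported):

```python
import torch

print(torch.promote_types(torch.float32, torch.float64))  # torch.float64
print(torch.promote_types(torch.quint8, torch.quint8))    # torch.quint8
```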
422b01e788 Replace more usages of Type with DeprecatedTypeProperties (#19093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19093
ghimport-source-id: a82e3dce912a173b42a6a7e35eb1302d9f334e03

Differential Revision: D14865520

Pulled By: li-roy

fbshipit-source-id: b1a8bf32f87920ce8d82f990d670477bc79d0ca7
2019-04-11 17:02:05 -07:00
fea0a0be53 Support attributes when copying modules (#19040)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19040
ghimport-source-id: 37933efd717795751283cae8141e2e2caaae2e95

Reviewed By: eellison

Differential Revision: D14895573

Pulled By: driazati

fbshipit-source-id: bc2723212384ffa673d2a8df2bb57f38c62cc104
2019-04-11 15:38:06 -07:00
4ae59e4744 Move version_counter_ to TensorImpl (#18223)
Summary:
According to https://github.com/pytorch/pytorch/issues/13638#issuecomment-468055428, after the Variable/Tensor merge, we may capture variables without autograd metadata inside an autograd function, and we need a working version counter in these cases. This PR makes it possible by moving `version_counter_` out of autograd metadata and into TensorImpl, so that variables without autograd metadata still have version counters.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18223

Differential Revision: D14735123

Pulled By: yf225

fbshipit-source-id: 15f690311393ffd5a53522a226da82f5abb6c65b
2019-04-11 15:12:45 -07:00
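A small demonstration via the internal `_version` attribute (an implementation detail, shown only to illustrate the counter):

```python
import torch

t = torch.ones(3)  # requires_grad=False, so no autograd metadata
print(t._version)  # 0
t.add_(1)          # in-place ops bump the version counter
print(t._version)  # 1
```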
507fe66bea Enable comp ops for bool tensor (#19109)
Summary:
Enabled comparison ops for bool tensors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19109

Differential Revision: D14871187

Pulled By: izdeby

fbshipit-source-id: cf9951847d69124a93e5e21dd0a39c9568b1037d
2019-04-11 14:37:10 -07:00
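Example of what this enables:

```python
import torch

a = torch.tensor([True, False, True])
b = torch.tensor([True, True, False])
print(a == b)  # tensor([ True, False, False])
print(a > b)   # elementwise True > False: tensor([False, False,  True])
```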
c7b5a8a876 Change is_variable() to check existence of AutogradMeta, and remove is_variable_ (#19139)
Summary:
Currently, a TensorImpl's `is_variable_` is true if and only if the TensorImpl has AutogradMeta. This PR unifies these two concepts by removing `is_variable_` and changing `is_variable()` to check for the existence of AutogradMeta instead.

Removing `is_variable_` is part of the work in Variable/Tensor merge.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19139

Differential Revision: D14893339

Pulled By: yf225

fbshipit-source-id: ceb5e22c3c01f79b5d21d5bdbf4a7d1bc397796a
2019-04-11 14:03:33 -07:00
ef406ee925 First class modules in the compiler, round 2 (#19167)
Summary:
This PR propagates first-class module objects into the compiler. This creates a transitional state where:

* compiler.cpp creates Graphs where `self` is a Module class and attributes/parameters/buffers/submodules are looked up with `prim::GetAttr`
* GraphExecutor still runs "lowered graphs" where the self object has been removed by a compiler pass `lower_first_class_method`.
* Tracing still creates "lowered graphs", and a pass "lift_lowered_method" creates a first-class method graph for them.

* This PR separates out Method and Function. A script::Function is a pure Graph with no `self` bound.  Similar to Python, a script::Method is just a bound `self` and its underlying `script::Function`.
* This PR also separates CompilationUnit from Module. A CompilationUnit is just a list of named script::Functions. Classes have a CompilationUnit holding the class methods, and Modules also have a CompilationUnit holding their Methods. This avoids the weird circular case Module --has a--> Class --has a--> Module ...

Details:
* In this transitional state, we maintain two copies of a Graph: first-class module and lowered. The first-class one has a self argument that is the module's class type. The lowered one is the graph that uses the initial_ivalues inputs.
* When defining lowered methods using `_defined_lowered`, we immediately create the first-class equivalent. The reverse is done lazily, creating lowered_methods on demand from the class.
* The two-way conversions will be deleted in a future PR when the executor itself runs first-class objects. However, this requires more changes to (1) the traces, (2) the python bindings, and (3) the onnx export pass, and would make this PR way too large.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19167

Differential Revision: D14891966

Pulled By: zdevito

fbshipit-source-id: 0b5f03118aa65448a15c7a7818e64089ec93d7ea
2019-04-11 13:55:48 -07:00
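To see the first-class form (a sketch using today's `torch.jit.script` entry point), look for the `self` module argument and `prim::GetAttr` in the printed graph:

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4))

    def forward(self, x):
        return x * self.weight

m = torch.jit.script(M())
print(m.graph)  # %self is a module value; weight is read via prim::GetAttr
```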
b6ee83a5b4 Materialize a non-default device for C2 legacy storage. (#18605)
Summary:
It's not intended that Storages have 'default' CUDA devices, but this is allowable via the Storage::create_legacy codepath.

This also messes with device caching, because the initial cache is obtained from the Storage, which may have a 'default' device.

Instead, we materialize a device by allocating 0 bytes via the allocator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18605

Differential Revision: D14680620

Pulled By: gchanan

fbshipit-source-id: 6d43383d836e90beaf12bfe37c3f0506843f5432
2019-04-11 13:50:41 -07:00
bbe648dffb Allow empty net type (#19154)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19154

I recently saw some weird workflow errors due to an empty but set net_type. Maybe we should just fall back to the simple net in this case.

Reviewed By: dzhulgakov

Differential Revision: D14890072

fbshipit-source-id: 4e9edf8232298000713bebb0bfdec61e9c5df17d
2019-04-11 12:43:07 -07:00
33a950924a Skip Slice if it's no op (#19155)
Summary:
If it's an identity op, just skip the slice and return the input.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19155

Reviewed By: zrphercule

Differential Revision: D14890238

Pulled By: houseroad

fbshipit-source-id: f87b93df2cca0cb0e8ae2a1d95ba148044eafd4a
2019-04-11 12:26:32 -07:00
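A sketch of the identity check (illustrative names, not the exact symbolic code):

```python
INT64_MAX = 9223372036854775807  # the JIT's "no end" sentinel

def is_noop_slice(start: int, end: int, step: int) -> bool:
    # x[0::1] with an unbounded end selects everything, so the exporter
    # can return the input node instead of emitting an ONNX Slice op.
    return start == 0 and end == INT64_MAX and step == 1

print(is_noop_slice(0, INT64_MAX, 1))  # True
```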
feb5d26510 Rename ONNX util test names (#19153)
Summary:
Rename test cases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19153

Reviewed By: zrphercule

Differential Revision: D14890095

Pulled By: houseroad

fbshipit-source-id: 37a787398c88d9cc92b411c2355b43200cf1c4b0
2019-04-11 11:29:16 -07:00
c1b92f518d Remove ProcessGroup::getGroupRank (#19147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19147

After #14809 was merged, there is no longer a need for getGroupRank.
Every ProcessGroup object has its own rank and size fields which are
accurate for the global group as well as subgroups.

Strictly speaking removing a function in a minor version bump is a big
no-no, but I highly doubt this was ever used outside of
`torch.distributed` itself. This will result in a compile error for
folks who have subclassed the ProcessGroup class though.

If this is a concern we can delay merging until a later point in time,
but eventually this will need to be cleaned up.

Differential Revision: D14889736

fbshipit-source-id: 3846fe118b3265b50a10ab8b1c75425dad06932d
2019-04-11 09:17:40 -07:00
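What the replacement looks like from Python (a sketch; assumes a multi-process launcher supplies `rank`/`world_size`):

```python
import torch.distributed as dist

def report_subgroup_rank(rank: int, world_size: int) -> None:
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    group = dist.new_group(ranks=[0, 1])
    if rank in (0, 1):
        # Each ProcessGroup carries its own rank/size; no getGroupRank needed.
        print(dist.get_rank(group), dist.get_world_size(group))
```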
c145c34a7b Basic implementation of QRelu in C10 (#19091)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19091

Implements a basic quantized ReLU (uint8). This is a temporary solution before using the `QTensor` type instead of the tuple.

Reviewed By: dzhulgakov

Differential Revision: D14565413

fbshipit-source-id: 7d53cf5628cf9ec135603d6a1fb7c79cd9383019
2019-04-11 08:47:56 -07:00
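The underlying math, as a sketch on raw uint8 codes (not the C10 operator itself): in an affine scheme, `zero_point` encodes real 0.0, so ReLU clamps at the zero point.

```python
import torch

def qrelu_uint8(q: torch.Tensor, zero_point: int) -> torch.Tensor:
    # codes below zero_point represent negative reals; clamp them to 0.0's code
    return torch.clamp(q, min=zero_point)

q = torch.tensor([0, 3, 7, 200], dtype=torch.uint8)
print(qrelu_uint8(q, zero_point=5))  # tensor([  5,   5,   7, 200], dtype=torch.uint8)
```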
4b20fc826d Import MultiheadAttention to PyTorch (#18334)
Summary:
Import MultiheadAttention into the core PyTorch framework.
Users can now import MultiheadAttention directly from torch.nn.
See "Attention Is All You Need" for more details related to MultiheadAttention function.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18334

Differential Revision: D14577966

Pulled By: zhangguanheng66

fbshipit-source-id: 756c0deff623f3780651d9f9a70ce84516c806d3
2019-04-11 08:07:30 -07:00
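Basic usage of the newly importable module (shapes follow the `(seq_len, batch, embed_dim)` convention):

```python
import torch
from torch import nn

mha = nn.MultiheadAttention(embed_dim=16, num_heads=4)
q = torch.randn(5, 2, 16)      # target sequence
k = v = torch.randn(7, 2, 16)  # source sequence
attn_out, attn_weights = mha(q, k, v)
print(attn_out.shape, attn_weights.shape)  # (5, 2, 16) and (2, 5, 7)
```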
b6f130aa70 try to enable uncertainty for lr loss (#17236)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17236

Following the paper at https://papers.nips.cc/paper/7141-what-uncertainties-do-we-need-in-bayesian-deep-learning-for-computer-vision.pdf, approximate the classification case with the regression formulation. For the LRLoss, add a penalty based on the variance and a regularization term on the variance with a tunable parameter lambda.

Reviewed By: chocjy

Differential Revision: D14077106

fbshipit-source-id: 4405d8995cebdc7275a0dd07857d32a8915d78ef
2019-04-11 07:35:19 -07:00
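A sketch of the idea from the linked paper (illustrative names, not the Caffe2 operator): the model predicts a log-variance `s` alongside the loss term, trading data fit against predicted noise, with `lam` weighting the regularizer.

```python
import torch

def uncertainty_loss(base_loss: torch.Tensor, log_var: torch.Tensor,
                     lam: float = 1.0) -> torch.Tensor:
    # high predicted variance discounts the loss but pays a penalty
    return torch.exp(-log_var) * base_loss + lam * log_var

s = torch.tensor(0.5, requires_grad=True)
print(uncertainty_loss(torch.tensor(0.8), s))  # tensor(0.9852, grad_fn=...)
```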
160d0776d5 Remove comment (#19148)
Summary:
Remove a pointer to a nonexistent Note.
It was already removed in "Remove support for CUDNN 6 (#15851)".
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19148

Differential Revision: D14891514

Pulled By: soumith

fbshipit-source-id: dd33cfefa3a21e18afae5b3992dea085adaabda8
2019-04-11 07:04:45 -07:00
f5165ade5b Revert D14842057: Compiler uses first-class modules**
Differential Revision:
D14842057

Original commit changeset: ca6e7b5a4380

fbshipit-source-id: e8f1862a59bf20d5f78648b2fdc53a8b3750ead3
2019-04-11 06:17:01 -07:00
5e1f0b2a07 Compiler uses first-class modules** (#19043)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19043
ghimport-source-id: 0c9e80d5f35654af6d472abd5643bff3e9eb9ddf

Differential Revision: D14842057

Pulled By: zdevito

fbshipit-source-id: ca6e7b5a43805240f40b84d30e54495061067dc0
2019-04-11 00:00:48 -07:00
fce0c5e17d Require matches_jit_signature within native_functions.yaml (#18956)
Summary:
"""
This will verify that the func syntax follows the JIT signature schema. If you are a developer outside the core team, set this to False first to help us track unification. After your tests pass try setting this to True once and leave it set to True if it doesn't trigger any asserts. This means that your signature happens to be compliant. In general, it serves as a means of tracking an ongoing schema unification with the goal of aligning func syntax with other components of PyTorch in order to reduce overall complexity and assert coverage of all functions by each component.
"""
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18956

Differential Revision: D14807952

Pulled By: cpuhrsch

fbshipit-source-id: 42dac49269fb3cd96dc62e0b10820d0c32c7fb0e
2019-04-10 23:35:28 -07:00
e54cb03a51 add/move a few apis in torch.hub (#18758)
Summary:
* `torch.hub.list('pytorch/vision')` - show all available hub models in `pytorch/vision`
* `torch.hub.show('pytorch/vision', 'resnet18')` - show docstring & example for `resnet18` in `pytorch/vision`
* Moved `torch.utils.model_zoo.load_url` to `torch.hub.load_state_dict_from_url` and deprecate `torch.utils.model_zoo`
* We have too many env variables controlling where the cache dir is; it's not really necessary. I actually want to unify `TORCH_HUB_DIR`, `TORCH_HOME` and `TORCH_MODEL_ZOO`, but haven't done it yet. (More suggestions are welcome!)
* Simplify the `pytorch/vision` example in the doc; it was used to show how a hub entrypoint can be written, so it had some confusing unnecessary args.

An example of hub usage is shown below
```

In [1]: import torch

In [2]: torch.hub.list('pytorch/vision', force_reload=True)
Downloading: "https://github.com/pytorch/vision/archive/master.zip" to /private/home/ailzhang/.torch/hub/master.zip
Out[2]: ['resnet18', 'resnet50']

In [3]: torch.hub.show('pytorch/vision', 'resnet18')
Using cache found in /private/home/ailzhang/.torch/hub/vision_master

    Resnet18 model
    pretrained (bool): a recommended kwargs for all entrypoints
    args & kwargs are arguments for the function

In [4]: model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
Using cache found in /private/home/ailzhang/.torch/hub/vision_master
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18758

Differential Revision: D14883651

Pulled By: ailzhang

fbshipit-source-id: 6db6ab708a74121782a9154c44b0e190b23e8309
2019-04-10 23:10:39 -07:00
5164622ba4 Revert D14878128: [jit] Support attributes when copying modules
Differential Revision:
D14878128

Original commit changeset: 7ef5f7b1b16b

fbshipit-source-id: 3818222a897f8c01bc67f550ed0fd3ddecf61015
2019-04-10 22:24:30 -07:00