Commit Graph

17163 Commits

40a54bf2f1 Change ReinitializeTensor to use C10_LOG_FIRST_N (#18531)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18531

Currently we use C10_LOG_EVERY_MS to log the data type change, but it pollutes the logs of some services;
we would like to change it to C10_LOG_FIRST_N to prevent that.

Reviewed By: dzhulgakov

Differential Revision: D14647704

fbshipit-source-id: b84e4002bd4aa94d616133cd1049c3d4ab05386e
2019-04-02 21:03:37 -07:00
80404cb2f5 Add support for getting TensorProto argument (#18364)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18364

As titled.

Reviewed By: bddppq

Differential Revision: D14584784

fbshipit-source-id: 03f9207d5cf4f7f4b812428a931edbcdcb21ca8d
2019-04-02 20:58:28 -07:00
31849bc524 make test module hook use save/load (#18284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18284
ghimport-source-id: 5a92c03fda19072ffb6afd40e0f56806716c7be6

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18296 [jit] Add namespacing for ScriptClasses
* **#18284 [jit] make test module hook use save/load**
* #18211 [jit] Turn script_type_parser into a class
* #18148 [jit] python interop for script classes

Instead of python-printing and comparing strings (which does not capture
dependency information, etc.), use save/load on in-memory buffers and
compare the main module contents inside the buffer.

Reviewed By: ailzhang

Differential Revision: D14581129

fbshipit-source-id: 52264ae9ce076775ab3fd1a0c32c8d6f6677a903
2019-04-02 18:09:52 -07:00
2d07993bcb Add ability to specialize class types to ArgumentSpec (#18314)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18314
ghimport-source-id: 8cecb768d476ab19c9460f39c8f94a764e4cb052

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18314 Add ability to specialize class types to ArgumentSpec**
* #18226 Add Slot type to abstract the raw pointers being used for slots.

Differential Revision: D14574395

fbshipit-source-id: cc3af6e56e9ae52990f4a1ad56ecceaa2d493577
2019-04-02 17:35:57 -07:00
5f5a2aaab9 Operator-level performance microbenchmarks (#18740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18740

Test utilities for writing Caffe2/PyTorch performance microbenchmarks. Brief description of the file structure

* benchmark_core.py : core utilities for running microbenchmark tests
* benchmark_caffe2.py : Caffe2-specific benchmark utilities
* benchmark_pytorch.py: PyTorch-specific benchmark utilities
* benchmark_runner.py : Main function. Currently it can run the microbenchmark tests in a stand-alone mode. The next step is to have this integrate with AI-PEP.

The utilities are located at https://github.com/pytorch/pytorch/tree/master/test to have access to both the Caffe2 and PyTorch Python frontends.

Includes two operator microbenchmarks, supporting both Caffe2 and PyTorch:
* MatMul
* Add

Reference: PyTorch benchmarks: https://github.com/pytorch/benchmark/tree/master/timing/python. In this work, we start with two example binary operators, MatMul and Add, but eventually we should cover unary operators like in the PyTorch benchmark repo.
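
For orientation, here is a minimal hand-rolled sketch of the kind of measurement these utilities automate; this is generic timing code, not the benchmark_core API:

```python
import time

import torch

def bench(fn, iters=100):
    # crude wall-clock average; the real harness adds warmup runs,
    # multiple trials, and statistics on top of this idea
    fn()  # warmup
    start = time.time()
    for _ in range(iters):
        fn()
    return (time.time() - start) / iters

a, b = torch.randn(64, 64), torch.randn(64, 64)
print("matmul: %.3e s/iter" % bench(lambda: torch.matmul(a, b)))
print("add:    %.3e s/iter" % bench(lambda: a + b))
```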

Reviewed By: zheng-xq

Differential Revision: D13887111

fbshipit-source-id: b7a56b95448c9ec3e674b0de0ffb96af4439bfce
2019-04-02 17:06:19 -07:00
b832b99afb Bool Tensor for CUDA (#18166)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18166
ghimport-source-id: a8e2ba2d966e49747a55701c4f6863c5e24d6f14

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18166 Bool Tensor for CUDA**
* #18165 Resolved comments from Bool Tensor for CPU PR
------

This PR enables bool tensor creation and some basic operations for the CUDA backend. This is part of the Bool Tensor feature implementation work. The whole plan looks like this:
1. Storage Implementation [Done]
2. Tensor Creation.
a) CPU [Done]
b) CUDA [This PR]
3. Tensor Conversions.
4. Tensor Indexing.
5. Tensor Operations.
6. Back compatibility related changes.

Change:
Enable bool tensor in CUDA with the following operations:

    torch.zeros
    torch.tensor
    torch.ones
    torch.rand/rand_like/randint/randint_like
    torch.full
    torch.full_like
    torch.empty
    torch.empty_like

Tested via unit tests and local scripts.
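
A quick illustration of the enabled constructors (assumes a CUDA-enabled build):

```python
import torch

t = torch.zeros(2, 3, dtype=torch.bool, device="cuda")
u = torch.ones_like(t)
v = torch.tensor([True, False], device="cuda")
w = torch.empty(4, dtype=torch.bool, device="cuda")
```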

Differential Revision: D14605104

fbshipit-source-id: b7d7340a7d70edd03a109222d271e68becba762c
2019-04-02 16:17:05 -07:00
b77e3c2ca1 Add helpful information to the gradient/inplace operation exception (#18523)
Summary:
To debug a `one of the variables needed for gradient computation has been modified by an inplace operation` error, I wanted to know *which* variable has been modified, so I extended the error message with what information is easily available at this point.

Before:
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
```

After:
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [80, 1]], which is output 0 of UnsqueezeBackward0, is at version 1, not expected version 0. Hint: enable anomaly detection to find the forward pass operation which modified it.
```

The hint to enable anomaly detection is only shown when it is not enabled. It's meant to save people some googling. I'd even go further and reference `torch.autograd.set_detect_anomaly(True)`, but maybe we're not running Python?

Disclaimer: I haven't looked at other parts of the code to check if using `std::stringstream` is acceptable practice; let me know if it isn't. Similarly, I haven't checked indentation practices.
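
For illustration, a minimal repro (not part of this PR) that triggers the extended message:

```python
import torch

a = torch.randn(80, 1, requires_grad=True)
b = a.sigmoid()  # sigmoid's backward needs its output
b.add_(1)        # in-place op bumps the version counter of b
try:
    b.sum().backward()
except RuntimeError as e:
    # the message now names the tensor, the op that produced it, the
    # expected vs. actual version, and the anomaly-detection hint
    print(e)
```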
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18523

Differential Revision: D14683249

Pulled By: soumith

fbshipit-source-id: f97a99d4aabea7461df766d66cd72300b48e2350
2019-04-02 15:23:04 -07:00
74d9146559 build_variables.py: turn on link_whole for _C_impl library. (#18763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18763

Without the `link_whole` flag, in opt-builds some of the files are not linked into the `_C_impl` library, which causes some static initializers not to run (namely, registering a customPythonOperation from python_interpreter.cpp). This diff fixes that.

Differential Revision: D14732471

fbshipit-source-id: 57cff6b4b6d479ad7ab7fd29f677746d91d6ff45
2019-04-02 15:17:13 -07:00
84a9694ed0 Fix windows msbuild bug (#18748)
Summary:
Fix the bug introduced by #18681 where an undefined variable was being used to limit max cpu count when building for Windows without Ninja.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18748

Differential Revision: D14733209

Pulled By: soumith

fbshipit-source-id: 52fc0dd4dde99da75a6956b63f02da2e647eed4f
2019-04-02 14:35:40 -07:00
2e97c82470 torch.cross' dim default changed to c10::optional instead of int=-1 (#17582)
Summary:
The argument dim=-1 doesn't work for torch.cross. The signature of torch.cross has been changed to take c10::optional<int64_t> dim instead of int64_t. Per the documentation, "If dim is not given, it defaults to the first dimension found with the size 3."; if dim is specified (even negative), the corresponding dimension is used.
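
For example (illustrative):

```python
import torch

a, b = torch.randn(4, 3), torch.randn(4, 3)

torch.cross(a, b)          # dim omitted: first dimension of size 3 is used (dim=1 here)
torch.cross(a, b, dim=-1)  # negative dims now resolve correctly
torch.cross(a, b, dim=1)   # equivalent for this shape
```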

Fixes #17229
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17582

Differential Revision: D14483063

Pulled By: ifedan

fbshipit-source-id: f9699093ec401cb185fd33ca4563c8a46cdcd746
2019-04-02 13:27:00 -07:00
3027e783b1 Fix multi-configuration on Windows CMake (CUDA) (#18548)
Summary:
Multiple configurations are the default (e.g. Release;Debug) on Windows, and this check always broke that configuration because CMAKE_BUILD_TYPE was not set. The workaround was to always set CMAKE_BUILD_TYPE to Debug or Release, which was very unfortunate.

The correct method is to use generator expressions that expand depending on the current CONFIG being processed.

Side note: Anywhere else CMAKE_BUILD_TYPE is checked should probably be fixed too.
Note that the CMakeLists.txt forces it into Release mode. However, I came across this error when importing the prebuilt Config into another project, where CMAKE_BUILD_TYPE was not set.

> 3>CMake Error at pre_built/pytorch-1.0.1/share/cmake/Caffe2/public/cuda.cmake:380 (message):
> 3>  Unknown cmake build type:

Proper support for configurations would mean we can build Debug and Release at the same time, and as you can see, it is less CMake code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18548

Differential Revision: D14730790

Pulled By: ezyang

fbshipit-source-id: 70ae16832870d742c577c34a50ec7564c3da0afb
2019-04-02 13:19:07 -07:00
36237c4893 Fix flake8 issues in gradgrad test
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18727

Differential Revision: D14724887

Pulled By: ifedan

fbshipit-source-id: 8c1db6460303e746e4aea0142302b8d61277c067
2019-04-02 12:45:18 -07:00
f095c34b73 Register operators by passing arguments to RegisterOperators constructor (#18577)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18577

This is also part of the legacy API and we need to support it if we want to replace it.

Reviewed By: dzhulgakov

Differential Revision: D14671432

fbshipit-source-id: 007abf4ab816647a509fc08e35d79b6c1aa55b03
2019-04-02 12:33:33 -07:00
58f5954252 Allow registering an operator schema without a kernel (#18551)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18551

This is helpful for defining a set of operators as an interface but not adding concrete kernels just yet.
The registration logic will ensure that any other libraries that add kernels for these schemas exactly match the schema defined here.

Reviewed By: dzhulgakov

Differential Revision: D14660208

fbshipit-source-id: 7adb5a4876cff5a0ad21d92d8c450cb889f00cc3
2019-04-02 12:33:30 -07:00
7a37e066e6 Improve compiler error messages of the op registration API (#18550)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18550

When the operator registration API is used wrongly, in most cases we should now get a nice compiler error
instead of weird template error messages.

This is done by making the enable_if conditions more broad so they also match error cases,
but then having static_asserts against these error cases inside the function.
Before this change, since the function didn't match, the error message said something like "no function found to match your call";
now it shows the error message specified in the static_asserts.

Reviewed By: dzhulgakov

Differential Revision: D14659178

fbshipit-source-id: 7ca4fb72d9051eadf0a7e2717b962bf1213a52b2
2019-04-02 12:33:27 -07:00
ae1d13a06f Improve and test error messages for signature mismatches (#18547)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18547

- Argument indices in the error messages are 1-indexed, not 0-indexed.
- Add test cases checking that a mismatching signature actually shows the correct error messages

Reviewed By: dzhulgakov

Differential Revision: D14656695

fbshipit-source-id: 55e45634baa3117e18b8687ea6b2a2f83715bdf6
2019-04-02 12:33:24 -07:00
bb8a0d717c Enable gmock and fix system gtest issue (#18706)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18706

- Enable gmock
- Fix issue where the gtest source files in third_party would include system gtest headers

Reviewed By: ezyang

Differential Revision: D14715302

fbshipit-source-id: 5335390913e651bda85c69d7ea9b5c1bce58f172
2019-04-02 12:33:22 -07:00
01c03caacc Emergency workaround for apt-get failure. (#18733)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18733
ghimport-source-id: b56766fb4b1084d8a7947cf622275d44e325141b

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18733 Emergency workaround for apt-get failure.**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Reviewed By: dreiss

Differential Revision: D14725779

fbshipit-source-id: 6855347853a3f13461ca267ed563e2db5815166e
2019-04-02 10:49:21 -07:00
0b6ed83f33 Fix clang-tidy errors in torch/csrc/distributed
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18709

Differential Revision: D14725936

Pulled By: pietern

fbshipit-source-id: 307bc446d53da5d0e04d730bb51b7fb29212ace3
2019-04-02 10:32:37 -07:00
385a755b68 Undefined behavior with memset of std::string to 0 (#18703)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18703

`zeroPtr` is sometimes a `std::string` tensor, so a `memset` to 0 is undefined behavior.

This might be accidentally safe with `std::string` implementations that use SSO (Small String Optimization), but will crash otherwise.

Reviewed By: zheng-xq

Differential Revision: D14714458

fbshipit-source-id: 012a18464e6514d38ff791509b88ddc3fc55b2b1
2019-04-02 10:10:11 -07:00
a799751e33 Revert D14717015: [pytorch][PR] fix nccl compilation to make sure it compiles for architectures that pytorch compiles for
Differential Revision:
D14717015

Original commit changeset: 4aac036f57e5

fbshipit-source-id: c820b8dfb27564271e6b80e133fe655658a7c25c
2019-04-02 09:39:03 -07:00
1f5a46ab05 Automatic update of fbcode/onnx to f0d7df2c643c4e37f1fd7735ef02c972c4d19fb5 (#18695)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18695

Previous import was fb1a80692c1ab0bd27b1072f2e7bffacba336777

Included changes:
- **[f0d7df2c](https://github.com/onnx/onnx/commit/f0d7df2c)**: fix testcase names of maxpool_2d_ceil and averagepool_2d_ceil (#1896) <karljang>

Reviewed By: zrphercule

Differential Revision: D14709993

fbshipit-source-id: 7fe2145a481ea2c1b6d85ba1c85c662200a53241
2019-04-02 09:16:48 -07:00
c484cf43a0 Adding pin_memory kwarg to zeros, ones, empty, ... tensor constructors. (#18455)
Summary:
Make it possible to construct a pinned memory tensor without creating a storage first and without calling the pin_memory() function. It is also faster, as the copy operation is unnecessary.

Supported functions:
```python
torch.rand_like(t, pin_memory=True)
torch.randn_like(t, pin_memory=True)
torch.empty_like(t, pin_memory=True)
torch.full_like(t, 4, pin_memory=True)
torch.zeros_like(t, pin_memory=True)
torch.ones_like(t, pin_memory=True)
torch.tensor([10,11], pin_memory=True)
torch.randn(3, 5, pin_memory=True)
torch.rand(3, pin_memory=True)
torch.zeros(3, pin_memory=True)
torch.randperm(3, pin_memory=True)
torch.empty(6, pin_memory=True)
torch.ones(6, pin_memory=True)
torch.eye(6, pin_memory=True)
torch.arange(3, 5, pin_memory=True)
```

Part of the bigger `Remove Storage` plan.
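
A small usage sketch (assumes a CUDA-enabled build, since pinned memory comes from the CUDA host allocator):

```python
import torch

t = torch.empty(1024, pin_memory=True)  # pinned at construction, no extra copy
assert t.is_pinned()

t2 = torch.empty(1024).pin_memory()     # old route: allocate, then copy into pinned memory
```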
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18455

Reviewed By: ezyang

Differential Revision: D14672084

Pulled By: VitalyFedyunin

fbshipit-source-id: 9d0997ec00f59500ee018f8b851934d334012124
2019-04-02 08:48:19 -07:00
aed7c9bc96 Improve Backend comment. (#18567)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18567
ghimport-source-id: 1e50e611a3afcfae86828b7afe06c3fdc6a7bef7

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18567 Improve Backend comment.**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Reviewed By: dzhulgakov

Differential Revision: D14666189

fbshipit-source-id: 64a41c4a998b1a59ff780d1ae06fa16e5ef3c7c4
2019-04-02 08:06:48 -07:00
baac5489a8 Expose alias multinomial methods to ATen (#17904)
Summary:
This PR exposes the multinomialAliasSetup and multinomialAliasDraw methods.

cc: neerajprad
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17904

Differential Revision: D14700205

Pulled By: ezyang

fbshipit-source-id: 16462fb1f1ef1d560fd586632ea356b23e966ee3
2019-04-02 07:56:41 -07:00
5ade96fc84 Update cpp_extension.py (#18638)
Summary:
Hi. It seems that when building C++ extensions with CUDA for Windows, the `extra_cuda_cflags` options are not properly forwarded to `nvcc`.

Use of extra CUDA options is necessary to build, for instance, InplaceABN (https://github.com/mapillary/inplace_abn), which requires the `--expt-extended-lambda` option.

This PR adds one line that correctly appends `extra_cuda_cflags`.
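
A sketch of the use case (the file names here are hypothetical):

```python
from torch.utils.cpp_extension import load

ext = load(
    name="my_ext",
    sources=["my_ext.cpp", "my_ext_kernel.cu"],
    extra_cuda_cflags=["--expt-extended-lambda"],  # now reaches nvcc on Windows too
)
```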
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18638

Differential Revision: D14704270

Pulled By: ezyang

fbshipit-source-id: e1e330d193d9afd5707a5437a74c0499460d2b90
2019-04-02 07:56:38 -07:00
fba89b2ae1 fix typo
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18653

Differential Revision: D14713920

Pulled By: ezyang

fbshipit-source-id: 170295a162dd23916c1dcc9330918d33277cc9ed
2019-04-02 07:51:30 -07:00
d5bf6ddc29 Kill LegacyBridge functions that don't do multiple dispatch. (#18696)
Summary:
At some point, we needed these functions to deal with autograd dispatching to the sparse or TH version of a backward. But we rewrote all backward definitions in terms of native functions, so this is no longer necessary.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18696

Differential Revision: D14710834

Pulled By: gchanan

fbshipit-source-id: b22568c58eefc79d672555bd8832398ccd965cb7
2019-04-02 07:34:55 -07:00
af84371ba8 Updating submodules
Reviewed By: zpao

fbshipit-source-id: da3cd711bb81b07c6c284426ffc5e10a969b0d2b
2019-04-02 06:50:53 -07:00
f084c129db add Int8FCRelu (#18673)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18673

Add a fused FC + Relu

Reviewed By: csummersea

Differential Revision: D14667055

fbshipit-source-id: d88fefba008fc0ca450291532d2b320694c6b785
2019-04-01 23:50:30 -07:00
8e873ce273 Fix uninitialized value in pickler (#18678)
Summary:
Fixes #18671
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18678

Differential Revision: D14708969

Pulled By: driazati

fbshipit-source-id: d372c6e3a2a3d3fc48d8afc1fa6807f2ce0e5c6e
2019-04-01 17:34:36 -07:00
2e029db2f9 fixes multiprocessing serialization for integer nn.Parameter (#18639)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/17345
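
A minimal sketch of the previously failing case (illustrative, not from this PR):

```python
import torch
import torch.multiprocessing as mp
from torch import nn

def consume(p):
    print(p.dtype, p.shape)

if __name__ == "__main__":
    # integer Parameters cannot require grad, and previously also broke
    # multiprocessing serialization
    p = nn.Parameter(torch.ones(3, dtype=torch.int64), requires_grad=False)
    proc = mp.Process(target=consume, args=(p,))
    proc.start()
    proc.join()
```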
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18639

Differential Revision: D14711565

Pulled By: soumith

fbshipit-source-id: 0063ed138a215b95d6571dcd68b18569714abe19
2019-04-01 17:15:42 -07:00
fc6296d777 fix nccl compilation to make sure it compiles for architectures that pytorch compiles for (#18704)
Summary:
cc: t-vi gchanan zou3519

This fixes https://github.com/pytorch/pytorch/issues/18359
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18704

Differential Revision: D14717015

Pulled By: soumith

fbshipit-source-id: 4aac036f57e564b05d759662e8ad7a80170901c0
2019-04-01 17:10:42 -07:00
1b25fdbcd0 More type stubs (#18511)
Summary:
Added stubs for:

* The `device` module
* The `cuda` module
* Parts of the `optim` module
* Began adding stubs for the `autograd` module. I'll annotate more later but `no_grad` and friends are probably the most used exports from it so it seemed like a good place to start.

This would close #16996, although comments on that issue reference other missing stubs so maybe it's worth keeping open as an umbrella issue.

The big remaining missing package is `nn`.

Also added a `py.typed` file so mypy will pick up on the type stubs. That closes #17639.
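
A snippet of the kind mypy can now check against the shipped stubs (illustrative):

```python
import torch

with torch.no_grad():                    # covered by the autograd stubs
    x: torch.Tensor = torch.randn(3)

dev: torch.device = torch.device("cpu")  # covered by the device stubs
if torch.cuda.is_available():            # covered by the cuda stubs
    dev = torch.device("cuda", 0)
```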
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18511

Differential Revision: D14715053

Pulled By: ezyang

fbshipit-source-id: 9e4882ac997063650e6ce47604b3eaf1232c61c9
2019-04-01 16:03:58 -07:00
aa23b8c664 NCCL build fix WITH_DISTRIBUTED=1.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18691

Reviewed By: ezyang

Differential Revision: D14706205

Pulled By: gchanan

fbshipit-source-id: 802f19bfd7df3703c0dbce03036e2f2e32eb3efb
2019-04-01 15:58:54 -07:00
16f07d7dac caffe2 - set up correct inheritance structure for remaining operator test classes (#18622)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18622

Set up correct inheritance structure for remaining operator test classes

Reviewed By: ezyang

Differential Revision: D14685941

fbshipit-source-id: a6b1b3be325935b7fec7515be13a4994b3016bf0
2019-04-01 15:53:22 -07:00
20b63aa977 Peephole Optimize Shape Ops (#18549)
Summary:
Peephole optimize ops that just require Dimensioned Tensor Type, which is what we specialize graphs on.
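
An illustrative sketch (graph_for is an internal API, so the exact output may vary):

```python
import torch

@torch.jit.script
def f(x):
    return x.dim()

# on a graph specialized to dimensioned tensor types, the peephole pass
# can fold shape queries like dim() into constants
print(f.graph_for(torch.randn(2, 3)))
```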
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18549

Differential Revision: D14690827

Pulled By: eellison

fbshipit-source-id: 9d7439eb584f0a5b877f5aa53cf80150f00e7e5f
2019-04-01 15:39:43 -07:00
4a0f842d42 Deprecated lambda based API (#18542)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18542

This adds the deprecated API for defining kernels as lambdas. The new API for defining kernels as lambdas was introduced in D14653005.

Reviewed By: dzhulgakov

Differential Revision: D14653551

fbshipit-source-id: 99900f1436716c69e52c83b68333b642ec2c8558
2019-04-01 14:58:35 -07:00
723ce02a55 deprecated function based API (#18444)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18444

This adds the deprecated function-based API to c10::RegisterOperators().
This is the API currently exposed under jit::RegisterOperators(), and we need to support it for backwards compatibility.

Reviewed By: dzhulgakov

Differential Revision: D14514218

fbshipit-source-id: c77676851cfd431d66f18fd8038cf153a3a7d7cc
2019-04-01 14:58:32 -07:00
246f5c412e Revert "Tensor construction codemod(raw_mutable_data) (#16373)" (#18680)
Summary:
This reverts commit d73c830e236f5b980e5c91914b818d150b60278c.

We have observed a significant perf drop when training ResNext101 with multiple AMD GPUs:

Before:
https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-clang7-rocmdeb-ubuntu16.04-bench/1636/console
2 GPUs ResNext training got 150~160 imgs/sec
4 GPUs ResNext training got 270~280 imgs/sec

After:
https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-clang7-rocmdeb-ubuntu16.04-bench/1637/console
Both 2 and 4 GPUs ResNext training drop to 110~120 imgs/sec

Similar perf drops are seen on ResNet50 training jobs as well.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18680

Differential Revision: D14702941

Pulled By: bddppq

fbshipit-source-id: 828141805afc23f25c08d4a2eb6d4b99f817c128
2019-04-01 14:39:13 -07:00
bdfdf6c2b9 C++ handler for gradient reduction (#18251)
Summary:
This commit adds the `c10d::Reducer` class that hooks into autograd
and performs gradient bucketing and reduction. These are the core
parts of `nn.parallel.DistributedDataParallel` that up to now were
only usable for CUDA models.

This should enable the following:

* Distributed data parallelism for models defined using the C++ frontend.
* Allow overlap of gradient computation and reduction for non-CUDA models.
* Enable distributed data parallelism for models with some unused parameters.

This does not include any logic for computing bucket assignment, which
can be done separately; either by observing autograd execution order
(this is what Apex does), or by assigning buckets based on some
maximum byte size, or both.

Also see #17757 and #13273.
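
A sketch of the CPU use case this unblocks (assumes the Python frontend is wired up to the new reducer, which lands separately):

```python
import torch
import torch.distributed as dist
from torch import nn

dist.init_process_group("gloo", init_method="tcp://127.0.0.1:29500",
                        rank=0, world_size=1)
model = nn.Linear(10, 10)                         # plain CPU model
ddp = nn.parallel.DistributedDataParallel(model)  # no device_ids needed
ddp(torch.randn(4, 10)).sum().backward()          # grads bucketed and allreduced
dist.destroy_process_group()
```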
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18251

Reviewed By: mrshenli

Differential Revision: D14571899

Pulled By: pietern

fbshipit-source-id: 20f95eefd288dfe8cfffe0a28ca22fa7c9c3cd4c
2019-04-01 14:30:02 -07:00
a0285dd0f4 Updating submodules
Reviewed By: zpao

fbshipit-source-id: 735fc388bff7066e8f46526266a73bf35e121442
2019-04-01 13:59:58 -07:00
89e9b1cf8e add ConvRelu schema (#18693)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18693

As title

Reviewed By: protonu

Differential Revision: D14662880

fbshipit-source-id: 3664faa660a04e1f528a413d2a1700b872c3c684
2019-04-01 13:09:07 -07:00
90a5c56988 offload scripts from win-test.sh
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18601

Differential Revision: D14711856

Pulled By: kostmo

fbshipit-source-id: 75fe620541fe2903f69a53dbd1b6d51a0d718113
2019-04-01 13:04:30 -07:00
929258a680 Some fixes for the build script on Windows (#18681)
Summary:
Fixes https://discuss.pytorch.org/t/pytorch-build-from-source-on-windows/40288/13?u=peterjc123.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18681

Differential Revision: D14711039

Pulled By: soumith

fbshipit-source-id: f7e1a94b163064c055670b2925cd4502e7773599
2019-04-01 12:42:51 -07:00
d6c269c33e Fix for double backwards tests (#18190)
Summary:
If none of the outputs requires grad, we don't actually check gradgrad; instead, we check that their numerical gradients are 0.
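
For reference, first- and second-order checks look like this (illustrative):

```python
import torch
from torch.autograd import gradcheck, gradgradcheck

x = torch.randn(3, dtype=torch.double, requires_grad=True)
assert gradcheck(lambda t: (t * t).sum(), (x,))      # first-order check
assert gradgradcheck(lambda t: (t * t).sum(), (x,))  # second-order (gradgrad) check
```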
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18190

Differential Revision: D14563388

Pulled By: ifedan

fbshipit-source-id: a4eb94c9eb60f14dbe6986cd8cef1fe78a7bc839
2019-04-01 12:33:30 -07:00
be01c90797 Add string index/slice operations (#18247)
Summary:
Adds support for string indexing (`"a"[0]`) and slicing (`"abc"[1:3]`)
to script.
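
For example:

```python
import torch

@torch.jit.script
def pick(s: str) -> str:
    return s[0] + s[1:3]  # indexing and slicing now compile in script

print(pick("abcdef"))  # "a" + "bc" -> "abc"
```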
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18247

Differential Revision: D14574486

Pulled By: driazati

fbshipit-source-id: 4b42aa0881e5398ea7f112be46c0335e6e19dced
2019-04-01 12:11:35 -07:00
af9335436d Re-land Parsing file check (#18570)
Summary:
The last time I tried to land it there was a merge race with the docs coverage test lol. Re-landing with the fix.

Re-land of https://github.com/pytorch/pytorch/pull/18304
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18570

Reviewed By: driazati

Differential Revision: D14707285

Pulled By: eellison

fbshipit-source-id: 3a0265928aa8cad78961723d8bf0fbf871fdb71d
2019-04-01 11:56:32 -07:00
3749d65a7e Create Node2Vec ModuleKeeper
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18504

Reviewed By: sunnieshang

Differential Revision: D14632091

fbshipit-source-id: d4544866552dc6bcbc7515be9e88cb11e7622a44
2019-04-01 10:36:23 -07:00
822c8ee143 use acc16 only when n>128 and k>128 in Skylake (#18672)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18672

On Skylake, when n < 128 or k < 128, acc16 is slower.

Reviewed By: jianyuh

Differential Revision: D14700576

fbshipit-source-id: 80ca9f1af4626637eed9c5ca49f95ae744811189
2019-04-01 08:52:28 -07:00