Commit Graph

18149 Commits

Author SHA1 Message Date
6af2482612 Leave it as an option for whether to colorize output during build (#20771)
Summary:
Currently PyTorch forces color output due to #20662, but users should be given an option to turn it off, because output redirected to a file gets garbled with escape codes if color output is forced.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20771

Differential Revision: D15495677

Pulled By: ezyang

fbshipit-source-id: 9d89bbed40d0b67368554305394763a54c5ff6f5
2019-05-24 09:22:52 -07:00
ec45baf4dd tensor_illustration with correct numbers and better fonts for README file (#20751)
Summary:
Fixes the README tensor image for issue #20641: the numbers are corrected and the symbols are made more readable.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20751

Differential Revision: D15495706

Pulled By: ezyang

fbshipit-source-id: b6013574d16253ec681fc57143efe3d53952fbe9
2019-05-24 09:18:18 -07:00
ef1fdc27a3 Raise TypeError when the argument to isinf and isfinite is not a tensor (#20817)
Summary:
Currently, when the argument to isinf and isfinite is not a tensor, a ValueError is raised. This should be a TypeError, because the error is a type mismatch.

In the error message, "str(tensor)" is replaced by "repr(tensor)" because, when an error occurs, a printable representation of the object is likely more useful than the "informal" string version of the object.
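For illustration, a minimal sketch of the behavior change (the exact error message text is illustrative):
```python
import torch

# Tensor arguments work as before.
torch.isfinite(torch.tensor([1.0, float("inf")]))  # elementwise finiteness check

# A non-tensor argument now raises TypeError (previously ValueError).
try:
    torch.isinf("not a tensor")
except TypeError as e:
    print(e)
```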
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20817

Differential Revision: D15495624

Pulled By: ezyang

fbshipit-source-id: 514198dcd723a7031818e50a87e187b22d51af73
2019-05-24 09:18:15 -07:00
87040af498 Fix documentation for attention mask shape (#20850)
Summary:
Attention mask should be of shape `(L, S)` since it is added to `attn_output_weights`.
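To make the documented shape concrete, a small sketch with `nn.MultiheadAttention` (dimensions chosen arbitrarily):
```python
import torch
import torch.nn as nn

L, S, N, E = 4, 6, 2, 8                # target len, source len, batch, embed dim
mha = nn.MultiheadAttention(embed_dim=E, num_heads=2)
q = torch.randn(L, N, E)               # query: (L, N, E)
k = v = torch.randn(S, N, E)           # key/value: (S, N, E)
attn_mask = torch.zeros(L, S)          # added to attn_output_weights -> (L, S)
out, weights = mha(q, k, v, attn_mask=attn_mask)
print(weights.shape)                   # torch.Size([2, 4, 6]), i.e. (N, L, S)
```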
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20850

Differential Revision: D15495587

Pulled By: ezyang

fbshipit-source-id: 61d6801da5291df960daab273e874df28aedbf6e
2019-05-24 09:10:11 -07:00
a5c90aaf47 Use "length of the RNN input" instead of "length of the RNN"
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20873

Differential Revision: D15495570

Pulled By: ezyang

fbshipit-source-id: e3b4cd67ccf97d0053ac053c3bcb74415b928c0a
2019-05-24 09:03:50 -07:00
3e4f213e82 Instructions for how to update pytorch-ci-hud when updating binary builds (#20758)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20758
ghimport-source-id: ffb4c97c42c6efbb16ea5d93ea8af1bdf71cb1e4

Differential Revision: D15435639

Pulled By: ezyang

fbshipit-source-id: a12bde8b0b11bbe0d0280b6b3994d9c65dc4f5cc
2019-05-24 07:20:06 -07:00
c3d05e86cc Resend "Split ATen/Parallel into interface and backend" (#20825)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20825
ghimport-source-id: 0371fbd37cb37635647d473d5ac9f2859e787061

Differential Revision: D15458073

Pulled By: ilia-cher

fbshipit-source-id: cd27d0da1691f6be1183cd152348ac0d93a53996
2019-05-24 02:03:06 -07:00
6b74856747 Fix init_thread calls in thread pool initialization (#20848)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20848
ghimport-source-id: e542858a198252838c1f3100dbfbe90fd3960f07

Differential Revision: D15466918

Pulled By: ilia-cher

fbshipit-source-id: e75d38f51edd5b508c4ca28a292e4141e90f209f
2019-05-24 01:14:31 -07:00
1bb728fe14 Change the quantizer to match the behavior of the FBGEMM implementation (#20892)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20892

FBGEMM uses 64-bit values, so our implementation needs to change to match.

Reviewed By: jerryzh168

Differential Revision: D15487664

fbshipit-source-id: 29cba26093c6f9aeafce14982c1ae12149e63562
2019-05-24 00:46:08 -07:00
fc941d3bca Catchall kernels instead of fallback kernels (#20773)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20773

This removes the feature to register fallback kernels that are called when no other kernel matches.
Instead, we introduce the concept of catchall kernels that are always called independent of inputs.
If you only have a fallback/catchall kernel and no kernels with concrete dispatch keys, then both concepts behave in the same way.
The difference is that we now disallow operators from having both a catch-all kernel and kernels with concrete dispatch keys.
This was possible before, when they were fallback kernels.

The reason for this change is that we anticipate needing a method_missing feature in backends, i.e. a backend-wide fallback to call when the backend doesn't specify a kernel for an operator.
We are not yet clear on the precedence between this backend-wide fallback and an operator-level fallback, so we disallow operator-level fallbacks for now; that leaves us free to choose later without breaking backwards compatibility.
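A conceptual sketch of this rule in plain Python (this is not the real C++ dispatcher API; all names here are hypothetical):
```python
class Operator:
    def __init__(self):
        self.kernels = {}      # concrete dispatch key -> kernel
        self.catchall = None   # kernel called regardless of inputs

    def register_catchall(self, kernel):
        if self.kernels:
            raise RuntimeError("op already has kernels with concrete dispatch keys")
        self.catchall = kernel

    def register_kernel(self, key, kernel):
        if self.catchall is not None:
            raise RuntimeError("op already has a catch-all kernel")
        self.kernels[key] = kernel

    def call(self, key, *args):
        if self.catchall is not None:
            return self.catchall(*args)    # catch-all ignores the dispatch key
        return self.kernels[key](*args)    # no fallback: an unmatched key errors
```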

Reviewed By: dzhulgakov

Differential Revision: D15438977

fbshipit-source-id: cb3aa764a1659d909ee21a7bd8ec3d32438aafaa
2019-05-23 23:47:51 -07:00
c25e33789e Lightweight at-most-once logging for API usage (#20745)
Summary:
Resubmits #20698, which got messed up.

The idea is that when PyTorch is used in a custom build environment (e.g. Facebook), it's useful to track usage of various APIs centrally. This PR introduces a simple, very lightweight mechanism to do so: only the first invocation of a trigger point is logged. This is significantly more lightweight than #18235, so we can afford to put logging in e.g. TensorImpl.

Also adds an initial list of trigger points. Trigger points are added in such a way that no static initialization triggers them, i.e. just linking with libtorch.so will not cause any logging. Further suggestions of what to log are welcome.
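A minimal sketch of the at-most-once idea (the helper name and logging backend are hypothetical):
```python
import threading

_seen = set()
_lock = threading.Lock()

def log_api_usage_once(event: str) -> None:
    with _lock:
        if event in _seen:
            return                # every later invocation is a no-op
        _seen.add(event)
    print(f"API usage: {event}")  # stand-in for the central logging backend

log_api_usage_once("torch.tensor")  # logged
log_api_usage_once("torch.tensor")  # skipped
```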
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20745

Differential Revision: D15429196

Pulled By: dzhulgakov

fbshipit-source-id: a5e41a709a65b7ebccc6b95f93854e583cf20aca
2019-05-23 23:17:59 -07:00
8cde4c4d22 Remove Variable::Impl and DifferentiableViewImpl (#17072)
Summary:
As part of the Variable/Tensor merge work: https://github.com/pytorch/pytorch/issues/13638, we make the following changes in this PR:
1. Remove the `Variable::Impl` class and the `DifferentiableViewImpl` class
2. Change all `Variable.data()` call sites to either use `Variable` directly, or use `Variable.tensor_data()`
3. Remove the `Variable.data()` API
4. Add `Variable.variable_data()` that matches `tensor.data` in the Python API; it creates a new `Variable` that shares the same storage and tensor metadata with the original `Variable`, but with a completely new autograd history.

After this PR, Variable doesn't wrap a Tensor internally anymore, and both Variable and Tensor use the same TensorImpl class as their `impl_`. The only difference is that Variable always has AutogradMeta in its TensorImpl, but Tensor doesn't.

**Note that this PR is BC-breaking in the following use cases:**

**Use Case 1:**
Previously, `x.data = y` works even if `x` and `y` are of different TensorImpl type (e.g. `x` is a CPU dense tensor whose impl is of type TensorImpl, while `y` is a CPU sparse tensor whose impl is of type SparseTensorImpl). However, after this PR, `x.data = y` doesn't work anymore if `x` and `y` are of different TensorImpl type, because the underlying implementation `variable.set_data(tensor)` no longer works if `variable` and `tensor` have different TensorImpl type.
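
For illustration, a minimal sketch of Use Case 1 (shapes arbitrary):
```python
import torch

x = torch.zeros(2, 2)              # dense CPU tensor, impl type TensorImpl
y = torch.zeros(2, 2).to_sparse()  # sparse CPU tensor, impl type SparseTensorImpl
x.data = y  # worked before this PR; errors afterwards because impl types differ
```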

**Use Case 2:**
If a tensor `x`'s `grad` is sparse, accumulating dense gradients to `x` will change the tensor that `x.grad` is pointing to. This is better illustrated with the following example:
```python
params = torch.tensor([1.5, 1.5]).requires_grad_()
with torch.no_grad():
    # Change gradient to a sparse tensor
    params.grad = torch.sparse_coo_tensor(torch.tensor([[1, 1]]).long(), torch.tensor([1., 1.]))

grad_saved = params.grad
params.backward(torch.tensor([1.5, 1.5]))
assert id(grad_saved) == id(params.grad)  # This will fail after this PR
```
The assertion in the last line will fail after this PR, because adding dense gradients to sparse gradients will change the `params.grad` tensor reference.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17072

Differential Revision: D14075257

Pulled By: yf225

fbshipit-source-id: 0e681df641270dea586042dd26db59f2e76b5957
2019-05-23 21:09:04 -07:00
f93e0619f3 Adding ShufflenetV2 to caffe2's benchmark suite. (#20180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20180

Adding ShufflenetV2 (Ma et al., 2018) to caffe2's benchmark
suite.

To run, use: `buck run mode/opt caffe2/caffe2/python/examples:imagenet_trainer -- --train_data null --batch_size 128 --epoch_size 3200 --num_epochs 2 --num_gpus 2 --model shufflenet`

Reviewed By: bddppq, xw285cornell

Differential Revision: D15094282

fbshipit-source-id: 0e1ce9c5975868e917b0f179e2c5b15647a76b4e
2019-05-23 20:40:17 -07:00
3aa7ee6fe6 Updating submodules
Reviewed By: yns88

fbshipit-source-id: 17161d7e1e742b402715f8ed006e5b3abfa78561
2019-05-23 20:40:14 -07:00
cfb6c4a8ee Updating submodules
Reviewed By: yns88

fbshipit-source-id: 58b230ad12620032f391733c7f9c1e44aeaa390b
2019-05-23 19:49:06 -07:00
62af37aa88 dropout symbolic_script should respect the training flag (#20760)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20760
ghimport-source-id: eb667c3549a03a2fc01ffa0a2d3bc7e3a29b78e0

Reviewed By: jamesr66a

Differential Revision: D15486511

Pulled By: suo

fbshipit-source-id: 56ae930a01b0f6f4305a2a745135d4529b4a1ca0
2019-05-23 18:17:17 -07:00
bd53c8eb93 Move torchvision install out of onnx test script
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20890

Differential Revision: D15486657

Pulled By: bddppq

fbshipit-source-id: 3acd7386d1f070cad9bd43d6e74244b706c0dc16
2019-05-23 18:02:48 -07:00
d5b7138a2c Dict is a reference type (#20669)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20669

Before, Dict was a value type, i.e. copying it did a deep copy.
Unfortunately, this doesn't work well with storing and passing Dicts around in IValues because IValues are reference types.
This diff changes Dict to be a reference type.
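As an analogy for the new semantics, plain Python dicts behave the same way:
```python
d1 = {"a": 1}
d2 = d1                          # copies the reference, not the contents
d2["b"] = 2
assert d1 == {"a": 1, "b": 2}    # both names see the same underlying dict
```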

Reviewed By: dzhulgakov

Differential Revision: D15404911

fbshipit-source-id: dc990d3eb7cae044b74dd0253f8b704dde6a6c86
2019-05-23 15:24:31 -07:00
93d5503f34 Bug fix for #19374 - fix upsample export
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20116

Differential Revision: D15256899

Pulled By: houseroad

fbshipit-source-id: cf0dfd679d528fbb77f483e23071f4a96fb27091
2019-05-23 14:48:23 -07:00
48bf7b9be8 Fix oscillation in coalesceInsertedDataDependencies (#20833)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20833

As titled. The algorithm is still "horrendously inefficient", but since we are sunsetting Nomnigraph, I just did the minimal fix here.

Reviewed By: tracelogfb

Differential Revision: D15463880

fbshipit-source-id: 413a1280a92c1923ba49031177816a2d5f888575
2019-05-23 14:04:20 -07:00
2d96876d88 Use conda torchvision version (#20865)
Summary:
This tries to fix the following error on current master:
```
May 23 16:18:47 Traceback (most recent call last):
May 23 16:18:47   File "main.py", line 7, in <module>
May 23 16:18:47     from torchvision import datasets, transforms
May 23 16:18:47   File "/opt/conda/lib/python3.6/site-packages/torchvision/__init__.py", line 1, in <module>
May 23 16:18:47     from torchvision import models
May 23 16:18:47   File "/opt/conda/lib/python3.6/site-packages/torchvision/models/__init__.py", line 11, in <module>
May 23 16:18:47     from . import detection
May 23 16:18:47   File "/opt/conda/lib/python3.6/site-packages/torchvision/models/detection/__init__.py", line 1, in <module>
May 23 16:18:47     from .faster_rcnn import *
May 23 16:18:47   File "/opt/conda/lib/python3.6/site-packages/torchvision/models/detection/faster_rcnn.py", line 7, in <module>
May 23 16:18:47     from torchvision.ops import misc as misc_nn_ops
May 23 16:18:47   File "/opt/conda/lib/python3.6/site-packages/torchvision/ops/__init__.py", line 1, in <module>
May 23 16:18:47     from .boxes import nms, box_iou
May 23 16:18:47   File "/opt/conda/lib/python3.6/site-packages/torchvision/ops/boxes.py", line 2, in <module>
May 23 16:18:47     from torchvision import _C
May 23 16:18:47 ImportError: /opt/conda/lib/python3.6/site-packages/torchvision/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19NonVariableTypeMode10is_enabledEv
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20865

Differential Revision: D15481736

Pulled By: yf225

fbshipit-source-id: 67d4fd70652ccc709b44cb15392d6e44a8fe9235
2019-05-23 13:49:59 -07:00
b6d0f6c85a Move THCTensor_{random, clampedRandom, cappedRandom} to ATen (#20620)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20620
ghimport-source-id: 7c09c2462021e3fa5adef61570a575964ff16125

Differential Revision: D15454050

Pulled By: ezyang

fbshipit-source-id: 5b0421c56445baf19dbdbdd9680af128a5cdf443
2019-05-23 13:44:16 -07:00
48424a6c94 Avoid dynamic dispatch inside the omp loop in AdaptiveAvgPool2d (#20366)
Summary:
This PR changes the CPU implementation of `AdaptiveAveragePool2D` by:
- moving dispatch outside the OpenMP loop
- supporting fp16
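A conceptual sketch of the first change in plain Python (the real code is C++; the names here are hypothetical):
```python
def pool_slow(chunks, resolve_kernel):
    for c in chunks:
        resolve_kernel(c.dtype)(c)            # dtype dispatch on every iteration

def pool_fast(chunks, resolve_kernel):
    kernel = resolve_kernel(chunks[0].dtype)  # dispatch once, outside the loop
    for c in chunks:
        kernel(c)
```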
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20366

Differential Revision: D15456069

Pulled By: ezyang

fbshipit-source-id: 00fa2916f8b136af9f5c8b5db0eca4619f9f5bac
2019-05-23 13:29:29 -07:00
cf0268e51c Modify cos to cosh in Vec256 (#20797)
Summary:
Minor typo fix.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20797

Differential Revision: D15469308

Pulled By: ezyang

fbshipit-source-id: 3288ad69316e296e46d861737c5b09e0ea1e694b
2019-05-23 13:24:09 -07:00
70caa2efe2 Add mkldnn sigmoid operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20820

Reviewed By: dzhulgakov

Differential Revision: D15455866

fbshipit-source-id: 712b06dfbd441051dc284a1acdf94926df09bc1d
2019-05-23 12:51:57 -07:00
8dedb04c26 Enable torch.jit.trace for mkldnn modules
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20800

Differential Revision: D15447892

fbshipit-source-id: 78e76523c5412c020a2bc22d6998ff7b36356720
2019-05-23 12:51:54 -07:00
63585c3b81 Add support for save and load mkldnn modules
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20799

Reviewed By: wanchaol

Differential Revision: D15447891

fbshipit-source-id: e34de946c79282fb934a5c52ff1def41c7993c75
2019-05-23 12:51:50 -07:00
5f83c5d834 Fix build error with MSVC (#20853)
Summary:
Closes #20642

Possibly broken by #19816
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20853

Differential Revision: D15474620

Pulled By: jerryzh168

fbshipit-source-id: 99b52d92a93bac7cab52537f1ebdbd286d4b2cfe
2019-05-23 12:11:29 -07:00
31e2d20c5e Dictionarize check_inputs coming from trace
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20813

Differential Revision: D15466836

Pulled By: Krovatkin

fbshipit-source-id: ffdb418592b76dc67c65c59f4dc7303f08734f97
2019-05-23 11:17:55 -07:00
2c556a9489 fix the input/output type mismatch (#20829)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20829

As titled.

Reviewed By: jamesr66a

Differential Revision: D15461937

fbshipit-source-id: 02c7150c0e8d020030ae8898008f718c74850dca
2019-05-23 11:08:21 -07:00
9c57d8df42 Make LayerNorm.normalized_shape a tuple
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20832

Pulled By: driazati

Differential Revision: D15464693

fbshipit-source-id: 244f24d6917b17dde5e33ff852c716fb053b7ca5
2019-05-23 10:51:08 -07:00
99b3f5cd70 Fixes error with custom scalars, fixes #20579 (#20580)
Summary:
When adding custom scalars like this
```python
from torch.utils.tensorboard import SummaryWriter

with SummaryWriter() as writer:
    writer.add_custom_scalars({'Stuff': {
        'Losses': ['MultiLine', ['loss/(one|two)']],
        'Metrics': ['MultiLine', ['metric/(three|four)']],
    }})
```
This error is raised:
```
TypeError: Parameter to MergeFrom() must be instance of same class: expected tensorboard.SummaryMetadata.PluginData got list.
```

Removing the square brackets around `SummaryMetadata.PluginData(plugin_name='custom_scalars')` should be enough to fix it.
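A hedged sketch of the one-line change (surrounding code omitted; based on the error above, the field expects a single message):
```python
# before: plugin_data=[SummaryMetadata.PluginData(plugin_name='custom_scalars')]
# after:  plugin_data=SummaryMetadata.PluginData(plugin_name='custom_scalars')
# The proto field expects one PluginData message, so the list wrapper made
# protobuf's MergeFrom() receive a list instead of a message, hence the TypeError.
```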
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20580

Differential Revision: D15469700

Pulled By: orionr

fbshipit-source-id: 7ce58034bc2a74ab149fee6419319db68d8abafe
2019-05-23 10:17:36 -07:00
a16708a1ae Workaround python2.7 find_module limitation / explicitly close file (#20782)
Summary:
Fixes #20781 and #20757.
I don't know an easy way to add a test that runs against a package installed as an .egg, but I tested it locally with torchvision.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20782

Differential Revision: D15443600

Pulled By: ailzhang

fbshipit-source-id: 285eb0d9a44d6edb8e93618fa293f4feb431d2ae
2019-05-23 09:44:17 -07:00
ec57d1f18a Port dilated_max_pool2d() to ATen
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20691

Differential Revision: D15435960

Pulled By: ezyang

fbshipit-source-id: 548b7cc42e52ad2c641ec7d9cf78028d9411d02e
2019-05-23 09:04:04 -07:00
f039401bf2 Add back at::_copy_from for use by XLA (#20783)
Summary:
XLA needs a way to override CPUTensor.copy_(XLATensor), but we only
dispatch on the "self" argument. This inverts the dispatch order when
"src" is an unhandled type.

Note that things like XLATensor.copy_(CPUTensor) never enter this
implementation.
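
A toy sketch of the inverted dispatch in plain Python (the real code is C++; names and the dict-based "tensors" are hypothetical stand-ins):
```python
HANDLED = {"cpu"}  # devices this backend can read from directly

def _copy_from(src, dst):
    # stand-in for the backend-specific override, dispatched on `src`
    print(f"copying via {src['device']} backend")
    dst["data"] = src["data"]

def copy_(self, src):
    if src["device"] not in HANDLED:   # e.g. src is an XLA tensor
        return _copy_from(src, self)   # invert: dispatch on `src` instead
    self["data"] = src["data"]         # ordinary same-backend copy

cpu = {"device": "cpu", "data": [0.0]}
xla = {"device": "xla", "data": [1.0]}
copy_(cpu, xla)  # CPUTensor.copy_(XLATensor) -> routed through _copy_from
```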

cc dlibenzi
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20783

Differential Revision: D15443187

Pulled By: colesbury

fbshipit-source-id: 4ee93ba598ef0fed2a99c0683aae30cb50a1f99c
2019-05-23 08:47:20 -07:00
80aed36fb6 fix a couple of typos in README markdown (#20819)
Summary:
was reading the README on github and came across a couple of typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20819

Differential Revision: D15469603

Pulled By: nairbv

fbshipit-source-id: 0ed7868de2d4e6d82557a8c170783966f8a1afd7
2019-05-23 08:11:25 -07:00
8fc069fa17 add batch of string ops (#20826)
Summary:
First batch of https://github.com/pytorch/pytorch/issues/20769, handles `isupper`, `islower`, `isdigit`, `isspace`, `isalnum`, `isalpha`, `upper`, `lower`
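For illustration, a small sketch using two of the newly scriptable methods (the function itself is hypothetical):
```python
import torch

@torch.jit.script
def normalize_token(s: str) -> str:
    if s.isupper():
        return s.lower()  # `isupper` / `lower` are among the newly added ops
    return s

print(normalize_token("HELLO"))  # hello
```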
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20826

Differential Revision: D15466986

Pulled By: eellison

fbshipit-source-id: d1df65721da803dfa30e28fdd9b874405be6bc7d
2019-05-23 08:01:16 -07:00
90182a7332 Install torchvision from master
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20836

Differential Revision: D15464705

Pulled By: bddppq

fbshipit-source-id: abe2ac2de2bf4c8d07334e6b2565c738c40428ae
2019-05-23 02:16:57 -07:00
d35a587958 Remove cpu_half, cpu_bool, cuda_bool from native_functions.yaml (#20552)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20552
ghimport-source-id: 0ef4e85b40f3b927564257f44f72f671251acaf1

Differential Revision: D15362154

Pulled By: li-roy

fbshipit-source-id: b2477582389099c6696dca33f1371e8e136e32b6
2019-05-22 22:58:40 -07:00
41100d4027 Add PerChannelAffineQuantizer (#20764)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20764

As titled.
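
As background, a minimal sketch of per-channel affine quantization (the function and the 8-bit unsigned range are illustrative assumptions, not this diff's API):
```python
import torch

def quantize_per_channel(x, scales, zero_points, axis=0):
    shape = [1] * x.dim()
    shape[axis] = -1                   # broadcast per-channel params along `axis`
    s = scales.view(shape)
    z = zero_points.view(shape)
    q = torch.round(x / s) + z         # affine map: q = round(x / scale) + zp
    return torch.clamp(q, 0, 255).to(torch.uint8)

w = torch.randn(3, 4)                  # e.g. one scale/zero_point per row (axis=0)
q = quantize_per_channel(w, torch.tensor([0.1, 0.2, 0.05]),
                         torch.tensor([128.0, 128.0, 128.0]))
```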

Reviewed By: dskhudia

Differential Revision: D15367364

fbshipit-source-id: 1d3ebf356ceac73b0fa4493209839d1c66d4d5b3
2019-05-22 19:16:52 -07:00
a21cf76575 Revert D15459166: [pytorch][PR] add batch of string ops
Differential Revision:
D15459166

Original commit changeset: 0ed908022475

fbshipit-source-id: d0a04228605e3437a02961a525eed8f8b3b59c17
2019-05-22 19:07:50 -07:00
5952ca8d9f Remove duplicated _optimize_trace and use core (#20394)
Summary:
The duplicated `_optimize_trace` code in _pytorch_graph.py was used to bypass certain optimization steps that caused scopes to go missing.

It seems that most of the problematic steps have been fixed recently. Standard models implemented in torchvision were visually inspected before this commit. However, the `+=` in 50d54a82d1/torchvision/models/resnet.py (L63) causes f4d9bfaa4d/torch/onnx/utils.py (L159) to produce a bad result. It can be fixed by replacing it with `out = out + identity`. This also shows that `+=` has non-intuitive behavior.
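The non-intuitive part is that `+=` on a tensor mutates it in place rather than rebinding the name, which aliases can observe (a plain-PyTorch illustration):
```python
import torch

a = torch.ones(2); alias = a
a = a + torch.ones(2)   # out-of-place: `a` is rebound to a new tensor
print(alias)            # tensor([1., 1.]) -- unchanged

b = torch.ones(2); alias = b
b += torch.ones(2)      # in-place: mutates the tensor `alias` also refers to
print(alias)            # tensor([2., 2.])
```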

cc orionr ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20394

Reviewed By: NarineK

Differential Revision: D15452204

Pulled By: orionr

fbshipit-source-id: eaa4c13f16551c78dc6419f1e22eb2c560af4cc5
2019-05-22 18:34:20 -07:00
871c9dcb1d move batchnorm and layernorm fusion to decompose (#20337)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20337
ghimport-source-id: 2196f84f2ef384c1f25587b2fb4bd9dd2f63c2b4

Differential Revision: D15448596

Pulled By: wanchaol

fbshipit-source-id: b66e608f1b72471fc0775aaa4e09f9fa1070fc3c
2019-05-22 18:01:27 -07:00
cde611a66c Quantized Conv2d operator (#20772)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20772

Copy of D15178352

A conflicting commit that removed registering kernels using IntArrayRef landed at the same time as D15178352; hence, D15178352 was reverted. This version uses std::vector instead.

Reviewed By: zafartahirov

Differential Revision: D15437237

fbshipit-source-id: cd2f1caebcc720352b48ce25d716cb1ca49a5197
2019-05-22 17:53:24 -07:00
aebcd80ae4 add batch of string ops (#20826)
Summary:
First batch of https://github.com/pytorch/pytorch/issues/20769, handles `isupper`, `islower`, `isdigit`, `isspace`, `isalnum`, `isalpha`, `upper`, `lower`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20826

Differential Revision: D15459166

Pulled By: eellison

fbshipit-source-id: 0ed908022475e27011803cc4af7cf393a4312783
2019-05-22 17:33:04 -07:00
7aa3887f43 make wildcards alias only each other (#20670)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20670
ghimport-source-id: f5704c49fcb829e4668441f31fcf9305da22335c

Reviewed By: jamesr66a

Differential Revision: D15447567

Pulled By: suo

fbshipit-source-id: 391236806838de2524410e26946456441e562470
2019-05-22 16:50:09 -07:00
90910fc6cb Mark values entering containers as wildcards (#20556)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20556
ghimport-source-id: d7c62e38a2f6928f6f8d988c26a38ea8f8cff8b6

Reviewed By: jamesr66a

Differential Revision: D15447568

Pulled By: suo

fbshipit-source-id: 77ebc11b571b8517d3bad3ee1b3ee5ac037542b2
2019-05-22 16:50:06 -07:00
28be521e39 Fix bug in exporting node with multiple outputs by scripting
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20256

Differential Revision: D15422040

Pulled By: houseroad

fbshipit-source-id: 5de2a992d7d99a48905c39a1878eb0b3b68d6a3f
2019-05-22 16:29:36 -07:00
c2e3e79afc fix pow bug on overloads and clean up (#20824)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20824
ghimport-source-id: ceb1b64e2866ec8577800a8c378d8222a62cf199

Reviewed By: cpuhrsch

Differential Revision: D15458009

Pulled By: wanchaol

fbshipit-source-id: 51546d142d2c84e961d8b12ae85a2988a342da3b
2019-05-22 16:21:18 -07:00
98928f4d79 Allow both Variables and Tensors in c10 kernel interface (#20816)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20816

Previously, the c10 dispatcher expected ops to be called with Variables and unwrapped them to Tensors before calling into the kernel.
The kernel was expected to return Tensors that were re-wrapped into Variables before passing them on into the system.

However, that doesn't work with kernels that call other operators. One recent example was a kernel that returned the result of `torch::ones()` as output.
Now, with this diff, the c10 dispatcher still passes Tensors to the kernel and Variables back into the system, but ops may be called with either Tensors or Variables, and kernels are allowed to return either.

After https://github.com/pytorch/pytorch/pull/17072, we should be able to get rid of the whole wrapping/unwrapping logic.
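
A conceptual sketch of the relaxed calling convention in plain Python (the real code is C++; the toy classes are hypothetical stand-ins for the two wrapper levels):
```python
class Tensor:
    pass

class Variable(Tensor):
    def __init__(self, t):
        self.t = t  # the wrapped plain Tensor

def call_kernel(kernel, args):
    # Accept either type on the way in: unwrap Variables to plain Tensors.
    plain = [a.t if isinstance(a, Variable) else a for a in args]
    out = kernel(*plain)
    # Accept either type on the way out: hand a Variable back to the system.
    return out if isinstance(out, Variable) else Variable(out)
```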

Reviewed By: hl475

Differential Revision: D15453963

fbshipit-source-id: 7602b7f2bc43e8ceb8a8c0e97aafcc53d4c47b6c
2019-05-22 16:03:12 -07:00