Commit Graph

18103 Commits

90910fc6cb Mark values entering containers as wildcards (#20556)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20556
ghimport-source-id: d7c62e38a2f6928f6f8d988c26a38ea8f8cff8b6

Reviewed By: jamesr66a

Differential Revision: D15447568

Pulled By: suo

fbshipit-source-id: 77ebc11b571b8517d3bad3ee1b3ee5ac037542b2
2019-05-22 16:50:06 -07:00
28be521e39 Fix bug in exporting node with multiple outputs by scripting
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20256

Differential Revision: D15422040

Pulled By: houseroad

fbshipit-source-id: 5de2a992d7d99a48905c39a1878eb0b3b68d6a3f
2019-05-22 16:29:36 -07:00
c2e3e79afc fix pow bug on overloads and clean up (#20824)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20824
ghimport-source-id: ceb1b64e2866ec8577800a8c378d8222a62cf199

Reviewed By: cpuhrsch

Differential Revision: D15458009

Pulled By: wanchaol

fbshipit-source-id: 51546d142d2c84e961d8b12ae85a2988a342da3b
2019-05-22 16:21:18 -07:00
98928f4d79 Allow both Variables and Tensors in c10 kernel interface (#20816)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20816

Previously, the c10 dispatcher expected ops to be called with Variables and unwrapped them to Tensors before calling into the kernel.
The kernel was expected to return Tensors that were re-wrapped into Variables before passing them on into the system.

However, that doesn't work with kernels that call other operators. One recent example was a kernel that returned the result of `torch::ones()` as output.
Now, with this diff, the c10 dispatcher still passes Tensors to the kernel and Variables back into the system, but it allows ops to be called with either Tensors or Variables, and kernels are likewise allowed to return either.

After https://github.com/pytorch/pytorch/pull/17072 , we should be able to get rid of the whole wrapping/unwrapping logic.

Reviewed By: hl475

Differential Revision: D15453963

fbshipit-source-id: 7602b7f2bc43e8ceb8a8c0e97aafcc53d4c47b6c
2019-05-22 16:03:12 -07:00
9ea009fe8b Add as_quantized_tensor (#20740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20740

Provide a way to assemble a quantized Tensor from an int8 Tensor, a scale, and a zero point.

Differential Revision: D15232416

fbshipit-source-id: c3a3d9d7214b1dc569214c019440c2779fbd063b
2019-05-22 15:19:45 -07:00
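A rough usage sketch of the assembly described above; the Python-level name `torch.as_quantized_tensor` and its argument names are assumptions inferred from the title, not confirmed by this entry.

```python
import torch

# Hypothetical binding name and signature, inferred from the title above.
int_repr = torch.tensor([[0, 10], [20, 30]], dtype=torch.int8)
q = torch.as_quantized_tensor(int_repr, scale=0.5, zero_point=0)
# With zero_point=0, the dequantized values would be int_repr * scale.
```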
12bc81ae2a Change comparison ops result dtype to bool [Part1] (#20767)
Summary:
This is the first part of the planned changes to switch the comparison operations' result tensor dtype from Byte to Bool. You can see the whole list of changes (not cleaned up) [here](https://github.com/pytorch/pytorch/pull/19332). As the PR is too big for a single review, I'm breaking it into pieces.

**Changes in this PR:**
1. Enable these methods for bool tensors:
- maskedSelect
- maskedSelectBool
- bitand
- cbitand
- bitor
- cbitor
- bitxor
- cbitxor
- sign
- equal
- neg

2. Add a bool clause to the TH version of the sign method.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20767

Differential Revision: D15436446

Pulled By: izdeby

fbshipit-source-id: 8d2494b5f4873cd79c7f1a40d2cb045cadfad51a
2019-05-22 15:12:46 -07:00
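For context, a minimal Python sketch of the end state this series works toward (comparison results carrying `torch.bool` rather than `torch.uint8`), as seen in current PyTorch releases:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([3, 2, 1])

mask = a > b
print(mask)                    # tensor([False, False,  True])
print(mask.dtype)              # torch.bool (previously torch.uint8)
print(a.masked_select(mask))   # tensor([3]) -- masked_select accepts bool masks
```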
6ec3c12255 Update references to minimum CUDA and cuDNN version (#20718)
Summary:
I didn't update the Windows references because I wasn't sure if they apply to CUDA 9. peterjc123 what should the Windows section say?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20718

Differential Revision: D15459276

Pulled By: colesbury

fbshipit-source-id: 917e22f8ac75378d88c962c226b5a42b6799c79a
2019-05-22 14:54:54 -07:00
05543153dd CUDA implementation of fakequant (#20252)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20252

Add CUDA implementation for fakequant op for quantization aware training.

Reviewed By: zafartahirov

Differential Revision: D15243386

fbshipit-source-id: 37610ab046786ffc69aaec5235e5df8304c353d6
2019-05-22 14:46:39 -07:00
fdb923996d Revert D15445092: Some minor fix to unblock the Bert model quantization
Differential Revision:
D15445092

Original commit changeset: 22da41a56ecb

fbshipit-source-id: eca9a85900bf48fe6a9da5cfff61606a10f0c3de
2019-05-22 14:25:14 -07:00
cfc98ae714 fix add_histogram_raw (#20688)
Summary:
This is a porting of the fix from:
https://github.com/lanpa/tensorboardX/issues/421

cc orionr
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20688

Reviewed By: NarineK

Differential Revision: D15415093

Pulled By: orionr

fbshipit-source-id: d32a6298218fbc6fe315aa0f18b57e0c8ef92627
2019-05-22 14:06:21 -07:00
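A rough usage sketch of `add_histogram_raw`; the parameter names follow the tensorboardX API that this fix was ported from and are assumptions here, not confirmed by the entry.

```python
import numpy as np
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
values = np.random.normal(size=1000)
counts, edges = np.histogram(values, bins=10)

# bucket_limits holds the upper edge of each bucket, so it matches
# bucket_counts in length.
writer.add_histogram_raw(
    tag="activations",
    min=float(values.min()),
    max=float(values.max()),
    num=len(values),
    sum=float(values.sum()),
    sum_squares=float((values ** 2).sum()),
    bucket_limits=edges[1:].tolist(),
    bucket_counts=counts.tolist(),
    global_step=0,
)
writer.close()
```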
fd2aa93b37 Exposing LengthsSum/Mean/Max in pytorch (#20802)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20802

Need this for sequence model

Reviewed By: dzhulgakov

Differential Revision: D15448529

fbshipit-source-id: cd5abe3b689fc0e02feff10faf8cd61c99369f4f
2019-05-22 13:55:19 -07:00
8d7a025703 ONNX Export Scatter
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18543

Differential Revision: D14658639

Pulled By: houseroad

fbshipit-source-id: 5d7821b54d2fc93f71120155adf328897d13aff6
2019-05-22 13:31:54 -07:00
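A minimal export sketch exercising scatter; the module, shapes, and output file name are illustrative assumptions rather than the test case from the PR.

```python
import torch

class ScatterModule(torch.nn.Module):
    def forward(self, base, index, src):
        # scatter src into base along dim 1 at the given indices
        return base.scatter(1, index, src)

base = torch.zeros(2, 4)
index = torch.tensor([[0, 1], [2, 3]])
src = torch.ones(2, 2)

torch.onnx.export(ScatterModule(), (base, index, src), "scatter.onnx")
```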
fea4a56af3 Add ability to filter metric schema in LayerModelHelper (#20786)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20786

Add a method to LayerModelHelper to filter metrics_schema. A general model builder may add metric schema that is not needed in some situations. This change adds the ability to skip the unneeded ones.

Reviewed By: alex1o1o7cloud

Differential Revision: D15418140

fbshipit-source-id: 520f5dffd9938cf206cb1352e2953a4d4d2b6ab1
2019-05-22 12:26:20 -07:00
810816a1f9 Automatic update of fbcode/onnx to cc2333a3f929caca7223b98699237f19388dd585 (#20763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20763

Previous import was ead449a30d026a7a0a59e2ba0a42ca8e52ec2359

Included changes:
- **[cc2333a3](https://github.com/onnx/onnx/commit/cc2333a3)**: Version Conversion of Min, Max, Mean from opset 7 to 8 (#2032) <Ksenija Stanojevic>
- **[5d0975f4](https://github.com/onnx/onnx/commit/5d0975f4)**: Fix auto_pad shape inference bug (#2028) <stevenlix>
- **[819afd05](https://github.com/onnx/onnx/commit/819afd05)**: Version Conversion from opset 8 to 9 (#2007) <Ksenija Stanojevic>
- **[6c913669](https://github.com/onnx/onnx/commit/6c913669)**: fix macro ONNX_DISALLOW_COPY_AND_ASSIGN bug (#2017) <one-hello>

Reviewed By: BIT-silence

Differential Revision: D15425957

fbshipit-source-id: b799357930e8c9421e9bfcbfd97907e086862a6d
2019-05-22 11:37:01 -07:00
4e0d098ace Fix optimizer type hint (#20648)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/20548
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20648

Differential Revision: D15453935

Pulled By: ezyang

fbshipit-source-id: 8778e819c58fdc2620f123ec5b5fd568e23b7705
2019-05-22 11:27:40 -07:00
795a1a6ffa When detecting numpy, assign relevant variables outside the try block (#20739)
Summary:
When detecting the presence of NumPy using import, move numpy-related variable assignments outside the try block (i.e., to an else block) to improve readability.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20739

Differential Revision: D15453916

Pulled By: ezyang

fbshipit-source-id: d3c37f2b290846be3c6a1462251cbb3e95d493be
2019-05-22 11:27:36 -07:00
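A generic sketch of the try/except/else pattern described above (the variable names are illustrative, not the actual build-script code):

```python
try:
    import numpy
except ImportError:
    USE_NUMPY = False
    NUMPY_INCLUDE_DIR = None
else:
    # Assignments that only make sense after a successful import live in
    # the else block, so the try block holds nothing but the import itself.
    USE_NUMPY = True
    NUMPY_INCLUDE_DIR = numpy.get_include()
```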
fd95947e68 Revert D15248618: Split ATen/Parallel into interface and backend
Differential Revision:
D15248618

Original commit changeset: 060879266bc8

fbshipit-source-id: fc5cbb030b87613c9e15100118c3d4a064097c20
2019-05-22 09:55:51 -07:00
70ecddfd76 Some minor fix to unblock the Bert model quantization (#20787)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20787

Set requires_grad=False for bias: this will block the jit tracing.
The as_type fix: The input tensor shape and output tensor shape will be different, which will trigger the assertion failure at https://fburl.com/0m8xy7tc.

Reviewed By: jamesr66a

Differential Revision: D15445092

fbshipit-source-id: 22da41a56ecb9ac092585d0cc1ff0658fb9d631b
2019-05-21 23:13:08 -07:00
a501e7d5be Add quant-dequant nodes for bias. (#20045)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20045

This pass adds quant-dequant nodes for the bias. It requires the quant-dequant pass for activations and weights to have run already, since their results are needed to compute the qparams for the bias.

Differential Revision: D15179141

fbshipit-source-id: 3aab9fceefcadc3fa42a4e802d9b1e18addad78a
2019-05-21 21:59:37 -07:00
c2d0e7316f Add DictType to Metadata (#20770)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20770

Add the dict type since it's part of the PyTorch built-in type system, and sparse features and text features will be converted to Dict.

Reviewed By: pritamdamania87

Differential Revision: D15436255

fbshipit-source-id: 239adbd6a8f68be29020fe656d790f6872f1f0e9
2019-05-21 21:53:06 -07:00
70eb315da4 Use AT_INTERNAL_ASSERT in test_base (#20555)
Summary:
As the title says. We were using AT_ASSERT, which is newly deprecated. In this case, we do in fact want an internal assertion since this is used in testing code to describe expected behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20555

Differential Revision: D15362964

Pulled By: suo

fbshipit-source-id: 984bfe71a774571611f3bbd81767d3cdb878a6fd
2019-05-21 21:25:07 -07:00
4a85e7955c Rename FC to Linear in the test routine (#20716)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20716

As Title says.

Reviewed By: zafartahirov

Differential Revision: D15410823

fbshipit-source-id: e82fc241ee288b41304675cb087c0cdcd60d7148
2019-05-21 19:58:19 -07:00
77651615c8 fbgemm precision argument (#20790)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20790

att

Reviewed By: jianyuh

Differential Revision: D15445903

fbshipit-source-id: fd338aea55e40eecc780be881e67417679e2ea35
2019-05-21 19:26:15 -07:00
c4a3b4d528 Split ATen/Parallel into interface and backend (#20057)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20057
ghimport-source-id: c583f61bf661c994eb4d0625748a299e892a7246

Differential Revision: D15248618

Pulled By: ilia-cher

fbshipit-source-id: 060879266bc8616916fe220adef6ae6c0b076fbd
2019-05-21 19:15:47 -07:00
adbab82846 int_repr for different quantized types (#20656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20656

att

Reviewed By: zafartahirov

Differential Revision: D15398134

fbshipit-source-id: b02899d4ff33598416f65cf76b2ecc62adee243b
2019-05-21 17:57:42 -07:00
c1d6bcf301 Use SmallVector to allocate Compound operands inline. (#20762)
Summary:
Reduces load time for serialized ResNet-18 by 5.5%.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20762

Differential Revision: D15437364

fbshipit-source-id: 2ba34dd229a1054553d0ee09f044ce1915377d78
2019-05-21 16:37:52 -07:00
c9da01194a Optimize pytorch layer_norm forward (#20345)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20345

Separated from D15194600.
Optimize the PyTorch layer_norm op, part 1:
  optimize layer_norm_forward_cpu
  use Eigen Maps to speed up the reductions

Reviewed By: zheng-xq

Differential Revision: D15290608

fbshipit-source-id: cf2c208dfd6fbcbc4c69db3ed60278d9bee156b5
2019-05-21 15:59:49 -07:00
9cec8ae146 use tensoriterator instead of th for fill_ implementation. (#20719)
Summary:
Moves fill_ to aten as suggested in:
https://github.com/pytorch/pytorch/pull/20336#issuecomment-493260729

borrows from cpuhrsch's PR: https://github.com/pytorch/pytorch/pull/18876/files#diff-0d1178f1a4ce15aeb760d251974e6924

Co-authored-by: cpuhrsch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20719

Differential Revision: D15439420

Pulled By: nairbv

fbshipit-source-id: cbcc313cda61a528cecc4a28d601871565e6110c
2019-05-21 15:45:45 -07:00
7a0c6d528a Fix copy_transpose_valid check (#20759)
Summary:
Fixes #20755

(Was broken in #20685)

cc vadimkantorov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20759

Differential Revision: D15433712

Pulled By: colesbury

fbshipit-source-id: 29f612f7d4d7b73158d6f5dc1e46fd2f8fb09a2f
2019-05-21 15:37:37 -07:00
5acc664f9d make magic methods work with casts too (#20654)
Summary:
The previous implementation of magic methods extended from BuiltinOperators, but it should also work with other sugared values, such as casts.

I also considered making CastValue and BuiltinOperators extend from a MagicMethod superclass, and having them try to call into the superclass before their own call. However, not all builtin operators have corresponding magic methods, so I did it this way instead (although there are workarounds for that).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20654

Differential Revision: D15434469

Pulled By: eellison

fbshipit-source-id: 813fa00bf8b5b9ada46505075ebf984d8eee6aef
2019-05-21 14:23:06 -07:00
e6f22e1b89 Change Bias to QTensor with qint32(int32_t) (#20713)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20713

As Title says.

Reviewed By: zafartahirov

Differential Revision: D15410734

fbshipit-source-id: c00f409278736cf9e3205f7d36dda1b96120f47d
2019-05-21 14:17:37 -07:00
b9a150ede0 Change Weight to QTensor with qint8(int8_t) (#20712)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20712

As Title says.

Differential Revision: D15410696

fbshipit-source-id: 48147a79d8cc47a724eb473796a37a1c64f8e883
2019-05-21 14:17:34 -07:00
ac2314fdeb Fix a bug in quantize_linear (#20711)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20711

For uint8_t, `std::numeric_limits<uint8_t>::digits` returns 8;
for int8_t, `std::numeric_limits<int8_t>::digits` returns 7.

FBGEMM expects `qparams.precision` to always be 8 for both int8_t and uint8_t.

Reviewed By: jerryzh168

Differential Revision: D15410695

fbshipit-source-id: 17dc3842d7c426947454c201bcb167b87b7301ce
2019-05-21 14:17:31 -07:00
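The same signed/unsigned mismatch viewed from Python: int8 has 7 value bits plus a sign bit, while uint8 has 8 value bits.

```python
import torch

print(torch.iinfo(torch.uint8).max)  # 255 == 2**8 - 1  (8 value bits)
print(torch.iinfo(torch.int8).max)   # 127 == 2**7 - 1  (7 value bits + sign)
```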
32803b52f6 Update Conda description in PyTorch README (#20726)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20726

Edward says it doesn't actually provide compilers,
but it does provide dependencies, so let's mention that instead.

Reviewed By: ezyang

Differential Revision: D15423316

fbshipit-source-id: 9b384f88e5bf7a3d2c132508620c276b49e1569f
2019-05-21 14:12:30 -07:00
5d8879cf6d Auto-convert GPU arrays that support the __cuda_array_interface__ protocol (#20584)
Summary:
This PR implements auto-conversion of GPU arrays that support the `__cuda_array_interface__` protocol (fixes #15601).

If an object exposes the `__cuda_array_interface__` attribute, `torch.as_tensor()` and `torch.tensor()` will use the exposed device memory.

#### Zero-copy
When using `torch.as_tensor(..., device=D)` where `D` is the same device as the one used in `__cuda_array_interface__`.

#### Implicit copy
When using `torch.as_tensor(..., device=D)` where `D` is the CPU or another non-CUDA device.

#### Explicit copy
When using `torch.tensor()`.

#### Exception
When using `torch.as_tensor(..., device=D)` where `D` is a CUDA device other than the one used in `__cuda_array_interface__`.

#### Lifetime
The tensor returned by `torch.as_tensor(obj)` holds a reference to `obj`, so that the lifetime of `obj` exceeds that of the tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20584

Differential Revision: D15435610

Pulled By: ezyang

fbshipit-source-id: c423776ba2f2c073b902e0a0ce272d54e9005286
2019-05-21 14:06:46 -07:00
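A minimal sketch of the behaviors listed above, assuming CuPy (which exposes `__cuda_array_interface__`) and a CUDA-capable build are available:

```python
import torch
import cupy  # any library exposing __cuda_array_interface__ works

gpu_array = cupy.arange(6, dtype=cupy.float32)

t_zero_copy = torch.as_tensor(gpu_array, device="cuda")  # shares the device memory
t_explicit = torch.tensor(gpu_array, device="cuda")      # torch.tensor() always copies
t_implicit = torch.as_tensor(gpu_array, device="cpu")    # copy to a non-CUDA device
```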
847d9c57d1 Improve the recommended citation
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20768

Differential Revision: D15436734

Pulled By: ezyang

fbshipit-source-id: d073f3b76a60bd8edf1e7799a1bb153d04a09bb1
2019-05-21 13:51:56 -07:00
bb20956e3c Add support for CMake switches for VS 2019 (#20752)
Summary:
Appending `arch` to the generator name is no longer supported starting with VS 2019.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20752

Differential Revision: D15436740

Pulled By: ezyang

fbshipit-source-id: 20057aae8f708d82619927bf2cb87dd1bc2df312
2019-05-21 13:46:39 -07:00
47dc65fe76 add str comparisons (#20761)
Summary:
add string comparisons
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20761

Differential Revision: D15434616

Pulled By: eellison

fbshipit-source-id: c00c7bac6308dbcc6a9e46b92421f49fb2d5a81c
2019-05-21 12:47:50 -07:00
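A small TorchScript sketch of what the added string comparisons presumably enable (the exact set of supported operators is an assumption):

```python
import torch

@torch.jit.script
def comes_first(a: str, b: str) -> bool:
    # lexicographic string comparison inside TorchScript
    return a < b

print(comes_first("apple", "banana"))  # True
```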
cca923c481 Add dequantize_linear for JIT pass (#20107)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20107

att

Reviewed By: nishantpdce

Differential Revision: D15202187

fbshipit-source-id: 7d6274a67fcca695c0425587f35046fecbc2ccdc
2019-05-21 12:26:48 -07:00
cc02a1af61 Throw error if multiple kernels registered (#20737)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20737

If someone tries to register multiple kernels in the same .op() call, we now throw an error.

Differential Revision: D15425660

fbshipit-source-id: 6d2f1444da3e16a6a98863d847965c2aa211e046
2019-05-21 12:17:01 -07:00
f3d827f311 Hipify fb/quantize
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20725

Reviewed By: bddppq

Differential Revision: D15407710

fbshipit-source-id: e5fdeee7e2dffd43cfdd6fab6193eb8a80902c02
2019-05-21 10:51:36 -07:00
b5edeca39d Split cpu/gpu in caffe2/distributed + some clean up (#20674)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20674

A few targets in caffe2/caffe2/distribute need to be split too; otherwise they won't compile. Also some cleanup, and rename select_gpu_type to gpu_library_selector.

Differential Revision: D15406019

fbshipit-source-id: 6455ab885b248502b48d4c7565597e00fecfd547
2019-05-21 10:51:33 -07:00
d7cd2d7a8c compile with -fcolor-diagnostics (#20662)
Summary:
Let there be color!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20662

Differential Revision: D15434110

Pulled By: suo

fbshipit-source-id: a317ae72ad72e0b8249f55c9c8d31f420c78c040
2019-05-21 10:32:55 -07:00
c790f10e2d Fix missing cudatoolkit dependency in binary linux tests
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20732

Differential Revision: D15434025

Pulled By: pjh5

fbshipit-source-id: 74a5798d14b6e61cdcdc784c159294b87264d3de
2019-05-21 10:27:15 -07:00
e3970d66d4 Fixing upload_binary_htmls again
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20736

Differential Revision: D15433417

Pulled By: pjh5

fbshipit-source-id: 58964a341226b536be899855058422cb82aa054b
2019-05-21 10:16:08 -07:00
fac307a5cf Revert D15178352: [pt1][quant] Quantized Conv2d operator
Differential Revision:
D15178352

Original commit changeset: 2e5453283137

fbshipit-source-id: 73cf64c483eedbd41a047e7593c0c92bbd33008c
2019-05-21 09:59:57 -07:00
eca7fa35a4 Fix -Wattributes warning on older versions of gcc (#20587)
Summary:
Building with CUDA and gcc 4.8.5-28, we see many warnings like:

[893/1645] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_ELU.cu.o
/home/bvaughan/repos/pytorch/c10/util/ArrayRef.h:277:48: warning: ‘deprecated’ attribute directive ignored [-Wattributes]
 using IntList C10_DEPRECATED_USING = ArrayRef<int64_t>;

This change prevents those warnings on the older compiler.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20587

Differential Revision: D15432749

Pulled By: nairbv

fbshipit-source-id: fd707afcbd6564f96617378d7cd6d62d941a052b
2019-05-21 09:47:40 -07:00
712c60f960 Fixing missing miniconda path in macos smokes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20727

Differential Revision: D15433407

Pulled By: pjh5

fbshipit-source-id: 2f5d4e1e49068e9597f7052deb70a287b91e482b
2019-05-21 09:47:37 -07:00
29b1b59449 Quantized Conv2d operator (#20064)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20064

Initial implementation of quantized convolution operator using fbgemm.

Reviewed By: zafartahirov

Differential Revision: D15178352

fbshipit-source-id: 2e5453283137dc165e9a20164ffc138fa8caf88a
2019-05-21 09:13:42 -07:00
d73caca2a1 Add mandatory ScalarType nodes as input to the quant-dequant nodes. (#20468)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20468

A ScalarType node is now mandatory for activations and parameters.
This change inserts a ScalarType node for all the quant-dequant nodes. For the activations, the current default value is at::ScalarType::Undefined; remove this and explicitly pass the at::ScalarType::QUint8 dtype.

Differential Revision: D15331600

fbshipit-source-id: 5b51e0b42e694bf409026af4783a12da6d7e234b
2019-05-20 20:01:17 -07:00