Commit Graph

271 Commits

6b74856747 Fix init_thread calls in thread pool initialization (#20848)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20848
ghimport-source-id: e542858a198252838c1f3100dbfbe90fd3960f07

Differential Revision: D15466918

Pulled By: ilia-cher

fbshipit-source-id: e75d38f51edd5b508c4ca28a292e4141e90f209f
2019-05-24 01:14:31 -07:00
fc941d3bca Catchall kernels instead of fallback kernels (#20773)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20773

This removes the feature to register fallback kernels that are called when no other kernel matches.
Instead, we introduce the concept of catchall kernels that are always called independent of inputs.
If you only have a fallback/catchall kernel and no kernels with concrete dispatch keys, then both concepts behave in the same way.
The difference is that we now disallow operators to have both a catchall kernel and kernels with concrete dispatch keys.
This was possible before, when they were fallback kernels.

The reason for this change is that we anticipate needing a method_missing feature in backends, i.e. a backend-wide fallback to call when the backend doesn't specify a kernel for an operator.
We are not clear on the precedence between this backend-wide fallback and an operator-level fallback. Disallowing operator-level fallbacks for now leaves us free to choose later without breaking backwards compatibility.
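
A hedged sketch of the difference, assuming an options-based `c10::RegisterOperators` API roughly along these lines (the operator names, kernel functions, header path, and exact registration calls are illustrative assumptions, not code from this PR):

```cpp
#include <ATen/core/op_registration/op_registration.h>

namespace {

at::Tensor add_one_cpu(at::Tensor t) { return t + 1; }
at::Tensor add_one_any(at::Tensor t) { return t + 1; }

// Kernel registered for a concrete dispatch key: only chosen for CPU inputs.
static auto cpu_registry = c10::RegisterOperators().op(
    "myops::add_one",
    c10::RegisterOperators::options()
        .kernel<decltype(add_one_cpu), &add_one_cpu>(c10::TensorTypeId::CPUTensorId));

// Catchall kernel: always called, independent of the inputs' dispatch keys.
// After this change an operator may have either a catchall kernel or
// dispatch-keyed kernels, but not both, hence the separate operator name.
static auto catchall_registry = c10::RegisterOperators().op(
    "myops::add_one_anywhere",
    c10::RegisterOperators::options()
        .catchAllKernel<decltype(add_one_any), &add_one_any>());

} // namespace
```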

Reviewed By: dzhulgakov

Differential Revision: D15438977

fbshipit-source-id: cb3aa764a1659d909ee21a7bd8ec3d32438aafaa
2019-05-23 23:47:51 -07:00
c25e33789e Lightweight at-most-once logging for API usage (#20745)
Summary:
Resubmit #20698 which got messed up.

The idea is that when PyTorch is used in a custom build environment (e.g. Facebook), it's useful to track usage of various APIs centrally. This PR introduces a simple, very lightweight mechanism to do so - only the first invocation of a trigger point is logged. This is significantly more lightweight than #18235 and thus we can allow putting logging in e.g. TensorImpl.

Also adds an initial list of trigger points. Trigger points are added in such a way that no static initialization triggers them, i.e. just linking with libtorch.so will not cause any logging. Further suggestions of what to log are welcome.
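
As a rough illustration of the at-most-once idea, here is a hypothetical sketch of such a mechanism (the macro name is made up and this is not the code added by this PR):

```cpp
#include <iostream>
#include <mutex>

// Each call site gets its own static once_flag, so only the first invocation
// of that trigger point is logged; later calls are a cheap no-op.
#define MY_LOG_API_USAGE_ONCE(event)                                            \
  do {                                                                          \
    static std::once_flag _once;                                                \
    std::call_once(_once, [] { std::cerr << "api.usage " << (event) << "\n"; }); \
  } while (0)

void create_tensor() {
  MY_LOG_API_USAGE_ONCE("tensor.create");  // logged only on the first call
  // ... actual work ...
}

int main() {
  create_tensor();
  create_tensor();  // second call does not log again
}
```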
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20745

Differential Revision: D15429196

Pulled By: dzhulgakov

fbshipit-source-id: a5e41a709a65b7ebccc6b95f93854e583cf20aca
2019-05-23 23:17:59 -07:00
8cde4c4d22 Remove Variable::Impl and DifferentiableViewImpl (#17072)
Summary:
As part of the Variable/Tensor merge work: https://github.com/pytorch/pytorch/issues/13638, we make the following changes in this PR:
1. Remove the `Variable::Impl` class and the `DifferentiableViewImpl` class
2. Change all `Variable.data()` call sites to either use `Variable` directly, or use `Variable.tensor_data()`
3. Remove `Variable.data()` API
4. Add `Variable.variable_data()` that matches `tensor.data` in Python API, which creates a new `Variable` that shares the same storage and tensor metadata with the original `Variable`, but with a completely new autograd history.

After this PR, Variable doesn't wrap a Tensor internally anymore, and both Variable and Tensor use the same TensorImpl class as their `impl_`. The only difference is that Variable always has AutogradMeta in its TensorImpl, but Tensor doesn't.

**Note that this PR is BC-breaking in the following use cases:**

**Use Case 1:**
Previously, `x.data = y` works even if `x` and `y` are of different TensorImpl type (e.g. `x` is a CPU dense tensor whose impl is of type TensorImpl, while `y` is a CPU sparse tensor whose impl is of type SparseTensorImpl). However, after this PR, `x.data = y` doesn't work anymore if `x` and `y` are of different TensorImpl type, because the underlying implementation `variable.set_data(tensor)` no longer works if `variable` and `tensor` have different TensorImpl type.

**Use Case 2:**
If a tensor `x`'s `grad` is sparse, accumulating dense gradients to `x` will change the tensor that `x.grad` is pointing to. This is better illustrated with the following example:
```python
params = torch.tensor([1.5, 1.5]).requires_grad_()
with torch.no_grad():
    # Change gradient to a sparse tensor
    params.grad = torch.sparse_coo_tensor(torch.tensor([[1, 1]]).long(), torch.tensor([1., 1.]))

grad_saved = params.grad
params.backward(torch.tensor([1.5, 1.5]))
assert id(grad_saved) == id(params.grad)  # This will fail after this PR
```
The assertion in the last line will fail after this PR, because adding dense gradients to sparse gradients will change the `params.grad` tensor reference.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17072

Differential Revision: D14075257

Pulled By: yf225

fbshipit-source-id: 0e681df641270dea586042dd26db59f2e76b5957
2019-05-23 21:09:04 -07:00
70eb315da4 Use AT_INTERNAL_ASSERT in test_base (#20555)
Summary:
As titled. We were using AT_ASSERT, which is newly deprecated. In this case, we do in fact want an internal assertion, since this is used in testing code to describe expected behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20555

Differential Revision: D15362964

Pulled By: suo

fbshipit-source-id: 984bfe71a774571611f3bbd81767d3cdb878a6fd
2019-05-21 21:25:07 -07:00
cca923c481 Add dequantize_linear for JIT pass (#20107)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20107

att

Reviewed By: nishantpdce

Differential Revision: D15202187

fbshipit-source-id: 7d6274a67fcca695c0425587f35046fecbc2ccdc
2019-05-21 12:26:48 -07:00
eca7fa35a4 Fix -Wattributes warning on older versions of gcc (#20587)
Summary:
Building with CUDA and gcc 4.8.5-28, we see many warnings like:

[893/1645] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THCUNN/caffe2_gpu_generated_ELU.cu.o
/home/bvaughan/repos/pytorch/c10/util/ArrayRef.h:277:48: warning: ‘deprecated’ attribute directive ignored [-Wattributes]
 using IntList C10_DEPRECATED_USING = ArrayRef<int64_t>;

This change prevents those warnings on the older compiler.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20587

Differential Revision: D15432749

Pulled By: nairbv

fbshipit-source-id: fd707afcbd6564f96617378d7cd6d62d941a052b
2019-05-21 09:47:40 -07:00
be1f83c350 Fix dll linkage for tensor type ids (#20547)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20547

-

Differential Revision: D15359988

fbshipit-source-id: 680115a6b73f64c9b02f86eccb8feb799adc6c90
2019-05-20 16:25:09 -07:00
10445c0404 Finish removal of AT_CHECK, officially deprecate the macro. (#20600)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20600

All future uses of AT_CHECK will fail our CI.

Reviewed By: jerryzh168

Differential Revision: D15375397

fbshipit-source-id: 5582664d6c7c4f1a56ae45647eb1bca49fed2866
2019-05-20 11:57:15 -07:00
036a159fb9 Audit AT_ASSERT sites in TensorImpl.h; doc improvements (#20649)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20649

I went through every occurrence of AT_ASSERT in this file and
thought about whether or not it should be TORCH_INTERNAL_ASSERT
or TORCH_CHECK.  I think I did a good job at it.  Some thoughts:

- In order to decide if a check is "internal" or not, we must
  think about where the separation between userspace and our internals
  is.  I think any code that utilizes the PyTorch or Caffe2 C++ frontends
  counts as userspace.  An important corollary is that the majority of operator
  code "counts" as userspace, even though it lives in our repository.  This
  is in line with TCB (trusted computing base) thinking: you want the TCB to
  be as small as possible, and because we have a *lot* of operator
  implementations, they should not count as TCB.

- The primary test I applied when considering an AT_ASSERT was whether or
  not I could trigger this error by just making method calls on caffe2::Tensor
  or at::Tensor.  If I could, that made it a TORCH_CHECK.  This covers most
  of the misapplications of TORCH_INTERNAL_ASSERT.  One place I didn't
  do this was the "is variable" checks; I think you have to work a bit
  harder to trigger this case, and userspace code is not mixing up
  Variables and Tensors.  (A small illustrative example follows this list.)

- I updated the docs for device_opt_, explaining when it could be nullopt.
  (The nullopt checks here are TORCH_CHECK, because you can trigger them
  by taking an undefined tensor and poking the methods.)
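
A small hypothetical example of how that test plays out in practice (the helper function is made up; the macros are the ones discussed above):

```cpp
#include <ATen/ATen.h>
#include <c10/util/Exception.h>

int64_t checked_dim(const at::Tensor& t, int64_t dim) {
  // If this fires, the bug is in our own internals, not in user code:
  // an undefined tensor should never reach this helper.
  TORCH_INTERNAL_ASSERT(t.defined(), "expected a defined tensor here");
  // A user can hit this just by calling public Tensor methods with a bad
  // argument, so it is a user-facing error: TORCH_CHECK.
  TORCH_CHECK(
      dim >= 0 && dim < t.dim(),
      "dimension ", dim, " is out of range for a tensor with ", t.dim(), " dimensions");
  return dim;
}
```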

Differential Revision: D15395576

fbshipit-source-id: 1c51b396012e7d949fbb4258092cf80e5e6f851b
2019-05-20 09:54:37 -07:00
8acaa286b7 Make CUDACachingAllocator::recordStream() a no-op on null ptrs (#20658)
Summary:
Fixes #20651

Communication collectives in `torch.distributed` call `CUDACachingAllocator::recordStream()` on input and output tensors to prevent their memory blocks from being freed too early. `CUDACachingAllocator` uses the tensor's data pointer to track memory blocks, and it does not accept null pointers. However, an empty tensor's `storage().data()` might be null. In this case, since there is no associated memory block for the empty tensor, it is fine to make `recordStream()` a no-op.
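
A minimal sketch of the guard, assuming nothing about the real allocator beyond what the summary describes (the stream type and the function body here are stand-ins, not the actual c10/cuda implementation):

```cpp
#include <cstddef>

using StreamHandle = void*;  // stand-in for a CUDA stream type

void recordStream(void* ptr, StreamHandle stream) {
  if (ptr == nullptr) {
    // An empty tensor's storage().data() can be null; there is no cached
    // memory block to associate with the stream, so just return.
    return;
  }
  // ... otherwise, find the allocator block that owns `ptr` and mark it as
  // used on `stream`, so it is not freed while the collective still needs it ...
  (void)stream;
}
```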

Tests only cover `broadcast` with empty tensors for the GLOO backend, because GLOO does not support empty inputs (facebookincubator/gloo/issues/179). That can be addressed in either `ProcessGroupGloo` or GLOO itself. Will add more tests when that gap is filled.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20658

Differential Revision: D15399371

Pulled By: mrshenli

fbshipit-source-id: d29ebd1c72fddae49531f32695f81b89e42e5a4d
2019-05-20 07:13:51 -07:00
9b1dbffba5 Re-sync with internal repository (#20702) 2019-05-20 09:22:57 -04:00
4598729399 better handling of getenv 2019-05-19 23:19:52 -07:00
d3059b9c49 Lightweight logging for once-only API usage 2019-05-19 23:04:40 -07:00
85fad0597c Add qint8 type (int8_t) (#19984)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19984

Add qint8 for QTensor, with underlying type of int8_t

Reviewed By: jianyuh

Differential Revision: D15150715

fbshipit-source-id: 57580f599d46f9323af5ce462dbbc464b25e40d7
2019-05-17 20:35:05 -07:00
d9dcfacd9e Improve CPUAllocator OOM message (#20618)
Summary:
Spotted while debugging some problem

Before
```
>>> torch.empty(10**15)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: [enforce fail at CPUAllocator.cpp:56] posix_memalign(&data, gAlignment, nbytes) == 0. 12 vs 0
```

After
```
>>> torch.empty(10**15)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: [enforce fail at CPUAllocator.cpp:65] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 4000000000000000 bytes. Error code 12 (Cannot allocate memory)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20618

Reviewed By: ezyang

Differential Revision: D15390400

Pulled By: dzhulgakov

fbshipit-source-id: 31f448303e4bd5f8c2bad8ca0f05bcece22a4b5e
2019-05-17 16:14:49 -07:00
79c5dc313c Remove unnecessary format literals from error message.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20646

Differential Revision: D15394795

fbshipit-source-id: 8033cf03341244b2b6a119e3c59f48ee6fe959cc
2019-05-17 10:45:40 -07:00
4e551a7edb Make C10_NODISCARD macro more portable for nvcc+clang. (#20324)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20324
ghimport-source-id: e51181c82f87c946b5ffcb87b0ad71a056cb4659

Differential Revision: D15359317

Pulled By: ezyang

fbshipit-source-id: d88798f13a61c74456641ddec8250c08ce8af240
2019-05-17 08:57:19 -07:00
409200df59 Move inter-op settings into ATen/Parallel (#20050)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20050
ghimport-source-id: cc102bab8abf3e56c099245976786317ed63ea14

Differential Revision: D15248576

Pulled By: ilia-cher

fbshipit-source-id: 55ddcb7af387ddfc68a42ac7167de07ea648e249
2019-05-17 03:12:02 -07:00
220e6894c5 Rename qint8 data type (#19932)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19932

In preparation to add int8_t data type for QTensor

Reviewed By: zafartahirov

Differential Revision: D15137838

fbshipit-source-id: 59462c36d6fc5982986d4196bf3f32f49bb294d7
2019-05-16 18:09:28 -07:00
5b78a5eadb Memory format support for contiguous and is_contiguous (#20455)
Summary:
#19975 was split into two PRs.

This one:

Introduces a MemoryFormat argument to the `x.is_contiguous(memory_format=torch.channels_last)` and `y = x.contiguous(memory_format=torch.channels_last)` functions.

At this moment both functions only operate on strides and don't store any tensor state.

(Original RFC #19092)

-----

Expands the functionality of two tensor functions, `.is_contiguous` and `.contiguous` (both the Python and C++ APIs).

Note: We had several complaints about the `.to(memory_format)` function, and decided not to support it.

1.  `.contiguous` now supports an optional keyword-only argument, `memory_format`, which can be either `torch.contiguous_format` or `torch.channels_last`.

    - Using `torch.contiguous_format` will preserve the existing `.contiguous()` behavior.

    - Calling `x.contiguous(memory_format=torch.channels_last)` returns a new tensor which maintains the same semantic layout (NCHW) but has a different memory allocation pattern.

        `x.contiguous(memory_format=torch.channels_last)` expects the input tensor to be 3d, 4d or 5d, and fails otherwise.

2. `.is_contiguous` now supports an optional keyword-only argument, `memory_format`, which can be either `torch.contiguous_format` or `torch.channels_last`.

    - `x.is_contiguous(memory_format=torch.contiguous_format)` preserves the same functionality as `x.is_contiguous()` and remains unchanged.

    - `x.is_contiguous(memory_format=torch.channels_last)` returns true if A) the input tensor is contiguous in memory AND B) it is laid out in memory in NHWC (or the analogous 3d/5d) format.

Note: Through the end of phase one, `x.is_contiguous(memory_format=torch.channels_last)` recomputes the state of the Tensor on every call. This functionality is going to be updated later. (A rough C++ sketch of the API follows below.)
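
A hedged sketch of the C++ side of the same API, assuming the enum is exposed as `torch::MemoryFormat` in the C++ frontend:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // A freshly allocated tensor with logical layout NCHW.
  auto x = torch::randn({2, 3, 4, 5});
  std::cout << x.is_contiguous() << "\n";                                   // 1
  std::cout << x.is_contiguous(torch::MemoryFormat::ChannelsLast) << "\n";  // 0

  // Same semantic layout (NCHW), different memory allocation pattern (NHWC).
  auto y = x.contiguous(torch::MemoryFormat::ChannelsLast);
  std::cout << y.is_contiguous(torch::MemoryFormat::ChannelsLast) << "\n";  // 1
}
```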
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20455

Differential Revision: D15341577

Pulled By: VitalyFedyunin

fbshipit-source-id: bbb6b4159a8a49149110ad321109a3742383185d
2019-05-16 07:18:24 -07:00
456b889353 Require passing version_counter and allow_tensor_metadata_change to shallow_copy_and_detach() (#20496)
Summary:
Previously, the caller of `shallow_copy_and_detach()` was responsible for deciding whether the shallow copy should share the source TensorImpl's version counter or have its own new version counter. However, since this decision is crucial for ensuring the correctness of the shallow copy's version counter, we want to require users of `shallow_copy_and_detach()` to pass a version counter to the function call, so that they are forced to make the decision at the time of API usage, not as an afterthought.

For similar reasons, we want to require users of `shallow_copy_and_detach()` to pass `allow_tensor_metadata_change` to the function call, so that they are forced to decide "whether the TensorImpl shallow copy should allow tensor metadata change" at the time of API usage, not as an afterthought.
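
A hedged illustration of what a call site looks like under the new contract; the accessor `version_counter()` and the `c10::VariableVersion` type used here are assumptions based on this description, not verbatim from the PR:

```cpp
#include <ATen/ATen.h>

c10::intrusive_ptr<c10::TensorImpl> detach_impl(const at::Tensor& src) {
  c10::TensorImpl* impl = src.unsafeGetTensorImpl();
  // Both decisions now have to be made explicitly at the call site:
  return impl->shallow_copy_and_detach(
      /*version_counter=*/impl->version_counter(),  // share the source's version counter
      /*allow_tensor_metadata_change=*/false);      // and forbid later metadata changes
}
```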
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20496

Differential Revision: D15363620

Pulled By: yf225

fbshipit-source-id: a65e74738b10452668d6dc644b43aad5b3d8c9e6
2019-05-15 21:02:48 -07:00
7db1fb84fa Use slimmer exception raising code when on mobile. (#20543)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20543

All of that code for concatenating strings together adds up. Just discard it all for mobile builds.

Reviewed By: ljk53

Differential Revision: D15353447

fbshipit-source-id: a82dd0b884335d662605aabf7dd3d09dfcc1478b
2019-05-15 19:45:18 -07:00
5917ec2c52 Print registry warning only when DEBUG is set (#20398)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20398

Reduce logging volume from the Registry

Reviewed By: nairbv

Differential Revision: D15312262

fbshipit-source-id: e3546c288d6e1a396b2a4b08204a418aca889437
2019-05-15 19:29:05 -07:00
abb3698976 Add QInt32 ScalarType and qint32 data type (#19816)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19816

We need this for quantizing bias.
Adds a third argument of type ScalarType to `quantize_linear`.

Differential Revision: D15094174

fbshipit-source-id: f19ec8f4716cf5fe0aa21b38d45af6d27c9ab377
2019-05-15 18:50:18 -07:00
73a97387c1 Replace AT_CHECK with TORCH_CHECK [shard 9/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20435

Reviewed By: jerryzh168

Differential Revision: D15318877

fbshipit-source-id: 4d83571187ea14a604fef83ac355d328b46d93e1
2019-05-15 08:05:59 -07:00
365fc26571 Replace AT_CHECK with TORCH_CHECK [shard 8/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20434

Reviewed By: jerryzh168

Differential Revision: D15318396

fbshipit-source-id: dcd0f51be2d64b9440bb95ce8f40acb12545c2f4
2019-05-15 08:05:56 -07:00
56fb5e03b5 refactor registerStoragePyTypeObject (#20467)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20467

for upcoming changes in Storage for QInt8

Reviewed By: ezyang

Differential Revision: D15330865

fbshipit-source-id: 2840e59c0bf088983f792fd724de41b3bb3dec55
2019-05-14 18:22:33 -07:00
f0829f37c8 Rename AT_ASSERT to TORCH_INTERNAL_ASSERT; other macro updates (#20321)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20321

First part of https://github.com/pytorch/pytorch/issues/20287

- Rename `AT_ASSERT` to `TORCH_INTERNAL_ASSERT`
- Make `TORCH_INTERNAL_ASSERT` work with variadic inputs
- Deprecate `AT_ASSERT` and `AT_ASSERTM`
- Rename `AT_CHECK` to `TORCH_CHECK`
- Make `TORCH_CHECK` give a better error message when no arguments are
  provided
- Deprecate `AT_ERROR` in favor of `TORCH_CHECK(false, ...)`
- Deprecate `AT_INDEX_ERROR` in favor of `TORCH_CHECK_INDEX(false, ...)`
- Rename `AT_WARN` to `TORCH_WARN`

No use sites are changed; I'll work on that in follow-up patches
(or disable the deprecation, if necessary).

Differential Revision: D15278439

fbshipit-source-id: 7e0ed489d4e89e5f56b8ad7eafa72cb9a06065ee
2019-05-13 16:16:42 -07:00
3a0b27b73d Move at::NonVariableTypeMode to TensorImpl, and check it in is_variable() (#20392)
Summary:
As part of the Variable/Tensor merge, we allow passing Tensors with AutogradMeta into ATen ops, but we want to make sure they are not treated as Variables (i.e. their `is_variable()` is false). This PR makes the necessary change to make this work.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20392

Differential Revision: D15321899

Pulled By: yf225

fbshipit-source-id: c2ab09db73c63bd71ba2d8391095f4d6b4240a9a
2019-05-13 15:49:23 -07:00
3ee97183b0 ScaleBlobs Operator (#19660)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19660

Implementation of aggregated Scale operator.
The operator takes a list of tensors as an input and scales all of them them with the argument float value.
The tensor sizes can be different, therefore bookkeeping of the sizes and pointers to the tensors are
necessary for the GPU version of the kernel.

Reviewed By: BIT-silence

Differential Revision: D14984233

fbshipit-source-id: 37cc97159a4f2c38cd6fff4f5710ab7d3a773611
2019-05-08 17:57:33 -07:00
4211f674f0 Cleanup includes in c10/core/CPUAllocator.cpp. (#19885)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19885
ghimport-source-id: 1f7d228ac8d1dd9aeb70446a6baf63f62f195663

Differential Revision: D15118516

Pulled By: ZolotukhinM

fbshipit-source-id: bdaf56d97db9e70cbd36ca03349f6eabfbac2668
2019-05-06 16:06:19 -07:00
a8387b7779 Delete TensorImpl::GetDevice() (#20025)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20025

Delete TensorImpl::GetDevice() and clean all its call sites.

Reviewed By: ezyang

Differential Revision: D15170917

fbshipit-source-id: b6862b74aa036198544f79d18a8c0f995cb0ca7b
2019-05-06 12:44:23 -07:00
5108e807e0 add new macro TORCH_MOBILE for libtorch mobile build (#19761)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19761
ghimport-source-id: 359a44594d3d5afb8102435b4eac6ab920c24ef4

Differential Revision: D15087652

Pulled By: ljk53

fbshipit-source-id: f1a79c38c9415bb3786cc4d073370b1cb807e5ce
2019-04-30 21:25:15 -07:00
ba84ad0d97 LeftRight works for classes without default constructors (#19775)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19775

-

Reviewed By: dzhulgakov

Differential Revision: D15090319

fbshipit-source-id: e80865975970400c3db24bba4af4327105f3b9b2
2019-04-30 16:34:15 -07:00
e97da36cbb Explicitly disable copy&move on LeftRight (#19774)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19774

see in-source comment

Reviewed By: dzhulgakov

Differential Revision: D15090320

fbshipit-source-id: ae9ba5b5df7115c2b1c275e384030063dbbf8f1a
2019-04-30 16:34:12 -07:00
e710f3b1e1 Fix C10_MOBILE macro for ios (#19779)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19779

This macro wasn't set correctly because the target macros weren't included from Apple's header.

Reviewed By: dzhulgakov

Differential Revision: D15090427

fbshipit-source-id: 43ca44f0f409e11718b7f60c3fdcd2aa02d7018e
2019-04-30 12:03:24 -07:00
6ec55c13a9 Enable assignment for QTensor in pytorch frontend (#19676)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19676

Make copy work with QTensor, enable assignment of QTensor in pytorch frontend.

Differential Revision: D15064710

fbshipit-source-id: 04f2dc02a825695d41fa1114bfca49e92108fef3
2019-04-24 16:05:34 -07:00
c42f3f9055 Revert D15008160: Enable assignment for QTensor in pytorch frontend
Differential Revision:
D15008160

Original commit changeset: 5f1166246d76

fbshipit-source-id: 24c7350431ae6a87199d6e3f7ffbbc8ec7d3c28b
2019-04-24 06:58:13 -07:00
309c15e2df Enable assignment for QTensor in pytorch frontend (#19530)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19530
Make copy work with QTensor, enable assignment of QTensor in pytorch frontend.

Differential Revision: D15008160

fbshipit-source-id: 5f1166246d768b23f009cde1fa03e8952368a332
2019-04-23 21:29:31 -07:00
969af4315a Explicitly define supported types (#19516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19516

Explicitly define types that are supported in kernel inputs and outputs.
Also, this allows us to show much nicer error messages if a user writes kernels with wrong argument types.

Reviewed By: ezyang

Differential Revision: D15020306

fbshipit-source-id: 55ebec81e075e874777acd59aa29a5578fc19ef7
2019-04-22 16:31:28 -07:00
189f30603c Make complex its own backend (#19275)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19275
ghimport-source-id: 73fd40b02152aed6f24225a88d7ffde7f700899e

Differential Revision: D14948582

Pulled By: li-roy

fbshipit-source-id: a1be6e57057defc74a007c5351c5edb2b9dcaf30
2019-04-21 21:16:10 -07:00
9f4f7e1621 Support compilation on gcc-7.4.0 (#19470)
Summary:
There are two corrections in this pull request.
The first is specific to gcc-7.4.0.
Compiled with -std=c++14, gcc-7.4.0 has __cplusplus = 201402L.
This does not meet the check set in Deprecated.h, which asks for >201402L.
The compiler then goes down to the __GNUC__ check, which passes and sets C10_DEPRECATED_MESSAGE to a value that C++14 does not appear to support or even recognize, leading to a compile-time error.
My recommended solution, which worked for my case, was to relax that comparison to >= so that 201402L is accepted.

The second correction comes in response to this error:
caffe2/operators/crash_op.cc: In member function ‘virtual bool caffe2::CrashOp::RunOnDevice()’:
caffe2/operators/crash_op.cc:14:11: error: ‘SIGABRT’ was not declared in this scope

I am merely committing to the repository the solution suggested here (which worked for me)
https://discuss.pytorch.org/t/building-pytorch-from-source-in-conda-fails-in-pytorch-caffe2-operators-crash-op-cc/42859
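
A hypothetical sketch of the first fix described above; the real guard lives in c10/util/Deprecated.h and its exact lines differ, so this only illustrates relaxing the strict version comparison (the macro name here is made up):

```cpp
// Before (illustrative): a strict comparison that excludes gcc 7.4.0, whose
// -std=c++14 mode reports __cplusplus == 201402L:
//   #if defined(__cplusplus) && __cplusplus > 201402L
// After: accept 201402L as well.
#if defined(__cplusplus) && __cplusplus >= 201402L
#define EXAMPLE_DEPRECATED_MESSAGE(message) [[deprecated(message)]]
#else
#define EXAMPLE_DEPRECATED_MESSAGE(message)
#endif

EXAMPLE_DEPRECATED_MESSAGE("use new_api() instead")
void old_api();

int main() { return 0; }
```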
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19470

Differential Revision: D15019529

Pulled By: ailzhang

fbshipit-source-id: 9ce9d713c860ee5fd4266e5c2a7f336a97d7a90d
2019-04-19 21:41:36 -07:00
3762cf9cc6 Expose QScheme in frontend (#19381)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19381

Expose QScheme enum in frontend so that people can use it in
quantization configs in modules.

Differential Revision: D14922992

fbshipit-source-id: ab07b8a7ec42c1c1f5fe84a4a0c805adbcad408d
2019-04-19 11:57:59 -07:00
d9115b533a remove needless ## in REGISTER_ALLOCATOR definition. (#19261)
Summary:
remove needless ## in REGISTER_ALLOCATOR definition.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19261

Differential Revision: D15002025

Pulled By: soumith

fbshipit-source-id: 40614b1d79d1fe05ccf43f0ae5aab950e4c875c2
2019-04-18 22:44:09 -07:00
ce969c0bc4 Add tests for argument types (#19290)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19290

Add test cases for the supported argument types,
and TODOs for some unsupported ones that we might want to support.

Reviewed By: dzhulgakov

Differential Revision: D14931920

fbshipit-source-id: c47bbb295a54ac9dc62569bf5c273368c834392c
2019-04-18 17:20:13 -07:00
a456e1e196 Add either type (#19285)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19285

The either type is a tagged union with two members.
It is going to be used in a diff stacked on top of this one to allow a function to return one of two types.

Also, more generally, either<Error, Result> is a great pattern for returning value-or-error from a function without using exceptions, and we could use this class for that later.
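
A hedged usage sketch of that value-or-error pattern; the accessor and helper names (`make_left`/`make_right`, `is_left()`, `left()`, `right()`) are assumptions about the either interface rather than something stated in this PR:

```cpp
#include <c10/util/either.h>
#include <iostream>
#include <string>

// Left holds the error description, right holds the successful result.
using ParseResult = c10::either<std::string, int>;

ParseResult parse_positive_int(const std::string& s) {
  try {
    int v = std::stoi(s);
    if (v <= 0) return c10::make_left<std::string, int>("expected a positive number: " + s);
    return c10::make_right<std::string, int>(v);
  } catch (...) {
    return c10::make_left<std::string, int>("not a number: " + s);
  }
}

int main() {
  auto r = parse_positive_int("42");
  if (r.is_left()) {
    std::cerr << "error: " << r.left() << "\n";
  } else {
    std::cout << "value: " << r.right() << "\n";
  }
}
```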

Reviewed By: dzhulgakov

Differential Revision: D14931923

fbshipit-source-id: 7d1dd77b3e5b655f331444394dcdeab24772ab3a
2019-04-18 02:04:43 -07:00
c7b1fdb767 Fixing function schema parser for Android (#19281)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19281

String<->number conversions aren't available in the STL used in our Android environment.
This diff adds workarounds for that so that the function schema parser can be compiled for Android.

Reviewed By: dzhulgakov

Differential Revision: D14931649

fbshipit-source-id: d5d386f2c474d3742ed89e52dff751513142efad
2019-04-17 23:50:17 -07:00
ad8f34fcca Add empty_quantized (#18960)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18960

empty_affine_quantized creates an empty affine quantized Tensor from scratch.
We might need this when we implement quantized operators.

Differential Revision: D14810261

fbshipit-source-id: f07d8bf89822d02a202ee81c78a17aa4b3e571cc
2019-04-17 16:17:40 -07:00
db611b7caf Delete C10Tensor (#19328)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19328

Plans changed and we don't want this class anymore.

Reviewed By: dzhulgakov

Differential Revision: D14966746

fbshipit-source-id: 09ea4c95b352bc1a250834d32f35a94e401f2347
2019-04-17 00:02:27 -07:00