893 Commits

97b9712aed Create Sequential::extend (#9116)
Summary:
There is no dedicated way to concatenate two `Sequential`s in Python, but it's easy to do immutably by just writing `Sequential(first.modules() + second.modules())`. Concatenating vectors isn't as convenient in C++, so I think it's fair to save users some for loops by giving them `Sequential::extend()`.
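
A usage sketch (module choices are illustrative only; the exact signature may differ from this PR):

```cpp
torch::nn::Sequential first(torch::nn::Linear(4, 8));
torch::nn::Sequential second(torch::nn::Linear(8, 2));
// Append all of second's modules onto first, without writing the loop by hand.
first->extend(*second);
```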

apaszke ebetica ezyang

CC jamespinkerton
Closes https://github.com/pytorch/pytorch/pull/9116

Reviewed By: ezyang

Differential Revision: D8719630

Pulled By: goldsborough

fbshipit-source-id: 840d7ac70755350e6202b493c531e30ecbb6546f
2018-07-02 19:42:03 -07:00
5d474e1812 Make all module members public (#9111)
Summary:
Having circulated the C++ API a bit, I found that it would be easier for folks to access module parameters directly rather than through the `parameters()` map. So here I make all variables/submodules, as well as the configuration options for every module, public.

For RNNs, I also updated the names of parameters to match PyTorch, e.g. `hhw` -> `w_hh`. This should make it easier to transition from Python.
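
For example, weight initialization can now touch members directly (a sketch using present-day module names):

```cpp
torch::nn::Linear linear(10, 5);
// Public members instead of going through parameters()["weight"].
linear->weight.data().normal_(0.0, 0.02);
linear->bias.data().zero_();
```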

apaszke ebetica
Closes https://github.com/pytorch/pytorch/pull/9111

Differential Revision: D8717112

Pulled By: goldsborough

fbshipit-source-id: 3d36d5e161f7a86f44db7136c9c2fa53067abe1c
2018-07-02 16:09:57 -07:00
9ce15173fb Move _cudnn_init_dropout_state to TensorOptions and enable cuDNN dropout in C++ API RNNs (#9012)
Summary:
The goal of this PR was to add support for dropout descriptors in the C++ API's RNN class.
The end result is a 4x-5x speedup for our RNN integration tests since they can now use cuDNN instead of autograd when dropout is set.

To achieve this, I had to move `_cudnn_init_dropout_state` to the `TensorOptions` API.

I also fixed a bug around `RNN::cuda()` not flattening parameters for cuDNN.
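
Roughly, an RNN with dropout can now be configured and run like this (a sketch; option names follow the present-day C++ frontend and may differ from this PR):

```cpp
torch::nn::LSTM lstm(torch::nn::LSTMOptions(/*input_size=*/32, /*hidden_size=*/64)
                         .num_layers(2)
                         .dropout(0.5));
lstm->to(torch::kCUDA);  // parameters get (re-)flattened for cuDNN
auto output = lstm->forward(torch::randn({10, 8, 32}, torch::kCUDA));
```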

ebetica ezyang
Closes https://github.com/pytorch/pytorch/pull/9012

Reviewed By: pjh5

Differential Revision: D8689786

Pulled By: goldsborough

fbshipit-source-id: 44fb191f5a38e41c4ded5417306b5bbc012cd56c
2018-06-29 17:25:23 -07:00
66465f1e17 Create nn::Module::is (#8970)
Summary:
When initializing weights for my C++ model, I had to write

```cpp
void initialize_weights(nn::Module& module) {
  if (module.name().find("Conv2d") != std::string::npos) {
    module.parameters()["weight"].data().normal_(0.0, 0.02);
  } else if (module.name().find("BatchNorm") != std::string::npos) {
    auto parameters = module.parameters();
    parameters["weight"].data().normal_(1.0, 0.02);
    parameters["bias"].data().fill_(0);
  }
}
```

The string-based module determination is not very nice, and not very C++-y. So I created `nn::Module::is<T>` which does a `dynamic_cast` inside. It also handles the `ModuleHolder` vs. `Module` distinction.

It now becomes

```cpp
if (module.is<nn::Conv2d>()) {
  module.parameters()["weight"].data().normal_(0.0, 0.02);
} else if (module.is<nn::BatchNorm>()) {
  auto parameters = module.parameters();
  parameters["weight"].data().normal_(1.0, 0.02);
  parameters["bias"].data().fill_(0);
}
```
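
For reference, the core of `is<T>` can be sketched as a `dynamic_cast` check; the real version also unwraps `ModuleHolder` types, which this sketch omits:

```cpp
// Sketch: true if the module's dynamic type is (or derives from) ModuleType.
template <typename ModuleType>
bool is_sketch(const torch::nn::Module& module) {
  return dynamic_cast<const ModuleType*>(&module) != nullptr;
}
```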

ebetica ezyang apaszke
Closes https://github.com/pytorch/pytorch/pull/8970

Differential Revision: D8677476

Pulled By: goldsborough

fbshipit-source-id: 053294e19b6a58cce868167596c89639f7de91c2
2018-06-28 16:10:04 -07:00
ccc14071f4 Fix Module::zero_grad (#8964)
Summary:
`nn::Module::zero_grad` did not handle parameters whose `grad()` is undefined. This is fixed (the code now matches PyTorch's behavior).
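
A minimal sketch of the corrected behavior (member names are assumed from the current frontend, not the literal code of this PR):

```cpp
void zero_grad_sketch(torch::nn::Module& module) {
  for (auto& parameter : module.parameters()) {
    if (parameter.grad().defined()) {  // undefined grads are simply skipped now
      parameter.grad().detach_();
      parameter.grad().zero_();
    }
  }
}
```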

ebetica ezyang apaszke
Closes https://github.com/pytorch/pytorch/pull/8964

Reviewed By: ezyang

Differential Revision: D8677529

Pulled By: goldsborough

fbshipit-source-id: afdc4ba00dbf5012c37d1f794c731937ee5e422e
2018-06-28 10:26:52 -07:00
148088a681 Convert at::Tensor to torch::Tensor in AnyModule (#8968)
Summary:
Operations on `Variable`s (or `torch::Tensor`) usually return `at::Tensor`. This is usually fine, but the `AnyModule` used in the implementation of `torch::Sequential` is very picky about types, and does not understand implicit conversions like this. This means that `sequential.forward(at_tensor_that_is_actually_a_variable)` will fail unless you wrap `at_tensor_that_is_actually_a_variable` with `torch::Tensor`.

This PR adds a special case to `AnyModule` that converts an `at::Tensor` to `torch::Tensor` when the tensor is really a variable, and otherwise passes the `at::Tensor` through unchanged. This is a nice little usability improvement for the often-used `Sequential` class.
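
A small illustration of the win from the user's side (a sketch; the module choice is arbitrary):

```cpp
torch::nn::Sequential model(torch::nn::Linear(8, 4));
auto input = torch::ones({2, 8});
// torch::relu returns what is formally an at::Tensor; Sequential::forward now
// accepts it directly instead of requiring a manual wrap into torch::Tensor.
auto output = model->forward(torch::relu(input));
```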

ebetica ezyang
Closes https://github.com/pytorch/pytorch/pull/8968

Reviewed By: ezyang

Differential Revision: D8670407

Pulled By: goldsborough

fbshipit-source-id: 3635ed6ed28238f3900ce4a876d07f1b11713831
2018-06-28 06:40:48 -07:00
03d0a70a4d Set random seed at the start of C++ tests (#8903)
Summary:
Sets the random seed at the start of C++ tests so that everything is super deterministic.

I made sure we only generate random values from torch instead of `std::`, so that this seed always applies. I.e. I do:

```
torch::randint(2, {2}, torch::kInt64)
```

instead of

```
std::rand() % 2
```

Also got rid of the tests that test the random seeding, since they would interfere here. They weren't very useful anyway, since we just use ATen's seeding mechanism, which should work.
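
In practice a test can seed once up front, along the lines of (a sketch; the seed value is arbitrary):

```cpp
torch::manual_seed(0);  // makes every subsequent torch:: RNG call reproducible
auto coin_flips = torch::randint(2, {2}, torch::kInt64);
```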

Fixes #7288, #7286, #7289

ebetica ezyang
Closes https://github.com/pytorch/pytorch/pull/8903

Differential Revision: D8667269

Pulled By: goldsborough

fbshipit-source-id: a833e86e156d5e68dae8c53a4b1c433cb0608b6c
2018-06-27 20:09:46 -07:00
fef9a66d08 Use torch:: instead of at:: (#8911)
Summary:
This PR is the final step toward making `torch::` the only namespace users of the C++ API ever see. Basically, I did:

``` cpp

namespace torch {
using namespace at;
}
```

And then changed `at::` to `torch::` almost everywhere. This worked surprisingly well out of the box. So users can now write `torch::relu`, `torch::log_softmax` and `torch::conv2d` instead of having to know when to use `at::` and when `torch::`. This makes me happy!

Another thing I did was to have `using Dtype = at::ScalarType`, which will be the eventual name anyway.
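
So end-user code can stay entirely in `torch::` (a sketch):

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::randn({4, 4});
  auto y = torch::log_softmax(torch::relu(x), /*dim=*/1);
  std::cout << y << std::endl;
}
```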

ebetica ezyang apaszke zdevito
Closes https://github.com/pytorch/pytorch/pull/8911

Reviewed By: ezyang

Differential Revision: D8668230

Pulled By: goldsborough

fbshipit-source-id: a72ccb70fca763c396c4b0997d3c4767c8cf4fd3
2018-06-27 14:42:01 -07:00
55757357b2 [C++ API] Better forward methods (#8739)
* Better forward methods in C++ API

Capitalize error message in test_torch.test_flatten

Support for operator()

* Add operator() to Functional

* Get rid of SigmoidLinear

* Add BoundFunction to FunctionalImpl

* Remove macro from conv because it makes errors more nasty
2018-06-26 13:23:16 -07:00
1f36caceb2 [C++ API] Rework optimization package (#8815)
* Rework optim folder

* Removed TORCH_OPTIMIZER_CLASS macro

* Got rid of CRTP/Impl

* Removed TORCH_AUTOGRAD_KWARG

* Differentiate between Optimizer and LossClosureOptimizer

* Make optimizers parameter-based instead of model-based (see the sketch after this list)

* Allow construction of optimizer from arbitrary vector

* Added test for zero grad

* Added test for external parameter vectors

* Now comparing against baseline values

* Documentation

* Post rebase fixes

* Different strategy for creating and accessing buffers in optimizers

* Fix member ordering
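
A sketch of the parameter-based construction mentioned above (class and option names follow the present-day optim API and may differ from this PR):

```cpp
torch::nn::Linear model(10, 5);
// The optimizer is handed a vector of parameters, not a model object.
torch::optim::SGD optimizer(model->parameters(), torch::optim::SGDOptions(/*lr=*/0.1));

optimizer.zero_grad();
auto loss = model(torch::randn({8, 10})).sum();
loss.backward();
optimizer.step();
```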
2018-06-26 10:13:14 -07:00
47492ed451 [C++ API] Bag of fixes (#8843)
* Bag of fixes

* Rename tensor_range.h to tensor_list_view.h

* Post rebase fixes

* Rename torch::tensor namespace to torch::tensors due to name conflict

* Avoid recursion in Module::to
2018-06-25 21:11:49 -07:00
521f5111ad [C++ API] Use torch::Tensor instead of at::Tensor/Variable mix (#8680)
* Use torch::Tensor instead of at::Tensor/Variable mix

* TensorRange -> TensorListView
2018-06-24 19:03:39 -07:00
a2dd707031 [C++ API] Create fixed width dtypes in torch:: namespace (#8639)
* Create fixed width dtypes in torch:: namespace

* Make kByte -> kUInt8
2018-06-19 12:40:58 -07:00
7ccecbbb4e Create Tensor::options (#8630) 2018-06-19 11:09:01 -07:00
271406f276 [C++ API] Make pImpl easy to use in modules to enable happy reference semantics (#8347)
* Created TORCH_MODULE macro

Rewrote Linear

Rewrote Dropout and added default constructor to TORCH_MODULE macro

Turned TORCH_MODULE contents into a proper base class

Added some documentation

Got rid of the old Dropout module

Got rid of the old Embedding module

Got rid of the old BatchNorm module

Got rid of the old Conv module

Fixing optimizers

Rebase

Removed old RNN modules and the TORCH_ATTR macro

Removed temporary P:: namespace

Added cloning behavior to all modules

Got rid of some get() calls

self review nits

Remove noexcept from ModuleHolder methods that can throw

Remove spaces

Add missing override to reset() methods

Added examples to documentation in pimpl.h

* Post rebase fixes
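
A sketch of the pattern the TORCH_MODULE macro enables (layout follows the present-day frontend and may differ in detail from this commit):

```cpp
struct MyModuleImpl : torch::nn::Module {
  explicit MyModuleImpl(int64_t features)
      : linear(register_module("linear", torch::nn::Linear(features, features))) {}
  torch::Tensor forward(torch::Tensor x) { return linear(x); }
  torch::nn::Linear linear;
};
// Generates `MyModule`, a shared_ptr-backed handle with reference semantics.
TORCH_MODULE(MyModule);
```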
2018-06-18 19:45:53 -07:00
372d1d6735 Create ATen tensors via TensorOptions (#7869)
* Created TensorOptions

Storing the type in TensorOptions to solve the Variable problem

Created convenience creation functions for TensorOptions and added tests

Converted zeros to TensorOptions

Converted rand to TensorOptions

Fix codegen for TensorOptions and multiple arguments

Put TensorOptions convenience functions into torch namespace too

All factory functions except *_like support TensorOptions

Integrated with recent JIT changes

Support *_like functions

Fix in place modification

Some cleanups and fixes

Support sparse_coo_tensor

Fix bug in Type.cpp

Fix .empty calls in C++ API

Fix bug in Type.cpp

Trying to fix device placement

Make AutoGPU CPU compatible

Remove some auto_gpu.h uses

Fixing some headers

Fix some remaining CUDA/AutoGPU issues

Fix some AutoGPU uses

Fixes to dispatch_tensor_conversion

Reset version of new variables to zero

Implemented parsing device strings

Random fixes to tests

Self review cleanups

flake8

Undo changes to variable.{h,cpp} because they fail on gcc7.2

Add [cuda] tag to tensor_options_cuda.cpp

Move AutoGPU::set_index_from into .cpp file because Windows is stupid and sucks

Fix linker error in AutoGPU.cpp

Fix bad merge conflict in native_functions.yaml

Fixed caffe2/contrib/aten

Fix new window functions added to TensorFactories.cpp

* Removed torch::TensorOptions

Added code to generate wrapper functions for factory methods

Add implicit constructor from Backend to TensorOptions

Remove Var() from C++ API and use torch:: functions

Use torch:: functions more subtly in C++ API

Make AutoGPU::set_device more exception safe

Check status directly in DynamicCUDAHooksInterface

Rename AutoGPU to DeviceGuard

Removed set_requires_grad from python_variables.h and warn appropriately in Variable::set_requires_grad

remove python_default_init: self.type()

Add back original factory functions, but with deprecation warnings

Disable DeviceGuard for a couple functions in ATen

Remove print statement

Fix DeviceGuard construction from undefined tensor

Fixing CUDA device compiler issues

Moved as many methods as possible into header files

Dont generate python functions for deprecated factories

Remove merge conflict artefact

Fix tensor_options_cuda.cpp

Fix set_requires_grad not being checked

Fix tensor_new.h

TEMPORARILY put some methods in .cpp files to see if it solves issues on windows and mac

Fix bug in DeviceGuard.h

Missing includes

TEMPORARILY moving a few more methods into .cpp to see if it fixes windows

Fixing linker errors

* Fix up SummaryOps to use new factories

Undo device agnostic behavior of DeviceGuard

Use -1 instead of optional for default device index

Also move DeviceGuard methods into header

Fixes around device index after optional -> int32_t switch

Fix use of DeviceGuard in new_with_tensor_copy

Fix tensor_options.cpp

* Fix Type::copy()

* Remove test_non_float_params from ONNX tests

* Set requires_grad=False in ONNX tests that use ints

* Put layout/dtype/device on Tensor

* Post merge fixes

* Change behavior of DeviceGuard to match AutoGPU

* Fix C++ API integration tests

* Fix flip functions
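
A usage sketch of the TensorOptions-based factories introduced above (names follow the present-day API):

```cpp
auto options = torch::TensorOptions().dtype(torch::kFloat32).requires_grad(true);
auto zeros = torch::zeros({2, 3}, options);
auto noise = torch::randn_like(zeros);  // *_like factories derive options from their input
```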
2018-06-16 00:40:35 -07:00
de4e97e89a [C++ API] Cursors (#8190)
* Add cursors to C++ API

* Small self nits

* s/struct/class

* Use more STL like names for cursors
2018-06-11 09:48:43 -07:00
990c6c5531 [C++ API] Improve and use OrderedDict for parameters / modules (#7823)
* Improve OrderedDict for C++ API

* Give OrderedDict a subject and fix review comments

* Fix OrderedDict use in torch/csrc/jit/script/init.cpp
2018-06-05 14:29:09 -04:00
04a3616de0 Replace std::size_t with size_t (#8093) 2018-06-04 11:10:44 -04:00
4a80755834 Split up detail.h (#7836) 2018-05-30 08:55:34 -07:00
28b1a3852c Add backward() to Tensor and Variable (#7774)
* Add backward() to Tensor and Variable

* Add at:: in front of Tensor

* Trying to not move optional to appease windows?

* Move implementation into cpp file

* Undo some formatting changes
2018-05-24 17:31:41 -07:00
b12164005f [C++ API] Remove virtual forward and implement Sequential based on Any(Module) (#7508)
* Remove virtual forward

* Rebase
2018-05-24 12:46:51 -07:00
cfd70dc1cf [C++ API] Back to reset() and fixed in-place cloning (#7796)
* Back to reset() and fixed in-place cloning

* Add final override to clone_
2018-05-23 22:11:32 -07:00
60745b3380 Revert #7750 and #7762 to fix Windows CI on master (#7772)
* Revert "Add missing brace (#7762)"

This reverts commit ea27c5af50f6bc8ba82068e6d36ade9c773dc101.

* Revert "[C++ API] Add backward() to Tensor and Variable  (#7750)"

This reverts commit 1e2762796f33123d86782936089dbeda37bdcc92.
2018-05-22 15:42:52 -07:00
1e2762796f [C++ API] Add backward() to Tensor and Variable (#7750)
* Add backward() to Tensor and Variable

* Added a couple tests
2018-05-22 10:43:04 -07:00
549b4069bb [C++ API] Using new registration mechanism (#7663)
* Using new registration mechanism

* Fix signature of param() in module.cpp

* Remove ParameterList

* Fix tests
2018-05-21 17:59:21 -07:00
cba19e59ca [C++ API] Implement builder style construction (#7597)
* Implemented fused builder based construction mechanism

* "weights" -> "weight"

* Use int64_t instead of size_t everywhere in RNN

* Extracted Conv::ExpandingSize into its own thing

* Rename TORCH_PARAMETER to TORCH_ATTR

* Added documentation

* Fix weight names in batchnorm module
2018-05-17 17:10:15 -04:00
562d9971c9 Add LBFGS optimization algorithm to C++ API (#7596)
* Adding LBFGS to cpp API

* Adding stop conditions

* Test cases now passing and adding closure to all algs

* Addressing code review

* Set seeds to make optim tests more deterministic
2018-05-17 14:03:08 -04:00
3414475653 [C++ API] Remove initialize_* functions (#7517)
* Remove initialize_ functions

* Fix clone() to recursively clone children

* Small codemove
2018-05-14 18:24:58 -07:00
6ada041b31 Some small fixes in C++ API (#7510) 2018-05-11 18:56:53 -07:00
64834f6fb8 Split libATen.so into libATen_cpu.so and libATen_cuda.so (#7275)
* Split libATen.so into libATen_cpu.so and libATen_cuda.so

Previously, ATen could be built with either CPU-only support, or
CPU/CUDA support, but only via a compile-time flag, requiring
two separate builds.  This means that if you have a program which
indirectly uses a CPU-only build of ATen, and a CPU/CUDA-build of
ATen, you're gonna have a bad time.  And you might want a CPU-only
build of ATen, because it is 15M (versus the 300M of a CUDA build).

This commit splits libATen.so into two libraries, CPU/CUDA, so
that it's not necessary to do a full rebuild to get CPU-only
support; instead, if you link against libATen_cpu.so only, you
are CPU-only; if you additionally link/dlopen libATen_cuda.so,
this enables CUDA support.  This brings ATen's dynamic library
structure more similar to Caffe2's.  libATen.so is no more
(this is BC BREAKING)

The general principle for how this works is that we introduce
a *hooks* interface, which introduces a dynamic dispatch indirection
between a call site and implementation site of CUDA functionality,
mediated by a static initialization registry.  This means that we can continue
to, for example, lazily initialize CUDA from Context (a core, CPU class) without
having a direct dependency on the CUDA bits.  Instead, we look up
in the registry if, e.g., CUDA hooks have been loaded (this loading
process happens at static initialization time), and if they
have been we dynamic dispatch to this class.  We similarly use
the hooks interface to handle Variable registration.
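
A highly simplified sketch of that hooks-plus-registry indirection (the interface and registry names here are illustrative, not the real ATen ones):

```cpp
#include <memory>

struct CUDAHooksInterface {
  virtual ~CUDAHooksInterface() = default;
  virtual bool hasCUDA() const { return false; }  // CPU-only default behavior
};

// A one-slot "registry": empty until the CUDA library's static initializer fills it.
inline std::unique_ptr<CUDAHooksInterface>& cudaHooksSlot() {
  static std::unique_ptr<CUDAHooksInterface> slot;
  return slot;
}

// CPU code dispatches through this; it only reaches CUDA code if that library was loaded.
inline const CUDAHooksInterface& getCUDAHooks() {
  static CUDAHooksInterface cpuFallback;
  return cudaHooksSlot() ? *cudaHooksSlot() : cpuFallback;
}
```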

We introduce a new invariant: if the backend of a type has not
been initialized (e.g., its library has not been dlopened; for
CUDA, this also includes CUDA initialization), then the Type
pointers in the context registry are NULL.  If you access the
registry directly you must maintain this invariant.

There are a few potholes along the way.  I document them here:

- Previously, PyTorch maintained a separate registry for variable
  types, because no provision for them was made in the Context's
  type_registry.  Now that we have the hooks mechanism, we can easily
  have PyTorch register variables in the main registry.  The code
  has been refactored accordingly.

- There is a subtle ordering issue between Variable and CUDA.
  We permit libATen_cuda.so and PyTorch to be loaded in either
  order (in practice, CUDA is always loaded "after" PyTorch, because
  it is lazily initialized.)  This means that, when CUDA types are
  loaded, we must subsequently also initialize their Variable equivalents.
  Appropriate hooks were added to VariableHooks to make this possible;
  similarly, getVariableHooks() is not referentially transparent, and
  will change behavior after Variables are loaded.  (This is different
  to CUDAHooks, which is "burned in" after you try to initialize CUDA.)

- The cmake is adjusted to separate dependencies into either CPU
  or CUDA dependencies.  The generator scripts are adjusted to either
  generate a file as a CUDA (cuda_file_manager) or CPU file (file_manager).

- I changed all native functions which were CUDA-only (the cudnn functions)
  to have dispatches for CUDA only (making it permissible to not specify
  all dispatch options.)  This uncovered a bug in how we were handling
  native functions which dispatch on a Type argument; I introduced a new
  self_ty keyword to handle this case.  I'm not 100% happy about it
  but it fixed my problem.

  This also exposed the fact that set_history incompletely handles
  heterogeneous return tuples combining Tensor and TensorList.  I
  swapped this codegen to use flatten() (at the possible cost of
  a slight perf regression, since we're allocating another vector now
  in this code path).

- thc_state is no longer a public member of Context; use getTHCState() instead

- This PR comes with Registry from Caffe2, for handling static initialization.
  I needed to make a bunch of fixes to Registry to make it more portable

  - No more ##__VA_ARGS__ token pasting; instead, it is mandatory to pass at
    least one argument to the var-args. CUDAHooks and VariableHooks pass a nullary
    struct CUDAHooksArgs/VariableHooksArgs to solve the problem. We must get rid of
    token pasting because it does not work with MSVC.

  - It seems MSVC is not willing to generate code for constructors of template
    classes at use sites which cross DLL boundaries. So we explicitly instantiate
    the class to get around the problem. This involved tweaks to the boilerplate
    generating macros, and also required us to shuffle around namespaces a bit,
    because you can't specialize a template unless you are in the same namespace as
    the template.
  - Insertion of AT_API to appropriate places where the registry must be exported

- We have a general problem which is that on recent Ubuntu distributions,
  --as-needed is enabled for shared libraries (cc @apaszke, who was
  worrying about this in #7160; see also #7160 (comment)). For now, I've hacked
  this up in the PR to pass -Wl,--no-as-needed to all of the spots necessary to
  make CI work, but a more sustainable solution is to attempt to dlopen
  libATen_cuda.so when CUDA functionality is requested.

    - The JIT tests somehow manage to try to touch CUDA without loading libATen_cuda.so. So
      we pass -Wl,--no-as-needed when linking libATen_cuda.so to _C.so

- There is a very subtle linking issue with lapack, which is solved by making sure libATen_cuda.so links against LAPACK. There's a comment in aten/src/ATen/CMakeLists.txt about this, as well as a follow-up bug at #7353

- autogradpp used AT_CUDA_ENABLED directly. We've expunged these uses and added
  a few more things to CUDAHooks (getNumGPUs)

- Added manualSeedAll to Generator so that we can invoke it polymorphically (it
  only does something different for CUDAGenerator)

- There's a new cuda/CUDAConfig.h header for CUDA-only ifdef macros (AT_CUDNN_ENABLED, most prominently)

- CUDAHooks/VariableHooks structs live in at namespace because Registry's
  namespace support is not good enough to handle it otherwise (see Registry
  changes above)

- There's some modest moving around of native functions in ReduceOps and
  UnaryOps to get the CUDA-only function implementations into separate files, so
  they are only compiled into libATen_cuda.so. sspaddmm needed a separate CUDA
  function due to object linkage boundaries.

- Some direct uses of native functions in CUDA code has to go away, since these
  functions are not exported, so you have to go through the dispatcher
  (at::native::empty_like to at::empty_like)

- Code in THC/THCS/THCUNN now properly uses the THC_API macro instead of TH_API
  (which matters now that TH and THC are not in the same library)

- Added code debt in torch/_thnn/utils.py and other THNN parsing code to handle
  both TH_API and THC_API

- TensorUtils.h is now properly exported with AT_API

- Dead uses of TH_EXPORTS and co expunged; we now use ATen_cpu_exports and
  ATen_cuda_exports (new, in ATenCUDAGeneral.h) consistently

- Fix some incorrect type annotations on _cudnn_rnn_backward, where we didn't
  declare a type as possibly undefined when we should have. We didn't catch this
  previously because optional annotations are not tested on "pass-through" native
  ATen ops (which don't have dispatch). Upstream issue at #7316

- There's a new cmake macro aten_compile_options for applying all of our
  per-target compile time options. We use this on the cpu and cuda libraries.

- test/test_cpp_extensions.py can be run directly by invoking in Python,
  assuming you've set up your PYTHONPATH correctly

- type_from_string does some new funny business to only query for all valid CUDA
  types (which causes CUDA initialization) when we see "torch.cuda." in the
  requested string

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Last mile libtorch fixes

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* pedantic fix

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2018-05-10 10:28:33 -07:00
c5de3314cf Add name() to C++ modules (#7409)
* Add name() to C++ modules

* Use RTTI to get module name by default

* Add functional.cpp to CMakeLists.txt

* Call typeid() inside name() instead of constructor

* Add tests and use default constructor
2018-05-10 08:52:38 -07:00
4eaf5261d3 Provide default implementation of clone() in base module (#7446) 2018-05-10 00:49:29 -07:00
3023dd25f3 Use set_type to implement type conversions in C++ API (#7408)
* Use set_type to implement .cuda() in C++ API

* Change C++ module parameter types in place

* Fix bug where batchnorm state was not moved to CUDA
2018-05-09 17:01:19 -04:00
6fd252ccae AUTOGRAD_ to TORCH_AUTOGRAD_ for macros (#7424) 2018-05-09 10:45:05 -04:00
8fce8673bb Rename Container to Module in autogradpp and reorg code (#7304)
* Rename autograd namespace to torch and change torch.h into python.h

* Pave the way for torch::nn::Module

* Reorganize module code structure

* Undo ONNX update

* Remove sleef submodule
2018-05-07 14:45:00 -07:00
54a4867675 Bring back C++ extension torch.h (#7310)
* Bring back C++ extension torch.h

* Fix python.h include in python_tensor.cpp
2018-05-05 14:06:27 -07:00
5c575a1497 Fixes RNN shapes for C++ API (#7272) 2018-05-04 14:00:30 -04:00
67d0d14908 Rename autograd namespace to torch and change torch.h into python.h (#7267)
* Rename autograd namespace to torch and change torch.h into python.h

* Include torch.h instead of python.h in test/cpp/api

* Change some mentions of torch.h to python.h in C++ extensions

* Set paths directly, without find_path
2018-05-04 08:04:57 -07:00
7c70c3bdca Fixes for C++ build on macOS (#7192)
* Fix C++ build on Mac

* Enable CI on Mac

* Create NO_API switch to only build jit without api

* More fixes

* Fixes to CMake
2018-05-02 23:06:04 -07:00
e25e501bea Fix build for osx (#7187)
For some reason this used to build in autogradpp, but in PyTorch it requires us to put the declaration in the .cpp file.
2018-05-02 21:08:14 -04:00
87e6362393 Add more warnings to C++ API build (#7123)
Enables more warnings in the C++ API build.

Fixed a bunch of things in torch/csrc/.

Mostly taken from c10

* Enable -pedantic for C++ build

* Enable more warnings

* Include CUDA and library headers with -isystem

* Fix sign-promo warning
2018-05-01 10:40:22 -04:00
af71fb882f Merge autogradpp into PyTorch (#7074)
* Dump autogradpp into PyTorch

* Fixed up CMake for autogradpp/C++ API

* Made cereal a submodule

* Change search location of autogradpp's MNIST directory

* Add test_api to CI

* Download MNIST from the internet instead of storing in repo

* Fix warnings
2018-04-30 12:53:46 -07:00