Commit Graph

70 Commits

Author SHA1 Message Date
49481d576d Torch rename (#20774)
Summary:
This renames the CMake `caffe2` target to `torch`, as well as renaming `caffe2_gpu` to `torch_gpu` (and likewise for other gpu target variants).  Many intermediate variables that don't manifest as artifacts of the build remain for now with the "caffe2" name; a complete purge of `caffe2` from CMake variable names is beyond the scope of this PR.

The shell `libtorch` library that had been introduced as a stopgap in https://github.com/pytorch/pytorch/issues/17783 is again flattened in this PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20774

Differential Revision: D15769965

Pulled By: kostmo

fbshipit-source-id: b86e8c410099f90be0468e30176207d3ad40c821
2019-06-12 20:12:34 -07:00
835a6b9da2 Fix namedtensor build (#21609)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21609
ghimport-source-id: 648a0bcd28db2cdda1bf2fa6a904ca8f851088c2

Differential Revision: D15747687

Pulled By: zou3519

fbshipit-source-id: 2a972a15fa7399391617fc6e6b19879b86568c3a
2019-06-11 06:53:50 -07:00
f8aa6a8f44 Make a deep copy of the extra_compile_flag dictionary (#20221)
Summary:
See issue #20169
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20221

Differential Revision: D15317126

Pulled By: ezyang

fbshipit-source-id: 0a12932db4f6ba15ea1d558fa329ce23fe2baef6
2019-05-13 08:11:39 -07:00
4ba28deb6e Unify libtorch and libcaffe2 (#17783)
Summary:
This PR is an intermediate step toward the ultimate goal of eliminating "caffe2" in favor of "torch".  This PR moves all of the files that had constituted "libtorch.so" into the "libcaffe2.so" library, and wraps "libcaffe2.so" with a shell library named "libtorch.so".  This means that, for now, `caffe2/CMakeLists.txt` becomes a lot bigger, and `torch/CMakeLists.txt` becomes smaller.

The torch Python bindings (`torch_python.so`) still remain in `torch/CMakeLists.txt`.

The follow-up to this PR will rename references to `caffe2` to `torch`, and flatten the shell into one library.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17783

Differential Revision: D15284178

Pulled By: kostmo

fbshipit-source-id: a08387d735ae20652527ced4e69fd75b8ff88b05
2019-05-10 09:50:53 -07:00
3bfdffe487 Fix default CXX for Windows in cpp_extensions.py (#19052)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/19017.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19052

Differential Revision: D14846702

Pulled By: soumith

fbshipit-source-id: b0e4dadaa749da0fa2d0405a1a064820d094220a
2019-04-08 23:14:22 -07:00
e0c593eae7 detect C++ ABI flag for cpp extensions from available runtime information (#18994)
Summary:
Previously, when a user built PyTorch from source but manually set the version string to the binary format, the build would incorrectly fall back to CXX11_ABI=0.

We have this information available at runtime with `torch._C._GLIBCXX_USE_CXX11_ABI`, so this PR improves the situation by simply using that information.
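
For illustration, a minimal sketch (not the exact cpp_extension.py code) of deriving the compile flag from the running torch build instead of guessing from the version string:
```python
import torch

# The running torch build knows which ABI it was compiled with.
abi = int(torch._C._GLIBCXX_USE_CXX11_ABI)
extra_compile_args = ['-D_GLIBCXX_USE_CXX11_ABI={}'.format(abi)]
```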
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18994

Differential Revision: D14839393

Pulled By: soumith

fbshipit-source-id: ca92e0810b29ffe688be82326e02a64a5649a3ad
2019-04-08 17:50:03 -07:00
d6d0fcc92b Add c10_cuda to libraries in CUDAExtension for Windows (#18982)
Summary:
This change was necessary for me to compile [apex](https://github.com/NVIDIA/apex) on Windows.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18982

Differential Revision: D14819818

Pulled By: soumith

fbshipit-source-id: 37ff9b93a72ab2b7c87f23a61e9f776c71c4c1a8
2019-04-06 10:30:51 -07:00
5ade96fc84 Update cpp_extension.py (#18638)
Summary:
Hi. It seems that when building C++ extensions with CUDA for Windows, the `extra_cuda_cflags` options are not properly forwarded to `nvcc`.

Use of extra CUDA options is necessary to build, for instance, InplaceABN (https://github.com/mapillary/inplace_abn), which requires the `--expt-extended-lambda` option.

This PR adds one line that correctly appends `extra_cuda_cflags`.
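
A hedged usage sketch (the module and file names are hypothetical) showing the path this one-line fix makes work on Windows, with nvcc-only flags passed through `extra_cuda_cflags`:
```python
from torch.utils.cpp_extension import load

# nvcc-specific flags go through extra_cuda_cflags; host compiler flags
# would go through extra_cflags instead.
ext = load(
    name='inplace_abn',                                   # hypothetical name
    sources=['inplace_abn.cpp', 'inplace_abn_cuda.cu'],   # hypothetical files
    extra_cuda_cflags=['--expt-extended-lambda'],
    verbose=True,
)
```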
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18638

Differential Revision: D14704270

Pulled By: ezyang

fbshipit-source-id: e1e330d193d9afd5707a5437a74c0499460d2b90
2019-04-02 07:56:38 -07:00
2b7a5d1876 don't include /usr/include when nvcc is in /usr/bin (#18127)
Summary:
...because gcc will have failures with very strange error messages
if you do.

This affects people with Debian/Ubuntu-provided NVCC, the PR should
not change anything for anyone else.
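
A minimal sketch of the guard described above (the helper name is ours, not the PR's):
```python
import os

def cuda_include_flags(nvcc_path, include_dirs):
    # When nvcc is the distro-provided one in /usr/bin, passing
    # -I/usr/include makes gcc treat its own system headers as user
    # headers and fail with very strange errors, so drop it.
    if os.path.dirname(nvcc_path) == '/usr/bin':
        include_dirs = [d for d in include_dirs if d != '/usr/include']
    return ['-I' + d for d in include_dirs]
```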
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18127

Differential Revision: D14504386

Pulled By: soumith

fbshipit-source-id: 1aea168723cdc71cdcfffb3193ee116108ae755e
2019-03-18 12:18:27 -07:00
fe90ee9dc8 Add /MD to prevent linking errors on Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17799

Differential Revision: D14385777

Pulled By: ezyang

fbshipit-source-id: 8c1d9f80c48399087f5fae4474690e6d80d740e6
2019-03-08 10:46:25 -08:00
c78da0c6ed Enable using CMD when building cpp extensions on Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17706

Differential Revision: D14346482

Pulled By: ezyang

fbshipit-source-id: 7c85e51c701f6c0947ad324ef19fafda40ae1cb9
2019-03-06 14:45:31 -08:00
21193bf123 try to get rid of tmp_install (#16414)
Summary:
Rehash of previous attempts. This tries a different approach where we accept the install as specified in cmake (leaving bin/, include/, and lib/ alone), and then try to adjust the rest of the files to this more standard layout.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16414

Differential Revision: D13863635

Pulled By: zdevito

fbshipit-source-id: 23725f5c64d7509bf3ca8f472dcdcad074de9828
2019-01-29 17:29:40 -08:00
c7ec7cdd46 Fixed syntax error in doctest (#15646)
Summary:
I fixed a very small extra parenthesis in a doctest.

I'm also going to use this issue as a place to propose the eventual inclusion of xdoctest (a pip installable library I wrote) in pytorch's test suite. I think there are a lot of problems with Python's built in doctest module, and I've built xdoctest to fix them. I would love for my project to get some exposure and its addition to PyTorch may benefit both projects. Please see the readme for more details on what xdoctest brings to the table over the builtin doctest module: https://github.com/Erotemic/xdoctest

I came across this small syntax error when working on ensuring xdoctest was compatible with pytorch. It isn't 100% there yet, but I'm working on it. My goal is to ensure that xdoctest is 100% compatible with all of torch's doctests out of the box before writing up the PR. I'm also airing the idea out loud before I commit too much time to this (or get my hopes up), so I'm attaching this little blurb to a no-brainer-merge PR to (1) demonstrate a little bit of value (because xdoctest flagged this syntax error) and (2) see how it's received.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15646

Differential Revision: D13606111

Pulled By: soumith

fbshipit-source-id: d4492801a38ee0ae64ea0326a83239cee4d811a4
2019-01-09 01:29:11 -08:00
0bf1383f0a Python <-> C++ Frontend inter-op (#13481)
Summary:
This PR enables C++ frontend modules to be bound into Python and added as submodules of Python modules. For this, I added lots of pybind11 bindings for the `torch::nn::Module` class, and modified the `torch.nn.Module` class in Python to have a new Metaclass that makes `isinstance(m, torch.nn.Module)` return true when `m` is a C++ frontend module. The methods and fields of C++ modules are bound in such a way that they work seamlessly as submodules of Python modules for most operations (one exception I know of: calling `.to()` ends up calling `.apply()` on each submodule with a Python lambda, which cannot be used in C++ -- this may require small changes on the Python side).

I've added quite a few tests to verify the bindings and equality with Python. I think I should also try out adding a C++ module as part of some large PyTorch module, like a WLM or something, and see if everything works smoothly.

The next step for inter-op across our system is ScriptModule <-> C++ Frontend Module inter-op. I think this will then also allow using C++ frontend modules from TorchScript.
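
A generic sketch of the metaclass trick described above (not PyTorch's actual implementation; the flag name is hypothetical):
```python
# Make isinstance(m, Module) accept foreign, bound objects as well.
class _ModuleMeta(type):
    def __instancecheck__(cls, obj):
        # Treat anything flagged as a bound C++ module as a Module, too.
        if getattr(obj, '_is_cpp_module', False):  # hypothetical flag
            return True
        return super().__instancecheck__(obj)

class Module(metaclass=_ModuleMeta):
    pass
```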

apaszke zdevito

CC dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13481

Differential Revision: D12981996

Pulled By: goldsborough

fbshipit-source-id: 147370d3596ebb0e94c82cec92993a148fee50a7
2018-12-13 08:04:02 -08:00
db15f2e13f Fix version.groups() (#14505)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/14502

fmassa soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14505

Differential Revision: D13242386

Pulled By: goldsborough

fbshipit-source-id: faebae8795e1efd9c0ebc2294fe9648193d16624
2018-11-28 20:27:33 -08:00
6f2307ba6a Allow building libraries with setuptools that don't have an ABI suffix (#14130)
Summary:
When using `setuptools` to build a Python extension, setuptools will automatically add an ABI suffix like `cpython-37m-x86_64-linux-gnu` to the shared library name when using Python 3. This is required for extensions meant to be imported as Python modules. When we use setuptools to build shared libraries not meant as Python modules, for example libraries that define and register TorchScript custom ops, having your library called `my_ops.cpython-37m-x86_64-linux-gnu.so` is a bit annoying compared to just `my_ops.so`, especially since you have to reference the library name when loading it with `torch.ops.load_library` in Python.

This PR fixes this by adding a `with_options` class method to the `torch.utils.cpp_extension.BuildExtension` which allows configuring the `BuildExtension`. In this case, the first option we add is `no_python_abi_suffix`, which we then use in `get_ext_filename` (override from `setuptools.build_ext`) to throw away the ABI suffix.

I've added a test `setup.py` in a `no_python_abi_suffix_test` folder.

Fixes https://github.com/pytorch/pytorch/issues/14188
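
A hedged `setup.py` sketch based on the description above (the project and file names are hypothetical); with `no_python_abi_suffix=True` the output is `my_ops.so` rather than `my_ops.cpython-37m-x86_64-linux-gnu.so`:
```python
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name='my_ops',
    ext_modules=[CppExtension('my_ops', ['my_ops.cpp'])],
    # with_options configures the BuildExtension; here we drop the ABI suffix.
    cmdclass={'build_ext': BuildExtension.with_options(no_python_abi_suffix=True)},
)
```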

t-vi fmassa soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14130

Differential Revision: D13216575

Pulled By: goldsborough

fbshipit-source-id: 67dc345c1278a1a4ee4ca907d848bc1fb4956cfa
2018-11-27 17:35:53 -08:00
a13fd7ec28 Allow torch.utils.cpp_extension.load to load shared libraries that aren't Python modules (#13941)
Summary:
For custom TorchScript operators, `torch.ops.load_library` must be used and passed the path to the shared library containing the custom ops. Our C++ extension machinery is generally meant to build a Python module and import it. This PR changes `torch.utils.cpp_extension.load` to have an option to just return the shared library path instead of importing it as a Python module, so you can then pass it to `torch.ops.load_library`. This means folks can re-use `torch.utils.cpp_extension.load` and `torch.utils.cpp_extension.load_inline` to even write their custom ops inline. I think t-vi and fmassa will appreciate this.
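
A hedged sketch of the flow; the exact keyword (`is_python_module=False` below) is our assumption about the option this PR adds, not a quote from it, and the names are hypothetical:
```python
import torch
from torch.utils.cpp_extension import load

# Build the op library but skip the Python-module import, getting the
# shared library path back instead.
lib_path = load(
    name='my_custom_ops',            # hypothetical op library
    sources=['my_custom_ops.cpp'],   # hypothetical source
    is_python_module=False,          # assumed keyword for this option
)
torch.ops.load_library(lib_path)
```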

soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13941

Differential Revision: D13110592

Pulled By: goldsborough

fbshipit-source-id: 37756307dbf80a81d2ed550e67c8743dca01dc20
2018-11-26 09:39:21 -08:00
5b1b8682a3 Missing .decode() after check_output in cpp_extensions (#13935)
Summary:
soumith
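
For context, a minimal illustration of the Python 3 pitfall the title refers to: `subprocess.check_output` returns `bytes`, so a `.decode()` is needed before string handling.
```python
import subprocess

# Without .decode(), this would be bytes on Python 3 and break str-only code.
which_out = subprocess.check_output(['which', 'ninja']).decode().strip()
```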
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13935

Differential Revision: D13090852

Pulled By: goldsborough

fbshipit-source-id: 47da269d074fd1e7220e90580692d6ee489ec78b
2018-11-16 12:16:29 -08:00
2983998bb3 add torch-python target (#12742)
Summary:
This is the next minimal step towards moving _C into cmake. For now,
leave _C in setup.py, but reduce it to an empty stub file. All of its
sources are now part of the new torch-python cmake target.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12742

Reviewed By: soumith

Differential Revision: D13089691

Pulled By: anderspapitto

fbshipit-source-id: 1c746fda33cfebb26e02a7f0781fefa8b0d86385
2018-11-16 11:43:48 -08:00
7978ba45ba Update path in CI script to access ninja (#13646)
Summary:
We weren't running C++ extensions tests in CI.
Also, let's error hard when `ninja` is not available instead of skipping C++ extensions tests.

Fixes https://github.com/pytorch/pytorch/issues/13622

ezyang soumith yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13646

Differential Revision: D12961468

Pulled By: goldsborough

fbshipit-source-id: 917c8a14063dc40e6ab79a0f7d345ae2d3566ba4
2018-11-07 14:31:29 -08:00
393ad6582d Use torch:: instead of at:: in all C++ APIs (#13523)
Summary:
In TorchScript and C++ extensions we currently advocate a mix of `torch::` and `at::` namespace usage. In the C++ frontend I had instead exported all symbols from `at::` and some from `c10::` into the `torch::` namespace. This is far, far easier for users to understand, and also avoids bugs around creating tensors vs. variables. The same should from now on be true for the TorchScript C++ API (for running and loading models) and all C++ extensions.

Note that since we're just talking about typedefs, this change does not break any existing code.

Once this lands I will update stuff in `pytorch/tutorials` too.

zdevito ezyang gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13523

Differential Revision: D12942787

Pulled By: goldsborough

fbshipit-source-id: 76058936bd8707b33d9e5bbc2d0705fc3d820763
2018-11-06 14:32:25 -08:00
cc3cecdba0 Fix a bug when compiling with the nvcc compiler. (#13509)
Summary:
I found a bug when compiling the CUDA files while installing the maskrcnn-benchmark library.

`python setup.py build develop` will throw the error:
```
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/cpp_extension.py", line 214, in unix_wrap_compile
    original_compile(obj, src, ext, cc_args, cflags, pp_opts)
  File "/usr/lib/python2.7/distutils/unixccompiler.py", line 125, in _compile
    self.spawn(compiler_so + cc_args + [src, '-o', obj] +
TypeError: coercing to Unicode: need string or buffer, list found
```

For more information, please see [issue](https://github.com/facebookresearch/maskrcnn-benchmark/issues/99).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13509

Differential Revision: D12902675

Pulled By: soumith

fbshipit-source-id: b9149f5de21ae29f94670cb2bbc93fa368f4e0f7
2018-11-02 11:09:43 -07:00
7b47262936 Use names instead of indices in format (#13266)
Summary:
apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13266

Differential Revision: D12841054

Pulled By: goldsborough

fbshipit-source-id: 7ce9f942367f82484cdae6ece419ed5c0dc1de2c
2018-10-31 15:17:47 -07:00
1c8a823b3b More robust ABI compatibility check for C++ extensions (#13092)
Summary:
This PR makes the ABI compatibility check for C++ extensions more robust by resolving the real path of the compiler binary, such that e.g. `"c++"` is resolved to the path of g++. This is more robust than assuming that `c++ --version` will contain the word "gcc".
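
A sketch of the more robust check described above (our paraphrase, not the PR's exact code):
```python
import os
import shutil
import subprocess

def looks_like_gcc(compiler='c++'):
    # Resolve what "c++" really points at, e.g. /usr/bin/g++, before inspecting.
    real = os.path.realpath(shutil.which(compiler))
    version = subprocess.check_output([real, '--version']).decode()
    return 'g++' in os.path.basename(real) or 'gcc' in version
```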

CC jcjohnson

Closes #10114

soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13092

Differential Revision: D12810448

Pulled By: goldsborough

fbshipit-source-id: 6ac460e24496c0d8933b410401702363870b7568
2018-10-29 11:56:02 -07:00
c47f680086 arc lint torch/utils (#13141)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13141

This is an example diff to show what lint rules are being applied.

Reviewed By: mingzhe09088

Differential Revision: D10858478

fbshipit-source-id: cbeb013f10f755b0095478adf79366e7cf7836ff
2018-10-25 14:59:03 -07:00
Jat
1b07eb7148 torch.utils.cpp_extension.verify_ninja_availability() does not return True as documented
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12922

Differential Revision: D10502167

Pulled By: ezyang

fbshipit-source-id: 2e32be22a310e6e014eba0985e93282ef5764605
2018-10-23 07:38:08 -07:00
01227f3ba7 Env variable to not check compiler abi (#12708)
Summary:
For https://github.com/pytorch/pytorch/issues/10114

soumith fmassa
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12708

Differential Revision: D10444102

Pulled By: goldsborough

fbshipit-source-id: 529e737e795bd8801beab2247be3dad296af5a3e
2018-10-21 20:07:50 -07:00
713e706618 Move exception to C10 (#12354)
Summary:
There is still some work to be done:

- Move logging and unify AT_WARN with LOG(ERROR).
- A few header files are still being plumbed through and need cleaning.
- caffe2::EnforceNotMet aliasing is not done yet.
- need to unify the macros. See c10/util/Exception.h

This is mainly a codemod and not causing functional changes. If you find your job failing and trace back to this diff, usually it can be fixed by the following approaches:

(1) add //caffe2/c10:c10 to your dependency (or transitive dependency).
(2) change objects such as at::Error, at::Optional to the c10 namespace.
(3) change functions to the c10 namespace. Especially, caffe2::MakeString is not overridden by the unified c10::str function. Nothing else changes.

Please kindly consider not reverting this diff - it involves multiple rounds of rebasing and the fix is usually simple. Contact jiayq@ or AI Platform Dev for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/12354

Reviewed By: orionr

Differential Revision: D10238910

Pulled By: Yangqing

fbshipit-source-id: 7794d5bf2797ab0ca6ebaccaa2f7ebbd50ff8f32
2018-10-15 13:33:18 -07:00
93ecf4d72a Remove raise_from (#12185)
Summary:
soumith

CC alsrgv

Fixes #11995
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12185

Differential Revision: D10120103

Pulled By: goldsborough

fbshipit-source-id: ef7807ad83f9efc05d169675b7ec72986a5d17c3
2018-09-29 22:41:55 -07:00
e05d689c49 Unify C++ API with C++ extensions (#11510)
Summary:
Currently the C++ API and C++ extensions are effectively two different, entirely orthogonal code paths. This PR unifies the C++ API with the C++ extension API by adding an element of Python binding support to the C++ API. This means the `torch/torch.h` included by C++ extensions, which currently routes to `torch/csrc/torch.h`, can now be rerouted to `torch/csrc/api/include/torch/torch.h` -- i.e. the main C++ API header. This header then includes Python binding support conditioned on a define (`TORCH_WITH_PYTHON_BINDINGS`), *which is only passed when building a C++ extension*.

Currently stacked on top of https://github.com/pytorch/pytorch/pull/11498

Why is this useful?

1. One less codepath. In particular, there has been trouble again and again due to the two `torch/torch.h` header files and ambiguity when both ended up in the include path. This is now fixed.
2. I have found that it is quite common to want to bind a C++ API module back into Python. This could be for simple experimentation, or to have your training loop in Python but your models in C++. This PR makes this easier by adding pybind11 support to the C++ API.
3. The C++ extension API simply becomes richer by gaining access to the C++ API headers.

soumith ezyang apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11510

Reviewed By: ezyang

Differential Revision: D9998835

Pulled By: goldsborough

fbshipit-source-id: 7a94b44a9d7e0377b7f1cfc99ba2060874d51535
2018-09-24 14:44:21 -07:00
6100c0ea14 Introduce ExtensionVersioner for C++ extensions (#11725)
Summary:
Python never closes a shared library it `dlopen`s. This means that calling `load` or `load_inline` (i.e. building a JIT C++ extension) with the same C++ extension name twice in the same Python process will never re-load the library, even if the compiled source code and the underlying shared library have changed. The only way to circumvent this is to create a new library and load it under a new module name.

I fix this, of course, by introducing a layer of indirection. Loading a JIT C++ extension now goes through an `ExtensionVersioner`, which hashes the contents of the source files as well as the build flags and, if this hash has changed, bumps an internal version stored for each module name. A bump in the version results in the ninja file being edited and a new shared library (effectively a new C++ extension) being compiled. For this, the version is appended as `_v<version>` to the extension name for all versions greater than zero.

One caveat is that if you were to update your code many times and always re-load it in the same process, you may end up with quite a lot of shared library objects in your extension's folder under `/tmp`. I imagine this isn't too bad, since extensions are typically small and there isn't really a good way for us to garbage collect old libraries, since we don't know what still has handles to them.

Fixes https://github.com/pytorch/pytorch/issues/11398
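
A toy sketch of the versioning idea (all names are ours, not the PR's): hash the sources plus build flags; a changed hash bumps the version, and version N > 0 builds the module as `<name>_vN` so a fresh library actually gets dlopened.
```python
import hashlib

def extension_hash(source_paths, build_flags):
    h = hashlib.sha256()
    for path in sorted(source_paths):
        with open(path, 'rb') as f:
            h.update(f.read())
    h.update(' '.join(sorted(build_flags)).encode())
    return h.hexdigest()

_versions = {}  # module name -> (hash, version)

def versioned_name(name, source_paths, build_flags):
    digest = extension_hash(source_paths, build_flags)
    old = _versions.get(name)
    # Bump only when the content hash actually changed.
    version = 0 if old is None else old[1] + (old[0] != digest)
    _versions[name] = (digest, version)
    return name if version == 0 else '{}_v{}'.format(name, version)
```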

ezyang gchanan soumith fmassa
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11725

Differential Revision: D9948244

Pulled By: goldsborough

fbshipit-source-id: 695bbdc1f1597c5e4306a45cd8ba46f15c941383
2018-09-20 14:43:12 -07:00
c22dcc266f Show build output in verbose mode of C++ extensions (#11724)
Summary:
Two improvements to C++ extensions:

1. In verbose mode, show the ninja build output (the exact compile commands, very useful)
2. When raising an error, don't show the `CalledProcessError` that shows ninja failing, only show the `RuntimeError` with the captured stdout

soumith fmassa ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11724

Differential Revision: D9922459

Pulled By: goldsborough

fbshipit-source-id: 5b319bf24348eabfe5f4c55d6d8e799b9abe523a
2018-09-19 20:17:43 -07:00
7949250295 Fixes for Torch Script C++ API (#11682)
Summary:
A couple fixes I deem necessary to the TorchScript C++ API after writing the tutorial:

1. When I was creating the custom op API, I created `torch/op.h` as the one-stop header for creating custom ops. I now notice that there is no good header for the TorchScript C++ story altogether, i.e. when you just want to load a script module in C++ without any custom ops necessarily. The `torch/op.h` header suits that purpose just as well of course, but I think we should rename it to `torch/script.h`, which seems like a great name for this feature.

2. The CMake API we previously provided defined a bunch of variables like `TORCH_LIBRARY_DIRS` and `TORCH_INCLUDES` and then expected users to add those variables to their targets. We also had a CMake function that did that for you automatically. I now realize that a much smarter way of doing this is to create an `IMPORTED` target for the libtorch library in CMake and then add all this stuff to the link interface of that target. Then all downstream users have to do is `target_link_libraries(my_target torch)` and they get all the proper includes, libraries, and compiler flags added to their target. This means we can get rid of the CMake function and all that stuff. orionr AFAIK this is a much, much better way of doing all of this, no?

3. Since we distribute libtorch with `-D_GLIBCXX_USE_CXX11_ABI=0`, dependent libraries must set this flag too. I now add this to the interface compile options of this imported target.

4. Fixes to JIT docs.

These could likely be 4 different PRs but given the release I wouldn't mind landing them all asap.

zdevito dzhulgakov soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11682

Differential Revision: D9839431

Pulled By: goldsborough

fbshipit-source-id: fdc47b95f83f22d53e1995aa683e09613b4bfe65
2018-09-17 09:54:50 -07:00
01c7542f43 Use -isystem for system includes in C++ extensions (#11459)
Summary:
I noticed warnings from within pybind11 being shown when building C++ extensions. This can be avoided by including non-user-supplied headers with `-isystem` instead of `-I`.

I hope this works on Windows.
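
A sketch of the flag choice described above (the helper name is ours): system-ish headers go in via `-isystem` so their warnings are suppressed, while user directories keep plain `-I`.
```python
def include_flags(user_dirs, system_dirs):
    # User code keeps -I (warnings wanted); pybind11/Python/torch includes
    # go through -isystem (warnings suppressed).
    flags = ['-I' + d for d in user_dirs]
    for d in system_dirs:
        flags += ['-isystem', d]
    return flags
```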

soumith ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11459

Differential Revision: D9764444

Pulled By: goldsborough

fbshipit-source-id: b288572106078f347f0342f158f9e2b63a58c235
2018-09-11 10:40:20 -07:00
35008e0a1a Add flags to fix half comparison and test (#11395)
Summary:
It was reported that there are some issues when using comparison operators for half types when certain THC headers are included. I was able to reproduce this and added a test. I also fix the issue by adding the proper definitions.

Reported in https://github.com/pytorch/pytorch/pull/10301#issuecomment-416773333
Related: https://github.com/pytorch/tutorials/pull/292
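
For reference, a hedged sketch of the kind of definitions involved; the exact flag set added by this PR is an assumption on our part:
```python
# Definitions of this kind keep host-side code from picking up CUDA's
# __half operator overloads (assumed flag set, for illustration only).
nvcc_flags = [
    '-D__CUDA_NO_HALF_OPERATORS__',
    '-D__CUDA_NO_HALF_CONVERSIONS__',
    '-D__CUDA_NO_HALF2_OPERATORS__',
]
```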

soumith fmassa
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11395

Differential Revision: D9725102

Pulled By: goldsborough

fbshipit-source-id: 630425829046bbebea3409bb792a9d62c91f41ad
2018-09-10 14:10:21 -07:00
ba6f10343b update CUDAExtension doc (#11370)
Summary:
fix typo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11370

Differential Revision: D9701777

Pulled By: soumith

fbshipit-source-id: 9f3986cf30ae0491e79ca4933c675a99d6078982
2018-09-07 12:56:38 -07:00
f60a2b682e allow spaces in filename for jit-compiled cpp_extensions (#11146)
Summary:
Now, folders with spaces in their paths will no longer error out for `torch.utils.cpp_extension.load(name="xxx", sources=["xxx.cpp"], verbose=True)` calls.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11146

Differential Revision: D9618838

Pulled By: soumith

fbshipit-source-id: 63fb49bfddc0998dccd8a33a6935543b1a6c2def
2018-09-01 20:39:51 -07:00
504d705d0f Support for CUDNN_HOME/CUDNN_PATH in C++ extensions (#10922)
Summary:
Currently we assume the cudnn includes and libraries are found under the `CUDA_HOME` root. But this is not always true. So we now support a `CUDNN_HOME`/`CUDNN_PATH` environment variable that can have its own `/include` and `/lib64` folder.

This means cudnn extensions now also get support on the FAIR cluster.
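
An illustrative sketch of the lookup described above (the helper name is ours, not the PR's):
```python
import os

def cudnn_paths():
    # Prefer an explicit CUDNN_HOME/CUDNN_PATH over assuming cudnn lives
    # under CUDA_HOME.
    home = os.environ.get('CUDNN_HOME') or os.environ.get('CUDNN_PATH')
    if home is None:
        return None
    return os.path.join(home, 'include'), os.path.join(home, 'lib64')
```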

soumith fmassa
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10922

Differential Revision: D9526856

Pulled By: goldsborough

fbshipit-source-id: 5c64a5ff7cd428eb736381c24736006b21f8b6db
2018-08-28 09:40:29 -07:00
5c0d9a2493 Soumith's last few patches to v0.4.1
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/10646

Reviewed By: ml7

Differential Revision: D9400556

Pulled By: pjh5

fbshipit-source-id: 1c9d54d5306f93d103fa1b172fa189fb68e32490
2018-08-20 18:28:27 -07:00
cc5b47ff47 Fix the logic for PATH guess on Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/10372

Differential Revision: D9240207

Pulled By: soumith

fbshipit-source-id: 0933f6fde19536c7da7d45044efbdcfe8ea40e1f
2018-08-09 12:40:44 -07:00
4c615b1796 Introduce libtorch to setup.py build (#8792)
Summary:
Prior to this diff, there were two ways of compiling the bulk of the torch codebase, with no interaction between them: you had to pick one or the other.

1) with setup.py. This method
- used the setuptools C extension functionality
- worked on all platforms
- did not build test_jit/test_api binaries
- did not include the C++ api
- always included python functionality
- produced _C.so

2) with cpp_build. This method
- used CMake
- did not support Windows or ROCM
- was capable of building the test binaries
- included the C++ api
- did not build the python functionality
- produced libtorch.so

This diff combines the two.

1) cpp_build/CMakeLists.txt has become torch/CMakeLists.txt. This build
- is CMake-based
- works on all platforms
- builds the test binaries
- includes the C++ api
- does not include the python functionality
- produces libtorch.so

2) the setup.py build
- compiles the python functionality
- calls into the CMake build to build libtorch.so
- produces _C.so, which has a dependency on libtorch.so

In terms of code changes, this mostly means extending the cmake build to support the full variety of environments and platforms. There are also a small number of changes related to the fact that there are now two shared objects; in particular, Windows requires annotating some symbols with dllimport/dllexport, and doesn't allow exposing thread_local globals directly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/8792

Reviewed By: ezyang

Differential Revision: D8764181

Pulled By: anderspapitto

fbshipit-source-id: abec43834f739049da25f4583a0794b38eb0a94f
2018-07-18 14:59:33 -07:00
512c49e831 Correct link flag order for GNU ld in utils.cpp_extension.load (#9021)
Summary:
Any flags linking libraries only take effect on inputs preceding them,
so we have to call `$cxx $in $ldflags -o $out` instead of the other way
around.

This was probably not detected so far since the torch libraries are
already loaded when loading JIT-compiled extensions, so this only has an
effect on third-party libraries.

This also matches our behavior on Windows.
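
A toy sketch of the corrected link line (our paraphrase of the fix): inputs first, then the `-l`/`-L` flags, since GNU ld resolves a library only against the inputs that precede it on the command line.
```python
def link_command(cxx, objects, ldflags, out):
    # Correct: objects before ldflags. The broken order was
    # [cxx] + ldflags + objects + ['-o', out].
    return [cxx] + objects + ldflags + ['-o', out]

# e.g. link_command('c++', ['a.o'], ['-L/opt/foo/lib', '-lfoo'], 'ext.so')
```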
Closes https://github.com/pytorch/pytorch/pull/9021

Reviewed By: soumith

Differential Revision: D8694049

Pulled By: ezyang

fbshipit-source-id: e35745fc3b89bf39c14f07ce90d6bd18e6a3d7cc
2018-06-29 08:24:07 -07:00
Ben
4f604a436b Export tensor descriptor (#8313)
* Export TensorDescriptor

* Export descriptors

* install cudnn_h

* Add tests and with_cuda

* tab to space

* forgot cpp

* fix flake

* ld flags

* flake

* address comments

* clang-format

* fixtest

* fix test

* extra headers

* extra headers

* camelcasing
2018-06-20 22:32:50 -07:00
ce122cc2d3 Relax CUDA_HOME detection logic to build when libraries are found. (#8244)
Log when no CUDA runtime is found, but CUDA is found
2018-06-07 20:08:13 -04:00
4bf0202cac [build] Have PyTorch depend on minimal libcaffe2.so instead of libATen.so (#7399)
* Have PyTorch depend on minimal libcaffe2.so instead of libATen.so

* Build ATen tests as a part of Caffe2 build

* Hopefully cufft and nvcc fPIC fixes

* Make ATen install components optional

* Add tests back for ATen and fix TH build

* Fixes for test_install.sh script

* Fixes for cpp_build/build_all.sh

* Fixes for aten/tools/run_tests.sh

* Switch ATen cmake calls to USE_CUDA instead of NO_CUDA

* Attempt at fix for aten/tools/run_tests.sh

* Fix typo in last commit

* Fix valgrind call after pushd

* Be forgiving about USE_CUDA disable like PyTorch

* More fixes on the install side

* Link all libcaffe2 during test run

* Make cuDNN optional for ATen right now

* Potential fix for non-CUDA builds

* Use NCCL_ROOT_DIR environment variable

* Pass -fPIC through nvcc to base compiler/linker

* Remove THCUNN.h requirement for libtorch gen

* Add Mac test for -Wmaybe-uninitialized

* Potential Windows and Mac fixes

* Move MSVC target props to shared function

* Disable cpp_build/libtorch tests on Mac

* Disable sleef for Windows builds

* Move protos under BUILD_CAFFE2

* Remove space from linker flags passed with -Wl

* Remove ATen from Caffe2 dep libs since directly included

* Potential Windows fixes

* Preserve options while sleef builds

* Force BUILD_SHARED_LIBS flag for Caffe2 builds

* Set DYLD_LIBRARY_PATH and LD_LIBRARY_PATH for Mac testing

* Pass TORCH_CUDA_ARCH_LIST directly in cuda.cmake

* Fixes for the last two changes

* Potential fix for Mac build failure

* Switch Caffe2 to build_caffe2 dir to not conflict

* Cleanup FindMKL.cmake

* Another attempt at Mac cpp_build fix

* Clear cpp-build directory for Mac builds

* Disable test in Mac build/test to match cmake
2018-05-24 07:47:27 -07:00
cf9b80720d Dont emit warning for ABI incompatibility when PyTorch was built from source (#7681) 2018-05-19 20:25:52 +01:00
8f42bb65b3 Be more lenient w.r.t. flag processing in C++ extensions (#7621) 2018-05-16 18:17:18 -04:00
64834f6fb8 Split libATen.so into libATen_cpu.so and libATen_cuda.so (#7275)
* Split libATen.so into libATen_cpu.so and libATen_cuda.so

Previously, ATen could be built with either CPU-only support, or
CPU/CUDA support, but only via a compile-time flag, requiring
two separate builds.  This means that if you have a program which
indirectly uses a CPU-only build of ATen, and a CPU/CUDA-build of
ATen, you're gonna have a bad time.  And you might want a CPU-only
build of ATen, because it is 15M (versus the 300M of a CUDA build).

This commit splits libATen.so into two libraries, CPU/CUDA, so
that it's not necessary to do a full rebuild to get CPU-only
support; instead, if you link against libATen_cpu.so only, you
are CPU-only; if you additionally link/dlopen libATen_cuda.so,
this enables CUDA support.  This brings ATen's dynamic library
structure more similar to Caffe2's.  libATen.so is no more
(this is BC BREAKING)

The general principle for how this works is that we introduce
a *hooks* interface, which introduces a dynamic dispatch indirection
between a call site and implementation site of CUDA functionality,
mediated by a static initialization registry.  This means that we can continue
to, for example, lazily initialize CUDA from Context (a core, CPU class) without
having a direct dependency on the CUDA bits.  Instead, we look up
in the registry if, e.g., CUDA hooks have been loaded (this loading
process happens at static initialization time), and if they
have been we dynamic dispatch to this class.  We similarly use
the hooks interface to handle Variable registration.

We introduce a new invariant: if the backend of a type has not
  been initialized (e.g., its library has not been dlopened; for
CUDA, this also includes CUDA initialization), then the Type
pointers in the context registry are NULL.  If you access the
registry directly you must maintain this invariant.

There are a few potholes along the way.  I document them here:

- Previously, PyTorch maintained a separate registry for variable
  types, because no provision for them was made in the Context's
  type_registry.  Now that we have the hooks mechanism, we can easily
  have PyTorch register variables in the main registry.  The code
  has been refactored accordingly.

- There is a subtle ordering issue between Variable and CUDA.
  We permit libATen_cuda.so and PyTorch to be loaded in either
  order (in practice, CUDA is always loaded "after" PyTorch, because
  it is lazily initialized.)  This means that, when CUDA types are
  loaded, we must subsequently also initialize their Variable equivalents.
  Appropriate hooks were added to VariableHooks to make this possible;
  similarly, getVariableHooks() is not referentially transparent, and
  will change behavior after Variables are loaded.  (This is different
  to CUDAHooks, which is "burned in" after you try to initialize CUDA.)

- The cmake is adjusted to separate dependencies into either CPU
  or CUDA dependencies.  The generator scripts are adjusted to either
  generate a file as either a CUDA file (cuda_file_manager) or a CPU file (file_manager).

- I changed all native functions which were CUDA-only (the cudnn functions)
  to have dispatches for CUDA only (making it permissible to not specify
  all dispatch options.)  This uncovered a bug in how we were handling
  native functions which dispatch on a Type argument; I introduced a new
  self_ty keyword to handle this case.  I'm not 100% happy about it
  but it fixed my problem.

  This also exposed the fact that set_history incompletely handles
  heterogeneous return tuples combining Tensor and TensorList.  I
  swapped this codegen to use flatten() (at the possible cost of
  a slight perf regression, since we're allocating another vector now
  in this code path).

- thc_state is no longer a public member of Context; use getTHCState() instead

- This PR comes with Registry from Caffe2, for handling static initialization.
  I needed to make a bunch of fixes to Registry to make it more portable

  - No more ##__VA_ARGS__ token pasting; instead, it is mandatory to pass at
    least one argument to the var-args. CUDAHooks and VariableHooks pass a nullary
    struct CUDAHooksArgs/VariableHooksArgs to solve the problem. We must get rid of
    token pasting because it does not work with MSVC.

  - It seems MSVC is not willing to generate code for constructors of template
    classes at use sites which cross DLL boundaries. So we explicitly instantiate
    the class to get around the problem. This involved tweaks to the boilerplate
    generating macros, and also required us to shuffle around namespaces a bit,
    because you can't specialize a template unless you are in the same namespace as
    the template.
  - Insertion of AT_API to appropriate places where the registry must be exported

- We have a general problem, which is that on recent Ubuntu distributions,
  --as-needed is enabled for shared libraries (cc @apaszke, who was worrying
  about this in #7160; see also #7160 (comment)). For now, I've hacked
  this up in the PR to pass -Wl,--no-as-needed to all of the spots necessary to
  make CI work, but a more sustainable solution is to attempt to dlopen
  libATen_cuda.so when CUDA functionality is requested.

    - The JIT tests somehow manage to try to touch CUDA without loading libATen_cuda.so. So
      we pass -Wl,--no-as-needed when linking libATen_cuda.so to _C.so

- There is a very subtle linking issue with lapack, which is solved by making sure libATen_cuda.so links against LAPACK. There's a comment in aten/src/ATen/CMakeLists.txt about this as well as a follow-up bug at #7353

- autogradpp used AT_CUDA_ENABLED directly. We've expunged these uses and added
  a few more things to CUDAHooks (getNumGPUs)

- Added manualSeedAll to Generator so that we can invoke it polymorphically (it
  only does something different for CUDAGenerator)

- There's a new cuda/CUDAConfig.h header for CUDA-only ifdef macros (AT_CUDNN_ENABLED, most prominently)

- CUDAHooks/VariableHooks structs live in at namespace because Registry's
  namespace support is not good enough to handle it otherwise (see Registry
  changes above)

- There's some modest moving around of native functions in ReduceOps and
  UnaryOps to get the CUDA-only function implementations into separate files, so
  they are only compiled into libATen_cuda.so. sspaddmm needed a separate CUDA
  function due to object linkage boundaries.

- Some direct uses of native functions in CUDA code has to go away, since these
  functions are not exported, so you have to go through the dispatcher
  (at::native::empty_like to at::empty_like)

- Code in THC/THCS/THCUNN now properly use THC_API macro instead of TH_API
  (which matters now that TH and THC are not in the same library)

- Added code debt in torch/_thnn/utils.py and other THNN parsing code to handle
  both TH_API and THC_API

- TensorUtils.h is now properly exported with AT_API

- Dead uses of TH_EXPORTS and co expunged; we now use ATen_cpu_exports and
  ATen_cuda_exports (new, in ATenCUDAGeneral.h) consistently

- Fix some incorrect type annotations on _cudnn_rnn_backward, where we didn't
  declare a type as possibly undefined when we should have. We didn't catch this
  previously because optional annotations are not tested on "pass-through" native
  ATen ops (which don't have dispatch). Upstream issue at #7316

- There's a new cmake macro aten_compile_options for applying all of our
  per-target compile time options. We use this on the cpu and cuda libraries.

- test/test_cpp_extensions.py can be run directly by invoking in Python,
  assuming you've set up your PYTHONPATH correctly

- type_from_string does some new funny business to only query for all valid CUDA
  types (which causes CUDA initialization) when we see "torch.cuda." in the
  requested string

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Last mile libtorch fixes

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* pedantic fix

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2018-05-10 10:28:33 -07:00
54a4867675 Bring back C++ extension torch.h (#7310)
* Bring back C++ extension torch.h

* Fix python.h include in python_tensor.cpp
2018-05-05 14:06:27 -07:00
67d0d14908 Rename autograd namespace to torch and change torch.h into python.h (#7267)
* Rename autograd namespace to torch and change torch.h into python.h

* Include torch.h instead of python.h in test/cpp/api

* Change some mentions of torch.h to python.h in C++ extensions

* Set paths directly, without find_path
2018-05-04 08:04:57 -07:00