Commit Graph

110 Commits

e01a5bf28b Add USE_NAMEDTENSOR compilation flag. (#20162)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20162
ghimport-source-id: 0efcd67f04aa087e1dd5faeee550daa2f13ef1a5

Reviewed By: gchanan

Differential Revision: D15278211

Pulled By: zou3519

fbshipit-source-id: 6fee981915d83e820fe8b50a8f59da22a428a9bf
2019-05-09 09:09:16 -07:00
bc5398451e Enable ROCm multi-gpu with Gloo
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18640

Differential Revision: D15185822

Pulled By: bddppq

fbshipit-source-id: 1b49ab3fb0f251cfc7ef3ddd62033ae0065a4ec3
2019-05-07 09:55:47 -07:00
8f0603b128 C++ changes toward libtorch and libcaffe2 unification (#19554)
Summary:
* adds TORCH_API and AT_CUDA_API in places
* refactor code generation Python logic to separate
  caffe2/torch outputs
* fix hip and asan
* remove profiler_cuda from hip
* fix gcc warnings for enums
* Fix PythonOp::Kind
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19554

Differential Revision: D15082727

Pulled By: kostmo

fbshipit-source-id: 83a8a99717f025ab44b29608848928d76b3147a4
2019-04-26 01:38:10 -07:00
556c8a300b Fall back to asking nvcc for detecting cuda version if no *cudaart* is found (#19741)
Summary:
This happens on Debian/Ubuntu with distribution-provided cuda repackaging.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19741
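
A minimal sketch of the fallback described above, assuming `nvcc` is on the PATH and prints a `release X.Y` string; the function name and the parsing are illustrative, not the code from this PR:

```
import re
import subprocess

def cuda_version_from_nvcc():
    # nvcc --version prints a banner containing e.g. "release 9.1, V9.1.85".
    out = subprocess.check_output(["nvcc", "--version"]).decode()
    match = re.search(r"release (\d+)\.(\d+)", out)
    if match is None:
        return None
    return int(match.group(1)), int(match.group(2))
```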

Differential Revision: D15082550

Pulled By: soumith

fbshipit-source-id: 2ca39c6cdc9305896529b6fd537270116223cd6c
2019-04-25 10:54:20 -07:00
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import unused; flake8-2 will not.  For now, I just
noqa'd all these sites.

All the changes were done by hand.
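
For illustration, the two options mentioned above for a re-export in an `__init__.py`, using standard-library names rather than anything from this change:

```
# mypackage/__init__.py (illustrative)

# What this change does: keep the re-export and suppress the lint per site.
from os.path import join  # noqa: F401

# The __all__ alternative left for future work: names listed here are
# treated as used by the linter.
from os.path import split
__all__ = ["split"]
```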

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
2a6cbfaccf Enable 32 bit CPU build on Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18176

Differential Revision: D14539884

Pulled By: ezyang

fbshipit-source-id: 0e4bd9c1ef1830cd9bcc40df36b87534f61def08
2019-03-20 09:26:50 -07:00
36ddad3bfe Allow PyTorch to be built without NCCL (#17295)
Summary:
With this patch you can use USE_DISTRIBUTED=OFF (possibly in combination with USE_NCCL=OFF).

This matters partly because NCCL doesn't build with CUDA 8.
This is written under the assumption that NCCL is required for distributed builds; if not, the USE_DISTRIBUTED check in nccl.py should be replaced by a check for the USE_NCCL environment variable.

Fixes: #17274
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17295
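
A rough sketch of the environment-variable guard described above, assuming the flags arrive as plain strings in the environment; the helper name is made up for illustration:

```
import os

def env_flag_off(name):
    # Treat unset as ON; "0", "OFF", "NO", "FALSE" (any case) as OFF.
    return os.getenv(name, "ON").upper() in ("0", "OFF", "NO", "FALSE")

# Only probe for NCCL when a distributed, NCCL-enabled build is requested.
skip_nccl = env_flag_off("USE_DISTRIBUTED") or env_flag_off("USE_NCCL")
```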

Differential Revision: D14155080

Pulled By: ezyang

fbshipit-source-id: 0d133f7c5b4d118849f041bd4d4cbbd7ffc3c7b4
2019-02-20 13:35:16 -08:00
21193bf123 try to get rid of tmp_install (#16414)
Summary:
Rehash of previous attempts. This tries a different approach where we accept the install as specified in cmake (leaving bin/, include/, and lib/ alone), and then try to adjust the rest of the files to this more standard layout.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16414

Differential Revision: D13863635

Pulled By: zdevito

fbshipit-source-id: 23725f5c64d7509bf3ca8f472dcdcad074de9828
2019-01-29 17:29:40 -08:00
9477a5d9c8 Remove bash from build (#16289)
Summary:
This commit removes the dependency on `build_pytorch_libs.sh` by moving the remaining functionality that is not expressible in cmake into Python. Removing the indirection through bash also removes over 300 lines of environment munging code that is incredibly hard to understand because it passes a lot of secret parameters through `os.environ`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16289
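
A sketch of the shape the Python side takes once the bash indirection is gone: the build driver invokes cmake directly and passes options explicitly rather than through ambient environment variables (paths and option names here are illustrative):

```
import subprocess

def run_cmake(source_dir, build_dir, options):
    # Every option cmake needs is an explicit -D argument instead of a
    # value smuggled through the environment.
    cmd = ["cmake", source_dir]
    cmd += ["-D{}={}".format(key, value) for key, value in sorted(options.items())]
    subprocess.check_call(cmd, cwd=build_dir)

# Hypothetical usage:
# run_cmake("..", "build", {"BUILD_TEST": "ON", "CMAKE_BUILD_TYPE": "Release"})
```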

Reviewed By: ezyang

Differential Revision: D13821662

Pulled By: zdevito

fbshipit-source-id: d658d26925e3b1169ac1e3d44a159cf8a1f0d9b1
2019-01-25 16:03:53 -08:00
0cd1ab82b0 Remove dead code from setup.py, remove need for build target. (#16162)
Summary:
Now it is only necessary to use 'develop' or 'install' to build. Incremental cmake is on by default. `develop --cmake` forces it to rerun.

The NinjaBuilder stuff is dead. It was used to make building _C.so
faster but now _C.so is just an empty stub file.

Removed a bunch of custom build commands from setup.py that are
no longer meaningful now that cmake handles most of the build.

Removed unused targets in build_pytorch_lib.sh/bat
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16162

Differential Revision: D13744155

Pulled By: zdevito

fbshipit-source-id: d836484782c65b7f8e8c7a82620886f7a7777892
2019-01-21 17:27:56 -08:00
8964a2e6e6 Split Caffe2 CI into cmake-only and python builds (#15917)
Summary:
bypass-lint

- Change all Caffe2 builds to use setup.py instead of cmake
- Add a -cmake- Caffe2 build configuration that uses cmake and only builds cpp
- Move skipIfCI logic from onnx test scripts to the rest of CI logic
- Removal of old PYTHONPATH/LD_LIBRARY_PATH/etc. env management
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15917

Reviewed By: orionr

Differential Revision: D13637583

Pulled By: pjh5

fbshipit-source-id: c5c5639db0251ba12b6e4b51b2ac3b26a8953153
2019-01-14 15:20:44 -08:00
0ed3f766e9 Unify flags and environmental variable when building LibTorch/PyTorch (#15868)
Summary:
Fixes #15858.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15868

Differential Revision: D13622354

Pulled By: soumith

fbshipit-source-id: bb8c49520ebf926c6194d42db75accba867018c7
2019-01-10 06:47:14 -08:00
8a5ba577c1 Revert "remove use of tmp_install" (#15847)
Summary:
This reverts commit 04bf5285896e52ac118d2f9e9b7f582f695f13e2.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15847

Differential Revision: D13603174

Pulled By: anderspapitto

fbshipit-source-id: ae321434d3345ad94fad67bf71fd027cddeb4588
2019-01-08 16:30:19 -08:00
04bf528589 remove use of tmp_install
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14553

Differential Revision: D13583335

Pulled By: anderspapitto

fbshipit-source-id: 8711fead9eda877c1037a0bc59f91a3d2e01f3e0
2019-01-04 13:48:12 -08:00
d71fac20eb Refactor hotpatch_vars and apply it to libtorch (#14976)
Summary:
Fixes #14801.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14976

Differential Revision: D13485381

Pulled By: soumith

fbshipit-source-id: 0af3c2e1b90988d56f6f85632328d1e4b788ffd2
2018-12-16 21:53:31 -08:00
5222a1b190 Fixing reading of FBGEMM from env variables
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15023

Reviewed By: orionr

Differential Revision: D13406778

Pulled By: pjh5

fbshipit-source-id: 2265f01170fb7969cbdf4e44ca6ef183f5d8017d
2018-12-10 18:18:38 -08:00
65da7ddad6 USE_FBGEMM=True by default
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14868

Differential Revision: D13383390

Pulled By: jamesr66a

fbshipit-source-id: 1880c07dfd239e19153bd4fde2ab2c8d0604f956
2018-12-07 14:22:55 -08:00
a0ee3a279c USE_TENSORRT support and TensorRT 5 compatibility
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13945

Differential Revision: D13317525

Pulled By: yinghai

fbshipit-source-id: 8630dfec1bbc5aac19539e344e7c38a7fd8b051d
2018-12-07 14:01:11 -08:00
1e8aeb0bee fix lint
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14076

Differential Revision: D13095528

Pulled By: suo

fbshipit-source-id: 78d08719ad5579dc0d6bb9563972df393e4286fe
2018-11-15 22:10:06 -08:00
f66cb02016 Turn fbgemm off by default for pytorch (#14048)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14048

Setting USE_FBGEMM to OFF by default until we figure out how to properly separate the avx2 code. See [this issue](https://github.com/pytorch/pytorch/issues/13993). PyTorch can still be compiled with fbgemm by using USE_FBGEMM=ON.

Reviewed By: jspark1105

Differential Revision: D13090454

fbshipit-source-id: 6e0e92612e4362a306e376df3dc33e8edeb066e9
2018-11-15 18:42:16 -08:00
18de330e86 CMake integration for int8 server operators
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13558

Reviewed By: Maratyszcza

Differential Revision: D12945460

Pulled By: dskhudia

fbshipit-source-id: 1a91027b305fd6af77eebd9a4fad092a12f54712
2018-11-06 15:45:15 -08:00
dbab9b73b6 separate mkl, mklml, and mkldnn (#12170)
Summary:
1. Remove avx2 support in mkldnn
2. Separate mkl, mklml, and mkldnn
3. Fix convfusion test case
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12170

Reviewed By: yinghai

Differential Revision: D10207126

Pulled By: orionr

fbshipit-source-id: 1e62eb47943f426a89d57e2d2606439f2b04fd51
2018-10-29 10:52:55 -07:00
5e73b828bd CMake integration for Int8 ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13145

Differential Revision: D10860849

Pulled By: Maratyszcza

fbshipit-source-id: fdbcc23ff9beaeaedfd561176df6cfe87685c1f5
2018-10-25 22:25:10 -07:00
cf235e0894 fix lint after new flake8 release added new style constraints (#13047)
Summary:
fix lint after new flake8 release added new style constraints
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13047

Differential Revision: D10527804

Pulled By: soumith

fbshipit-source-id: 6f4d02662570b6339f69117b61037c8394b0bbd8
2018-10-24 09:03:38 -07:00
4c21b2f2d3 split register_aten_ops.cpp into shards (#12615)
Summary:
After an analogous breakup of VariableType.cpp, the generated
register_aten_ops.cpp is now the slowest-to-compile file in a typical
incremental rebuild by a wide margin. Therefore, give it the same
treatment - the generated code is split across several files to allow
parallel compilation.

Note that the existing code takes some care to arrange that overloads
of the same op name are given in a particular order. This diff
preserves that behavior, by treating all overloads of the same name as
a single indivisible unit, and sharding based on these groups rather
than on individual constructors.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12615
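
The grouping rule described above, as a toy sketch: overloads are bucketed by op name, and whole buckets are dealt out round-robin across shards, so overloads of one name never split across files (this is not the generator code itself):

```
from collections import OrderedDict

def shard_by_name(declarations, num_shards):
    # Group all overloads of one op name into a single indivisible unit,
    # preserving the order in which names first appear.
    groups = OrderedDict()
    for decl in declarations:
        groups.setdefault(decl["name"], []).append(decl)

    # Deal whole groups out round-robin across the shards.
    shards = [[] for _ in range(num_shards)]
    for index, overloads in enumerate(groups.values()):
        shards[index % num_shards].extend(overloads)
    return shards

# shard_by_name([{"name": "add"}, {"name": "add"}, {"name": "mul"}], 2)
# -> [[{'name': 'add'}, {'name': 'add'}], [{'name': 'mul'}]]
```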

Reviewed By: ezyang

Differential Revision: D10367363

Pulled By: anderspapitto

fbshipit-source-id: 07db5f9cb79748040909716349626412a13bc86e
2018-10-15 14:12:27 -07:00
49256ddb4a split generated VariableType.cpp (#12493)
Summary:
On my devgpu, this brings the time taken for `touch torch/csrc/jit/type.h && time python setup.py rebuild develop` (debug mode, multicore build) down from 75 seconds to 62 seconds. For the `ninja install` of the libtorch portion, which this change affects, the reduction is from 52 seconds to 35.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12493

Reviewed By: zdevito

Differential Revision: D10315988

Pulled By: anderspapitto

fbshipit-source-id: 316dc4ab81134aaa17a568cfc07408b7ced08c2e
2018-10-12 13:14:44 -07:00
25bd7fe488 Add USE_FFMPEG flag for setup.py and R2Plus1D (#12543)
Summary:
Needed for https://github.com/facebookresearch/R2Plus1D/pull/46
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12543

Differential Revision: D10320147

Pulled By: orionr

fbshipit-source-id: a7dcbf7c0d4b405b9e89b28ef75a0ed1cf2a3e6a
2018-10-10 18:09:48 -07:00
ceadde2a7f Add some more locations to search for nccl. (#12063)
Summary:
Users generally expect ./configure to find libraries
installed in /usr/local and /usr, so search for nccl
there too.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12063
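
A simplified sketch of that kind of search, with /usr/local and /usr among the candidate prefixes (the prefix list and library-directory names are illustrative):

```
import glob
import os

def find_nccl_home(extra_prefixes=()):
    prefixes = [os.getenv("NCCL_ROOT_DIR", ""), "/usr/local", "/usr", *extra_prefixes]
    for prefix in filter(None, prefixes):
        for libdir in ("lib", "lib64", "lib/x86_64-linux-gnu"):
            if glob.glob(os.path.join(prefix, libdir, "libnccl*")):
                return prefix
    return None
```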

Differential Revision: D10036248

Pulled By: ezyang

fbshipit-source-id: d331ddd2ccc8ac9846fb54222db284b1ec371659
2018-09-25 13:27:54 -07:00
a175282776 Flags for LMDB, LevelDB, and Caffe2 ops (#11462)
Summary:
Add flags for LMDB and LevelDB, default `OFF`. These can be enabled with

```
USE_LMDB=1 USE_LEVELDB=1 python setup.py build_deps
```

Also add a flag to build Caffe2 ops, which is default `ON`. Disable with

```
NO_CAFFE2_OPS=1 python setup.py build_deps
```

cc Yangqing soumith pjh5 mingzhe09088
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11462
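
For illustration, one common way such environment flags end up driving the build is by being forwarded as CMake definitions; this is a hedged sketch, not the actual setup.py logic:

```
import os

def cmake_defines_from_env(flag_names=("USE_LMDB", "USE_LEVELDB", "BUILD_CAFFE2_OPS")):
    # Forward each flag that is set in the environment as an explicit -D option.
    defines = []
    for name in flag_names:
        value = os.getenv(name)
        if value is not None:
            on = value.upper() not in ("0", "OFF", "NO", "FALSE")
            defines.append("-D{}={}".format(name, "ON" if on else "OFF"))
    return defines
```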

Reviewed By: soumith

Differential Revision: D9758156

Pulled By: orionr

fbshipit-source-id: 95fd206d72fdf44df54fc5d0aeab598bff900c63
2018-09-10 17:27:50 -07:00
802d21c8f4 Remove FULL_CAFFE2 flag (#11321)
Summary:
Continuing pjh5's work to remove FULL_CAFFE2 flag completely.

With these changes you'll be able to also do something like

```
NO_TEST=1 python setup.py build_deps
```
and this will skip building tests in caffe2, aten, and c10d. By default the tests are built.

cc mingzhe09088 Yangqing
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11321

Reviewed By: mingzhe09088

Differential Revision: D9694950

Pulled By: orionr

fbshipit-source-id: ff5c4937a23d1a263378a196a5eda0cba98af0a8
2018-09-07 15:09:44 -07:00
020501b7b0 Getting rid of USE_C10D for build (#11237)
Summary:
Will use USE_DISTRIBUTED for both c10d and THD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11237

Differential Revision: D9647825

Pulled By: teng-li

fbshipit-source-id: 06e0ec9b5e2f8f38780fc88718f8499463e9e969
2018-09-04 17:27:53 -07:00
a4c59a9dab MIOpen integration, more tests enabled, bug fixes (#10612)
Summary:
* first integration of MIOpen for batch norm and conv on ROCm
* work around a ROCm compiler bug exposed by elementwise_kernel through explicit capture of variables in the densest packing
* work around a ROCm compiler bug exposed by having `extern "C" __host__` as a definition and just `__host__` in the implementation through the hipify script
* use fabs(), per C++11, for double absolute value, not ::abs(), which is integer-only on ROCm
* enable test_sparse set on CI, skip tests that don't work currently on ROCm
* enable more tests in test_optim after the elementwise_bug got fixed
* enable more tests in test_dataloader
* improvements to hipification and ROCm build

With this, resnet18 on CIFAR data trains without hang or crash in our tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10612

Reviewed By: bddppq

Differential Revision: D9423872

Pulled By: ezyang

fbshipit-source-id: 22c0c985217d65c593f35762b3eb16969ad96bdd
2018-08-23 15:24:47 -07:00
227635142f Delete THD master_worker (#10731)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10731

Differential Revision: D9423675

Pulled By: ezyang

fbshipit-source-id: 37221e11d84cc3672b944af598ea229a1d4c38cc
2018-08-22 08:54:36 -07:00
a38b572de3 enable unit tests and other changes (#10266)
Summary:
This PR for the ROCm target does the following:
* enable some unit tests on ROCm
* fix a missing static_cast that breaks BatchNorm call on ROCm
* fix BatchNorm to work on ROCm w/ ROCm warp sizes etc
* improve the pyhipify script by introducing kernel scope to some transpilations, along with other improvements
* fix a linking issue on ROCm
* for more unit test sets: mark currently broken tests broken (to be fixed)
* enable THINLTO (phase one) to parallelize linking
* address the first failing of the elementwise kernel by removing non-working ROCm specialization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10266

Differential Revision: D9184178

Pulled By: ezyang

fbshipit-source-id: 03bcd1fe4ca4dd3241f09634dbd42b6a4c350297
2018-08-06 14:54:01 -07:00
3bb8c5eab1 Allow MKLDNN on macOS, and any other OS where CMake is able to detect it.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/9638

Reviewed By: soumith

Differential Revision: D8946130

Pulled By: resistor

fbshipit-source-id: 87bd9cb12608467b05bd4998fdb00bfdbd038ca2
2018-07-20 22:27:02 -07:00
01cffaa7e8 fix extra output in generate_code.py (#9339)
Summary:
operator.cpp is not generated. Removing the line prevents generate_code.py from always thinking it is out of date and rerunning.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9339
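
The underlying staleness rule is the usual one: rerun if any declared output is missing or older than the newest input, so listing a file that is never actually written makes the step permanently out of date. A toy version of such a check (not the code in generate_code.py):

```
import os

def needs_regen(inputs, outputs):
    # Rerun if any declared output is missing ...
    if not all(os.path.exists(path) for path in outputs):
        return True
    # ... or older than the newest input.
    newest_input = max(os.path.getmtime(path) for path in inputs)
    oldest_output = min(os.path.getmtime(path) for path in outputs)
    return oldest_output < newest_input
```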

Reviewed By: ezyang

Differential Revision: D8798689

Pulled By: zdevito

fbshipit-source-id: f25a2e215fec29aa51571e6a31771f0f91e7a213
2018-07-11 10:25:31 -07:00
efefd1d7cf Unify aten_dispatch and aten_schema into a single operator abstraction with human-readable schema. (#8885)
Summary:
This is a series of two commits that should probably be read separately. They are stacked on top of #9018 since the second commit requires it for correctness.

Commit 1
=======

This commit is the first in a series that will clean up how we handle declaring operators and intrinsics in the JIT to make it more modular and readable. This introduces readable declarations that can be used to register operators and switches gen_jit_dispatch to generate this schema. A follow up PR will remove the dispatch keys like "add-3" and resolve ops directly based on the registered schema, further simplifying the generation process.

* Switches schema over to parsed declarations, in the future this will allow something like:

```
  registry.register_intrinsic("foo(Tensor a, Tensor b) -> Tensor", [](Stack& stack) {
    ...
  })
```

This will allow the scalable registration of intrinsics for lists, tuples, and other ops, along with metadata for these ops (e.g. derivatives and size propagation routines).

The declarations resemble those used by PythonArgParser but have been significantly cleaned up to minimize the number of types that can appear in the declaration. We should strive to get the other parts of PyTorch switched over to this restricted declaration set when possible, but it is too much to do in a single PR. My hope is that eventually we will use a very similar language to describe declarations in C10, and this can serve as a guide for that.

Parsing is done using the script lexer, so it is very robust to whitespace and extensible for future types.

This removes the other way we encoded schema, and makes it easier to see what schema are registered.

Current generated declarations: https://gist.github.com/zdevito/a96a17766fb3a098d69a91ee00abaaf6

* Switches how we handle attempting to use an integer in the place of a fixed-sized int list, such as in conv (e.g. 'int[3] stride=1'). Now that we can statically distinguish between int and Tensor, we handle the expansion as an implicit conversion in the compiler. This allows us to simplify the interpreter since it no longer needs to handle the conversion itself.

* Schema declarations have been changed so that they match the type system in the IR exactly. In particular, attribute_info which was used by liftConstantAttributes has been dropped and constant attributes are lifted purely based on the type of the input. Type conversions in compiler have been simplified due to this change.

* Error highlighting in ErrorReport now only reports at most 20 lines of code, to make reading where an error occurred easier.

Commit 2
=======

This commit unifies aten_dispatch and aten_schema into a single Operator object that both contains schema and implementation information. In the future we can use this object to also contain functionality like shape prop and autodiff needed by all operators. Operators are registered globally, and dispatch logic uses the schema information to figure out which variant to use. Descriptor keys, a frequent source of inscrutable debug errors, have been removed.

* Introduce Operator, to replace TensorOp. Unlike TensorOp, we use Operator for all op implementations, including primitives that may occur in the graphs. The only exceptions are ops that are only known to the interpreter like jumps, and GraphExecutors where we need to record additional debug info.

* Adds a global registry for Operator implementations. aten_dispatch.cpp turns into register_aten_ops.cpp, which registers all the Operators for aten with the operator registry. register_prim_ops.cpp now contains the implementations for primitive operators that used to be in the interpreter. This means that it is now safe to use `getOperation(node)` to lookup the true interpreter function for the node, which will simplify const-propagation passes.

* Remove addInterpreterOpHandler in favor of global operator registry.

* Instead of descriptors, we match Node arguments directly against FunctionSchema describing expected inputs in `matchSchema`. `matchSchema` knows how to parse both attributes and positional inputs from a node and match them to the appropriate registered operator. Debug error messages when we try to run an invalid operator are significantly improved: they now automatically display the schemas registered under the same name.

* Merge aten_schema into register_aten_ops. Each Operator takes a string schema, which is parsed to determine when to dispatch to that op.

* Cleans up gen_jit_dispatch.py now that we do not need to write out descriptors.  In particular, skip_scalar_overloads can be removed since Richard's code sorts declarations to put Tensor, Tensor declarations first.

* remove matchSchemaAndLiftConstantAttributes and use emitBuiltinCall instead to remove code duplication

* refactor stack manipulation functions into a separate header file.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/8885
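
As a toy illustration of the registry idea, in Python rather than the C++ described above: implementations are registered under human-readable schema strings and looked up by name and arity. This is only a sketch of the concept, not the JIT's Operator API:

```
OPERATORS = {}

def register_operator(schema, fn):
    # Schema strings look like "add(Tensor a, Tensor b) -> Tensor";
    # the text before "(" is the name used for lookup.
    name = schema.split("(", 1)[0]
    OPERATORS.setdefault(name, []).append((schema, fn))

def get_operation(name, num_args):
    # Pick the registered overload whose declared argument count matches.
    for schema, fn in OPERATORS.get(name, []):
        arg_text = schema.split("(", 1)[1].rsplit(")", 1)[0]
        arity = len([a for a in arg_text.split(",") if a.strip()])
        if arity == num_args:
            return fn
    raise KeyError("no overload of {} takes {} arguments".format(name, num_args))

register_operator("add(Tensor a, Tensor b) -> Tensor", lambda a, b: a + b)
print(get_operation("add", 2)(1, 2))  # prints 3
```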

Reviewed By: jamesr66a

Differential Revision: D8751048

Pulled By: zdevito

fbshipit-source-id: 312aabfbf88307c5f6ab947b6caf691468b94557
2018-07-10 10:24:48 -07:00
9ec0a2aef4 fbshipit-source-id: ba600fcd2b5cefc7621357bdeb05e24cea02e5af 2018-06-27 04:50:56 -07:00
22a70fbe2e Minor fixes for finding CUDNN (#8743)
* Minor fixes for finding CUDNN

* Minor fixes for comment

* Fix lints

* Fix naming conflicts

* Fix import name
2018-06-24 21:42:19 -04:00
48e90e3339 Build system changes (#8627)
* All changes needed to get rid of process_github.sh

* allow thnn_h_path
2018-06-20 17:45:26 -04:00
dc186cc9fe Remove NO_* and WITH_* across codebase, except in setup.py (#8555)
* remove legacy options from CMakeLists

* codemod WITH_ to USE_ for WITH_CUDA, WITH_CUDNN, WITH_DISTRIBUTED, WITH_DISTRIBUTED_MW, WITH_GLOO_IBVERBS, WITH_NCCL, WITH_ROCM, WITH_NUMPY

* cover SYSTEM_NCCL, MKLDNN, NNPACK, C10D, NINJA

* removed NO_* variables; they are now hotpatched only in setup.py (see the sketch after this list)

* fix lint
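
A minimal sketch of the setup.py-side hotpatch referenced above: legacy NO_* variables are translated into the corresponding USE_* variables before anything else reads them (the variable list and exact semantics here are illustrative):

```
import os

LEGACY_FLAGS = ("CUDA", "CUDNN", "DISTRIBUTED", "SYSTEM_NCCL", "MKLDNN", "NNPACK")

def hotpatch_no_vars():
    for name in LEGACY_FLAGS:
        legacy = os.getenv("NO_" + name)
        if legacy is not None and ("USE_" + name) not in os.environ:
            # NO_X=1 means USE_X=0, and vice versa.
            os.environ["USE_" + name] = "0" if legacy == "1" else "1"
```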
2018-06-15 12:29:48 -04:00
81b92f7515 Get ROCm building again on master (#8343)
Billing of changes:

- New Jenkins script for building on rocm. For now it is a bit hacked together, but we can improve it once CI is running
- New ROCM docker image for nightly HIP, and also some legacy packages that we need temporarily
- New enabled config py2-clang3.8-rocmnightly-ubuntu16.04-build based off of the existing Caffe2 image (not built yet)
- A big pile of cmake fixes, mostly to turn bits on/off when ROCM build is involved
- Switch from hiprng to hcrng
- Apply some patches directly in code, eliminating the patches
- Use __hdiv instead of hdiv, it's more portable
- THCNumerics<T>::gt doesn't work in HIP, so simulate it with sub
- Add a few more overloads HIP needs
- Turn off use of hcc to link (we plan to turn this back on to get tests running)
- Search for hiprand, hiprng, hipblas, hipsparse
- Better Python 2 portability
2018-06-12 23:05:21 -04:00
fcd9af8a25 changes to support ATen code generation inside fbcode (#8397)
* Back out "Back out "Add support for generating ATen files during fbcode build""

Original commit changeset: 7b8de22d1613

I'm re-sending this diff exactly as it was approved and
committed. Fixes to support @mode/opt will be sent separately for ease
of review.

* Enable building //caffe2:torch with @mode/opt

In @mode/opt, python runs out of a PAR, which breaks a lot of
assumptions in the code about where templates/ folders live relative
to __file__. Rather than introduce hacks with parutil, I simply turn
template_path into a parameter for all the relevant functions and
thread it through from the top level.
2018-06-12 14:57:29 -07:00
991bdd7f13 [build] remove the use of NO_CUDA (#8300)
* Only remove NO_CUDA from CMakeLists.txt

* @ezyang's catch
2018-06-12 12:14:36 -04:00
695d40efc2 Create initial Python bindings for c10d (#8119)
* Build and install c10d from tools/build_pytorch_libs.sh

* Create initial Python bindings for c10d

* clang-format

* Switch link order to include more symbols

* Add bindings and tests for ProcessGroupGloo

* Add broadcast test

* Separate build flag for c10d

* Explicit PIC property

* Skip c10d tests if not available

* Remove c10d from Windows blacklist

Let it skip by itself because it won't be available anyway.

* Make lint happy

* Comments

* Move c10d module into torch.distributed

* Close tempfile such that it is deleted
2018-06-08 12:59:51 -07:00
5596260b9e Add third way to determine IS_CONDA (#7971) 2018-05-31 09:04:27 -04:00
cd86d4c554 PyTorch AMD Build Scripts (#6625)
* PyTorch AMD Build Script.

* Python invocation for hipify (a toy sketch of the substitution pass appears after this list)

* Adding individual hip files.

* Updating CWD

Use the actual path for the file instead of the current working directory, which depends on where the script is invoked.

* Updating folder path for amd_build

* Removing previous amd_build directory

* Updated setup.py to support WITH_ROCM

* Renaming the files for CuDNN BatchNorm & Conv since having two .cpp files with the same name results in a linking error in the HCC compiler used for ROCm/AMD.

* Removing old BatchNorm & Conv files since they've been renamed.

* Updating build path to handle ROCM

* Cleaned up the build path and created a FindHIP cmake file for setting up relevant hip paths.

* Separated the individual patch files to make it easier to detect issues while building.

* Removed CMakeLists hip files and fixed directory structure

* Adding build pytorch amd script

* Merged setup patch into PyTorch setup.py & cleaned a few issues

* Added information on where to download the hipify-python script.

* Resolved linting issues inside of build_pytorch_amd.py

* Removing many unnecessary patch files. Removing unnecessary .hip files. Fixing up the build process.

* Refactored the PR for supporting HIP

* Minimizing the number of changes inside individual patches.

* Cleaned up patch files.

* Removed patch files.

* Updating patches

* Removing HIP change from file.

* Cleaned up patches

* Added AVX/SSE avoidance due to a bug in the ROCm stack. Just temporary for now.

* Removing the other HIP file

* Removed patch file + merged ROCm into Aten/test

* Removed the ATen tests patch file and updated the disable_features yaml to remove headers that don't exist on the HIP stack.

* Reduced the number of patches down to 14 after Edward's suggestions.

* Transferred deletion of certain functions from patch to yaml file.

* Set default Thrust path

* Fixed aten files so we now use the templated pow/abs instead of std:: directly.

* Removed error from aten/src/THCUNN/Abs.cu

* Updated the locations of the cmake build files. Moved THCTensorRandom from a hip to a patch file. Added executable/library commands that can successfully handle either CUDA or HIP.

* Removed hip extraction from the build script and removed the old hip file.

* Replaced MACRO with function in upper level cmake.

* Added empty ELSE() block to prevent the loading of a command without CUDA or HIP. Also added IF guards around torch_cuda_based_add_executable in Aten tests.

* Updated aten tests.

* Removed the hip include from the ATen header.

* Can't throw exceptions on C++ AMP, using abort

* Missing IF guards for cuda/hip executables in aten tests.

* Removed a series of patch files.

* Added template keyword to help out the HCC compiler.

* Rebased the specific files displayed in the PR

* Fixing typo.

* Change flag from "WITH_CUDA" to "NOT NO_CUDA"

Replacing "WITH_CUDA" with "NOT NO_CUDA" after the rebase.

* Fix LoadHIP path

* Updating build files after rebasing.

* Reorganization after cpu/gpu separation.

* Removed HIPCC from setup.py & removed -shared extra linking args.

* Updated CMake / Setup build to correctly link when under ROCm stack.

* Removed the unnecessary argument from Extension constructor.

* Adding another test to be included with ROCm building.

* Updated the setup_helpers scripts in order to get around linter error

* Fix syntax issue

* Solving lint issue: line too long
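
As a toy illustration of what the hipify pass referenced in this entry does at its core, rewriting CUDA identifiers to their HIP equivalents in source text (the mapping here is a tiny illustrative subset):

```
import re

CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaFree": "hipFree",
    "cudaMemcpy": "hipMemcpy",
    "cudaStream_t": "hipStream_t",
}

def hipify_source(text):
    # Whole-word matches only, so "cudaMallocHost" is not partially rewritten.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, CUDA_TO_HIP)) + r")\b")
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(1)], text)

print(hipify_source("cudaStream_t s; cudaMalloc(&p, n);"))
# hipStream_t s; hipMalloc(&p, n);
```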
2018-05-15 18:38:01 -07:00
ce69d3110b Improve script builtin checking using schema (#7311)
Improve script builtin checking using schema

* This adds aten_schema.h, which provides a barebones amount of type and
  argument information about each builtin operator
* emitBuiltinCall is updated to use this information rather than
  aten_dispatch to ensure the operator is correct.
* handling of keyword and positional arguments now matches Python behavior (a toy sketch appears at the end of this entry)
* There is no longer a requirement that kwargs be constant or that the
  attributes of an op must be entirely constant or non-constant
* compiler now constructs a non-attributed version of the op first and
  then turns it into the constant-attribute version if all attributes
  are constants.
* default arguments for builtins now work
* SugaredValue::call and similar functions now have SourceRange information
  for their arguments so that error reporting is more accurate

Notes:
* This does not try to merge the builtin checking with python arg parser.
  Given that we will eventually have C10 schema which will replace aten_schema,
  we will eventually have a C++ description of the schema, and working off that
  description directly will be the easiest form to understand.
* python function calls and script method calls do not support keyword arguments yet.
  When we add this support we should refactor the handling in tryEmitSchema
  that resolves keywords into a common function.

* default arguments work
* keyword arguments to builtins work (still need to extend to calling python and other script methods)
* much better error reporting for incorrect builtins

Lift any constants to attributes on nodes when possible

* Schema is usable internally in the compiler as
  the function signatures of script functions as well as for builtin
  operators.
* Adds a List[T] class to better represent the arguments to cat/stack
  as a type rather than with custom checking.
* Support kwargs for calls of script methods

A future commit will be needed to add support for:
* calls to script _functions_, which currently are GraphExecutors without schema info.
* kwargs to python functions, which will require refactoring python op
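
A toy Python illustration of the positional/keyword/default matching described in this entry, in the spirit of tryEmitSchema but not the compiler's code:

```
REQUIRED = object()  # sentinel for parameters that have no default

def match_schema(params, args, kwargs):
    # params: (name, default) pairs in declaration order.
    names = [name for name, _ in params]
    if len(args) > len(names):
        raise TypeError("too many positional arguments")
    bound = dict(zip(names, args))
    for name, value in kwargs.items():
        if name not in names or name in bound:
            raise TypeError("unexpected or duplicate keyword argument: " + name)
        bound[name] = value
    ordered = []
    for name, default in params:
        if name in bound:
            ordered.append(bound[name])
        elif default is not REQUIRED:
            ordered.append(default)
        else:
            raise TypeError("missing required argument: " + name)
    return ordered

print(match_schema([("self", REQUIRED), ("dim", 0), ("keepdim", False)],
                   ("tensor",), {"keepdim": True}))
# ['tensor', 0, True]
```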
2018-05-14 14:46:36 -07:00
bcadf92ad5 Move codegen from setup.py to CMake for C++ libraries (#7121)
* Generate code without setup.py for C++ build

* Move code generation to CMake

* Set DEPENDS files correctly

* Fix some errors in codegen

* Fix blank line lint
2018-05-01 11:30:13 -07:00
15b12e6f8a Add support for MKLDNN on Windows (#7130) 2018-05-01 10:57:16 -04:00