Commit Graph

82 Commits

Author SHA1 Message Date
1943a2c317 Fix missing code in 'Installing C++ distribution of PyTorch' (#39237)
Summary:
Fix https://github.com/pytorch/pytorch/issues/39236

- Before:

![image](https://user-images.githubusercontent.com/6421097/83250998-8e0e5580-a16e-11ea-863e-ed4d9e060bdf.png)

- After:

![image](https://user-images.githubusercontent.com/6421097/83250933-73d47780-a16e-11ea-86d3-c5a77d9fa6d1.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39237

Differential Revision: D21818392

Pulled By: ezyang

fbshipit-source-id: d7e51de83ec84276e88cbf168bf9e7f57200ff46
2020-06-01 07:54:43 -07:00
acc181c2ea Document torch.utils.cmake_prefix_path (#38727)
Summary:
Documents new global variable pointing to PyTorch CMake config files
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38727

Differential Revision: D21694243

Pulled By: malfet

fbshipit-source-id: 652532cd5da9945caf7d7dfe1fde696dc474661b
2020-05-21 14:34:19 -07:00
f3b5c22dba Update On "check-doxygen.sh must be run from docs/cpp/source directory" & "check-doxygen.sh suppress stderr output" (#38641)
Summary:

Fixes https://github.com/pytorch/pytorch/issues/36974
Fixes https://github.com/pytorch/pytorch/issues/36975

ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38641

Differential Revision: D21640474

Pulled By: ezyang

fbshipit-source-id: f25b373a3459a1a315c009fc75fdb37d4ab6d67c
2020-05-19 07:51:38 -07:00
a894fff265 Back out "Revert D21089648: Put TORCH_LIBRARY in torch/library.h; add custom class API"
Summary: Original commit changeset: 636e8a11afc6

Test Plan: export to OSS

Reviewed By: malfet

Differential Revision: D21170502

fbshipit-source-id: e8f35f103c4924aedbcaaf868475008d24bdeeab
2020-04-22 09:18:23 -07:00
2ccdc39dce Revert D21089648: Put TORCH_LIBRARY in torch/library.h; add custom class API
Test Plan: revert-hammer

Differential Revision: D21089648

Original commit changeset: 8d54329c1252

fbshipit-source-id: 636e8a11afc628a4cdae9d44824985c10c70555e
2020-04-21 12:21:45 -07:00
01100cb477 Put TORCH_LIBRARY in torch/library.h; add custom class API (#36742)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36742

Now, you can define a custom class inside a TORCH_LIBRARY block.
It looks very similar to what you did before.  Instead of

```
static auto m = torch::class_<Class>("Namespace", "Class").def("foo", foo);
```

you write

```
TORCH_LIBRARY(Namespace, m) {
  m.class_<Class>("Class")
    .def("foo", foo);
}
```

All the old usages still work, but at some point we should start
updating the tutorials when we're ready to go 100% live with the
new pybind11 style API.

The custom class API previously lived in the torch/ folder and in the torch
namespace, so for consistency, the new TORCH_LIBRARY also moved to
torch/library.h. The definition of Library::class_ is at the bottom of that
header because I need all of the class_ constructors available, but there is
a circular dependency between the two headers.
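
Putting the pieces together, a minimal self-contained sketch of the new style (the `Counter` class and its method are hypothetical, not from this PR):

```cpp
#include <torch/custom_class.h>
#include <torch/library.h>

// Custom classes still inherit from torch::CustomClassHolder, as before.
struct Counter : torch::CustomClassHolder {
  int64_t value = 0;
  int64_t increment() { return ++value; }
};

TORCH_LIBRARY(my_namespace, m) {
  // class_ is now a method on the Library object `m`.
  m.class_<Counter>("Counter")
      .def(torch::init<>())
      .def("increment", &Counter::increment);
}
```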

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D21089648

Test Plan: Imported from OSS

Pulled By: ezyang

fbshipit-source-id: 8d54329c125242605336c22fa1642aae6940b507
2020-04-21 10:05:21 -07:00
86f3305859 Improve C++ API autograd and indexing docs (#35777)
Summary:
This PR adds docs for the following components (a short usage sketch follows the list):
1. Tensor autograd APIs (such as `is_leaf` / `backward` / `detach` / `detach_` / `retain_grad` / `grad` / `register_hook` / `remove_hook`)
2. Autograd APIs: `torch::autograd::backward` / `grad` / `Function` / `AutogradContext`, `torch::NoGradGuard` / `torch::AutoGradMode`
3. Tensor indexing
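
A minimal sketch exercising this surface (assuming only the APIs listed above):

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::ones({2, 2}, torch::requires_grad());
  auto y = (x * x).sum();
  y.backward();                        // Tensor autograd API
  std::cout << x.grad() << std::endl;  // gradient of y w.r.t. x

  torch::NoGradGuard no_grad;          // disables gradient tracking in this scope
  auto z = x.detach();                 // shares data, carries no autograd history
  return 0;
}
```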
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35777

Differential Revision: D20810616

Pulled By: yf225

fbshipit-source-id: 60526ec0c5b051021901d89bc3b56861c68758e8
2020-04-02 09:33:11 -07:00
b33ae23c5a Revert D20794765: [pytorch][PR] Improve C++ API autograd and indexing docs
Test Plan: revert-hammer

Differential Revision: D20794765

Original commit changeset: fad623e5d505

fbshipit-source-id: 041fb7257d4978a3767d8229d70d6f3cc55e5f28
2020-04-01 20:14:13 -07:00
41ef2c0d58 Improve C++ API autograd and indexing docs (#35777)
Summary:
This PR adds docs for the following components:
1. Tensor autograd APIs (such as `is_leaf` / `backward` / `detach` / `detach_` / `retain_grad` / `grad` / `register_hook` / `remove_hook`)
2. Autograd APIs: `torch::autograd::backward` / `grad` / `Function` / `AutogradContext`, `torch::NoGradGuard` / `torch::AutoGradMode`
3. Tensor indexing
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35777

Differential Revision: D20794765

Pulled By: yf225

fbshipit-source-id: fad623e5d505b7cfcd76a8c5264f18b7a0a3298c
2020-04-01 16:54:08 -07:00
153b16ef4c Doxygen for torchbind (#35007)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35007

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D20525680

Pulled By: jamesr66a

fbshipit-source-id: aaa768f395e30dcec8007d50e17f21837c306719
2020-03-18 21:49:24 -07:00
b09e90af1e Fix C++ at::Tensor docs generation (#34467)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/25845.

**Test Plan:**
Check `pytorch_cpp_doc_push` CI job, and see if there is `classat_1_1_tensor` generated (similar to `structat_1_1native_1_1_convolution_descriptor`).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34467

Differential Revision: D20338190

Pulled By: yf225

fbshipit-source-id: 52dc05af5e0d742e740de5576d0d2b3e17ef28dd
2020-03-09 08:04:32 -07:00
392afb9f8b Fix overlapping keywords (#34142)
Summary:
This commit fixes overlapping keywords in the CPP Docs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34142

Test Plan: Imported from GitHub, without a `Test Plan:` line.

Differential Revision: D20319949

Pulled By: yf225

fbshipit-source-id: e7bb2efdc286c85792c6f18a260c3bba33c54008
2020-03-06 19:16:21 -08:00
b678256bfb Move glu to Aten(CPU) (#33179)
Summary:
This PR moves glu to ATen (CPU).
Test script:
```
import torch
import torch.nn.functional as F
import time

torch.manual_seed(0)

def _time():
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return time.time()

device = "cpu"

# warm up
for n in [10, 100, 1000, 10000]:
    input = torch.randn(128, n, requires_grad=True, device=device)
    grad_output = torch.ones(128, n // 2, device=device)
    for i in range(1000):
        output = F.glu(input)
        output.backward(grad_output)

for n in [10, 100, 1000, 10000]:
    fwd_t = 0
    bwd_t = 0
    input = torch.randn(128, n, requires_grad=True, device=device)
    grad_output = torch.ones(128, n // 2, device=device)
    for i in range(10000):
        t1 = _time()
        output = F.glu(input)
        t2 = _time()
        output.backward(grad_output)
        t3 = _time()
        fwd_t = fwd_t + (t2 - t1)
        bwd_t = bwd_t + (t3 - t2)
    fwd_avg = fwd_t / 10000 * 1000
    bwd_avg = bwd_t / 10000 * 1000
    print("input size(128, %d) forward time is %.2f (ms); backwad avg time is %.2f (ms)."
          % (n, fwd_avg, bwd_avg))
```
Test device: **skx-8180.**
Before:
```
input size(128, 10) forward time is 0.04 (ms); backward avg time is 0.08 (ms).
input size(128, 100) forward time is 0.06 (ms); backward avg time is 0.14 (ms).
input size(128, 1000) forward time is 0.11 (ms); backward avg time is 0.31 (ms).
input size(128, 10000) forward time is 1.52 (ms); backward avg time is 2.04 (ms).
```
After:
```
input size(128, 10) forward time is 0.02 (ms); backward avg time is 0.05 (ms).
input size(128, 100) forward time is 0.04 (ms); backward avg time is 0.09 (ms).
input size(128, 1000) forward time is 0.07 (ms); backward avg time is 0.17 (ms).
input size(128, 10000) forward time is 0.13 (ms); backward avg time is 1.03 (ms).
```
Fix https://github.com/pytorch/pytorch/issues/24707, https://github.com/pytorch/pytorch/issues/24708.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33179

Differential Revision: D19839835

Pulled By: VitalyFedyunin

fbshipit-source-id: e4d3438556a1068da2c4a7e573d6bbf8d2a6e2b9
2020-02-28 14:54:38 -08:00
dbe850af5b [jit] do the code reorg (#33851)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33851

Rationale and context described in #33828.

Script to reproduce the move:
https://gist.github.com/suo/16cbefaaeb67ca5a7c6caffd49b7f6e9
ghstack-source-id: 99079645

Test Plan: Make sure CI passes

Reviewed By: jamesr66a

Differential Revision: D20133869

fbshipit-source-id: 390e9241a9c85366d9005c492ac31f10aa96488e
2020-02-27 13:02:51 -08:00
1177191c8e Synchronize with ShipIt.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2020-01-21 13:39:28 -05:00
a201027e93 Abstract atomic add calls (#31992)
Summary:
Instead of a mixture of direct calls to library-provided atomicAdd overloads, such as `float atomicAdd(float*, float)`, and internally provided ones, such as `void atomicAdd(long*, long)`, abstract them behind a single API, `void gpuAtomicAdd(T*, T)`, in THCAtomics.cuh for the PyTorch backend.

The advantage of this approach is that it lets us more easily distinguish between the capabilities of different platforms (and their versions). Additionally, abstracting behind void-returning atomicAdds allows us, in the future, to use fast HW instructions on platforms whose atomics do not return the previous value.

Call sites that do not satisfy the above conditions and are either highly platform-specific (the __half2 atomicAdd fast path in one operator) or explicitly require the return value (some int atomicAdd invocations) are left untouched. The Caffe2 backend also remains untouched.

While here, add a number of previously missing includes of THCAtomics.cuh.
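
As a sketch of the call-site shape this enables (the kernel is hypothetical; only the `gpuAtomicAdd(T*, T)` signature comes from this change):

```cpp
#include <THC/THCAtomics.cuh>

// Many threads accumulate into one sum: the same gpuAtomicAdd spelling
// now works for every supported T, regardless of platform.
template <typename T>
__global__ void accumulate(T* sum, const T* data, int64_t n) {
  int64_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    gpuAtomicAdd(sum, data[i]);
  }
}
```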
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31992

Differential Revision: D19330220

Pulled By: ezyang

fbshipit-source-id: d6ab73ec5168c77e328faeef6c6f48eefba00861
2020-01-10 09:48:42 -08:00
5cc49ed45f Document IValue (#31904)
Summary:
This is a first-pass attempt at documenting `IValue` to help with problems like the one in #17165. Most users are probably concerned with two things (a short sketch follows the list):
 * how to make an `IValue` that matches the input type of their graph (most of the constructors are pretty self-explanatory, so as long as they are in the docs I think it's enough)
 * how to extract the results after running their graph (there is a small note on the behavior of `.toX()` based on confusions we've had in the past)
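
A minimal sketch of that round trip (the values here are hypothetical; only the constructors and `.toX()` accessors being documented are assumed):

```cpp
#include <torch/torch.h>

int main() {
  // Wrap inputs: most IValue constructors mirror the underlying C++ type.
  c10::IValue i(static_cast<int64_t>(42));
  c10::IValue t(torch::ones({2, 2}));

  // Unwrap results: toX() converts back, erroring on a tag mismatch.
  int64_t n = i.toInt();
  torch::Tensor result = t.toTensor();
  return (n == 42 && result.numel() == 4) ? 0 : 1;
}
```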

Preview:
https://driazati.github.io/pytorch_doc_previews/31904/api/structc10_1_1_i_value.html#exhale-struct-structc10-1-1-i-value

There are also some random CSS fixes to clean up the style.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31904

Pulled By: driazati

Differential Revision: D19318733

fbshipit-source-id: b29dae3349d5a7ea5a3b8e09cd23f7ff8434edb4
2020-01-08 16:08:35 -08:00
09a22f3301 Remove C++ docs contributing page (#31908)
Summary:
Stacked PRs
 * **#31908 - Remove C++ docs contributing page**
 * #31905 - Add doc previewing instructions

We should have one source of truth for contribution instructions (CONTRIBUTING.md).
This PR moves the instructions from the C++ docs page there instead of keeping a
separate page.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31908

Pulled By: driazati

Differential Revision: D19296366

fbshipit-source-id: c1daf004259342bd09e09dea3b80e34db47066ec
2020-01-08 15:37:35 -08:00
5554e5b793 Docs: c++11 -> c++14 (#30530)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30530

Switch some mentions of "C++11" in the docs to "C++14"
ghstack-source-id: 95812049

Test Plan: testinprod

Differential Revision: D18733733

fbshipit-source-id: b9d0490eb3f72bad974d134bbe9eb563f6bc8775
2019-12-17 14:09:02 -08:00
bc2e6d10fa Back out "Revert D17908478: Switch PyTorch/Caffe2 to C++14"
Summary: Original commit changeset: 775d2e29be0b

Test Plan: CI

Reviewed By: mruberry

Differential Revision: D18775520

fbshipit-source-id: a350b3f86b66d97241f208786ee67e9a51172eac
2019-12-03 14:33:43 -08:00
a2ed50c920 Revert D17908478: Switch PyTorch/Caffe2 to C++14
Test Plan: revert-hammer

Differential Revision: D17908478

Original commit changeset: 6e340024591e

fbshipit-source-id: 775d2e29be0bc3a0db64f164c8960c44d4877d5d
2019-11-27 14:57:05 -08:00
fcb7371e65 Update docs for cpp_extension on Windows (#30392)
Summary:
Targets https://github.com/pytorch/pytorch/issues/30379.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30392

Differential Revision: D18730438

Pulled By: albanD

fbshipit-source-id: f718d006ee8aaaa356c1e15e53a0469f15e8ed41
2019-11-27 10:56:29 -08:00
d0acc9c085 Switch PyTorch/Caffe2 to C++14 (#30406)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30406

ghstack-source-id: 94642238

Test Plan: waitforsandcastle

Differential Revision: D17908478

fbshipit-source-id: 6e340024591ec2c69521668022999df4a33b4ddb
2019-11-27 10:47:31 -08:00
a9c719ba82 Set TORCH_CXX_FLAGS in minimal example (#29890)
Summary:
To avoid ABI issues.

EDIT: After this PR, the example CMakeLists.txt will always use the `-D_GLIBCXX_USE_CXX11_ABI` value set in `share/cmake/Torch/TorchConfig.cmake`, regardless of the `-D_GLIBCXX_USE_CXX11_ABI` value passed to the `cmake` command by the user.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29890

Differential Revision: D18531391

Pulled By: yf225

fbshipit-source-id: 2db78ae7a33a4088b579e81c60b9a74861f1ccde
2019-11-15 09:57:15 -08:00
dfa9c9e227 Replace make with cmake --build . in the docs (#29798)
Summary:
Inspired by https://discuss.pytorch.org/t/issues-with-tutorial-installing-c-distributions-of-pytorch/33295/11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29798

Differential Revision: D18504951

Pulled By: ezyang

fbshipit-source-id: 8e80d8891ca85196f00611fe784b2f55659e52ab
2019-11-14 08:23:19 -08:00
e8e7d93293 Additional autograd unit tests for Python UDFs. (#29041)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29041

1) Enhanced autograd unit tests to test the
torch.distributed.autograd.backward() API more thoroughly on Python UDFs.
2) Enhanced `python_error` to override `what` so that it returns an
appropriate error string when `what()` is called on this error. This ensures we can
propagate exceptions over the wire during RPCs, since we get the error string
by calling what() on the exception.
ghstack-source-id: 93098679

Test Plan: waitforbuildbot

Reviewed By: mrshenli

Differential Revision: D18273041

fbshipit-source-id: 85d3932fed6337668a812367fdfce233c1b3ff8e
2019-11-01 18:30:09 -07:00
0c4878d550 Update index.rst 2019-10-22 21:43:58 -07:00
11172c19be codemod at::ArrayRef and torch::IntArrayRef to std::vector in C++ API tests (#27884)
Summary:
`at::ArrayRef` / `torch::IntArrayRef` should be discouraged in user code, because users might not be aware that it doesn't own the underlying data, which has already led to memory-access bugs when they write code like the following:
```cpp
auto expected_sizes = torch::IntArrayRef({2, 16, 6});  // The memory that represents `{2, 16, 6}` is released after this line
ASSERT_EQ(output.sizes(), expected_sizes);  // `expected_sizes` is pointing to invalid memory region
```
This PR changes all usage of `at::ArrayRef` and `torch::IntArrayRef` to the corresponding `std::vector` version, so that users won't pick up the habit of using `ArrayRef` by looking at the test code.
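
For contrast, a minimal sketch of the owning replacement (same `ASSERT_EQ` style as above; `output` is assumed to be a Tensor from the test):

```cpp
auto expected_sizes = std::vector<int64_t>{2, 16, 6};  // the vector owns {2, 16, 6}
ASSERT_EQ(output.sizes(), expected_sizes);  // safe: compares against owned storage
```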
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27884

Differential Revision: D17921646

Pulled By: yf225

fbshipit-source-id: 461e79fc22b598aac230d36cc028085ce6cbe937
2019-10-14 18:00:30 -07:00
987e37b9c2 Enable EXE001 flake8 check. (#27560)
Summary:
According to https://github.com/pytorch/pytorch/issues/27285, it seems we do not intend to use the shebang as an indication of Python version, so we enable the EXE001 flake8 check.
For violations, we either remove the shebang from non-executable Python scripts or grant them executable permission.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27560

Differential Revision: D17831782

Pulled By: ezyang

fbshipit-source-id: 6282fd3617b25676a6d959af0d318faf05c09b26
2019-10-09 09:15:29 -07:00
0b6186d778 Remove Tensor.h, TensorMethods.h from src/core. (#27086)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27086

This is a major source of merge conflicts, and AFAICT isn't necessary anymore (it may have been necessary for some mobile build stuff in the past).

This is a commandeer of #25031

Test Plan: Imported from OSS

Reviewed By: ljk53

Differential Revision: D17687345

Pulled By: ezyang

fbshipit-source-id: bf6131af835ed1f9e3c10699c81d4454a240445f
2019-10-06 09:37:50 -07:00
42e7eb0426 Minor readability fixes to C++ documentation (#27338)
Summary:
Changed `yieldings` to `yielding`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27338

Differential Revision: D17758406

Pulled By: yf225

fbshipit-source-id: 1633834a6ad80449c061ebc330ac24f3e42f5506
2019-10-03 21:45:35 -07:00
57a4b7c55d Re-organize C++ API torch::nn folder structure (#26262)
Summary:
This PR aims to re-organize C++ API `torch::nn` folder structure in the following way:
- Every module in `torch/csrc/api/include/torch/nn/modules/` (except `any.h`, `named_any.h`, `modulelist.h`, `sequential.h`, `embedding.h`) has a strictly equivalent Python file in `torch/nn/modules/`. For  example:
`torch/csrc/api/include/torch/nn/modules/pooling.h` -> `torch/nn/modules/pooling.py`
`torch/csrc/api/include/torch/nn/modules/conv.h` -> `torch/nn/modules/conv.py`
`torch/csrc/api/include/torch/nn/modules/batchnorm.h` -> `torch/nn/modules/batchnorm.py`
`torch/csrc/api/include/torch/nn/modules/sparse.h` -> `torch/nn/modules/sparse.py`
- Containers such as  `any.h`, `named_any.h`, `modulelist.h`, `sequential.h` are moved into `torch/csrc/api/include/torch/nn/modules/container/`, because their implementations are too long to be combined into one file (like `torch/nn/modules/container.py` in Python API)
- `embedding.h` is not renamed to `sparse.h` yet, because we have another work stream that works on API parity for Embedding and EmbeddingBag, and renaming the file would cause conflict. After the embedding API parity work is done, we will rename `embedding.h` to  `sparse.h` to match the Python file name, and move the embedding options out to options/ folder.
- `torch/csrc/api/include/torch/nn/functional/` is added, and the folder structure mirrors that of `torch/csrc/api/include/torch/nn/modules/`. For example, `torch/csrc/api/include/torch/nn/functional/pooling.h` contains the functions for pooling, which are then used by the pooling modules in `torch/csrc/api/include/torch/nn/modules/pooling.h`.
- `torch/csrc/api/include/torch/nn/options/` is added, and the folder structure mirrors that of `torch/csrc/api/include/torch/nn/modules/`. For example, `torch/csrc/api/include/torch/nn/options/pooling.h` contains MaxPoolOptions, which is used by both MaxPool modules in `torch/csrc/api/include/torch/nn/modules/pooling.h`, and max_pool functions in `torch/csrc/api/include/torch/nn/functional/pooling.h`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26262

Differential Revision: D17422426

Pulled By: yf225

fbshipit-source-id: c413d2a374ba716dac81db31516619bbd879db7f
2019-09-17 10:07:29 -07:00
76ee02f10d Rename packed tensor accessor (#25654)
Summary:
Closes https://github.com/pytorch/pytorch/issues/19268

This does the renaming suggested by ezyang in https://github.com/pytorch/pytorch/issues/19268#issuecomment-490478887 except that the templated version of `packed_accessor` is also renamed to `generic_packed_accessor`.

Additionally, all of the users I could find in `ATen/native/cuda` are updated without changing their index types.

The corresponding tutorial update is in pytorch/tutorials#644
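
For illustration, a sketch of the renamed spelling (the kernel is hypothetical; the `packed_accessor32` / `PackedTensorAccessor32` names are the ones introduced here):

```cpp
#include <torch/torch.h>

__global__ void add_one(torch::PackedTensorAccessor32<float, 2> acc) {
  acc[blockIdx.x][threadIdx.x] += 1.0f;  // element access mirrors Tensor indexing
}

void launch(torch::Tensor t) {
  // packed_accessor32 requests 32-bit indexing; packed_accessor64 is the
  // 64-bit variant, and generic_packed_accessor keeps the old templated form.
  add_one<<<t.size(0), t.size(1)>>>(t.packed_accessor32<float, 2>());
}
```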
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25654

Differential Revision: D17259208

Pulled By: ezyang

fbshipit-source-id: 172a46f623d544ca16f7ed5077b6e4f57a3d1f21
2019-09-10 09:18:54 -07:00
09ef107e59 Add copy logic for LibTorch to avoid issues on Windows (#25556)
Summary:
This should work with both VS and Ninja.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25556

Differential Revision: D17162045

Pulled By: ezyang

fbshipit-source-id: 18c3d62e9ba93bf603f3a5310087fac77be4a974
2019-09-03 06:33:38 -07:00
0015b188be Fix typos
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23770

Differential Revision: D16646852

Pulled By: ezyang

fbshipit-source-id: 826b041c0b528ae6e0b320d49d8141057c1f9bf3
2019-08-05 15:38:32 -07:00
77c2f5dd75 fix copyright notice in docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21372

Differential Revision: D15631889

Pulled By: umanwizard

fbshipit-source-id: cf764432c27cb1b01d8137ed60ec7de361450d0e
2019-06-04 14:53:45 -07:00
110ed511a4 Make check-doxygen.sh output more interpretable. (#20362)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20362
ghimport-source-id: ac791884dc6d3954f69d8fc997b2b561f435e0e7

Differential Revision: D15375139

Pulled By: ezyang

fbshipit-source-id: c8aa0f991430269090e068f828810bae7aa39a07
2019-05-17 08:47:11 -07:00
ea5c9c9267 Update installing.rst (#20354)
Summary:
Delete useless `cd`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20354

Differential Revision: D15296154

Pulled By: soumith

fbshipit-source-id: 2042b56c91b33e302b0ed9c77f29b9b64079fa98
2019-05-10 10:04:06 -07:00
0676ba0c5c Mention packed accessors in tensor basics doc (#19464)
Summary:
This continues the effort to raise awareness of packed accessors.
A very simple example is added, along with a mention that the template can take more arguments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19464

Differential Revision: D15012564

Pulled By: soumith

fbshipit-source-id: a19ed536e016fae519b062d847cc58aef01b1b92
2019-04-19 07:20:16 -07:00
ff4a4d6155 Update for #19326
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19367

Differential Revision: D14981835

Pulled By: VitalyFedyunin

fbshipit-source-id: e8a97986d9669ed7f465a7ba771801bdd043b606
2019-04-17 12:56:08 -07:00
48a35135fb Convert all tabs to spaces, add CI. (#18959)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18959
ghimport-source-id: a934163fa34cb2019732d5f49dc7290c376bf156

Differential Revision: D14831246

Pulled By: ezyang

fbshipit-source-id: beb92dc4ee8c82f4c8259c081dd72e477fe7a9d0
2019-04-09 08:12:26 -07:00
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import unused; flake8-2 will not.  For now, I just
noqa'd all these sites.

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
6ebfbdf4c6 Add named submodule support to nn::Sequential (#17552)
Summary:
Previously, we were not able to assign names to `nn::Sequential`'s submodules. This PR adds this feature to match the Python API. Example use:
```cpp
Sequential sequential(named_submodule({
      {"linear", Linear(10, 3)},
      {"conv2d", Conv2d(1, 2, 3)},
      {"dropout", Dropout(0.5)},
      {"batchnorm", BatchNorm(5)},
      {"embedding", Embedding(4, 10)},
      {"lstm", LSTM(4, 5)}
}));
```

It also enables loading parameters of Python `nn.Sequential` module with custom submodules names into C++ frontend, unblocking https://github.com/pytorch/vision/pull/728#issuecomment-466661344.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17552

Differential Revision: D14246834

Pulled By: yf225

fbshipit-source-id: 3030b5c5d68f6dd5d3e37ac4b4f98dc6d6d9ba72
2019-03-29 13:06:29 -07:00
81e030d9a6 Upgrade flake8-bugbear to master, fix the new lints. (#18507)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18507
ghimport-source-id: 1c3642befad2da78a7e5f39d6d58732b85c76267

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18507 Upgrade flake8-bugbear to master, fix the new lints.**

It turns out Facebook is internally using the unreleased master
flake8-bugbear, so upgrading it grabs a few more lints that Phabricator
was complaining about but we didn't get in open source.

A few of the getattr sites that I fixed look very suspicious (they're
written as if Python were a lazy language), but I didn't look more
closely into the matter.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14633682

fbshipit-source-id: fc3f97c87dca40bbda943a1d1061953490dbacf8
2019-03-27 08:07:41 -07:00
674c274d92 Change deprecated IntList to IntArrayRef
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18262

Differential Revision: D14612244

Pulled By: ezyang

fbshipit-source-id: 5d21c7b94d64104fececcb15c6d38d9bd2a1fc70
2019-03-25 19:47:21 -07:00
4ac91b2d64 add debug/release tip to cpp docs (#17452)
Summary:
As titled. These were already added to the tutorials, but I didn't add them to the cpp docs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17452

Differential Revision: D14206501

Pulled By: suo

fbshipit-source-id: 89b5c8aaac22d05381bc4a7ab60d0bb35e43f6f5
2019-02-24 23:08:15 -08:00
1b3315ec17 improve libtorch install docs with GPU note (#17299)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/15702
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17299

Differential Revision: D14149712

Pulled By: soumith

fbshipit-source-id: 5b83110bb00e4d4dad04c1f293c2b52e41711f11
2019-02-20 06:30:08 -08:00
47bf30661f Directly include headers from ATen.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16287

Differential Revision: D13792949

Pulled By: ZolotukhinM

fbshipit-source-id: d627d8dc469df048063c70d0b5b8d33fede809a3
2019-01-24 11:22:27 -08:00
e669f72466 fix sigma in the middle of when word (#16227)
Summary:
There is a stray sigma character in the word "when" on:
https://pytorch.org/cppdocs/contributing.html
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16227

Differential Revision: D13762753

Pulled By: goldsborough

fbshipit-source-id: 3d4bf4be859a3069402fe8c3fbc8ebee4f25cc5a
2019-01-23 08:35:32 -08:00
dfcafb1f71 cpp doc fix (#16221)
Summary:
Fixed a few C++ API callsites to work with v1.0.1.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16221

Differential Revision: D13759207

Pulled By: yf225

fbshipit-source-id: bd92c2b95a0c6ff3ba5d73cb249d0bc88cfdc340
2019-01-21 21:56:22 -08:00