Commit Graph

92 Commits

Author SHA1 Message Date
12a61a172e Fix missing class in cpp tensor documentation (#54488)
Summary:
The given example in the documentation does not compile due to the missing `torch::` qualifier. It is correct in the tutorial about [writing a custom extension](https://pytorch.org/tutorials/advanced/cpp_extension.html#writing-a-mixed-c-cuda-extension).
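
For reference, a minimal sketch of the kind of namespace-qualification issue being fixed (illustrative only; the exact snippet from the docs is not reproduced here):

```cpp
#include <torch/torch.h>

int main() {
  // Fails to compile without the namespace qualifier:
  //   Tensor tensor = rand({2, 3});
  // Fully qualified form, as used in the extension tutorial:
  torch::Tensor tensor = torch::rand({2, 3});
  return 0;
}
```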

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54488

Reviewed By: bdhirsh

Differential Revision: D27267000

Pulled By: glaringlee

fbshipit-source-id: 86a46d656c1a4fa4098287a6a43a38d1ef80171e
2021-03-24 11:10:19 -07:00
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
e60f18c2ad Generate header with version #defines for LibTorch (#50073)
Summary:
Uses cmake's `configure_file()` macro to generate a new `torch/csrc/api/include/torch/version.h` header with `TORCH_VERSION_{MAJOR,MINOR,PATCH}` `#define`s from an input file `torch/csrc/api/include/torch/version.h.in`.

For Bazel builds, this is accomplished with `header_template_rule()`.

For Buck builds, this is accomplished with `fb_native.genrule()`.
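
As a sketch of how downstream C++ code might consume the generated header (assuming only the three macros named above):

```cpp
#include <torch/version.h>  // generated from version.h.in at build time
#include <iostream>

int main() {
  std::cout << "Built against LibTorch "
            << TORCH_VERSION_MAJOR << "."
            << TORCH_VERSION_MINOR << "."
            << TORCH_VERSION_PATCH << std::endl;
  return 0;
}
```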

Fixes https://github.com/pytorch/pytorch/issues/44365

<img width="1229" alt="Screen Shot 2021-01-05 at 3 19 24 PM" src="https://user-images.githubusercontent.com/75754324/103809279-3fd80380-5027-11eb-9039-fd23922cebd5.png">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50073

Reviewed By: glaringlee

Differential Revision: D25855877

Pulled By: jbschlosser

fbshipit-source-id: 6bb792718c97e2c2dbaa74b7b7b831a4f6938e49
2021-02-03 22:18:53 -08:00
3858aaab37 Fix syntax issue in c++ cuda api note (#48434)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48434

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25173692

Pulled By: glaringlee

fbshipit-source-id: bbd6fa7615200bf1eaea731a4ed251d423412593
2020-11-25 14:31:14 -08:00
3d421b3137 [pytorch] rewrite of the python binding codegen with the v2 API (#46244)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46244

- What does the generated binding code do?

The Python binding codegen produces code that takes the input list of
PyObjects, finds the matching ATen C++ function using PythonArgParser,
converts the PyObjects into C++ types and calls the ATen C++ function:

```
+--------+  parsing   +------------------------+  binding   +-----------------------+
| PyObjs | ---------> | PythonArgParser Output | ---------> | Cpp Function Dispatch |
+--------+            +------------------------+            +-----------------------+
```
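
As a hand-written approximation of what such generated code looks like (a simplified sketch only; real generated bindings add error handling, multiple signatures, and out-variant dispatch, and the internal helper names here are approximate):

```cpp
// Approximate shape of a generated binding for aten::abs.
static PyObject* THPVariable_abs(PyObject* self, PyObject* args, PyObject* kwargs) {
  static PythonArgParser parser({"abs(Tensor input)"});
  ParsedArgs<1> parsed_args;
  auto r = parser.parse(args, kwargs, parsed_args);  // parsing step
  auto result = at::abs(r.tensor(0));                // binding + dispatch
  return THPVariable_Wrap(std::move(result));        // back to a PyObject
}
```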

- Are Python arguments 1-1 mapped to C++ arguments?

Python arguments might be reordered, packed, or unpacked when binding to
C++ arguments, as illustrated below:

```
// Binding - Reorder & Packing
// aten::empty.names(int[] size, *, Dimname[]? names, ScalarType? dtype=None, Layout? layout=None,
                     Device? device=None, bool? pin_memory=None, MemoryFormat? memory_format=None) -> Tensor

            Python Args               Cpp Args
-----------------------------------------------------------
         0: size                      size
         1: names                     names
         2: memory_format -------+
         3: dtype         -----+-|--> options
         4: layout            /  |
         5: device           /   +--> memory_format
         6: pin_memory      /
         7: requires_grad -+

// Binding - Unpacking
// aten::max.names_dim(Tensor self, Dimname dim, bool keepdim=False) -> (Tensor values, Tensor indices)

            Python Args               Cpp Args
-----------------------------------------------------------
                               +----> max
                              /-----> max_values
         0: input            /        self
         1: dim             /         dim
         2: keepdim        /          keepdim
         3: out      -----+
```
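
The "packing" above corresponds to collapsing several Python keyword arguments into a single C++ `TensorOptions` argument; a hedged sketch of the resulting C++ call:

```cpp
// dtype, layout, device and pin_memory arrive as separate Python
// keywords but are packed into one TensorOptions value in C++.
auto options = torch::TensorOptions()
                   .dtype(torch::kFloat32)
                   .layout(torch::kStrided)
                   .device(torch::kCPU)
                   .pinned_memory(false)
                   .requires_grad(false);
auto t = torch::empty({2, 3}, options);
```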

- Why do we want to rewrite the python binding codegen?

The old codegen takes Declarations.yaml as input. It doesn't distinguish
between Python arguments and C++ arguments - they are all mixed together
as a bag of untyped dict objects. Different methods process these arg
objects and add new attributes for various purposes, so the semantics of
these attributes are hard to infer. The complicated binding logic happens
implicitly and is scattered across the codegen.

```
+--------------------+
|  Native Functions  |
+--------------------+
  |
  |
  v
+--------------------+
|   Cpp Signatures   |
+--------------------+
  |
  |
  v
+--------------------+
| Declarations.yaml  |
+--------------------+
  |                        +-------------------------------------+
  |              +-------> |       PythonArgParser Schema        |
  |              |         +-------------------------------------+
  |              |                            .
  |              |                            .
  v              |                            .
+--------------------+     +-------------------------------------+
| NonTyped Args Objs | --> | PythonArgParser -> Cpp Args Binding |
+--------------------+     +-------------------------------------+
                 |                            .
                 |                            .
                 |                            .
                 |         +-------------------------------------+
                 +-------> |        Cpp Function Dispatch        |
                           +-------------------------------------+
```

This PR leverages the new immutable data models introduced in the new
aten codegen. It introduces dedicated data models for the python schema.
This way, we not only avoid subtle Declarations.yaml conversions but
also decouple the generation of the python schema, the python-to-C++
binding, and the C++ function call.

The ultimate state will be like the following diagram:

```
            +-------------------+     +-------------------------------------+
  +-------> | Python Signatures | --> |       PythonArgParser Schema        |
  |         +-------------------+     +-------------------------------------+
  |                         |                            .
  |                         |                            .
  |                         |                            .
+------------------+        |         +-------------------------------------+
| Native Functions |        +-------> | PythonArgParser -> Cpp Args Binding |
+------------------+        |         +-------------------------------------+
  |                         |                            .
  |                         |                            .
  |                         |                            .
  |         +-------------------+     +-------------------------------------+
  +-------> |  Cpp Signatures   | --> |        Cpp Function Dispatch        |
            +-------------------+     +-------------------------------------+
```

This PR has migrated the core binding logic from
tools/autograd/gen_python_functions.py to tools/codegen/api/python.py.

It produces byte-for-byte identical results (tested with #46243).

Will migrate the rest of gen_python_functions.py in subsequent PRs.

Test Plan: Imported from OSS

Reviewed By: bhosmer

Differential Revision: D24388874

Pulled By: ljk53

fbshipit-source-id: f88b6df4e917cf90d868a2bbae2d5ffb680d1841
2020-10-19 17:36:45 -07:00
255b0e839f C++ APIs CUDA Stream Note (Set/Get part) (#45754)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45754

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D24085103

Pulled By: glaringlee

fbshipit-source-id: c9641c2baadcf93b84733c037ce91b670dde5f96
2020-10-06 14:57:16 -07:00
162717e527 grammatically update index.rst (#45801)
Summary:
This is a follow-up PR for https://github.com/pytorch/pytorch/issues/45652, which had problems rebasing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45801

Reviewed By: VitalyFedyunin

Differential Revision: D24111776

Pulled By: glaringlee

fbshipit-source-id: 2c727a17426be91a4df78a195de79197e1c5d120
2020-10-05 09:55:56 -07:00
6ea89166bd Rewrite of ATen code generator (#42629)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42629

How to approach reviewing this diff:

- The new codegen itself lives in `tools/codegen`. Start with `gen.py`, then read `model.py` and then the `api/` folder. The comments at the top of the files describe what is going on. The CLI interface of the new codegen is similar to the old one, but (1) it is no longer necessary to explicitly specify cwrap inputs (and now we will error if you do so) and (2) the default settings for source and install dir are much better; to the extent that if you run the codegen from the root source directory as just `python -m tools.codegen.gen`, something reasonable will happen.
- The old codegen is (nearly) entirely deleted; every Python file in `aten/src/ATen` was deleted except for `common_with_cwrap.py`, which now permanently finds its home in `tools/shared/cwrap_common.py` (previously cmake copied the file there), and `code_template.py`, which now lives in `tools/codegen/code_template.py`. We remove the copying logic for `common_with_cwrap.py`.
- All of the inputs to the old codegen are deleted.
- Build rules now have to be adjusted to not refer to files that no longer exist, and to abide by the (slightly modified) CLI.
- LegacyTHFunctions files have been generated and checked in. We expect these to be deleted as these final functions get ported to ATen. The deletion process is straightforward; just delete the functions of the ones you are porting. There are 39 more functions left to port.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: bhosmer

Differential Revision: D23183978

Pulled By: ezyang

fbshipit-source-id: 6073ba432ad182c7284a97147b05f0574a02f763
2020-08-31 09:00:22 -07:00
840ad94ef5 Add reference documentation for torch/library.h (#41470)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41470

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D22577426

Pulled By: ezyang

fbshipit-source-id: 4bfe5806061e74181a74d161c868acb7c1ecd1e4
2020-07-17 10:05:16 -07:00
d90fb72b5a remove use of the term "blacklist" from docs/cpp/source/Doxyfile (#41450)
Summary:
As requested in https://github.com/pytorch/pytorch/issues/41443

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41450

Reviewed By: ezyang

Differential Revision: D22561782

Pulled By: SplitInfinity

fbshipit-source-id: b38ab5e2725735d1f0c70a4d0012678636e992c3
2020-07-15 19:45:53 -07:00
1943a2c317 Fix missing code in 'Installing C++ distribution of Pytorch' (#39237)
Summary:
Fix https://github.com/pytorch/pytorch/issues/39236

- Before:

![image](https://user-images.githubusercontent.com/6421097/83250998-8e0e5580-a16e-11ea-863e-ed4d9e060bdf.png)

- After:

![image](https://user-images.githubusercontent.com/6421097/83250933-73d47780-a16e-11ea-86d3-c5a77d9fa6d1.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39237

Differential Revision: D21818392

Pulled By: ezyang

fbshipit-source-id: d7e51de83ec84276e88cbf168bf9e7f57200ff46
2020-06-01 07:54:43 -07:00
acc181c2ea Document torch.utils.cmake_prefix_path (#38727)
Summary:
Documents new global variable pointing to PyTorch CMake config files
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38727

Differential Revision: D21694243

Pulled By: malfet

fbshipit-source-id: 652532cd5da9945caf7d7dfe1fde696dc474661b
2020-05-21 14:34:19 -07:00
f3b5c22dba Update On "check-doxygen.sh must be run from docs/cpp/source directory" & "check-doxygen.sh suppress stderr output" (#38641)
Summary:

Fixes https://github.com/pytorch/pytorch/issues/36974
Fixes https://github.com/pytorch/pytorch/issues/36975

ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38641

Differential Revision: D21640474

Pulled By: ezyang

fbshipit-source-id: f25b373a3459a1a315c009fc75fdb37d4ab6d67c
2020-05-19 07:51:38 -07:00
a894fff265 Back out "Revert D21089648: Put TORCH_LIBRARY in torch/library.h; add custom class API"
Summary: Original commit changeset: 636e8a11afc6

Test Plan: export to OSS

Reviewed By: malfet

Differential Revision: D21170502

fbshipit-source-id: e8f35f103c4924aedbcaaf868475008d24bdeeab
2020-04-22 09:18:23 -07:00
2ccdc39dce Revert D21089648: Put TORCH_LIBRARY in torch/library.h; add custom class API
Test Plan: revert-hammer

Differential Revision:
D21089648

Original commit changeset: 8d54329c1252

fbshipit-source-id: 636e8a11afc628a4cdae9d44824985c10c70555e
2020-04-21 12:21:45 -07:00
01100cb477 Put TORCH_LIBRARY in torch/library.h; add custom class API (#36742)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36742

Now, you can define a custom class inside a TORCH_LIBRARY block.
It looks very similar to what you did before.  Instead of

```
static auto m = torch::class_<Class>("Namespace", "Class").def("foo", foo);
```

you write

```
TORCH_LIBRARY(Namespace, m) {
  m.class_<Class>("Class")
    .def("foo", foo);
}
```
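
A slightly fuller sketch of the new style, assuming a hypothetical `Stack` class (custom classes must derive from `torch::CustomClassHolder`):

```cpp
#include <torch/custom_class.h>
#include <torch/library.h>
#include <vector>

struct Stack : torch::CustomClassHolder {
  std::vector<int64_t> data_;
  void push(int64_t x) { data_.push_back(x); }
  int64_t pop() {
    auto v = data_.back();
    data_.pop_back();
    return v;
  }
};

TORCH_LIBRARY(my_classes, m) {
  m.class_<Stack>("Stack")
      .def("push", &Stack::push)
      .def("pop", &Stack::pop);
}
```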

All the old usages still work, but at some point we should start
updating the tutorials when we're ready to go 100% live with the
new pybind11 style API.

The custom class API previously lived in the torch/ folder and in the torch
namespace, so for consistency, the new TORCH_LIBRARY also got moved to
torch/library.h. The definition of Library::class_ is at the bottom of
that header because I need all of the class_ constructors available, but
there is a circular dependency between the two headers.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D21089648

Test Plan: Imported from OSS

Pulled By: ezyang

fbshipit-source-id: 8d54329c125242605336c22fa1642aae6940b507
2020-04-21 10:05:21 -07:00
86f3305859 Improve C++ API autograd and indexing docs (#35777)
Summary:
This PR adds docs for the following components:
1. Tensor autograd APIs (such as `is_leaf` / `backward` / `detach` / `detach_` / `retain_grad` / `grad` / `register_hook` / `remove_hook`)
2. Autograd APIs: `torch::autograd::backward` / `grad` / `Function` / `AutogradContext`, `torch::NoGradGuard` / `torch::AutoGradMode`
3. Tensor indexing
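
A small, hedged example exercising some of the newly documented surface (not taken from the docs themselves):

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::ones({2, 2}, torch::requires_grad());
  auto y = (x * x + 2).sum();
  y.backward();                  // populates x.grad()
  std::cout << x.grad() << "\n"; // dy/dx = 2x

  torch::NoGradGuard no_grad;    // disables gradient tracking in this scope
  auto z = x * 2;
  std::cout << z.requires_grad() << "\n"; // prints 0
  return 0;
}
```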
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35777

Differential Revision: D20810616

Pulled By: yf225

fbshipit-source-id: 60526ec0c5b051021901d89bc3b56861c68758e8
2020-04-02 09:33:11 -07:00
b33ae23c5a Revert D20794765: [pytorch][PR] Improve C++ API autograd and indexing docs
Test Plan: revert-hammer

Differential Revision:
D20794765

Original commit changeset: fad623e5d505

fbshipit-source-id: 041fb7257d4978a3767d8229d70d6f3cc55e5f28
2020-04-01 20:14:13 -07:00
41ef2c0d58 Improve C++ API autograd and indexing docs (#35777)
Summary:
This PR adds docs for the following components:
1. Tensor autograd APIs (such as `is_leaf` / `backward` / `detach` / `detach_` / `retain_grad` / `grad` / `register_hook` / `remove_hook`)
2. Autograd APIs: `torch::autograd::backward` / `grad` / `Function` / `AutogradContext`, `torch::NoGradGuard` / `torch::AutoGradMode`
3. Tensor indexing
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35777

Differential Revision: D20794765

Pulled By: yf225

fbshipit-source-id: fad623e5d505b7cfcd76a8c5264f18b7a0a3298c
2020-04-01 16:54:08 -07:00
153b16ef4c Doxygen for torchbind (#35007)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35007

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D20525680

Pulled By: jamesr66a

fbshipit-source-id: aaa768f395e30dcec8007d50e17f21837c306719
2020-03-18 21:49:24 -07:00
b09e90af1e Fix C++ at::Tensor docs generation (#34467)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/25845.

**Test Plan:**
Check the `pytorch_cpp_doc_push` CI job, and see whether `classat_1_1_tensor` is generated (similar to `structat_1_1native_1_1_convolution_descriptor`).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34467

Differential Revision: D20338190

Pulled By: yf225

fbshipit-source-id: 52dc05af5e0d742e740de5576d0d2b3e17ef28dd
2020-03-09 08:04:32 -07:00
392afb9f8b Fix overlapping keywords (#34142)
Summary:
This commit fixes overlapping keywords in the C++ docs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34142

Test Plan: Imported from GitHub, without a `Test Plan:` line.

Differential Revision: D20319949

Pulled By: yf225

fbshipit-source-id: e7bb2efdc286c85792c6f18a260c3bba33c54008
2020-03-06 19:16:21 -08:00
b678256bfb Move glu to Aten(CPU) (#33179)
Summary:
This PR moves glu to ATen (CPU).
Test script:
```
import torch
import torch.nn.functional as F
import time

torch.manual_seed(0)

def _time():
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return time.time()

device = "cpu"

# warm up
for n in [10, 100, 1000, 10000]:
    input = torch.randn(128, n, requires_grad=True, device=device)
    grad_output = torch.ones(128, n // 2, device=device)
    for i in range(1000):
        output = F.glu(input)
        output.backward(grad_output)

for n in [10, 100, 1000, 10000]:
    fwd_t = 0
    bwd_t = 0
    input = torch.randn(128, n, requires_grad=True, device=device)
    grad_output = torch.ones(128, n // 2, device=device)
    for i in range(10000):
        t1 = _time()
        output = F.glu(input)
        t2 = _time()
        output.backward(grad_output)
        t3 = _time()
        fwd_t = fwd_t + (t2 - t1)
        bwd_t = bwd_t + (t3 - t2)
    fwd_avg = fwd_t / 10000 * 1000
    bwd_avg = bwd_t / 10000 * 1000
    print("input size(128, %d) forward time is %.2f (ms); backwad avg time is %.2f (ms)."
          % (n, fwd_avg, bwd_avg))
```
Test device: **skx-8180.**
Before:
```
input size(128, 10) forward time is 0.04 (ms); backward avg time is 0.08 (ms).
input size(128, 100) forward time is 0.06 (ms); backward avg time is 0.14 (ms).
input size(128, 1000) forward time is 0.11 (ms); backward avg time is 0.31 (ms).
input size(128, 10000) forward time is 1.52 (ms); backward avg time is 2.04 (ms).
```
After:
```
input size(128, 10) forward time is 0.02 (ms); backward avg time is 0.05 (ms).
input size(128, 100) forward time is 0.04 (ms); backward avg time is 0.09 (ms).
input size(128, 1000) forward time is 0.07 (ms); backward avg time is 0.17 (ms).
input size(128, 10000) forward time is 0.13 (ms); backward avg time is 1.03 (ms).
```
Fixes https://github.com/pytorch/pytorch/issues/24707 and https://github.com/pytorch/pytorch/issues/24708.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33179

Differential Revision: D19839835

Pulled By: VitalyFedyunin

fbshipit-source-id: e4d3438556a1068da2c4a7e573d6bbf8d2a6e2b9
2020-02-28 14:54:38 -08:00
dbe850af5b [jit] do the code reorg (#33851)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33851

Rationale and context described in #33828.

Script to reproduce the move:
https://gist.github.com/suo/16cbefaaeb67ca5a7c6caffd49b7f6e9
ghstack-source-id: 99079645

Test Plan: Make sure CI passes

Reviewed By: jamesr66a

Differential Revision: D20133869

fbshipit-source-id: 390e9241a9c85366d9005c492ac31f10aa96488e
2020-02-27 13:02:51 -08:00
1177191c8e Synchronize with ShipIt.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2020-01-21 13:39:28 -05:00
a201027e93 Abstract atomic add calls (#31992)
Summary:
Instead of a mixture of direct calls to library-provided atomicAdd calls, such as `float atomicAdd(float*, float)`, and calls provided internally, such as `void atomicAdd(long*, long)`, abstract to one API `void gpuAtomicAdd(T*, T)` in THCAtomics.cuh for the PyTorch backend.

The advantage of this approach is that it allows us to more easily distinguish between the capabilities of different platforms (and their versions). Additionally, the abstraction to void-returning atomicAdds allows us to, in the future, support fast HW instructions on some platforms that will not return the previous value.

Call sites that do not satisfy the above conditions and are either highly platform-specific (the `__half2` atomicAdd fast path in one operator) or require the return value explicitly (some int atomicAdd invocations) are left untouched. The Caffe2 backend also remains untouched.

While here, add a bunch of includes of THCAtomics.cuh that were missing before.
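
A rough sketch of the default case of such an abstraction (CUDA device code; the real THCAtomics.cuh specializes per type and platform, so treat the body as illustrative):

```cpp
// Generic entry point: forwards to whatever atomicAdd overload the
// platform provides; specializations handle types the HW lacks.
template <typename T>
__device__ inline void gpuAtomicAdd(T* address, T val) {
  atomicAdd(address, val);
}
```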
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31992

Differential Revision: D19330220

Pulled By: ezyang

fbshipit-source-id: d6ab73ec5168c77e328faeef6c6f48eefba00861
2020-01-10 09:48:42 -08:00
5cc49ed45f Document IValue (#31904)
Summary:
This is a first-pass attempt at documenting `IValue` to help with problems like the one in #17165. Most users are probably concerned with
 * how to make an `IValue` that matches the input type to their graph (most of the constructors are pretty self-explanatory, so as long as they are in the docs I think it's enough)
 * how to extract the results after running their graph (there is a small note on the behavior of `.toX()` based on confusion we've had in the past)
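
A hedged sketch of both concerns (assuming a module saved as `model.pt` whose `forward` takes a tensor and an integer and returns a tensor):

```cpp
#include <torch/script.h>
#include <vector>

int main() {
  torch::jit::script::Module module = torch::jit::load("model.pt");

  // Constructing IValues that match the graph's input types:
  std::vector<torch::jit::IValue> inputs;
  inputs.emplace_back(torch::ones({1, 3}));  // Tensor -> IValue
  inputs.emplace_back(42);                   // int -> IValue

  // Extracting the result: toTensor() throws if the IValue
  // does not actually hold a Tensor.
  torch::Tensor out = module.forward(inputs).toTensor();
  return 0;
}
```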

Preview:
https://driazati.github.io/pytorch_doc_previews/31904/api/structc10_1_1_i_value.html#exhale-struct-structc10-1-1-i-value

There are also some random CSS fixes to clean up the style.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31904

Pulled By: driazati

Differential Revision: D19318733

fbshipit-source-id: b29dae3349d5a7ea5a3b8e09cd23f7ff8434edb4
2020-01-08 16:08:35 -08:00
09a22f3301 Remove C++ docs contributing page (#31908)
Summary:
Stacked PRs
 * **#31908 - Remove C++ docs contributing page**
 * #31905 - Add doc previewing instructions

We should have 1 source of truth for contribution instructions (CONTRIBUTING.md).
This PR moves the instructions from the C++ doc pages there instead of having its
own separate page.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31908

Pulled By: driazati

Differential Revision: D19296366

fbshipit-source-id: c1daf004259342bd09e09dea3b80e34db47066ec
2020-01-08 15:37:35 -08:00
5554e5b793 Docs: c++11 -> c++14 (#30530)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30530

Switch some mentions of "C++11" in the docs to "C++14"
ghstack-source-id: 95812049

Test Plan: testinprod

Differential Revision: D18733733

fbshipit-source-id: b9d0490eb3f72bad974d134bbe9eb563f6bc8775
2019-12-17 14:09:02 -08:00
bc2e6d10fa Back out "Revert D17908478: Switch PyTorch/Caffe2 to C++14"
Summary: Original commit changeset: 775d2e29be0b

Test Plan: CI

Reviewed By: mruberry

Differential Revision: D18775520

fbshipit-source-id: a350b3f86b66d97241f208786ee67e9a51172eac
2019-12-03 14:33:43 -08:00
a2ed50c920 Revert D17908478: Switch PyTorch/Caffe2 to C++14
Test Plan: revert-hammer

Differential Revision:
D17908478

Original commit changeset: 6e340024591e

fbshipit-source-id: 775d2e29be0bc3a0db64f164c8960c44d4877d5d
2019-11-27 14:57:05 -08:00
fcb7371e65 Update docs for cpp_extension on Windows (#30392)
Summary:
Targets https://github.com/pytorch/pytorch/issues/30379.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30392

Differential Revision: D18730438

Pulled By: albanD

fbshipit-source-id: f718d006ee8aaaa356c1e15e53a0469f15e8ed41
2019-11-27 10:56:29 -08:00
d0acc9c085 Switch PyTorch/Caffe2 to C++14 (#30406)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30406

ghstack-source-id: 94642238

Test Plan: waitforsandcastle

Differential Revision: D17908478

fbshipit-source-id: 6e340024591ec2c69521668022999df4a33b4ddb
2019-11-27 10:47:31 -08:00
a9c719ba82 Set TORCH_CXX_FLAGS in minimal example (#29890)
Summary:
To avoid ABI issues.

EDIT: After this PR, the example CMakeLists.txt will always use the `-D_GLIBCXX_USE_CXX11_ABI` value set in `share/cmake/Torch/TorchConfig.cmake`, regardless of the `-D_GLIBCXX_USE_CXX11_ABI` value passed to the `cmake` command by the user.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29890

Differential Revision: D18531391

Pulled By: yf225

fbshipit-source-id: 2db78ae7a33a4088b579e81c60b9a74861f1ccde
2019-11-15 09:57:15 -08:00
dfa9c9e227 Replace make with cmake --build . in the docs (#29798)
Summary:
Inspired by https://discuss.pytorch.org/t/issues-with-tutorial-installing-c-distributions-of-pytorch/33295/11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29798

Differential Revision: D18504951

Pulled By: ezyang

fbshipit-source-id: 8e80d8891ca85196f00611fe784b2f55659e52ab
2019-11-14 08:23:19 -08:00
e8e7d93293 Additional autograd unit tests for Python UDFs. (#29041)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29041

1) Enhanced autograd unit tests to test the
torch.distributed.autograd.backward() API more thoroughly on Python UDFs.
2) Enhanced `python_error` to override `what` such that it returns an
appropriate error string if we call `what()` on this error. This ensures we can
propagate exceptions over the wire during RPCs (since we get the error string
by calling `what()` on the exception).
ghstack-source-id: 93098679

Test Plan: waitforbuildbot

Reviewed By: mrshenli

Differential Revision: D18273041

fbshipit-source-id: 85d3932fed6337668a812367fdfce233c1b3ff8e
2019-11-01 18:30:09 -07:00
0c4878d550 Update index.rst 2019-10-22 21:43:58 -07:00
11172c19be codemod at::ArrayRef and torch::IntArrayRef to std::vector in C++ API tests (#27884)
Summary:
`at::ArrayRef` / `torch::IntArrayRef` should be discouraged in user code, because users might not be aware that it doesn't own the underlying data, which leads to memory access bugs when they write code like the following:
```cpp
auto expected_sizes = torch::IntArrayRef({2, 16, 6});  // The memory that represents `{2, 16, 6}` is released after this line
ASSERT_EQ(output.sizes(), expected_sizes);  // `expected_sizes` is pointing to invalid memory region
```
This PR changes all usage of `at::ArrayRef` and `torch::IntArrayRef` to the corresponding `std::vector` version, so that users won't pick up the habit of using `ArrayRef` by looking at the test code.
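
The corrected pattern, as a sketch (the `std::vector` owns its elements for the full scope of the assertion):

```cpp
std::vector<int64_t> expected_sizes{2, 16, 6};  // owning container
ASSERT_EQ(output.sizes(), expected_sizes);      // no dangling reference
```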
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27884

Differential Revision: D17921646

Pulled By: yf225

fbshipit-source-id: 461e79fc22b598aac230d36cc028085ce6cbe937
2019-10-14 18:00:30 -07:00
987e37b9c2 Enable EXE001 flake8 check. (#27560)
Summary:
According to https://github.com/pytorch/pytorch/issues/27285, it seems we do not intend to use shebangs as an indication of Python version, thus
we enable the EXE001 flake8 check.
For violations, we either remove the shebang from non-executable Python scripts or grant them executable permission.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27560

Differential Revision: D17831782

Pulled By: ezyang

fbshipit-source-id: 6282fd3617b25676a6d959af0d318faf05c09b26
2019-10-09 09:15:29 -07:00
0b6186d778 Remove Tensor.h, TensorMethods.h from src/core. (#27086)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27086

This is a major source of merge conflicts, and AFAICT isn't necessary anymore (it may have been necessary for some mobile build stuff in the past).

This is a commandeer of #25031

Test Plan: Imported from OSS

Reviewed By: ljk53

Differential Revision: D17687345

Pulled By: ezyang

fbshipit-source-id: bf6131af835ed1f9e3c10699c81d4454a240445f
2019-10-06 09:37:50 -07:00
42e7eb0426 Minor readability fixes to C++ documentation (#27338)
Summary:
Changed `yieldings` to `yielding`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27338

Differential Revision: D17758406

Pulled By: yf225

fbshipit-source-id: 1633834a6ad80449c061ebc330ac24f3e42f5506
2019-10-03 21:45:35 -07:00
57a4b7c55d Re-organize C++ API torch::nn folder structure (#26262)
Summary:
This PR aims to re-organize the C++ API `torch::nn` folder structure in the following way:
- Every module in `torch/csrc/api/include/torch/nn/modules/` (except `any.h`, `named_any.h`, `modulelist.h`, `sequential.h`, `embedding.h`) has a strictly equivalent Python file in `torch/nn/modules/`. For  example:
`torch/csrc/api/include/torch/nn/modules/pooling.h` -> `torch/nn/modules/pooling.py`
`torch/csrc/api/include/torch/nn/modules/conv.h` -> `torch/nn/modules/conv.py`
`torch/csrc/api/include/torch/nn/modules/batchnorm.h` -> `torch/nn/modules/batchnorm.py`
`torch/csrc/api/include/torch/nn/modules/sparse.h` -> `torch/nn/modules/sparse.py`
- Containers such as  `any.h`, `named_any.h`, `modulelist.h`, `sequential.h` are moved into `torch/csrc/api/include/torch/nn/modules/container/`, because their implementations are too long to be combined into one file (like `torch/nn/modules/container.py` in Python API)
- `embedding.h` is not renamed to `sparse.h` yet, because we have another work stream that works on API parity for Embedding and EmbeddingBag, and renaming the file would cause conflict. After the embedding API parity work is done, we will rename `embedding.h` to  `sparse.h` to match the Python file name, and move the embedding options out to options/ folder.
- `torch/csrc/api/include/torch/nn/functional/` is added, and the folder structure mirrors that of `torch/csrc/api/include/torch/nn/modules/`. For example, `torch/csrc/api/include/torch/nn/functional/pooling.h` contains the functions for pooling, which are then used by the pooling modules in `torch/csrc/api/include/torch/nn/modules/pooling.h`.
- `torch/csrc/api/include/torch/nn/options/` is added, and the folder structure mirrors that of `torch/csrc/api/include/torch/nn/modules/`. For example, `torch/csrc/api/include/torch/nn/options/pooling.h` contains MaxPoolOptions, which is used by both MaxPool modules in `torch/csrc/api/include/torch/nn/modules/pooling.h`, and max_pool functions in `torch/csrc/api/include/torch/nn/functional/pooling.h`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26262

Differential Revision: D17422426

Pulled By: yf225

fbshipit-source-id: c413d2a374ba716dac81db31516619bbd879db7f
2019-09-17 10:07:29 -07:00
76ee02f10d Rename packed tensor accessor (#25654)
Summary:
Closes https://github.com/pytorch/pytorch/issues/19268

This does the renaming suggested by ezyang in https://github.com/pytorch/pytorch/issues/19268#issuecomment-490478887 except that the templated version of `packed_accessor` is also renamed to `generic_packed_accessor`.

Additionally, all of the users I could find in `ATen/native/cuda` are updated without changing their index types.

The corresponding tutorial update is in pytorch/tutorials#644
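
A hedged sketch of the renamed call sites (requires a CUDA build; illustrative only):

```cpp
torch::Tensor t = torch::rand({4, 5}, torch::kCUDA);
// Old name: t.packed_accessor<float, 2>()
auto acc32 = t.packed_accessor32<float, 2>();  // 32-bit indexing
auto acc64 = t.packed_accessor64<float, 2>();  // 64-bit indexing
```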
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25654

Differential Revision: D17259208

Pulled By: ezyang

fbshipit-source-id: 172a46f623d544ca16f7ed5077b6e4f57a3d1f21
2019-09-10 09:18:54 -07:00
09ef107e59 Add copy logic for LibTorch to avoid issues on Windows (#25556)
Summary:
This should work with both VS and Ninja.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25556

Differential Revision: D17162045

Pulled By: ezyang

fbshipit-source-id: 18c3d62e9ba93bf603f3a5310087fac77be4a974
2019-09-03 06:33:38 -07:00
0015b188be Fix typos
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23770

Differential Revision: D16646852

Pulled By: ezyang

fbshipit-source-id: 826b041c0b528ae6e0b320d49d8141057c1f9bf3
2019-08-05 15:38:32 -07:00
77c2f5dd75 fix copyright notice in docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21372

Differential Revision: D15631889

Pulled By: umanwizard

fbshipit-source-id: cf764432c27cb1b01d8137ed60ec7de361450d0e
2019-06-04 14:53:45 -07:00
110ed511a4 Make check-doxygen.sh output more interpretable. (#20362)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20362
ghimport-source-id: ac791884dc6d3954f69d8fc997b2b561f435e0e7

Differential Revision: D15375139

Pulled By: ezyang

fbshipit-source-id: c8aa0f991430269090e068f828810bae7aa39a07
2019-05-17 08:47:11 -07:00
ea5c9c9267 Update installing.rst (#20354)
Summary:
Delete useless `cd`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20354

Differential Revision: D15296154

Pulled By: soumith

fbshipit-source-id: 2042b56c91b33e302b0ed9c77f29b9b64079fa98
2019-05-10 10:04:06 -07:00
0676ba0c5c Mention packed accessors in tensor basics doc (#19464)
Summary:
This is a continuation of efforts to raise awareness of packed accessors.
A very simple example is added, along with a mention that the template can take more arguments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19464

Differential Revision: D15012564

Pulled By: soumith

fbshipit-source-id: a19ed536e016fae519b062d847cc58aef01b1b92
2019-04-19 07:20:16 -07:00
ff4a4d6155 Update for #19326
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19367

Differential Revision: D14981835

Pulled By: VitalyFedyunin

fbshipit-source-id: e8a97986d9669ed7f465a7ba771801bdd043b606
2019-04-17 12:56:08 -07:00