Commit Graph

44 Commits

SHA1 Message Date
54abfda124 Completely synchronize behavior of Facebook flake8 and public flake8. (#18538)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18538
ghimport-source-id: 665b09f158d1c5dd94686d4212792504b55b7f73

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18538 Completely synchronize behavior of Facebook flake8 and public flake8.**

Previously, developers at Facebook had the very funny experience
wherein /usr/local/bin/flake8 behaved differently than a freshly
installed flake8 from pip.  In this commit, I add enough ignores to
.flake8 and install enough plugins to make the Facebook flake8
and public flake8 line up exactly.  This means you don't have
to care which flake8 you use; they will all report accurate information
on your Python files.
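
For illustration only (not part of the commit), a minimal sketch of how one could check that two flake8 installations agree; the file path below is hypothetical:

```python
# Hypothetical parity check: run two flake8 binaries on the same file and
# confirm they report identical diagnostics.
import subprocess

def flake8_output(flake8_bin: str, path: str) -> str:
    # flake8 exits non-zero when it finds lint, so don't use check=True.
    result = subprocess.run(
        [flake8_bin, path],
        stdout=subprocess.PIPE,
        universal_newlines=True,
    )
    return result.stdout

internal = flake8_output("/usr/local/bin/flake8", "torch/nn/modules/linear.py")
public = flake8_output("flake8", "torch/nn/modules/linear.py")
assert internal == public, "flake8 installations disagree"
```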

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14652336

fbshipit-source-id: ba7776eaa139cf2e3df2e65349da6fd7c99acca4
2019-03-27 19:51:21 -07:00
81e030d9a6 Upgrade flake8-bugbear to master, fix the new lints. (#18507)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18507
ghimport-source-id: 1c3642befad2da78a7e5f39d6d58732b85c76267

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18507 Upgrade flake8-bugbear to master, fix the new lints.**

It turns out Facebook is internally using the unreleased master of
flake8-bugbear, so upgrading it grabs a few more lints that Phabricator
was complaining about but that we didn't get in open source.

A few of the getattr sites that I fixed look very suspicious (they're
written as if Python were a lazy language), but I didn't look more
closely into the matter.
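
For context (not from the commit itself), flake8-bugbear's B009 check is the one that flags the getattr pattern described above; a tiny illustration:

```python
class Config:
    num_workers = 4

cfg = Config()

# Flagged by flake8-bugbear (B009): getattr with a constant attribute name adds
# indirection but defers nothing -- Python still evaluates it eagerly.
workers = getattr(cfg, "num_workers")

# Equivalent direct access, which is what the lint suggests:
workers = cfg.num_workers
```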

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14633682

fbshipit-source-id: fc3f97c87dca40bbda943a1d1061953490dbacf8
2019-03-27 08:07:41 -07:00
e1c272797b Don't require pygraphviz for regenerate.sh (#17485)
Summary:
closes #17336

Do not overwrite config.yml if the script throws an error
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17485

Differential Revision: D14604388

Pulled By: kostmo

fbshipit-source-id: 5024545e3a8711abdbc0800911c766929dbca196
2019-03-25 18:04:53 -07:00
08aa973fb8 Turn on Travis builds for ghstack PRs. (#18193)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18193
ghimport-source-id: 540859cf0b238a9832f45b3f4c2351e3343fc1a2

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18193 Turn on Travis builds for ghstack PRs.**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14529945

fbshipit-source-id: 4476e996e311a04f2a997ca9b7c4cf2157dd6286
2019-03-19 14:51:07 -07:00
6758f5587f Delete bugbear from Python 2 lint. (#18192)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18192
ghimport-source-id: 9523a09d7ec202ef08cf0ecdf48c42739ea6b0ce

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18192 Delete bugbear from Python 2 lint.**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14529240

fbshipit-source-id: 1a433b53dd38d1c455e8c0750d97c594ac51ef09
2019-03-19 14:24:03 -07:00
916a670828 Enable flake8-bugbear line length checking. (#18138)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18138
ghimport-source-id: be62a71ef98714e6f168a00f84120f612363528e

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18138 Enable flake8-bugbear line length checking.**

This enables flake8-bugbear's line length checker (B950), which permits
violations of up to 10% but reports the "true" limit when you go over.

I had to ignore a bunch of flake8-bugbear's other checks when I
turned this on.  They're good checks though (they're turned on
in fbcode) and we should fix them eventually.
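
As a rough illustration of the B950 behavior described above (a mimic only, assuming a max-line-length of 120, which is not necessarily PyTorch's actual setting):

```python
def b950_would_flag(line: str, max_line_length: int = 120) -> bool:
    """Mimic B950's tolerance: only flag lines more than 10% over the limit."""
    return len(line) > max_line_length * 1.10

assert not b950_would_flag("x" * 130)  # within the 10% slack, B950 stays quiet
assert b950_would_flag("x" * 140)      # over the slack, B950 reports the true limit (120)
```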

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Reviewed By: salexspb

Differential Revision: D14508678

fbshipit-source-id: 2610ecc0dd43cc0788d77f4d024ebd85b26b8d41
2019-03-19 13:31:04 -07:00
561037aef8 use flake8-mypy (#17721)
Summary:
Use flake8 installed with mypy checks so that our linter matches fbcode. Mypy type errors also provide a valuable signal.
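
As an illustration of that signal (not taken from this PR), the kind of mistake plain flake8 accepts but a mypy-backed check catches:

```python
from typing import List

def total_bytes(sizes: List[int]) -> int:
    return sum(sizes)

# Plain flake8 is satisfied with this call, but mypy flags the incompatible
# argument type; at runtime it would raise a TypeError inside sum().
total_bytes("hello")
```
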
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17721

Differential Revision: D14357778

Pulled By: eellison

fbshipit-source-id: d8c9ea3fe3b5f550c3b70fe259e0eabf95e4c92d
2019-03-07 09:15:54 -08:00
dd3acbc6d5 add readme and notice at the top of config.yml (#17323)
Summary:
reorder some envars for consistency

add readme and notice at the top of config.yml

generate more yaml from Python

closes #17322
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17323

Differential Revision: D14186734

Pulled By: kostmo

fbshipit-source-id: 23b2b2c1960df6f387f1730c8df1ec24a30433fd
2019-02-22 11:30:49 -08:00
09c9af9451 U/kostmo/gen circle conf (#17189)
Summary:
Diagram preview:
![binarysmoketests-config-dimensions](https://user-images.githubusercontent.com/261693/53040977-a0f88d00-3437-11e9-9190-796cc243e0f9.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17189

Differential Revision: D14141362

Pulled By: kostmo

fbshipit-source-id: 0625a1234d0307c6be79f17e756ddb1cc445b374
2019-02-19 15:37:09 -08:00
01686db21b Generate CircleCI config.yml from a script (#17039)
Summary:
This initial PR splits the `.circleci/config.yml` file into several smaller files that are stitched verbatim back into the original.  A proof of concept of dynamically generating YAML for the job configuration list is also introduced.

Since the `config.yml` file must exist in the repo in its final form, there must be a manual update and check-in step to regenerate `config.yml` from its constituent parts.
Consistency between the checked-in `config.yml` file and the authoritative source data is enforced at build time through TravisCI.
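
A minimal sketch of the general approach (not the actual generation script; the job fields, Python versions, and paths below are assumptions):

```python
# Hypothetical sketch: build the job list in Python, render it to YAML, and
# compare against the checked-in config.yml so CI can flag a stale file.
import sys
import yaml  # PyYAML

PYTHON_VERSIONS = ["2.7", "3.5", "3.6"]

def build_config():
    jobs = {}
    for version in PYTHON_VERSIONS:
        name = "pytorch_linux_build_py" + version.replace(".", "")
        jobs[name] = {
            "docker": [{"image": "ubuntu:16.04"}],
            "steps": ["checkout", {"run": "python{} setup.py build".format(version)}],
        }
    return {"version": 2, "jobs": jobs}

def main():
    generated = yaml.safe_dump(build_config(), default_flow_style=False)
    with open(".circleci/config.yml") as f:
        checked_in = f.read()
    if generated != checked_in:
        sys.exit("config.yml is out of date; regenerate it from the source files")

if __name__ == "__main__":
    main()
```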

closes #17038
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17039

Reviewed By: yf225

Differential Revision: D14109059

Pulled By: kostmo

fbshipit-source-id: bc04a73145290358854f5a5e552a45e559118fc3
2019-02-15 12:21:25 -08:00
d29912f59e Only run Travis on master branch, not on export-DXXXXX branches. (#16628)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16628

Differential Revision: D13922097

Pulled By: ezyang

fbshipit-source-id: eb16d90cc61167af5edc0c4e361d7a807a3099e5
2019-02-01 09:31:46 -08:00
c6d9c51c7e fix for clang-tidy (#16164)
Summary:
It turns out that clang-tidy is bundled with Travis's standard Trusty distribution, so there is no need to install it manually.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16164

Differential Revision: D13738986

Pulled By: suo

fbshipit-source-id: d0cd76c615625b2ed7f18951289412989f15849d
2019-01-18 14:04:26 -08:00
ed949e20cb Revert D13552080: [pytorch][PR] add clang-format check to CI
Differential Revision:
D13552080

Original commit changeset: 462a73894c16

fbshipit-source-id: ebfc5aa3343cebabbc24ff39e4e9841a372443e2
2018-12-27 10:56:52 -08:00
80cc280c68 add clang-format check to CI (#15543)
Summary:
Simple check that runs against your PR's changes and complains if running clang-format would have created a change. Does nothing when run against master, so it's "safe" to accept changes that fail this check and it won't break the build.
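
A rough sketch of such a check (not the actual CI script; the diff range and file extensions are assumptions):

```python
# Hypothetical CI-side check: run clang-format over the files a PR changed and
# fail if the formatter would modify anything.
import subprocess
import sys

changed = subprocess.check_output(
    ["git", "diff", "--name-only", "origin/master...HEAD"],
    universal_newlines=True,
).split()

for path in (f for f in changed if f.endswith((".cpp", ".cc", ".h", ".cu"))):
    # clang-format with no -i prints the formatted file to stdout.
    formatted = subprocess.check_output(["clang-format", path], universal_newlines=True)
    with open(path) as f:
        if f.read() != formatted:
            sys.exit("clang-format would change {}; please run it locally".format(path))
```
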
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15543

Reviewed By: soumith

Differential Revision: D13552080

Pulled By: suo

fbshipit-source-id: 462a73894c16e7108806af7fa88440c377d4d0d2
2018-12-26 22:20:32 -08:00
6ca1d93473 add whitelisted clang-format checks (#15254)
Summary:
This PR adds clang-format automation:
- It only checks on whitelisted files, so we can enable incrementally without noise
- There is a pre-commit hook provided that will do the same check, plus prompt users to apply the clang-format changes (no change is made without the user agreeing).

My plan is to migrate over whole files at a time, clang-formatting them and then adding them to the whitelist. Doing it this way should avoid too many merge pains (the most you'll have to do is run clang-format on the affected file before rebasing).
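
A minimal sketch of the pre-commit side (not the actual hook; the whitelist entry is purely illustrative):

```python
# Hypothetical pre-commit hook: check only whitelisted files and ask before
# letting clang-format rewrite anything.
import subprocess

WHITELIST = ["torch/csrc/jit/ir.cpp"]  # illustrative entry, not the real list

def needs_format(path):
    formatted = subprocess.check_output(["clang-format", path], universal_newlines=True)
    with open(path) as f:
        return f.read() != formatted

stale = [p for p in WHITELIST if needs_format(p)]
if stale:
    reply = input("clang-format wants to update {}. Apply? [y/N] ".format(", ".join(stale)))
    if reply.lower() == "y":
        subprocess.check_call(["clang-format", "-i"] + stale)
```
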
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15254

Differential Revision: D13515888

Pulled By: suo

fbshipit-source-id: d098eabcc97aa228c4dfce8fc096c3b5a45b591f
2018-12-18 22:34:20 -08:00
49fe678fec Add variable_factories.h to cppdocs (#14381)
Summary:
This will document `torch::from_blob` and such.

soumith ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14381

Differential Revision: D13216560

Pulled By: goldsborough

fbshipit-source-id: 112f60e45e4d38a8a9983fa71e9cc56bc1a73465
2018-11-27 10:13:23 -08:00
8e4bea107a Fix clang-tidy 404 in Travis
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12963

Differential Revision: D10510026

Pulled By: goldsborough

fbshipit-source-id: b6b9634a7a2575ff4e2983321d2e4e5829626347
2018-10-23 09:34:43 -07:00
bcc2a0599b Enable clang-tidy in CI (#12213)
Summary:
At long last, we will have clang-tidy enabled in CI. For a while I thought I could clean up the project enough to enable clang-tidy with all checks enabled, but I figure it's smarter to set up the minimal checks and at least have those in CI. We can fix more going forward.

ezyang apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12213

Differential Revision: D10183069

Pulled By: goldsborough

fbshipit-source-id: 7ecd2d368258f46efe23a2449c0a206d10f3a769
2018-10-03 17:25:06 -07:00
e585f2fb48 Polish CPP docs, Minor Python Docs Fixes (#11722)
Differential Revision: D9919120

Pulled By: goldsborough

fbshipit-source-id: bf14cbe4ab79524495957cb749828046af864aab
2018-09-18 14:55:57 -07:00
e6d6aed12e Check doxygen output in travis (#11124)
Summary:
This PR adds a .travis.yml check for our C++ documentation. The goal is to avoid any documentation/comments in our C++ code that would break the doxygen output and possibly ruin the C++ documentation site (currently https://pytorch.org/cppdocs).

For this, we:
1. Run doxygen and record any warnings,
2. Filter out some known bogus warnings,
3. Count the remaining warnings,
4. Fail the check if (3) is non-zero.
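
A minimal sketch of those four steps (the Doxyfile path and the bogus-warning pattern are placeholders, not the real ones used in CI):

```python
import re
import subprocess
import sys

# 1. Run doxygen and capture its warnings (doxygen writes them to stderr).
result = subprocess.run(
    ["doxygen", "docs/cpp/Doxyfile"],
    stderr=subprocess.PIPE,
    universal_newlines=True,
)

# 2. Filter out known bogus warnings.
KNOWN_BOGUS = [re.compile(r"no uniquely matching class member found")]
warnings = [
    line
    for line in result.stderr.splitlines()
    if "warning:" in line.lower()
    and not any(pattern.search(line) for pattern in KNOWN_BOGUS)
]

# 3. Count what remains, and 4. fail the check if the count is non-zero.
if warnings:
    print("\n".join(warnings))
    sys.exit("{} doxygen warning(s) found".format(len(warnings)))
```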

soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11124

Differential Revision: D9651011

Pulled By: goldsborough

fbshipit-source-id: 30f776d23bb6d6c482c54db32828b4b99547e87b
2018-09-05 10:25:56 -07:00
eb9bb1f09a Travis CI: Run flake on Python 2.7 and 3.7 (#9953)
Summary:
Flake8 will produce different results on Python 2 and 3.  Python 3.7 has `async` as a reserved word https://github.com/pytorch/pytorch/pull/4999.
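
A small illustration of the version difference behind this (not from the PR):

```python
# Accepted by Python 2.7 (and, with a deprecation warning, by 3.6), but a
# SyntaxError on Python 3.7, where `async` became a reserved keyword --
# hence linting under both interpreter versions.
def fetch(url, async=True):
    return url if async else None
```
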
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9953

Differential Revision: D9035415

Pulled By: soumith

fbshipit-source-id: 8a46e028a2e20a7e3f6d90137020268d65a7cc64
2018-07-27 14:43:26 -07:00
dcbbf346c2 Change output_declarations in function_wrapper.py to be a NamedTuple (#5312)
* Add python typing module as build dependency

* Change output_declarations to be a NamedTuple

* Add mypy configuration files

mypy-files.txt includes a list of all files that should be type checked
with mypy. Run mypy with `mypy @mypy-files.txt`.

mypy.ini includes mypy options. Unfortunately this can't be merged with
mypy-files.txt.

Update .travis.yml so that one doesn't have to specify what files to
type check inside it.

* Add RuntimeError on missing `typing` module

Alerts users to the new build dependency.
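
A minimal sketch of the two pieces described above (the field names and error wording are assumptions, not the actual code in function_wrapper.py):

```python
# Give a clear error when the `typing` backport is missing on Python 2.
try:
    from typing import List, NamedTuple
except ImportError:
    raise RuntimeError(
        "Missing build dependency: the `typing` module is required; "
        "install it with `pip install typing`."
    )

# output_declarations as a NamedTuple instead of a plain dict, so mypy can
# check field names and types.
OutputDeclaration = NamedTuple("OutputDeclaration", [
    ("name", str),
    ("arguments", List[str]),
    ("returns", List[str]),
])

decl = OutputDeclaration(name="add", arguments=["self", "other"], returns=["Tensor"])
```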
2018-02-23 13:33:59 -05:00
0629785645 Initial type hints for function_wrapper (#4947)
* Initial type hints for function_wrapper

* Don't break python 2

* Update TopEnvironment

* Add mypy check to travis

* Add .mypy_cache to .gitignore
2018-02-08 13:52:31 -05:00
0988e328c9 Fix errors in travis config 2018-01-11 12:10:23 +01:00
6646c3e542 remove CPU builds from Travis, as they are now covered by Jenkins 2017-12-24 06:27:03 +08:00
5b8fe5cbb5 Batchnorm in ATen (#4285)
* Batchnorm in ATen

This commit moves BatchNorm derivatives into ATen, eliminating
torch/csrc/autograd/functions/batch_normalization.cpp

Some refactoring along the way:

- Functions got renamed to remove _forward from their names
- CuDNN batchnorm forward was modified to return save_mean/save_std instead of
  taking them as parameters. To avoid returning undefined Variables, these return
  (small) uninitialized tensors when they are not used.
- THNN batch normalization takes care of resizing save_mean and save_std on
  forward.
- There are some shenanigans re batchnorm backwards in eval mode. I'm tracking
  that in #4284
- I decided not to introduce buffers as a proper concept in ATen, which means
  that tensors like running_mean/running_var are variables in ATen.  This meant
  there needed to be some adjustments to how we *trace* such variables; the
  new strategy is if we can't find a Value for a variable, we look and see
  if we have a Value for the buffer pointed to by the variable, before
  finally falling back on constant.
- This PR finally reliably triggered OOM on Travis builds; I fixed this by reducing
  the number of parallel jobs.
- Stop using std::string when it's not necessary.
- Remove training parameter from cudnn_batch_norm_backward, because it
  doesn't make sense; cuDNN doesn't implement the math for evaluation mode
  batchnorm backwards.
- batchnorm_double_backward is now in an anonymous namespace, as it
  no longer needs to be called from torch/csrc

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-12-21 11:38:31 -05:00
cbedba373c use valgrind to make aten test pass 2017-11-02 20:39:11 -04:00
531a20b312 enable ATen in the travis build tests. 2017-11-02 19:53:36 -04:00
77ede8fc1c .travis.yml cleanup
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
6264996169 ToffeeIR CI hotfix
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
dd58b145c3 Toffee graph exporting for PyTorch.
This commit adds a new exporter pass which takes a graph and returns
a string of the human-readable protobuf representation of a model.

We have two strategies for how conversions are implemented:

- If a Python autograd function has a primspec static method, we invoke
  it to get the Toffee conversion.  Use torch.toffee.op to generate the
  format expected to be returned.  The particular data representation is opaque
  and subject to change in the future.

- Otherwise, there's a giant if statement in the exporter, which manually
  uses the JIT IR C++ API and Toffee IR C++ protobuf API to convert.

You must check out a copy of the ToffeeIR repo
https://github.com/ProjectToffee/ToffeeIR at torch/lib; at the moment
we don't have a subtree/submodule set up.

Technical debt in this commit:

- To get protobuf headers in scope, we unconditionally add $CONDA_PREFIX/include
  to the include path.  This needs to be replaced with a more robust mechanism.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
c304d04fc6 Replace thpp::Tensor with ATen Tensor in autograd csrc (#2170) 2017-07-28 10:18:37 -04:00
cb9ad7a892 Opt into Trusty builds. (#2214)
* Opt into Trusty builds.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* Bump to 2.7.9.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-07-27 04:04:57 +05:30
37e05485d9 added initialization schemes in torch.nn.init (#833) 2017-03-01 19:34:13 +01:00
7cbe255296 [Lint] Use flake8 instead of pep8 2017-02-27 19:33:00 -05:00
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
ce78bc898b Fix travis builds and add ccache 2017-01-28 00:28:33 +01:00
7415c090ac Check setup.py for pep8 lint on TravisCI 2017-01-25 22:23:22 -05:00
9d74e139e5 removing 3.3 and 3.4 from travis build 2016-12-25 15:13:13 -05:00
3d6b805652 Make travis use run_test.sh 2016-09-08 11:23:42 -07:00
d467a068c2 Add tests for new modules 2016-08-19 14:57:01 -07:00
b06c000478 Fix <3.5 compatibility and travis configuration 2016-08-16 21:11:10 -07:00
eaa24dc7c8 adding requirements.txt 2016-08-16 13:58:15 -04:00
34c669222d adding travis yaml 2016-08-16 13:24:19 -04:00