Summary: As above; also changes a number of the build files to be cleaner.
Test Plan:
internal and external CI
Did run `buck2 build fbcode//caffe2:torch` and it succeeded.
Rollback Plan:
Reviewed By: swolchok
Differential Revision: D78016591
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158035
Approved by: https://github.com/swolchok
This PR adds Bazel Python support, so that the Bazel build can be used from Python via `import torch`.
Notable changes:
- Add the python targets.
- Add the version.py.tpl generation.
- In order to achieve `USE_GLOBAL_DEPS = False` just for the Bazel build, employ a monkey-patch hack in the mentioned `version.py.tpl`.
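The monkey-patch idea can be illustrated with a minimal sketch (all names here are hypothetical stand-ins, not the actual torch internals):

```python
import types

# Hypothetical stand-in for the torch module; in reality the flag
# lives at module level in torch/__init__.py.
torch_stub = types.ModuleType("torch_stub")
torch_stub.USE_GLOBAL_DEPS = True

def apply_bazel_patch(mod):
    # What the generated version.py does conceptually: flip the flag
    # only for the Bazel build, before the importer acts on it, so
    # every other build flavor is left untouched.
    mod.USE_GLOBAL_DEPS = False

apply_bazel_patch(torch_stub)
print(torch_stub.USE_GLOBAL_DEPS)  # False
```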
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101003
Approved by: https://github.com/huydhn
enable -Werror=sign-compare in our Bazel build
Summary:
This is already turned on for CMake, let's see what breaks.
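In Bazel terms, the change amounts to something like the following `.bazelrc` fragment (a sketch; the actual wiring in the build files may differ):

```
# Mirror the CMake behavior by promoting the warning to an error
# for every compile action.
build --copt=-Werror=sign-compare
```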
Test Plan: Rely on CI.
Reviewers: sahanp
Subscribers:
Tasks:
Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98671
Approved by: https://github.com/kit1980
Bazel quality of life improvement.
This change adds a new option `--config=shell` which allows you to drop into a shell right before the `bazel run` command is executed. For example, you can examine the Bazel sandbox this way, run things under gdb instead of a normal run, and so on.
Example usage:
```
bazel run --config=shell //:init_test
```
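One plausible way to wire such a config, assuming a hypothetical wrapper target, is Bazel's `--run_under` flag in `.bazelrc`:

```
# .bazelrc sketch (the wrapper target name is hypothetical): --run_under
# wraps the binary invocation, which is one way to drop into a shell in
# the sandbox before the target runs.
run:shell --run_under=//tools:interactive_shell
```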
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79350
Approved by: https://github.com/dagitses
Summary:
These are mostly helpful warnings, but we explicitly disable two of
them that are problematic in our codebase.
We also remove `-Werror=type-limits` and `-Werror=unused-but-set-variable`,
since they are both included as part of `-Wextra`.
Test Plan: Rely on CI.
Reviewers: alband
Subscribers:
Tasks:
Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79327
Approved by: https://github.com/malfet
Summary:
We don't own this code; don't spam our logs with their issues.
Test Plan: Verified actions manually with --subcommands.
Reviewers: seemethere
Subscribers:
Tasks:
Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79345
Approved by: https://github.com/malfet
Summary:
We add the following exceptions:
* sign-compare: this is heavily violated in our codebase
* unknown-pragmas: we use this intentionally for some loop unrolling
in CUDA
Because they are included in -Wall by default, we remove the following
warnings from our explicit list:
* unused-function
* unused-variable
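Taken together, the warning setup is roughly equivalent to this `.bazelrc` sketch (the actual build-file wiring may differ):

```
build --copt=-Wall
build --copt=-Wno-sign-compare     # heavily violated in our codebase
build --copt=-Wno-unknown-pragmas  # intentional CUDA loop-unrolling pragmas
# -Wunused-function and -Wunused-variable are implied by -Wall, so they
# no longer need to be listed explicitly.
```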
Test Plan: Rely on CI.
Reviewers: alband, seemethere
Subscribers:
Tasks:
Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79306
Approved by: https://github.com/malfet
Summary:
This has a few advantages:
* changes do not discard the Bazel analysis cache
* allows for per-target overrides
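A per-target setup looks roughly like this `BUILD` sketch (target name, sources, and flags are illustrative):

```
# copts set on the target live in the analysis graph, so editing them
# does not invalidate the whole Bazel analysis cache the way a changed
# command-line --copt does, and individual targets can override them.
cc_library(
    name = "example",
    srcs = ["example.cpp"],
    copts = ["-Wall", "-Werror"],
)
```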
Test Plan: Verified with `bazel build --subcommands`.
Reviewers: seemethere
Subscribers:
Tasks:
Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79414
Approved by: https://github.com/malfet
Summary:
The previous values were not obviously regular expressions, but they
were evaluated as such. Thus, they could have misfired in a few
circumstances:
* the same path exists under a different repository (`^` at the
beginning protects against this)
* a file matches the prefix, then any character, then `cpp`, and
then has some additional suffix (`$` at the end and `\.` protect against this)
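Both failure modes can be demonstrated with a small regex check (the paths are hypothetical, not the actual filter values):

```python
import re

loose = re.compile("some/dir/file.cpp")       # unanchored; '.' matches any char
strict = re.compile(r"^some/dir/file\.cpp$")  # anchored and escaped

# The loose pattern matches the same path under a different repository,
# and even a file where '.' matched some other character:
print(bool(loose.search("external/repo/some/dir/file.cpp")))  # True
print(bool(loose.search("some/dir/file_cpp_backup")))         # True

# The strict pattern matches only the intended file:
print(bool(strict.search("some/dir/file.cpp")))                # True
print(bool(strict.search("external/repo/some/dir/file.cpp")))  # False
print(bool(strict.search("some/dir/file_cpp_backup")))         # False
```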
Test Plan: Relied on CI.
Reviewers: seemethere
Subscribers:
Tasks:
Tags:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79313
Approved by: https://github.com/malfet
Fixes https://github.com/pytorch/pytorch/issues/77509
This PR supersedes https://github.com/pytorch/pytorch/pull/77510.
It allows both `bazel query //...` and `bazel build --config=gpu //...` to work.
Concretely the changes are:
1. Add a "GenerateAten" mnemonic -- this is a convenience so that anybody who uses [Remote Execution](https://bazel.build/docs/remote-execution) can add a
```
build:rbe --strategy=GenerateAten=sandboxed,local
```
line to the `~/.bazelrc` and build this action locally (it doesn't have hermetic dependencies at the moment).
2. Replaced a few `http_archive` repos with the proper existing submodules to avoid code drift.
3. Updated `pybind11_bazel` and added `python_version="3"` to `python_configure`. This prevents hard-to-debug errors caused by an attempt to build with Python 2 on systems where it is the default `python` (Ubuntu 18.04, for example).
4. Added `unused_` repos; their purpose is to hide the unwanted submodules of submodules that often have Bazel targets in them.
5. Updated CI to build `//...` -- this is a great step toward preventing regressions not only in targets in the top-level BUILD.bazel file, but in other folders too.
6. Switched the default Bazel build to use GPU support.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78870
Approved by: https://github.com/ezyang
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73593
Clang is a bit more persnickety than GCC for this flag.
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D34558339
Pulled By: dagitses
fbshipit-source-id: 5b5e4e474edfb5e800e8db634327afa212d08c7e
(cherry picked from commit 02b9ffe0a26a0f0652311407f734878c6bdcc4b5)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70854
We can't do the entire package since parts of it depend on //c10/core.
ghstack-source-id: 147170901
Test Plan: Rely on CI.
Reviewed By: malfet
Differential Revision: D33321821
fbshipit-source-id: 6d634da872a382a60548e2eea37a0f9f93c6f080
(cherry picked from commit 0afa808367ff92b6011b61dcbb398a2a32e5e90d)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/35316
On master, the Bazel CUDA build is disabled due to the lack of a proper `cu_library` rule. This PR:
- Adds `rules_cuda` to the WORKSPACE and forwards `cu_library` to `rules_cuda`.
- Uses simple local cuda and cudnn repositories (adopted from TRTorch) for CUDA 11.3.
- Fixes the currently broken CUDA build.
- Enables the CUDA build in CI, not just for the `:torch` target but for all the test binaries, to catch undefined symbols.
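The forwarding can be sketched in Starlark roughly as follows (the load path and rule name are assumptions; the actual `.bzl` may differ):

```
load("@rules_cuda//cuda:defs.bzl", "cuda_library")

def cu_library(name, **kwargs):
    # Forward the pytorch-internal cu_library macro to the rules_cuda
    # implementation, keeping existing call sites unchanged.
    cuda_library(name = name, **kwargs)
```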
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66241
Reviewed By: ejguan
Differential Revision: D31544091
Pulled By: malfet
fbshipit-source-id: fd3c34d0e8f80fee06f015694a4c13a8e9e12206
Summary:
## Context
We take the first step at tackling GPU Bazel support by adding the Bazel external workspaces `local_config_cuda` and `cuda`: the first one has some hardcoded values and lists of files, while the second provides a nicer, high-level wrapper that maps onto the Bazel targets pytorch already expects, which are guarded with the `if_cuda` macro.
The prefix `local_config_` signifies the fact that we are breaking the Bazel hermeticity philosophy by explicitly relying on the CUDA installation that is present on the machine.
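A `BUILD` file using the `if_cuda` guard looks roughly like this (target and dependency labels are illustrative, not the actual pytorch targets):

```
# if_cuda selects the first argument when building with CUDA enabled
# and falls back to the second argument otherwise.
cc_library(
    name = "gpu_kernels",
    srcs = if_cuda(["kernels.cu.cc"], []),
    deps = if_cuda(["@cuda//:cuda_headers"], []),
)
```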
## Testing
Notice an important scenario that is unlocked by this change: compilation of C++ code that depends on CUDA libraries (e.g. `cuda.h` and so on).
Before:
```
sergei.vorobev@cs-sv7xn77uoy-gpu-1628706590:~/src/pytorch4$ bazelisk build --define=cuda=true //:c10
ERROR: /home/sergei.vorobev/src/pytorch4/tools/config/BUILD:12:1: no such package 'tools/toolchain': BUILD file not found in any of the following directories. Add a BUILD file to a directory to mark it as a package.
- /home/sergei.vorobev/src/pytorch4/tools/toolchain and referenced by '//tools/config:cuda_enabled_and_capable'
ERROR: While resolving configuration keys for //:c10: Analysis failed
ERROR: Analysis of target '//:c10' failed; build aborted: Analysis failed
INFO: Elapsed time: 0.259s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (2 packages loaded, 2 targets configured)
```
After:
```
sergei.vorobev@cs-sv7xn77uoy-gpu-1628706590:~/src/pytorch4$ bazelisk build --define=cuda=true //:c10
INFO: Analyzed target //:c10 (6 packages loaded, 246 targets configured).
INFO: Found 1 target...
Target //:c10 up-to-date:
bazel-bin/libc10.lo
bazel-bin/libc10.so
INFO: Elapsed time: 0.617s, Critical Path: 0.04s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
```
The `//:c10` target is a good one to test this with, because it has cases where the [glob is different](075024b9a3/BUILD.bazel (L76-L81)), depending on whether we compile for CUDA or not.
## What is out of scope of this PR
This PR is the first in a series aimed at providing comprehensive GPU Bazel build support. Namely, we don't tackle the [cu_library](11a40ad915/tools/rules/cu.bzl (L2)) implementation here; that will be a separate large chunk of work.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63604
Reviewed By: soulitzer
Differential Revision: D30442083
Pulled By: malfet
fbshipit-source-id: b2a8e4f7e5a25a69b960a82d9e36ba568eb64595
Summary:
Fixes https://github.com/pytorch/pytorch/issues/62600
Adds a `--config=no-tty` Bazel option that is useful for less verbose output in environments that don't implement a full tty, like CI.
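The config expands to something like this `.bazelrc` fragment (a sketch; the exact flag set may differ):

```
# Disable the interactive UI features that produce noisy output
# when there is no real tty attached.
build:no-tty --curses=no --color=no
```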
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62601
Reviewed By: soulitzer
Differential Revision: D30070154
Pulled By: malfet
fbshipit-source-id: 5b89af8441c3c6c7ca7e9a0ebdfddee00c9ab576