Summary:

## Context

We take the first step toward GPU Bazel support by adding the Bazel external workspaces `local_config_cuda` and `cuda`. The first one holds some hardcoded values and lists of files, while the second provides a nicer, high-level wrapper that maps onto the Bazel targets PyTorch already expects, which are guarded with the `if_cuda` macro. The prefix `local_config_` signals that we are breaking the Bazel hermeticity philosophy by explicitly relying on the CUDA installation present on the machine.

## Testing

Note an important scenario that is unlocked by this change: compilation of C++ code that depends on the CUDA libraries (i.e. cuda.h and so on).

Before:
```
sergei.vorobev@cs-sv7xn77uoy-gpu-1628706590:~/src/pytorch4$ bazelisk build --define=cuda=true //:c10
ERROR: /home/sergei.vorobev/src/pytorch4/tools/config/BUILD:12:1: no such package 'tools/toolchain': BUILD file not found in any of the following directories. Add a BUILD file to a directory to mark it as a package.
 - /home/sergei.vorobev/src/pytorch4/tools/toolchain and referenced by '//tools/config:cuda_enabled_and_capable'
ERROR: While resolving configuration keys for //:c10: Analysis failed
ERROR: Analysis of target '//:c10' failed; build aborted: Analysis failed
INFO: Elapsed time: 0.259s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (2 packages loaded, 2 targets configured)
```

After:
```
sergei.vorobev@cs-sv7xn77uoy-gpu-1628706590:~/src/pytorch4$ bazelisk build --define=cuda=true //:c10
INFO: Analyzed target //:c10 (6 packages loaded, 246 targets configured).
INFO: Found 1 target...
Target //:c10 up-to-date:
  bazel-bin/libc10.lo
  bazel-bin/libc10.so
INFO: Elapsed time: 0.617s, Critical Path: 0.04s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
```

The `//:c10` target is a good one to test with, because it has cases where the [glob is different](075024b9a3/BUILD.bazel (L76-L81)) depending on whether we compile for CUDA or not.

## What is out of scope of this PR

This PR is the first in a series providing comprehensive GPU Bazel build support. Namely, we don't tackle the [cu_library](11a40ad915/tools/rules/cu.bzl (L2)) implementation here. That would be a separate large chunk of work.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63604

Reviewed By: soulitzer

Differential Revision: D30442083

Pulled By: malfet

fbshipit-source-id: b2a8e4f7e5a25a69b960a82d9e36ba568eb64595
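For illustration, here is a minimal sketch of how a target can be guarded with the `if_cuda` macro so that CUDA-only sources and dependencies are pulled in only when building with `--define=cuda=true`. The load path and the target names below are assumptions made for this example, not necessarily the exact layout introduced by this PR.

```
# Sketch of a BUILD file using if_cuda; the load path follows the
# TensorFlow-style convention and is an assumption, not the verified
# location in this PR.
load("@local_config_cuda//cuda:build_defs.bzl", "if_cuda")

cc_library(
    name = "example_lib",
    # Host sources are always compiled; CUDA-specific sources are added
    # only when the cuda define is set.
    srcs = glob(["src/*.cpp"]) + if_cuda(glob(["src/cuda/*.cpp"])),
    # "@cuda//:cuda_headers" is a hypothetical target in the high-level
    # cuda workspace.
    deps = if_cuda(["@cuda//:cuda_headers"]),
)
```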
42 lines
972 B
Python
load("@bazel_skylib//lib:selects.bzl", "selects")
|
|
|
|
config_setting(
|
|
name = "cuda",
|
|
define_values = {
|
|
"cuda": "true",
|
|
},
|
|
)
|
|
|
|
# Even when building with --config=cuda, host targets should be built with cuda disabled
|
|
# as these targets will run on CI machines that have no GPUs.
|
|
selects.config_setting_group(
|
|
name = "cuda_enabled_and_capable",
|
|
match_all = [
|
|
":cuda",
|
|
],
|
|
)
|
|
|
|
# Configures the system to build with cuda using clang.
|
|
config_setting(
|
|
name = "cuda_clang",
|
|
define_values = {
|
|
"cuda_clang": "true",
|
|
},
|
|
)
|
|
|
|
# Indicates that cuda code should be compiled with nvcc
|
|
# Mostly exists to support _analysis_ of tensorflow; more work is needed to actually make this
|
|
# setting work.
|
|
config_setting(
|
|
name = "cuda_nvcc",
|
|
define_values = {
|
|
"cuda_nvcc": "true",
|
|
},
|
|
)
|
|
|
|
config_setting(
|
|
name = "thread_sanitizer",
|
|
define_values = {"thread_sanitizer": "1"},
|
|
visibility = ["//visibility:public"],
|
|
)
|
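As a usage illustration, a target elsewhere in the tree can branch on these settings with `select()`. The target name and compiler flags below are placeholders, not actual PyTorch targets; the settings themselves are enabled via `--define=cuda=true` and `--define=thread_sanitizer=1` as defined above.

```
# Hypothetical consumer BUILD file; names are placeholders for illustration.
cc_library(
    name = "example_host_lib",
    srcs = ["example.cpp"],
    copts = select({
        # Add a GPU-related define only when cuda is enabled and the build is capable.
        "//tools/config:cuda_enabled_and_capable": ["-DUSE_CUDA"],
        "//conditions:default": [],
    }) + select({
        "//tools/config:thread_sanitizer": ["-fsanitize=thread"],
        "//conditions:default": [],
    }),
)
```

Note that `cuda_enabled_and_capable` currently matches only `:cuda`; the group presumably exists so that a capability condition (e.g. excluding GPU-less CI hosts, per the comment above) can later be added to `match_all` without touching its consumers.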