Split cpu/gpu in caffe2/distributed + some clean up (#20674)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20674

A few targets in caffe2/caffe2/distributed need to be split as well, otherwise they won't compile. Also some cleanup, and rename select_gpu_type to gpu_library_selector.

Differential Revision: D15406019

fbshipit-source-id: 6455ab885b248502b48d4c7565597e00fecfd547
Author:    Xiaodong Wang
Date:      2019-05-21 10:37:54 -07:00
Committer: Facebook Github Bot
Parent:    d7cd2d7a8c
Commit:    b5edeca39d

@@ -3,7 +3,7 @@
 # not currently relevant so they are combined into one list.
 from __future__ import absolute_import, division, print_function, unicode_literals
 load("@bazel_skylib//lib:new_sets.bzl", "sets")
-load("//caffe2/caffe2/fb:defs_gpu.bzl", "gpu_library_targets")
+load("//caffe2/caffe2/fb:defs_gpu.bzl", "gpu_library_selector")
 GENERATED_CPP = [
     "Functions.cpp",

@@ -347,11 +347,11 @@ def add_torch_libs():
     )
     # TODO: split it into cpp and cuda parts similarly to libtorch
-    gpu_library_targets(
+    gpu_library_selector(
         name="_C_impl",
-        deps=[":_C_impl_cuda"],
+        deps_cpu=[":_C_impl_cpu"],
-        merge_only=True,
+        deps_cuda=[":_C_impl_cuda"],
+        merge_cpu_deps=False,
     )
     cpp_library(
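
defs_gpu.bzl is internal to fbcode and is not touched by the hunks shown here, so the definition of gpu_library_selector is not visible. Purely as a sketch of what a macro with this call shape could do: the macro name and the keyword arguments (deps_cpu, deps_cuda, merge_cpu_deps) are taken from the call site above, while the native.read_config gate, the "fbcode"/"build_gpu" config key, and the native.cpp_library wrapper are illustrative assumptions.

    # Hypothetical sketch of a gpu_library_selector-style macro; only the macro
    # name and its keyword arguments appear in this diff, the rest is guessed.
    def gpu_library_selector(name, deps_cpu, deps_cuda, merge_cpu_deps = True, **kwargs):
        if native.read_config("fbcode", "build_gpu", "no") == "yes":
            # GPU build: take the CUDA deps, optionally merging in the CPU deps.
            deps = deps_cuda + (deps_cpu if merge_cpu_deps else [])
        else:
            # CPU-only build: fall back to the CPU deps.
            deps = deps_cpu
        native.cpp_library(
            name = name,
            deps = deps,
            **kwargs
        )

Under that reading, merge_cpu_deps=False for _C_impl would mean the CUDA flavor depends only on :_C_impl_cuda instead of also pulling in :_C_impl_cpu.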