[1/4] Intel GPU Runtime Upstreaming for Device (#116019)

# Motivation
As mentioned in [[RFC] Intel GPU Runtime Upstreaming](https://github.com/pytorch/pytorch/issues/114842), the first runtime component we would like to upstream is `Device`, which contains the device management functions of the Intel GPU runtime. To facilitate code review, we split the changes into 4 PRs. This is the first of the 4 PRs and covers the changes under `c10`.

# Design
An Intel GPU device is a wrapper around a SYCL device on which kernels can be executed. In our design, PyTorch maintains a pool of SYCL devices containing all the GPU devices on the current machine and manages the state of that pool. Thread safety is taken into account in this design. The C++ files related to `Device` are placed under the `c10/xpu` folder, and we provide c10 device runtime APIs such as the following (a usage sketch follows the list):
  - `c10::xpu::device_count`
  - `c10::xpu::set_device`
  - ...
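
As an illustration, here is a minimal sketch of how these device APIs might be called from C++. The header path `c10/xpu/XPUFunctions.h` and the exact signatures are assumptions for illustration and may differ from what this PR actually lands.

```cpp
// Minimal usage sketch (not from the PR itself); header path and exact
// signatures are assumptions based on the CUDA counterparts in c10.
#include <c10/xpu/XPUFunctions.h>

#include <iostream>

int main() {
  // Number of Intel GPU (XPU) devices discovered in the SYCL device pool.
  const auto count = c10::xpu::device_count();
  std::cout << "XPU devices: " << static_cast<int>(count) << std::endl;

  if (count > 0) {
    // Make device 0 the current device for the calling thread.
    c10::xpu::set_device(0);
  }
  return 0;
}
```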

# Additional Context
In our plan, 4 PRs will be submitted to PyTorch for `Device`:
1. for c10
2. for ATen
3. for the Python frontend
4. for lazy initialization shared with CUDA

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116019
Approved by: https://github.com/gujinghui, https://github.com/jgong5, https://github.com/EikanWang, https://github.com/malfet
Author: Yu, Guangye
Date: 2024-01-12 07:36:25 +00:00
Committed by: PyTorch MergeBot
Commit: 50049cfaa0 (parent 7dac2f9f2d)
17 changed files with 637 additions and 0 deletions


@@ -8,6 +8,7 @@ cxx_library(
         "test/**/*.cpp",
         "benchmark/**/*.cpp",
         "cuda/**/*.cpp",
+        "xpu/**/*.cpp",
     ],
 ),
 deps = [
@@ -30,6 +31,7 @@ cxx_library(
         "test/**/*.h",
         "benchmark/**/*.h",
         "cuda/**/*.h",
+        "xpu/**/*.h",
     ],
 ),
 exported_linker_flags = [],