[Doc] Update Intel GPU Support on README (#126001)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126001
Approved by: https://github.com/jgong5, https://github.com/albanD, https://github.com/gujinghui, https://github.com/EikanWang
committed by PyTorch MergeBot
parent 812534d27e
commit d0f3ae8e67
README.md: 13 lines changed
@@ -24,6 +24,9 @@ Our trunk health (Continuous Integration signals) can be found at [hud.pytorch.o
- [NVIDIA Jetson Platforms](#nvidia-jetson-platforms)
- [From Source](#from-source)
- [Prerequisites](#prerequisites)
- [NVIDIA CUDA Support](#nvidia-cuda-support)
- [AMD ROCm Support](#amd-rocm-support)
- [Intel GPU Support](#intel-gpu-support)
- [Install Dependencies](#install-dependencies)
- [Get the PyTorch Source](#get-the-pytorch-source)
- [Install PyTorch](#install-pytorch)
@@ -162,6 +165,7 @@ If you are installing from source, you will need:

We highly recommend installing an [Anaconda](https://www.anaconda.com/download) environment. You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.

##### NVIDIA CUDA Support

If you want to compile with CUDA support, [select a supported version of CUDA from our support matrix](https://pytorch.org/get-started/locally/), then install the following:

- [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
- [NVIDIA cuDNN](https://developer.nvidia.com/cudnn) v8.5 or above
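Before building with CUDA support it can help to confirm the toolkit is actually visible to the build. A minimal sketch, assuming CUDA is installed under `/usr/local/cuda` and that you use the usual `python setup.py develop` from-source build; the install path and the architecture list are assumptions, not part of this README:

```bash
# Sketch: point the build at an existing CUDA toolkit (install path is an assumption).
export CUDA_HOME=/usr/local/cuda           # adjust to your CUDA install location
"$CUDA_HOME/bin/nvcc" --version            # confirm the CUDA compiler is present
export TORCH_CUDA_ARCH_LIST="8.0;8.6"      # optional: limit target GPU architectures to shorten the build
python setup.py develop                    # standard from-source build
```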
@@ -174,6 +178,7 @@ Other potentially useful environment variables may be found in `setup.py`.

If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are [available here](https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/).

##### AMD ROCm Support

If you want to compile with ROCm support, install
- [AMD ROCm](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html) 4.0 and above installation
- ROCm is currently supported only for Linux systems.
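Before building, it is worth checking that the ROCm toolchain is visible; a minimal sketch using standard ROCm command-line tools (assuming a default ROCm installation that puts them on `PATH`):

```bash
# Sketch: quick checks that a ROCm installation is usable before building from source.
rocminfo | head -n 20     # lists HSA agents if the ROCm runtime is installed
hipcc --version           # HIP compiler that ships with ROCm
```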
@@ -181,6 +186,14 @@ If you want to compile with ROCm support, install

If you want to disable ROCm support, export the environment variable `USE_ROCM=0`.
Other potentially useful environment variables may be found in `setup.py`.
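For example, a minimal sketch of a build that explicitly opts out of ROCm on a machine that has it installed, assuming the usual `python setup.py develop` from-source flow:

```bash
# Sketch: build PyTorch from source with ROCm support disabled.
export USE_ROCM=0
python setup.py develop
```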

##### Intel GPU Support

If you want to compile with Intel GPU support, follow these
- [PyTorch Prerequisites for Intel GPUs](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpus.html) instructions.
- Intel GPU is currently supported only for Linux systems.

If you want to disable Intel GPU support, export the environment variable `USE_XPU=0`.
Other potentially useful environment variables may be found in `setup.py`.
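Similarly, a minimal sketch of opting out of the Intel GPU (XPU) backend during a from-source build; `USE_XPU=0` comes from the text above, while the build command is the standard `python setup.py develop` flow assumed here:

```bash
# Sketch: build PyTorch from source with Intel GPU (XPU) support disabled.
export USE_XPU=0
python setup.py develop
```

Conversely, after an XPU-enabled build, `python -c "import torch; print(torch.xpu.is_available())"` is a quick way to check that the backend was compiled in, assuming the `torch.xpu` module is present in your build.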

#### Install Dependencies

**Common**