Fix out of date documentation & remove friction points (#78682)

Fixes various friction points in the documentation for onboarding new users and removes instructions that are no longer valid

Changes include:
- Listing prerequisites earlier, so that devs can ensure they're met before encountering error messages
- Removing linter invocations that are no longer valid
- Modifying the instructions for installing mkl packages so that they apply only to x86-based CPUs

[skip ci]
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78682
Approved by: https://github.com/seemethere, https://github.com/janeyx99, https://github.com/malfet
Zain Rizvi
2022-06-02 19:31:48 +00:00
committed by PyTorch MergeBot
parent 4220799ea7
commit 5ef378a30f
2 changed files with 72 additions and 81 deletions

CONTRIBUTING.md

@ -4,6 +4,8 @@
- [Contributing to PyTorch](#contributing-to-pytorch)
- [Developing PyTorch](#developing-pytorch)
- [Prerequisites](#prerequisites)
- [Instructions](#instructions)
- [Tips and Debugging](#tips-and-debugging)
- [Nightly Checkout & Pull](#nightly-checkout--pull)
- [Codebase structure](#codebase-structure)
@ -11,7 +13,7 @@
- [Python Unit Testing](#python-unit-testing)
- [Better local unit tests with `pytest`](#better-local-unit-tests-with-pytest)
- [Local linting](#local-linting)
- [Running `mypy`](#running-mypy)
- [C++ Unit Testing](#c-unit-testing)
- [Run Specific CI Jobs](#run-specific-ci-jobs)
- [Writing documentation](#writing-documentation)
@ -81,6 +83,12 @@ https://github.com/pytorch/pytorch#from-source
To develop PyTorch on your machine, here are some tips:
### Prerequisites
* CMake. You can install it via `pip install cmake`
* Python >= 3.7 (3.7.6+ recommended)
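As a quick sanity check before you start, you can verify both prerequisites from a shell (a minimal sketch; any installation method for CMake works, `pip` is just the one suggested above):
```bash
pip install cmake      # one way to obtain CMake, as suggested above
cmake --version        # confirm CMake is on your PATH
python --version       # should report 3.7 or newer (3.7.6+ recommended)
```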
### Instructions
1. Uninstall all existing PyTorch installs. You may need to run `pip
uninstall torch` multiple times. You'll know `torch` is fully
uninstalled when you see `WARNING: Skipping torch as it is not
@ -100,7 +108,7 @@ git clone https://github.com/pytorch/pytorch
cd pytorch
```
If you already have PyTorch from source, update it:
```bash
git pull --rebase
@ -110,9 +118,9 @@ git submodule update --init --recursive --jobs 0
If you want to have no-op incremental rebuilds (which are fast), see the section below titled "Make no-op build fast."
3. Follow the instructions for [installing PyTorch from source](https://github.com/pytorch/pytorch#from-source), except that when it's time to install PyTorch, call `setup.py develop` instead of `setup.py install`:
Specifically, the change you have to make is to replace
```bash
python setup.py install
@ -125,8 +133,8 @@ python setup.py develop
```
This mode will symlink the Python files from the current local source
tree into the Python install. This way when you modify a Python file, you
won't need to reinstall PyTorch again and again. This is especially
useful if you are only changing Python files.
For example:
@ -143,10 +151,6 @@ torch as it is not installed`; next run `python setup.py clean`. After
that, you can install in `develop` mode again.
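Putting these steps together, a minimal sketch of the clean-and-reinstall cycle described above looks like:
```bash
pip uninstall torch        # repeat until pip warns that torch is not installed
python setup.py clean
python setup.py develop    # reinstall in develop mode
```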
### Tips and Debugging
* If a commit is simple and doesn't affect any code (keep in mind that some docstrings contain code
that is used in tests), you can add `[skip ci]` (case sensitive) somewhere in your commit message to
[skip all build / test steps](https://github.blog/changelog/2021-02-08-github-actions-skip-pull-request-and-push-workflows-with-skip-ci/).
@ -172,14 +176,14 @@ with `brew install cmake` if you are developing on MacOS or Linux system.
ENV_KEY1=ENV_VAL1[, ENV_KEY2=ENV_VAL2]* python setup.py develop
```
* If you run into issues running `git submodule update --init --recursive --jobs 0`, please try the following:
- If you encounter an error such as
```
error: Submodule 'third_party/pybind11' could not be updated
```
check whether your Git local or global config file contains any `submodule.*` settings. If yes, remove them and try again.
(please reference [this doc](https://git-scm.com/docs/git-config#Documentation/git-config.txt-submoduleltnamegturl) for more info).
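For example, one way to look for such settings (a sketch using standard `git config` queries; the exact entry names will vary):
```bash
# List any submodule.* entries in your local and global Git config
git config --local --get-regexp '^submodule\.' || true
git config --global --get-regexp '^submodule\.' || true
```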
- If you encounter an error such as
```
fatal: unable to access 'https://github.com/pybind11/pybind11.git': could not load PEM client certificate ...
```
@ -189,7 +193,7 @@ with `brew install cmake` if you are developing on MacOS or Linux system.
openssl x509 -noout -in <cert_file> -dates
```
- If you encounter an error that some third_party modules are not checked out correctly, such as
```
Could not find .../pytorch/third_party/pybind11/CMakeLists.txt
```
@ -308,6 +312,12 @@ into the repo directory.
### Python Unit Testing
**Prerequisites**:
The following packages should be installed with either `conda` or `pip`:
- `expecttest` and `hypothesis` - required to run tests
- `mypy` - recommended for linting
- `pytest` - recommended to run tests more selectively
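For example, assuming you use `pip`, they can be installed in one step (`conda install` works just as well):
```bash
pip install expecttest hypothesis mypy pytest
```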
All PyTorch test suites are located in the `test` folder and start with
`test_`. Run the entire test
suite with
@ -340,10 +350,6 @@ in `test/test_jit.py`. Your command would be:
python test/test_jit.py TestJit.test_Sequential
```
**Weird note:** In our CI (Continuous Integration) jobs, we actually run the tests from the `test` folder and **not** the root of the repo, since there are various dependencies we set up for CI that expect the tests to be run from the test folder. As such, there may be some inconsistencies between local testing and CI testing--if you observe an inconsistency, please [file an issue](https://github.com/pytorch/pytorch/issues/new/choose).
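For example, to mirror CI a bit more closely, you can run the same test from inside the `test` folder rather than from the repo root:
```bash
cd test
python test_jit.py TestJit.test_Sequential   # same test as above, different working directory
```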
### Better local unit tests with `pytest`
@ -365,54 +371,24 @@ command runs tests such as `TestNN.test_BCELoss` and
### Local linting
Install all prerequisites by running
```bash
make setup_lint
```
You can now run the same linting steps that are used in CI locally via `make`:
```bash
make lint
```
Learn more about the linter on the [lintrunner wiki page](https://github.com/pytorch/pytorch/wiki/lintrunner)
#### Running `mypy`
`mypy` is an optional static type checker for Python. We have multiple `mypy`
configs for the PyTorch codebase that are automatically validated against whenever the linter is run.
See [Guide for adding type annotations to
PyTorch](https://github.com/pytorch/pytorch/wiki/Guide-for-adding-type-annotations-to-PyTorch)
@ -462,9 +438,9 @@ of very low signal to reviewers.
So you want to write some documentation and don't know where to start?
PyTorch has two main types of documentation:
- **User facing documentation**:
These are the docs that you see over at [our docs website](https://pytorch.org/docs).
- **Developer facing documentation**:
Developer facing documentation is spread around our READMEs in our codebase and in
the [PyTorch Developer Wiki](https://pytorch.org/wiki).
If you're interested in adding new developer docs, please read this [page on the wiki](https://github.com/pytorch/pytorch/wiki/Where-or-how-should-I-add-documentation%3F) on our best practices for where to put it.
@ -472,8 +448,7 @@ If you're interested in adding new developer docs, please read this [page on the
The rest of this section is about user-facing documentation.
PyTorch uses [Google style](http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html)
for formatting docstrings. Each line inside a docstring block must be limited to 80 characters so that it fits into Jupyter documentation popups.
### Building documentation

README.md

@ -23,6 +23,7 @@ Our trunk health (Continuous Integration signals) can be found at [hud.pytorch.o
- [Binaries](#binaries)
- [NVIDIA Jetson Platforms](#nvidia-jetson-platforms)
- [From Source](#from-source)
- [Prerequisites](#prerequisites)
- [Install Dependencies](#install-dependencies)
- [Get the PyTorch Source](#get-the-pytorch-source)
- [Install PyTorch](#install-pytorch)
@ -152,16 +153,19 @@ They require JetPack 4.2 and above, and [@dusty-nv](https://github.com/dusty-nv)
### From Source
#### Prerequisites
If you are installing from source, you will need:
- Python 3.7 or later (for Linux, Python 3.7.6+ or 3.8.1+ is needed)
- A C++14 compatible compiler, such as clang
We highly recommend installing an [Anaconda](https://www.anaconda.com/distribution/#download-section) environment. You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.
If you want to compile with CUDA support, install the following (note that CUDA is not supported on macOS)
- [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) 10.2 or above
- [NVIDIA cuDNN](https://developer.nvidia.com/cudnn) v7 or above
- [Compiler](https://gist.github.com/ax3l/9489132) compatible with CUDA
Note: You could refer to the [cuDNN Support Matrix](https://docs.nvidia.com/deeplearning/cudnn/pdf/cuDNN-Support-Matrix.pdf) for cuDNN versions with the various supported CUDA, CUDA driver and NVIDIA hardware
If you want to disable CUDA support, export the environment variable `USE_CUDA=0`.
Other potentially useful environment variables may be found in `setup.py`.
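For instance, a minimal sketch of a CPU-only build using the `USE_CUDA` variable mentioned above (any other build-time variable from `setup.py` can be set the same way):
```bash
export USE_CUDA=0      # disable CUDA support
python setup.py install
```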
@ -177,25 +181,33 @@ Other potentially useful environment variables may be found in `setup.py`.
#### Install Dependencies
**Common**
```bash
conda install astunparse numpy ninja pyyaml setuptools cmake cffi typing_extensions future six requests dataclasses
```
**On Linux**
```bash
conda install mkl mkl-include
# CUDA only: Add LAPACK support for the GPU if needed
conda install -c pytorch magma-cuda110 # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo
```
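For example, if your system CUDA is 11.3, the matching package would be installed like this (assuming `magma-cuda113` is listed on the `pytorch` channel at the URL above):
```bash
conda install -c pytorch magma-cuda113   # pick the magma-cuda* that matches your CUDA version
```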
**On MacOS**
```bash
# Add this package on intel x86 processor machines only
conda install mkl mkl-include
# Add these packages if torch.distributed is needed
conda install pkg-config libuv
```
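If you're not sure which kind of Mac you have, a quick check with standard macOS tooling:
```bash
uname -m    # prints x86_64 on Intel machines, arm64 on Apple Silicon
```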
**On Windows**
```bash
conda install mkl mkl-include
# Add these packages if torch.distributed is needed.
# Distributed package support on Windows is a prototype feature and is subject to changes.
conda install -c conda-forge libuv=1.39
@ -211,17 +223,20 @@ git submodule update --init --recursive --jobs 0
```
#### Install PyTorch
**On Linux**
If you're compiling for AMD ROCm then first run this command:
```bash
# Only run this if you're compiling for ROCm
python tools/amd_build/build_amd.py
```
Install PyTorch
```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py install
```
Note that if you are using [Anaconda](https://www.anaconda.com/distribution/#download-section), you may experience an error caused by the linker:
```plaintext
@ -232,16 +247,14 @@ error: command 'g++' failed with exit status 1
This is caused by `ld` from Conda environment shadowing the system `ld`. You should use a newer version of Python that fixes this issue. The recommended Python version is 3.7.6+ and 3.8.1+.
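One way to confirm the shadowing (a sketch; the exact paths depend on your environment):
```bash
which ld        # a path inside your conda environment indicates shadowing
ld --version
```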
**On macOS**
```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install
```
CUDA is not supported on macOS.
**On Windows**
Choose Correct Visual Studio Version.
@ -255,9 +268,10 @@ come with Visual Studio Code by default.
If you want to build legacy python code, please refer to [Building on legacy code and CUDA](https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md#building-on-legacy-code-and-cuda)
**CPU-only builds**
In this mode PyTorch computations will run on your CPU, not your GPU
```cmd
conda activate
python setup.py install
@ -265,7 +279,9 @@ python setup.py install
Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the building environment by tweaking `CMAKE_INCLUDE_PATH` and `LIB`. The instruction [here](https://github.com/pytorch/pytorch/blob/master/docs/source/notes/windows.rst#building-from-source) is an example for setting up both MKL and Intel OpenMP. Without these configurations for CMake, Microsoft Visual C OpenMP runtime (vcomp) will be used.
**CUDA based build**
In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching
[NVTX](https://docs.nvidia.com/gameworks/content/gameworkslibrary/nvtx/nvidia_tools_extension_library_nvtx.htm) is needed to build PyTorch with CUDA.
NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To install it onto an existing CUDA installation, run the CUDA installer again and check the corresponding checkbox.