[build] modernize build-frontend: python setup.py develop/install -> [uv ]pip install --no-build-isolation [-e ]. (#156027)

Modernize the development installation:

```bash
# python setup.py develop
python -m pip install --no-build-isolation -e .

# python setup.py install
python -m pip install --no-build-isolation .
```
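The commit title also mentions `uv`. A hedged sketch of the equivalent commands through uv's pip-compatible interface (this assumes `uv` is installed; the flags mirror the pip commands above):

```shell
# Editable (development) install via uv's pip-compatible CLI
uv pip install --no-build-isolation -e .

# Regular install
uv pip install --no-build-isolation .
```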

As of `setuptools>=80.0`, `python setup.py develop` is just a wrapper around `python -m pip install -e .`:

- pypa/setuptools#4955

`python setup.py install` is deprecated and emits a warning when run. The warning is scheduled to become an error on October 31, 2025.

- 9c4d383631/setuptools/command/install.py (L58-L67)

> ```python
> SetuptoolsDeprecationWarning.emit(
>     "setup.py install is deprecated.",
>     """
>     Please avoid running ``setup.py`` directly.
>     Instead, use pypa/build, pypa/installer or other
>     standards-based tools.
>     """,
>     see_url="https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html",
>     due_date=(2025, 10, 31),
> )
> ```

- pypa/setuptools#3849

Additional resource:

- [Why you shouldn't invoke setup.py directly](https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/156027
Approved by: https://github.com/ezyang
This commit is contained in:

- Author: Xuehai Pan
- Date: 2025-07-09 16:16:10 +08:00
- Committed by: PyTorch MergeBot
- Parent: fc0376e8b1
- Commit: 4dce5b71a0
- 21 changed files with 91 additions and 73 deletions


```diff
@@ -104,7 +104,7 @@ if [[ "$DESIRED_CUDA" == *"rocm"* ]]; then
 export ROCclr_DIR=/opt/rocm/rocclr/lib/cmake/rocclr
 fi
-echo "Calling setup.py install at $(date)"
+echo "Calling 'python -m pip install .' at $(date)"
 if [[ $LIBTORCH_VARIANT = *"static"* ]]; then
 STATIC_CMAKE_FLAG="-DTORCH_STATIC=1"
@@ -120,7 +120,7 @@ fi
 # TODO: Remove this flag once https://github.com/pytorch/pytorch/issues/55952 is closed
 CFLAGS='-Wno-deprecated-declarations' \
 BUILD_LIBTORCH_CPU_WITH_DEBUG=1 \
-python setup.py install
+python -m pip install --no-build-isolation -v .
 mkdir -p libtorch/{lib,bin,include,share}
```


```diff
@@ -436,11 +436,11 @@ test_inductor_aoti() {
 python3 tools/amd_build/build_amd.py
 fi
 if [[ "$BUILD_ENVIRONMENT" == *sm86* ]]; then
-BUILD_COMMAND=(TORCH_CUDA_ARCH_LIST=8.6 USE_FLASH_ATTENTION=OFF python setup.py develop)
+BUILD_COMMAND=(TORCH_CUDA_ARCH_LIST=8.6 USE_FLASH_ATTENTION=OFF python -m pip install --no-build-isolation -v -e .)
 # TODO: Replace me completely, as one should not use conda libstdc++, nor need special path to TORCH_LIB
 TEST_ENVS=(CPP_TESTS_DIR="${BUILD_BIN_DIR}" LD_LIBRARY_PATH="/opt/conda/envs/py_3.10/lib:${TORCH_LIB_DIR}:${LD_LIBRARY_PATH}")
 else
-BUILD_COMMAND=(python setup.py develop)
+BUILD_COMMAND=(python -m pip install --no-build-isolation -v -e .)
 TEST_ENVS=(CPP_TESTS_DIR="${BUILD_BIN_DIR}" LD_LIBRARY_PATH="${TORCH_LIB_DIR}")
 fi
@@ -1579,7 +1579,7 @@ test_operator_benchmark() {
 test_inductor_set_cpu_affinity
 cd benchmarks/operator_benchmark/pt_extension
-python setup.py install
+python -m pip install .
 cd "${TEST_DIR}"/benchmarks/operator_benchmark
 $TASKSET python -m benchmark_all_test --device "$1" --tag-filter "$2" \
```


````diff
@@ -61,8 +61,8 @@ You are now all set to start developing with PyTorch in a DevContainer environme
 ## Step 8: Build PyTorch
 To build pytorch from source, simply run:
-```
-python setup.py develop
+```bash
+python -m pip install --no-build-isolation -v -e .
 ```
 The process involves compiling thousands of files, and would take a long time. Fortunately, the compiled objects can be useful for your next build. When you modify some files, you only need to compile the changed files the next time.
````


```diff
@@ -6,7 +6,7 @@ set -euxo pipefail
 cd llm-target-determinator
 pip install -q -r requirements.txt
 cd ../codellama
-pip install -e .
+pip install --no-build-isolation -v -e .
 pip install numpy==1.26.0
 # Run indexer
```


````diff
@@ -88,20 +88,19 @@ source venv/bin/activate # or `& .\venv\Scripts\Activate.ps1` on Windows
 * If you want to have no-op incremental rebuilds (which are fast), see [Make no-op build fast](#make-no-op-build-fast) below.
-* When installing with `python setup.py develop` (in contrast to `python setup.py install`) Python runtime will use
+* When installing with `python -m pip install -e .` (in contrast to `python -m pip install .`) Python runtime will use
   the current local source-tree when importing `torch` package. (This is done by creating [`.egg-link`](https://wiki.python.org/moin/PythonPackagingTerminology#egg-link) file in `site-packages` folder)
   This way you do not need to repeatedly install after modifying Python files (`.py`).
-  However, you would need to reinstall if you modify Python interface (`.pyi`, `.pyi.in`) or
-  non-Python files (`.cpp`, `.cc`, `.cu`, `.h`, ...).
+  However, you would need to reinstall if you modify Python interface (`.pyi`, `.pyi.in`) or non-Python files (`.cpp`, `.cc`, `.cu`, `.h`, ...).
-  One way to avoid running `python setup.py develop` every time one makes a change to C++/CUDA/ObjectiveC files on Linux/Mac,
+  One way to avoid running `python -m pip install -e .` every time one makes a change to C++/CUDA/ObjectiveC files on Linux/Mac,
   is to create a symbolic link from `build` folder to `torch/lib`, for example, by issuing following:
   ```bash
   pushd torch/lib; sh -c "ln -sf ../../build/lib/libtorch_cpu.* ."; popd
   ```
   Afterwards rebuilding a library (for example to rebuild `libtorch_cpu.so` issue `ninja torch_cpu` from `build` folder),
   would be sufficient to make change visible in `torch` package.
   To reinstall, first uninstall all existing PyTorch installs. You may need to run `pip
@@ -115,9 +114,9 @@ source venv/bin/activate # or `& .\venv\Scripts\Activate.ps1` on Windows
   pip uninstall torch
   ```
-  Next run `python setup.py clean`. After that, you can install in `develop` mode again.
+  Next run `python setup.py clean`. After that, you can install in editable mode again.
-* If you run into errors when running `python setup.py develop`, here are some debugging steps:
+* If you run into errors when running `python -m pip install -e .`, here are some debugging steps:
   1. Run `printf '#include <stdio.h>\nint main() { printf("Hello World");}'|clang -x c -; ./a.out` to make sure
      your CMake works and can compile this simple Hello World program without errors.
   2. Nuke your `build` directory. The `setup.py` script compiles binaries into the `build` folder and caches many
@@ -130,13 +129,20 @@ source venv/bin/activate # or `& .\venv\Scripts\Activate.ps1` on Windows
      git clean -xdf
      python setup.py clean
      git submodule update --init --recursive
-     python setup.py develop
+     python -m pip install -r requirements.txt
+     python -m pip install --no-build-isolation -v -e .
      ```
-  4. The main step within `python setup.py develop` is running `make` from the `build` directory. If you want to
+  4. The main step within `python -m pip install -e .` is running `cmake --build build` from the `build` directory. If you want to
      experiment with some environment variables, you can pass them into the command:
      ```bash
-     ENV_KEY1=ENV_VAL1[, ENV_KEY2=ENV_VAL2]* python setup.py develop
+     ENV_KEY1=ENV_VAL1[, ENV_KEY2=ENV_VAL2]* CMAKE_FRESH=1 python -m pip install --no-build-isolation -v -e .
      ```
+  5. Try installing PyTorch without build isolation by adding `--no-build-isolation` to the `pip install` command.
+     This will use the current environment's packages instead of creating a new isolated environment for the build.
+     ```bash
+     python -m pip install --no-build-isolation -v -e .
+     ```
 * If you run into issue running `git submodule update --init --recursive`. Please try the following:
   - If you encounter an error such as
@@ -639,9 +645,9 @@ can be selected interactively with your mouse to zoom in on a particular part of
 the program execution timeline. The `--native` command-line option tells
 `py-spy` to record stack frame entries for PyTorch C++ code. To get line numbers
 for C++ code it may be necessary to compile PyTorch in debug mode by prepending
-your `setup.py develop` call to compile PyTorch with `DEBUG=1`. Depending on
-your operating system it may also be necessary to run `py-spy` with root
-privileges.
+your `python -m pip install -e .` call to compile PyTorch with `DEBUG=1`.
+Depending on your operating system it may also be necessary to run `py-spy` with
+root privileges.
 `py-spy` can also work in an `htop`-like "live profiling" mode and can be
 tweaked to adjust the stack sampling rate, see the `py-spy` readme for more
@@ -649,7 +655,7 @@ details.
 ## Managing multiple build trees
-One downside to using `python setup.py develop` is that your development
+One downside to using `python -m pip install -e .` is that your development
 version of PyTorch will be installed globally on your account (e.g., if
 you run `import torch` anywhere else, the development version will be
 used).
@@ -663,7 +669,7 @@ specific build of PyTorch. To set one up:
 python -m venv pytorch-myfeature
 source pytorch-myfeature/bin/activate # or `& .\pytorch-myfeature\Scripts\Activate.ps1` on Windows
 # if you run python now, torch will NOT be installed
-python setup.py develop
+python -m pip install --no-build-isolation -v -e .
 ```
 ## C++ development tips
@@ -701,7 +707,9 @@ variables `DEBUG`, `USE_DISTRIBUTED`, `USE_MKLDNN`, `USE_CUDA`, `USE_FLASH_ATTEN
 For example:
 ```bash
-DEBUG=1 USE_DISTRIBUTED=0 USE_MKLDNN=0 USE_CUDA=0 BUILD_TEST=0 USE_FBGEMM=0 USE_NNPACK=0 USE_QNNPACK=0 USE_XNNPACK=0 python setup.py develop
+DEBUG=1 USE_DISTRIBUTED=0 USE_MKLDNN=0 USE_CUDA=0 BUILD_TEST=0 \
+USE_FBGEMM=0 USE_NNPACK=0 USE_QNNPACK=0 USE_XNNPACK=0 \
+python -m pip install --no-build-isolation -v -e .
 ```
 For subsequent builds (i.e., when `build/CMakeCache.txt` exists), the build
@@ -711,7 +719,7 @@ options.
 ### Code completion and IDE support
-When using `python setup.py develop`, PyTorch will generate
+When using `python -m pip install -e .`, PyTorch will generate
 a `compile_commands.json` file that can be used by many editors
 to provide command completion and error highlighting for PyTorch's
 C++ code. You need to `pip install ninja` to generate accurate
@@ -772,7 +780,7 @@ If not, you can define these variables on the command line before invoking `setu
 export CMAKE_C_COMPILER_LAUNCHER=ccache
 export CMAKE_CXX_COMPILER_LAUNCHER=ccache
 export CMAKE_CUDA_COMPILER_LAUNCHER=ccache
-python setup.py develop
+python -m pip install --no-build-isolation -v -e .
 ```
 #### Use a faster linker
@@ -785,7 +793,7 @@ If you are editing a single file and rebuilding in a tight loop, the time spent
 Starting with CMake 3.29, you can specify the linker type using the [`CMAKE_LINKER_TYPE`](https://cmake.org/cmake/help/latest/variable/CMAKE_LINKER_TYPE.html) variable. For example, with `mold` installed:
 ```sh
-CMAKE_LINKER_TYPE=MOLD python setup.py develop
+CMAKE_LINKER_TYPE=MOLD python -m pip install --no-build-isolation -v -e .
 ```
 #### Use pre-compiled headers
@@ -797,7 +805,7 @@ setting `USE_PRECOMPILED_HEADERS=1` either on first setup, or in the
 `CMakeCache.txt` file.
 ```sh
-USE_PRECOMPILED_HEADERS=1 python setup.py develop
+USE_PRECOMPILED_HEADERS=1 python -m pip install --no-build-isolation -v -e .
 ```
 This adds a build step where the compiler takes `<ATen/ATen.h>` and essentially
@@ -820,7 +828,7 @@ A compiler-wrapper to fix this is provided in `tools/nvcc_fix_deps.py`. You can
 this as a compiler launcher, similar to `ccache`
 ```bash
 export CMAKE_CUDA_COMPILER_LAUNCHER="python;`pwd`/tools/nvcc_fix_deps.py;ccache"
-python setup.py develop
+python -m pip install --no-build-isolation -v -e .
 ```
 ### Rebuild few files with debug information
@@ -1171,7 +1179,7 @@ build_with_asan()
 CFLAGS="-fsanitize=address -fno-sanitize-recover=all -shared-libasan -pthread" \
 CXX_FLAGS="-pthread" \
 USE_CUDA=0 USE_OPENMP=0 USE_DISTRIBUTED=0 DEBUG=1 \
-python setup.py develop
+python -m pip install --no-build-isolation -v -e .
 }
 run_with_asan()
````


```diff
@@ -57,7 +57,7 @@ RUN --mount=type=cache,target=/opt/ccache \
 export eval ${CMAKE_VARS} && \
 TORCH_CUDA_ARCH_LIST="7.0 7.2 7.5 8.0 8.6 8.7 8.9 9.0 9.0a" TORCH_NVCC_FLAGS="-Xfatbin -compress-all" \
 CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" \
-python setup.py install
+python -m pip install --no-build-isolation -v .
 FROM conda as conda-installs
 ARG PYTHON_VERSION=3.11
```


````diff
@@ -228,6 +228,7 @@ If you want to disable Intel GPU support, export the environment variable `USE_X
 Other potentially useful environment variables may be found in `setup.py`.
 #### Get the PyTorch Source
 ```bash
 git clone https://github.com/pytorch/pytorch
 cd pytorch
@@ -279,24 +280,29 @@ conda install -c conda-forge libuv=1.39
 ```
 #### Install PyTorch
 **On Linux**
 If you're compiling for AMD ROCm then first run this command:
 ```bash
 # Only run this if you're compiling for ROCm
 python tools/amd_build/build_amd.py
 ```
 Install PyTorch
 ```bash
 export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}"
-python setup.py develop
+python -m pip install -r requirements.txt
+python -m pip install --no-build-isolation -v -e .
 ```
 **On macOS**
 ```bash
-python3 setup.py develop
+python -m pip install -r requirements.txt
+python -m pip install --no-build-isolation -v -e .
 ```
 **On Windows**
@@ -308,7 +314,7 @@ If you want to build legacy python code, please refer to [Building on legacy cod
 In this mode PyTorch computations will run on your CPU, not your GPU.
 ```cmd
-python setup.py develop
+python -m pip install --no-build-isolation -v -e .
 ```
 Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the building environment by tweaking `CMAKE_INCLUDE_PATH` and `LIB`. The instruction [here](https://github.com/pytorch/pytorch/blob/main/docs/source/notes/windows.rst#building-from-source) is an example for setting up both MKL and Intel OpenMP. Without these configurations for CMake, Microsoft Visual C OpenMP runtime (vcomp) will be used.
@@ -329,7 +335,6 @@ Additional libraries such as
 You can refer to the [build_pytorch.bat](https://github.com/pytorch/pytorch/blob/main/.ci/pytorch/win-test-helpers/build_pytorch.bat) script for some other environment variables configurations
 ```cmd
 cmd
@@ -349,8 +354,7 @@ for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\
 :: [Optional] If you want to override the CUDA host compiler
 set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe
-python setup.py develop
+python -m pip install --no-build-isolation -v -e .
 ```
 **Intel GPU builds**
@@ -372,7 +376,7 @@ if defined CMAKE_PREFIX_PATH (
 set "CMAKE_PREFIX_PATH=%CONDA_PREFIX%\Library"
 )
-python setup.py develop
+python -m pip install --no-build-isolation -v -e .
 ```
 ##### Adjust Build Options (Optional)
@@ -382,6 +386,7 @@ the following. For example, adjusting the pre-detected directories for CuDNN or
 with such a step.
 On Linux
 ```bash
 export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}"
 CMAKE_ONLY=1 python setup.py build
@@ -389,6 +394,7 @@ ccmake build # or cmake-gui build
 ```
 On macOS
 ```bash
 export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}"
 MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ CMAKE_ONLY=1 python setup.py build
````


````diff
@@ -8,12 +8,12 @@ It also provides mechanisms to compare PyTorch with other frameworks.
 Make sure you're on a machine with CUDA, torchvision, and pytorch installed. Install in the following order:
 ```
 # Install torchvision. It comes with the pytorch stable release binary
-pip3 install torch torchvision
+python -m pip install torch torchvision
 # Install the latest pytorch master from source.
 # It should supersede the installation from the release binary.
 cd $PYTORCH_HOME
-python setup.py build develop
+python -m pip install --no-build-isolation -v -e .
 # Check the pytorch installation version
 python -c "import torch; print(torch.__version__)"
````


```diff
@@ -17,8 +17,8 @@ export DEBUG=0
 export OMP_NUM_THREADS=10
 # Compile pytorch with the base revision
-git checkout master
-python setup.py develop
+git checkout main
+python -m pip install --no-build-isolation -v -e .
 # Install dependencies:
 # Scipy is required by detr
@@ -32,7 +32,7 @@ python functional_autograd_benchmark.py --output before.txt
 # Compile pytorch with your change
 popd
 git checkout your_feature_branch
-python setup.py develop
+python -m pip install --no-build-isolation -v -e .
 # Run the benchmark for the new version
 pushd benchmarks/functional_autograd_benchmark
```


````diff
@@ -20,7 +20,7 @@ Key Features:
 The instruction below installs a cpp\_extension for PyTorch and it is required to run the benchmark suite.
 ```bash
 cd pt_extension
-python setup.py install
+python -m pip install .
 ```
 ## How to run the benchmarks:
````


```diff
@@ -11,7 +11,7 @@ export USE_MKL=1
 CMAKE_ONLY=1 python setup.py build
 ccmake build # or cmake-gui build
-python setup.py install
+python -m pip install --no-build-isolation -v .
 cd benchmarks
 echo "!! SPARSE SPMM TIME BENCHMARK!! " >> $OUTFILE
@@ -28,7 +28,7 @@ echo "----- USE_MKL=0 ------" >> $OUTFILE
 rm -rf build
 export USE_MKL=0
-python setup.py install
+python -m pip install --no-build-isolation -v .
 cd benchmarks
 for dim0 in 1000 5000 10000; do
```


```diff
@@ -15,4 +15,4 @@ pip install --no-use-pep517 -e "$tp2_dir/onnx"
 # Install caffe2 and pytorch
 pip install -r "$top_dir/caffe2/requirements.txt"
 pip install -r "$top_dir/requirements.txt"
-python setup.py develop
+python -m pip install --no-build-isolation -v -e .
```


```diff
@@ -35,4 +35,4 @@ _pip_install -b "$BUILD_DIR/onnx" "file://$tp2_dir/onnx#egg=onnx"
 # Install caffe2 and pytorch
 pip install -r "$top_dir/caffe2/requirements.txt"
 pip install -r "$top_dir/requirements.txt"
-python setup.py install
+python -m pip install --no-build-isolation -v .
```


```diff
@@ -263,6 +263,7 @@ import json
 import shutil
 import subprocess
 import sysconfig
+import textwrap
 import time
 from collections import defaultdict
 from pathlib import Path
@@ -601,7 +602,7 @@ def build_deps() -> None:
         report(
             'Finished running cmake. Run "ccmake build" or '
             '"cmake-gui build" to adjust build options and '
-            '"python setup.py install" to build.'
+            '"python -m pip install --no-build-isolation -v ." to build.'
         )
         sys.exit()
@@ -1207,24 +1208,25 @@ def configure_extension_build() -> tuple[
 # post run, warnings, printed at the end to make them more visible
 build_update_message = """
 It is no longer necessary to use the 'build' or 'rebuild' targets
 To install:
-  $ python setup.py install
+  $ python -m pip install --no-build-isolation -v .
 To develop locally:
-  $ python setup.py develop
+  $ python -m pip install --no-build-isolation -v -e .
 To force cmake to re-generate native build files (off by default):
-  $ CMAKE_FRESH=1 python setup.py develop
-"""
+  $ CMAKE_FRESH=1 python -m pip install --no-build-isolation -v -e .
+""".strip()
 def print_box(msg: str) -> None:
-    lines = msg.split("\n")
-    size = max(len(l) + 1 for l in lines)
-    print("-" * (size + 2))
-    for l in lines:
-        print("|{}{}|".format(l, " " * (size - len(l))))
-    print("-" * (size + 2))
+    msg = textwrap.dedent(msg).strip()
+    lines = ["", *msg.split("\n"), ""]
+    max_width = max(len(l) for l in lines)
+    print("+" + "-" * (max_width + 4) + "+", file=sys.stderr, flush=True)
+    for line in lines:
+        print(f"| {line:<{max_width}s} |", file=sys.stderr, flush=True)
+    print("+" + "-" * (max_width + 4) + "+", file=sys.stderr, flush=True)
 def main() -> None:
```
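The rewritten `print_box` helper in the hunk above writes the box to stderr inside `setup.py`. As an illustration of the same box-drawing idea, here is a hypothetical standalone sketch (`format_box` is not part of the PR; it returns the string instead of printing, with the border sized to match the body rows):

```python
import textwrap


def format_box(msg: str) -> str:
    """Render msg inside an ASCII box.

    Hypothetical standalone variant of the print_box helper touched in
    this diff: it returns the string rather than printing to stderr, and
    the border width is matched to the body-row width.
    """
    msg = textwrap.dedent(msg).strip()
    lines = ["", *msg.split("\n"), ""]  # blank padding row above and below
    width = max(len(line) for line in lines)
    border = "+" + "-" * (width + 2) + "+"             # len == width + 4
    body = [f"| {line:<{width}} |" for line in lines]  # len == width + 4
    return "\n".join([border, *body, border])


print(format_box("To develop locally:\n  $ python -m pip install -e ."))
```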


````diff
@@ -36,7 +36,7 @@ The following commands assume you are in PyTorch root.
 ```bash
 # ... Build PyTorch from source, e.g.
-python setup.py develop
+python -m pip install --no-build-isolation -v -e .
 # (re)build just the binary
 ninja -C build bin/test_jit
 # run tests
````


```diff
@@ -4,8 +4,8 @@ This folder contains a self-contained example of a PyTorch out-of-tree backend l
 ## How to use
-Install as standalone with `python setup.py develop` (or install) from this folder.
-You can run test via `python {PYTORCH_ROOT_PATH}/test/test_openreg.py`.
+Install as standalone with `python -m pip install -e .` (or `python -m pip install .`)
+from this folder. You can run test via `python {PYTORCH_ROOT_PATH}/test/test_openreg.py`.
 ## Design principles
```


```diff
@@ -667,6 +667,7 @@ def install_cpp_extensions(cpp_extensions_test_dir, env=os.environ):
     shutil.rmtree(cpp_extensions_test_build_dir)
     # Build the test cpp extensions modules
+    # FIXME: change setup.py command to pip command
     cmd = [sys.executable, "setup.py", "install", "--root", "./install"]
     return_code = shell(cmd, cwd=cpp_extensions_test_dir, env=env)
     if return_code != 0:
```


```diff
@@ -148,7 +148,7 @@ class TestCppExtensionAOT(common.TestCase):
     @unittest.skipIf(IS_WINDOWS, "Not available on Windows")
     def test_no_python_abi_suffix_sets_the_correct_library_name(self):
-        # For this test, run_test.py will call `python setup.py install` in the
+        # For this test, run_test.py will call `python -m pip install .` in the
         # cpp_extensions/no_python_abi_suffix_test folder, where the
         # `BuildExtension` class has a `no_python_abi_suffix` option set to
         # `True`. This *should* mean that on Python 3, the produced shared
```


```diff
@@ -95,7 +95,8 @@ def main() -> None:
         sys.exit(-95)
     if not is_devel_setup():
         print(
-            "Not a devel setup of PyTorch, please run `python3 setup.py develop --user` first"
+            "Not a devel setup of PyTorch, "
+            "please run `python -m pip install --no-build-isolation -v -e .` first"
        )
         sys.exit(-1)
     if not has_build_ninja():
```
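The hunk above calls an `is_devel_setup()` helper whose body is not shown. One possible way to implement such a check (an assumption for illustration, not necessarily what the PyTorch tool does) is to read the PEP 660 `direct_url.json` metadata that pip records for `pip install -e .`:

```python
import json
import importlib.metadata


def is_editable_install(dist_name: str) -> bool:
    """Best-effort check for a PEP 660 editable ("develop-mode") install.

    Hypothetical example: pip writes {"dir_info": {"editable": true}}
    into direct_url.json in the dist-info directory for `pip install -e .`.
    """
    try:
        dist = importlib.metadata.distribution(dist_name)
    except importlib.metadata.PackageNotFoundError:
        return False  # not installed at all
    text = dist.read_text("direct_url.json")
    if text is None:  # regular (non-URL, non-editable) install
        return False
    return bool(json.loads(text).get("dir_info", {}).get("editable", False))
```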


```diff
@@ -1019,10 +1019,10 @@ except ImportError:
     of the PyTorch repository rather than the C extensions which
     are expected in the `torch._C` namespace. This can occur when
     using the `install` workflow. e.g.
-        $ python setup.py install && python -c "import torch"
+        $ python -m pip install --no-build-isolation -v . && python -c "import torch"
     This error can generally be solved using the `develop` workflow
-        $ python setup.py develop && python -c "import torch"  # This should succeed
+        $ python -m pip install --no-build-isolation -v -e . && python -c "import torch"  # This should succeed
     or by running Python from a different directory.
     """
     ).strip()
```


```diff
@@ -60,7 +60,7 @@ for a particular GPU can be computed by simply running this script in
 the pytorch development tree::
     cd /path/to/pytorch
-    python setup.py develop
+    python -m pip install --no-build-isolation -v -e .
     python torch/sparse/_triton_ops_meta.py
 This will compute the optimal kernel parameters for the GPU device
```