16caa8c1b3
[BE]: Update TypeGuard to TypeIs for better type inference ( #133814 )
...
Uses TypeIs instead of TypeGuard for better inference. See https://peps.python.org/pep-0742/
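For reference, a minimal sketch (not code from this PR) of how `TypeIs` narrows both branches, where `TypeGuard` only narrows the positive one:
```python
from typing_extensions import TypeIs  # typing.TypeIs requires Python >= 3.13

def is_str(x: object) -> TypeIs[str]:
    return isinstance(x, str)

def demo(x: "int | str") -> None:
    if is_str(x):
        print(x.upper())  # x is narrowed to str
    else:
        print(x + 1)      # unlike TypeGuard, TypeIs also narrows the else branch to int
```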
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133814
Approved by: https://github.com/ezyang
2024-10-21 17:20:06 +00:00
d1027c2be6
Revert "Update sympy version constraint to 1.13.3 ( #138338 )"
...
This reverts commit d8279ad9d162b5ce71699f462d3664c3745b14f5.
Reverted https://github.com/pytorch/pytorch/pull/138338 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but I think a bunch of inductor tests and test_dynamic_shapes are failing in trunk after this lands d8279ad9d1
([comment](https://github.com/pytorch/pytorch/pull/138338#issuecomment-2424487225 ))
2024-10-20 03:19:02 +00:00
d8279ad9d1
Update sympy version constraint to 1.13.3 ( #138338 )
...
`sympy` was pinned to version 1.13.1 due to test failures with version 1.13.2 on Windows and macOS, as reported in https://github.com/pytorch/pytorch/pull/133235 . Now that a newer version, 1.13.3, has been released, this PR aims to verify whether the test failures have been resolved and also to allow building with newer versions for packaging purposes (e.g., https://github.com/conda-forge/pytorch-cpu-feedstock/pull/277#discussion_r1806721862 ).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138338
Approved by: https://github.com/Skylion007 , https://github.com/malfet
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com >
2024-10-20 00:20:02 +00:00
3cfd244495
Add USE_SYSTEM_NVTX option ( #138287 )
...
## Summary
We are currently [updating](https://github.com/conda-forge/pytorch-cpu-feedstock/pull/277 ) the [`conda-forge::pytorch`](https://anaconda.org/conda-forge/pytorch ) package to version 2.5.0. This update includes a new dependency, the third_party/NVTX submodule. However, like other package management frameworks (e.g., apt), conda-forge prefers using system-installed packages instead of vendor-provided third-party packages.
This pull request aims to add an option, `USE_SYSTEM_NVTX`, to select whether to use the vendored nvtx or the system-installed one, with the default being the vendored one (which is the current behavior).
## Test Plan
The `USE_SYSTEM_NVTX` option is tested by building the `conda-forge::pytorch` package with the change applied as a [patch](cd1d2464dd/recipe/patches/0005-Use-system-nvtx3.patch).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138287
Approved by: https://github.com/albanD
2024-10-19 04:26:01 +00:00
8cda774a03
Add torch.xpu.get_arch_list and torch.xpu.get_gencode_flags for XPU ( #137773 )
...
# Motivation
Add `torch.xpu.get_arch_list()` and `torch.xpu.get_gencode_flags()` methods that return the architecture list and AOT gencode flags, preserving which flags PyTorch XPU was built with.
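A rough usage sketch based only on the API names above (the output format is an assumption and depends on how the XPU build was configured):
```python
import torch

if torch.xpu.is_available():
    print(torch.xpu.get_arch_list())      # architectures the wheel was AOT-compiled for
    print(torch.xpu.get_gencode_flags())  # the AOT/gencode flags used at build time
```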
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137773
Approved by: https://github.com/EikanWang , https://github.com/albanD
2024-10-18 02:28:08 +00:00
6016b8a9be
Remove CI/CD python 3.8 requirements ( #137893 )
...
Python 3.8 has been removed from CI/CD, so there is no reason to keep these pins.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137893
Approved by: https://github.com/Skylion007 , https://github.com/huydhn , https://github.com/albanD , https://github.com/kit1980
2024-10-14 20:28:48 +00:00
59cdd8ddf1
Bump optree version to 0.13.0 to enable Python 3.13 and Python 3.13t support ( #137396 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137396
Approved by: https://github.com/albanD
2024-10-08 06:49:04 +00:00
5a6ddbcc3b
Extending the Pytorch vec backend for SVE (ARM) ( #119571 )
...
**Motivation:**
In PyTorch, ATen vectorization supports multiple platforms, including x86 and Arm, as well as multiple data types. It provides a generic implementation of a Vector (Vec) type that allows the programmer to write code packing various primitives (such as floats) into 256-bit and 512-bit registers. It can easily be extended to support other ISAs by adding more VecISA sub-classes.
**Reference Link:** https://github.com/pytorch/pytorch/tree/main/aten/src/ATen/cpu/vec
**This PR:**
* Our goal with this contribution is to add an SVE backend for Vec in the ATen CPU vectorization, which can benefit any ARM-architecture CPU that supports SVE.
* More about SVE ISA for ARM: [https://developer.arm.com/Architectures/Scalable Vector Extensions](https://developer.arm.com/Architectures/Scalable%20Vector%20Extensions )
* We are using the ARM C Language Extensions for SVE (https://developer.arm.com/documentation/102699/0100/Optimizing-with-intrinsics ) to accelerate performance for various operators in the SVE backend for Vec.
* Currently we are adding support only for the SVE ISA with a vector length of 256 bits (SVE 256). In the future, we plan to extend SVE support to other vector lengths as well.
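As a quick way to see which vectorization ISA an ATen build dispatches to at runtime, the existing capability query can be used (a hedged sketch; the exact string reported on SVE-enabled builds, e.g. "SVE256", is an assumption):
```python
import torch

# Reports the CPU ISA the current ATen build dispatches to,
# e.g. "AVX2", "AVX512", or (on SVE-enabled Arm builds) an SVE variant.
print(torch.backends.cpu.get_cpu_capability())
```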
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119571
Approved by: https://github.com/malfet , https://github.com/snadampal
Co-authored-by: Divya Kotadiya <divya.kotadiya@fujitsu.com >
2024-09-18 18:59:10 +00:00
cd9ee49a69
[aoti] Add cpp loader ( #135374 )
...
* Added a cpp loader, AOTIModelPackageLoader, which can load the .pt2, build the .so, and create a runner. The python-facing API is that users can directly call the `run` function, whereas in cpp users can directly access the `runner_` if they are more familiar with that. I couldn't figure out how to bind the `get_runner()` function to python...
* Added a new config, `aot_inductor.package_cpp_only`, which will **not** package the .so. This means that whenever the package is loaded, we will need to build the .so. This is turned off by default so that new environments do not need to rebuild their .so. `package_cpp_only` is a feature which torchchat intends to use to provide flexibility to users.
* Added a new config, `aot_inductor.metadata`, which stores user-provided metadata, serialized into the .pt2 as a JSON file. It also stores the device used when exporting ("cuda" or "cpu") so that at load time we can use that data to determine which AOTIModelContainerRunner to use. The metadata can be accessed through `loader.get_metadata()`. A TODO is to move this metadata to the top-level `package_aoti` function so that we can remove the metadata config.
* Separated out `package_aoti` as a standalone function, instead of it automatically being called in inductor. This prepares for the case where users compile multiple models and want to bundle them in one package. The specific use case is in torchchat, where we want to package the separately-exported encoder and decoder layers. An example of how to use this is in `test_multiple_methods`.
* `load_package` will load a singular model, given the model name.
* The loader doesn't support Windows for now; I think I need to add some more casing to make the build commands work on Windows.
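A rough Python-side sketch of the intended flow, using only the names mentioned above (`package_aoti`, `load_package`, `run`, `get_metadata`); the import path, argument names, and placeholder inputs are assumptions, so consult the PR for the real API:
```python
# Illustrative only -- module path and exact signatures are assumptions.
from torch._inductor.package import package_aoti, load_package

# aoti_files: output of an AOT Inductor compile step (placeholder name).
pt2_path = package_aoti("model.pt2", aoti_files)

# Load a single model from the package by name; the returned loader is assumed
# to expose the run()/get_metadata() methods described above.
loader = load_package(pt2_path, "model")
print(loader.get_metadata())          # user metadata plus the export device ("cuda"/"cpu")
outputs = loader.run(example_inputs)  # example_inputs is a placeholder
```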
Differential Revision: [D62329906](https://our.internmc.facebook.com/intern/diff/D62329906 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135374
Approved by: https://github.com/desertfire , https://github.com/malfet
2024-09-11 03:00:01 +00:00
e000cf0ad9
Fix license metadata in setup.py ( #129219 )
...
Package metadata in setup.py lists the license as BSD-3, which is not a valid SPDX identifier. The correct identifier is BSD-3-Clause.
Specifying an SPDX id is beneficial to license compliance scanning.
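A minimal illustration (not the actual PyTorch setup.py) of carrying a valid SPDX identifier in the packaging metadata:
```python
from setuptools import setup

setup(
    name="example-package",  # placeholder name
    version="0.0.1",
    license="BSD-3-Clause",  # valid SPDX identifier; plain "BSD-3" is not
)
```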
*Taking up #129123 from my personal account.*
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129219
Approved by: https://github.com/malfet , https://github.com/kit1980
2024-09-04 00:21:22 +00:00
bdfa94b787
[RFC] Make fr trace script a console scripts ( #134729 )
...
We want to make the fr analyzer script available as a command after users `pip install torch`, which is why we mimic what torchrun does.
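For context, a hedged sketch of the setuptools mechanism involved; the command and module names below are made up for illustration and are not the ones added by this PR:
```python
from setuptools import setup

setup(
    name="example-package",
    entry_points={
        "console_scripts": [
            # "<command> = <module>:<function>": after `pip install`, the command
            # lands on PATH -- the same mechanism that exposes torchrun.
            "example-trace-analyzer = example_package.trace_analyzer:main",
        ],
    },
)
```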
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134729
Approved by: https://github.com/c-p-i-o , https://github.com/malfet
ghstack dependencies: #134528 , #134780
2024-08-30 18:17:06 +00:00
90c821814e
SparseCsrCUDA: cuDSS backend for linalg.solve ( #129856 )
...
This PR switches to the cuDSS library and has the same purpose as #127692, which is to add sparse CSR tensor support to linalg.solve.
Fixes #69538
Minimal usage example:
```python
import torch

if __name__ == '__main__':
    spd = torch.rand(4, 3)
    A = spd.T @ spd                                 # symmetric positive-definite 3x3 matrix
    b = torch.rand(3).to(torch.float64).cuda()
    A = A.to_sparse_csr().to(torch.float64).cuda()  # sparse CSR operand solved via cuDSS
    x = torch.linalg.solve(A, b)
    print((A @ x - b).norm())                       # residual should be close to zero
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129856
Approved by: https://github.com/amjames , https://github.com/lezcano , https://github.com/huydhn
Co-authored-by: Zihang Fang <zhfang1108@gmail.com >
Co-authored-by: Huy Do <huydhn@gmail.com >
2024-08-22 07:57:30 +00:00
2db28a9611
Revert "[BE]: Update Typeguard to TypeIs for better type inference ( #133814 )"
...
This reverts commit bce0caba7804b0787684dbf1f4e1c4d9e3acded5.
Reverted https://github.com/pytorch/pytorch/pull/133814 on behalf of https://github.com/ezyang due to root cause of internal failures not addressed ([comment](https://github.com/pytorch/pytorch/pull/133814#issuecomment-2302466444 ))
2024-08-21 16:13:34 +00:00
bce0caba78
[BE]: Update TypeGuard to TypeIs for better type inference ( #133814 )
...
Uses TypeIs instead of TypeGuard for better inference. See https://peps.python.org/pep-0742/
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133814
Approved by: https://github.com/ezyang
2024-08-20 17:19:57 +00:00
c3d02fa390
[Reland2] Update NVTX to NVTX3 ( #109843 )
...
Another attempt to update NVTX to NVTX3. We now avoid changing the NVTX header inclusion of existing code. The advantage of NVTX3 over NVTX is that it is a header-only library, so linking with NVTX3 can greatly simplify our CMake and other build scripts for finding libraries in user environments. In addition, NVTX is indeed still present in the latest CUDA versions, but it is no longer a compiled library: it is now header-only, which is why there isn't a .lib file anymore.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109843
Approved by: https://github.com/peterbell10 , https://github.com/eqy
Co-authored-by: Ivan Zaitsev <108101595+izaitsevfb@users.noreply.github.com >
2024-08-20 16:33:26 +00:00
d6368985af
[BE]: Fix setuptools not installed with Python 3.12 ( #133561 )
...
setuptools is not installed correctly for Python 3.12.
See https://github.com/python-poetry/poetry/issues/9630#issuecomment-2291114885
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133561
Approved by: https://github.com/Skylion007
Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com >
2024-08-17 17:42:04 +00:00
018e48c337
[Reland] Add wrappers for synchronous GPUDirect Storage APIs ( #133489 )
...
Reland #130633
USE_CUFILE is turned off by default in this version.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133489
Approved by: https://github.com/albanD
2024-08-15 17:11:52 +00:00
e76f0e0646
Remove QNNPACK reference from setup.py ( #133177 )
...
QNNPACK has been removed from third_party.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133177
Approved by: https://github.com/albanD
2024-08-13 01:19:12 +00:00
4f0d5f6551
Pin sympy to 1.13.1 ( #133235 )
...
Sympy 1.13.2 was released yesterday, and it results in test failures on Windows and macOS (see 454713fe9d/1).
Hopefully these are all the places it needs to be pinned.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133235
Approved by: https://github.com/atalman , https://github.com/ZainRizvi
2024-08-12 20:10:09 +00:00
05e8e87a69
[Submodule] Remove foxi ( #132976 )
...
It is not used after removal of Caffe2 code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132976
Approved by: https://github.com/ezyang
2024-08-09 03:46:52 +00:00
e191b83462
Revert "Add wrappers for synchronous GPUDirect Storage APIs ( #130633 )"
...
This reverts commit 709ddf7a9dcfa1268848b72f6f56b55afa6728d6.
Reverted https://github.com/pytorch/pytorch/pull/130633 on behalf of https://github.com/clee2000 due to still failing internally D60265673 ([comment](https://github.com/pytorch/pytorch/pull/130633#issuecomment-2253239607 ))
2024-07-26 18:08:20 +00:00
709ddf7a9d
Add wrappers for synchronous GPUDirect Storage APIs ( #130633 )
...
Based in part on https://github.com/NVIDIA/apex/pull/1774
Differential Revision: [D60155434](https://our.internmc.facebook.com/intern/diff/D60155434 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130633
Approved by: https://github.com/albanD
2024-07-25 22:23:38 +00:00
e4b5645f83
Revert "Add wrappers for synchronous GPUDirect Storage APIs ( #130633 )"
...
This reverts commit 5b5e0698a5f560decb9bbdd150ed7b0622eb7777.
Reverted https://github.com/pytorch/pytorch/pull/130633 on behalf of https://github.com/clee2000 due to breaking a lot of jobs and build rules internally D60085885, possibly needs to update some bazel build? ([comment](https://github.com/pytorch/pytorch/pull/130633#issuecomment-2245806738 ))
2024-07-23 17:19:34 +00:00
5b5e0698a5
Add wrappers for synchronous GPUDirect Storage APIs ( #130633 )
...
Based in part on https://github.com/NVIDIA/apex/pull/1774
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130633
Approved by: https://github.com/albanD
2024-07-22 14:51:24 +00:00
d2bd9acabd
[BE] bump optree version to 0.12.1 ( #130139 )
...
0.12.0 Major Updates:
- Add context manager to temporarily set the dictionary sorting mode
- Add accessor APIs
- Use `stable` tag for `pybind11` for Python 3.13 support
- Fix potential segmentation fault for pickling support
0.12.1 Updates:
- Fix warning regression during import when launched with strict warning filters
Closes #130155
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130139
Approved by: https://github.com/zou3519
ghstack dependencies: #130895
2024-07-20 02:41:10 +00:00
f0075c179b
Pin sympy >= 1.13.0 ( #130895 )
...
------
The opposite of #130836 . Pin `sympy >= 1.13.0` for Python >= 3.9 and `sympy == 1.12.1` for Python 3.8.
- #130836
See the PR description of #130836 for more details.
`sympy` 1.13.0 introduces some breaking changes which break our tests. More specifically:
- Ref [Backwards compatibility breaks and deprecations](https://github.com/sympy/sympy/wiki/release-notes-for-1.13.0#backwards-compatibility-breaks-and-deprecations )
> BREAKING CHANGE: Float and Integer/Rational no longer compare equal with a == b. From now on Float(2.0) != Integer(2). Previously expressions involving Float would compare unequal e.g. x*2.0 != x*2 but an individual Float would compare equal to an Integer. In SymPy 1.7 a Float will always compare unequal to an Integer even if they have the same "value". Use sympy.numbers.int_valued(number) to test if a number is a concrete number with no decimal part. ([#25614 ](https://github.com/sympy/sympy/pull/25614 ) by [@smichr](https://github.com/smichr ))
`sympy >= 1.13.0` is required to enable Python 3.13 support. This should be part of #130689 .
- #130689
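A small check of the quoted behavior change (requires sympy >= 1.13; `int_valued` is referenced here only as described in the release note above):
```python
from sympy import Float, Integer

# In sympy >= 1.13 a Float never compares equal to an Integer, even for the same value.
print(Float(2.0) == Integer(2))  # False on 1.13+, True on earlier releases
print(Float(2.0) == 2.0)         # still True: comparison against a Python float
```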
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130895
Approved by: https://github.com/ezyang
2024-07-20 00:59:24 +00:00
a3abfa5cb5
[BE][Easy][1/19] enforce style for empty lines in import segments ( #129752 )
...
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501 . Most changes are auto-generated by the linter.
You can review these PRs via:
```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129752
Approved by: https://github.com/ezyang , https://github.com/malfet
2024-07-16 00:42:56 +00:00
074a5c0c9b
Revert "[BE] bump optree
version to 0.12.1 ( #130139 )"
...
This reverts commit 8fcb156e8b5697a8f292db6db2a1803c5f4ce2d7.
Reverted https://github.com/pytorch/pytorch/pull/130139 on behalf of https://github.com/clee2000 due to broke inductor/test_torchinductor_codegen_dynamic_shapes.py and test_sympy_utils.py 8fcb156e8b
([comment](https://github.com/pytorch/pytorch/pull/130139#issuecomment-2229248447 ))
2024-07-15 19:42:11 +00:00
8fcb156e8b
[BE] bump optree version to 0.12.1 ( #130139 )
...
0.12.0 Major Updates:
- Add context manager to temporarily set the dictionary sorting mode
- Add accessor APIs
- Use `stable` tag for `pybind11` for Python 3.13 support
- Fix potential segmentation fault for pickling support
0.12.1 Updates:
- Fix warning regression during import when launch with strict warning filters
Closes #130155
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130139
Approved by: https://github.com/zou3519
2024-07-15 17:27:07 +00:00
df50452279
Pin optree==0.11.0 on windows CI ( #130155 )
...
Failing tests: doctests, test_testing
The failing run has optree 0.12.0: https://github.com/pytorch/pytorch/actions/runs/9804335516/job/27072891998
The succeeding run has optree 0.11.0: https://github.com/pytorch/pytorch/actions/runs/9798330845/job/27057359554
optree is already pinned for macOS and Linux.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130155
Approved by: https://github.com/huydhn , https://github.com/atalman
2024-07-05 20:28:58 +00:00
3d56673b24
[Split Build][BE] remove extraneous .py, .a, and .so files ( #130053 )
...
Removes extraneous .a, .so, and .py files from the split build. From here we can also clean up the builder script that produces the binary. That PR is https://github.com/pytorch/builder/pull/1912
Verification:
The wheel built with BUILD_LIBTORCH_WHL=1 contains only the following files with .py, .a, and .so extensions:
```
sahanp@devgpu086 ~/p/dist (viable/strict)> pwd (pytorch-3.10)
/home/sahanp/pytorch/dist
sahanp@devgpu086 ~/p/dist (viable/strict)> find . -type f \( -name "*.py" -o -name "*.a" -o -name "*.so" \) (pytorch-3.10)
./torch/__init__.py
./torch/lib/libbackend_with_compiler.so
./torch/lib/libc10.so
./torch/lib/libjitbackend_test.so
./torch/lib/libtorch.so
./torch/lib/libtorch_cpu.so
./torch/lib/libtorch_global_deps.so
./torch/lib/libtorchbind_test.so
sahanp@devgpu086 ~/p/dist (viable/strict)>
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130053
Approved by: https://github.com/atalman
2024-07-05 19:05:32 +00:00
22a06869f2
include jit/*.pyi ( #129654 )
...
Fixes #108781 , see https://github.com/pytorch/pytorch/pull/108782#issuecomment-1927321532
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129654
Approved by: https://github.com/ezyang
2024-06-28 12:40:11 +00:00
424068d0d2
[Windows] remove mkl shared library dependency. ( #129493 )
...
# Background
I previously fixed the PyTorch Windows missing mkl shared library dependency issue: https://github.com/pytorch/pytorch/issues/124009
The solution was to change the torch_cpu module to statically link the mkl library:
1. PyTorch static mkl link PR: https://github.com/pytorch/pytorch/pull/124925
2. builder installs the mkl static library: https://github.com/pytorch/builder/pull/1790
Double-confirmed that the current build uses the static mkl link: https://github.com/pytorch/pytorch/issues/124009#issuecomment-2160941802
# Goal
Remove the setup.py `install_requires` entry that installs the mkl shared library on PyTorch Windows. It is no longer required now that we link mkl statically.
This reduces PyTorch install network traffic and avoids installing an unneeded mkl shared library package.
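A hedged sanity check (not part of this PR) that MKL support is compiled into the wheel rather than pulled in via a separate mkl pip package:
```python
import torch

print(torch.backends.mkl.is_available())  # True when MKL support is built into torch
print(torch.__config__.show())            # build summary; lists the BLAS/LAPACK backend used
```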
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129493
Approved by: https://github.com/malfet
2024-06-28 11:42:21 +00:00
816e8a3f21
[MacOS] Improve libomp packaging ( #129473 )
...
Instead of replacing `@rpath/libomp.dylib` with `@loader_path/libomp.dylib`, keep it in place and add `@loader_path` as a new rpath.
This should prevent double-loading of the OpenMP runtime, because with `@rpath` the loader is allowed to reuse already-loaded libraries, whereas the `@loader_path` directive forces it to load the library from the location relative to the loading binary.
Test plan:
- Prepare the environment
```shell
conda create -n py310-cf python=3.10 numpy pip -c conda-forge
conda activate py310-cf
pip install torch --index-url https://download.pytorch.org/whl/test/cpu
```
- Verify that OpenMP is loaded twice and then crashes
```shell
KMP_VERSION=true python -c "import numpy as np; import torch; print(torch.__version__, torch.backends.openmp.is_available()); print(torch.rand(300, 300).abs().max())"
```
output:
```
LLVM OMP version: 5.0.20140926
LLVM OMP library type: performance
LLVM OMP link type: dynamic
LLVM OMP build time: no_timestamp
LLVM OMP build compiler: Clang 16.0
LLVM OMP alternative compiler support: yes
LLVM OMP API version: 5.0 (201611)
LLVM OMP dynamic error checking: no
LLVM OMP thread affinity support: no
LLVM OMP version: 5.0.20140926
LLVM OMP library type: performance
LLVM OMP link type: dynamic
LLVM OMP build time: no_timestamp
LLVM OMP build compiler: Clang 12.0
LLVM OMP alternative compiler support: yes
LLVM OMP API version: 5.0 (201611)
LLVM OMP dynamic error checking: no
LLVM OMP thread affinity support: no
2.4.0 True
zsh: segmentation fault KMP_VERSION=true python -c
```
- Install artifact from this PR and make sure it passes the same test
```shell
python -mpip install ~/Downloads/torch-2.5.0.dev20240625-cp310-none-macosx_11_0_arm64.whl
KMP_VERSION=true python -c "import numpy as np; import torch; print(torch.__version__, torch.backends.openmp.is_available()); print(torch.rand(300, 300).abs().max())"
```
output:
```
LLVM OMP version: 5.0.20140926
LLVM OMP library type: performance
LLVM OMP link type: dynamic
LLVM OMP build time: no_timestamp
LLVM OMP build compiler: Clang 16.0
LLVM OMP alternative compiler support: yes
LLVM OMP API version: 5.0 (201611)
LLVM OMP dynamic error checking: no
LLVM OMP thread affinity support: no
2.5.0.dev20240625 True
tensor(1.0000)
```
- Make sure it still uses bundled OpenMP if none is available in the environment
```
conda uninstall numpy -c conda-forge
KMP_VERSION=true python -c "from ctypes import cdll, c_char_p, c_uint32; import torch; from ctypes import cdll, c_char_p, c_uint32; libdyld = cdll.LoadLibrary('libSystem.dylib'); libdyld._dyld_image_count.restype = c_uint32; libdyld._dyld_get_image_name.restype = c_char_p; libdyld._dyld_get_image_name.argtypes = [c_uint32]; print(torch.rand(300, 300).abs().max()); libs = [libdyld._dyld_get_image_name(i).decode('ascii') for i in range(libdyld._dyld_image_count())]; print([l for l in libs if 'libomp.dylib' in l])"
```
Fixes https://github.com/pytorch/pytorch/issues/124497 and https://github.com/pytorch/pytorch/issues/126385
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129473
Approved by: https://github.com/atalman
2024-06-25 19:12:34 +00:00
b0044e2e18
[Split Build] Support nightly release ( #129011 )
...
This PR adds the split build to our binaries workflow. Validation for the workflow is done using the PR above in conjunction with https://github.com/pytorch/builder/pull/1876 .
Test Workflow: Check CI in the workflow above
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129011
Approved by: https://github.com/atalman
2024-06-22 05:45:14 +00:00
479ce5e2f4
Remove outdated CUDA code from CMake ( #128801 )
...
It's possible to simplify some CUDA handling logic in CMake.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128801
Approved by: https://github.com/r-barnes , https://github.com/malfet
2024-06-21 15:00:00 +00:00
7d33ff59ba
[Split Build]Use same package ( #127934 )
...
This PR removes the second separate package we were using for the libtorch wheel.
In terms of testing that this works, we will use the PRs above this one in the stack.
As a sanity check, these are the wheels produced by running
```
python setup.py clean && BUILD_LIBTORCH_WHL=1 with-proxy python setup.py bdist_wheel && BUILD_PYTHON_ONLY=1 with-proxy python setup.py bdist_wheel --cmake
```
```
sahanp@devgpu086 ~/pytorch ((5f15e171…))> ls -al dist/ (pytorch-3.10)
total 677236
drwxr-xr-x 1 sahanp users 188 Jun 4 12:19 ./
drwxr-xr-x 1 sahanp users 1696 Jun 4 12:59 ../
-rw-r--r-- 1 sahanp users 81405742 Jun 4 12:19 torch-2.4.0a0+gitca0a73c-cp310-cp310-linux_x86_64.whl
-rw-r--r-- 1 sahanp users 612076919 Jun 4 12:19 libtorch-2.4.0a0+gitca0a73c-py3-none-any.whl
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127934
Approved by: https://github.com/atalman
2024-06-19 15:57:21 +00:00
8bcebc8dae
Add runtime dependency on setuptools for cpp_extensions ( #127921 )
...
As per the title: setuptools is no longer bundled with Python as of 3.12, and we use it in `torch.utils.cpp_extension.*`.
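A hedged illustration of the failure mode: in a fresh Python 3.12 environment setuptools may be missing entirely, so the C++ extension utilities cannot be used until it is installed (the error message below is an assumption, not output from PyTorch):
```python
try:
    import setuptools  # noqa: F401 -- needed by torch.utils.cpp_extension
except ImportError:
    raise SystemExit("setuptools is missing; run `pip install setuptools` first")

from torch.utils import cpp_extension

# CppExtension is a helper that constructs a setuptools.Extension configured
# for building PyTorch C++ extensions.
print(cpp_extension.CppExtension)
```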
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127921
Approved by: https://github.com/Skylion007
2024-06-05 23:59:38 +00:00
d44daebdbc
[Submodule] Remove deprecated USE_TBB option and TBB submodule ( #127051 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127051
Approved by: https://github.com/cpuhrsch , https://github.com/malfet
2024-05-31 01:20:45 +00:00
67739d8c6f
Revert "[Submodule] Remove deprecated USE_TBB option and TBB submodule ( #127051 )"
...
This reverts commit 699db7988d84d163ebb6919f78885e4630182a7a.
Reverted https://github.com/pytorch/pytorch/pull/127051 on behalf of https://github.com/PaliC due to This PR needs to be synced using the import button as there is a bug in our diff train ([comment](https://github.com/pytorch/pytorch/pull/127051#issuecomment-2138496995 ))
2024-05-30 01:16:57 +00:00
9257a0698b
[Split Build] Load dependencies from libtorch in __init__.py ( #126826 )
...
This PR makes it so that we search for a libtorch wheel when initializing PyTorch in order to find the necessary shared libraries.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126826
Approved by: https://github.com/huydhn , https://github.com/atalman , https://github.com/ZainRizvi
2024-05-29 22:03:50 +00:00
699db7988d
[Submodule] Remove deprecated USE_TBB option and TBB submodule ( #127051 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127051
Approved by: https://github.com/cpuhrsch , https://github.com/malfet
2024-05-29 11:58:03 +00:00
a25b28a753
[Split Build] Add option to create libtorch wheel and use it to build pytorch as a separate wheel ( #126328 )
...
Creates an option to build just the libtorch portion of PyTorch so that we have the necessary .so files, and then builds a torch package using the libtorch wheel. These options are enabled using `BUILD_LIBTORCH_WHL` and `BUILD_PYTHON_ONLY`.
We run
```
BUILD_LIBTORCH_WHL=1 python setup.py install
python setup.py clean
BUILD_PYTHON_ONLY=1 python setup.py install
```
to produce
```
sahanp@devgpu086 ~/pytorch (detached HEAD|REBASE-i 3/5)> ls /home/sahanp/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/torch/lib/ (pytorch-3.10)
libshm.so* libtorch_global_deps.so* libtorch_python.so*
sahanp@devgpu086 ~/pytorch (detached HEAD|REBASE-i 3/5)> ldd build/lib/libtorch_python.so (pytorch-3.10)
linux-vdso.so.1 (0x00007ffdc2d37000)
libtorch.so => /home/sahanp/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/libtorch/lib/libtorch.so (0x00007f539fe99000)
libshm.so => /home/sahanp/pytorch/build/lib/libshm.so (0x00007f539fe90000)
libcudnn.so.8 => /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.8 (0x00007f539e800000)
libnvToolsExt.so.1 => /usr/local/cuda/lib64/libnvToolsExt.so.1 (0x00007f539e400000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f539e000000)
libm.so.6 => /lib64/libm.so.6 (0x00007f539fda5000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f539ebe5000)
libc.so.6 => /lib64/libc.so.6 (0x00007f539dc00000)
/lib64/ld-linux-x86-64.so.2 (0x00007f539fea0000)
libtorch_cpu.so => /home/sahanp/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/libtorch/lib/libtorch_cpu.so (0x00007f5392400000)
libtorch_cuda.so => /home/sahanp/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/libtorch/lib/libtorch_cuda.so (0x00007f5380000000)
librt.so.1 => /lib64/librt.so.1 (0x00007f539fd9e000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f539fd99000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f539fd94000)
libc10.so => /home/sahanp/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/libtorch/lib/libc10.so (0x00007f539eb07000)
libmkl_intel_lp64.so.2 => /home/sahanp/.conda/envs/pytorch-3.10/lib/libmkl_intel_lp64.so.2 (0x00007f537ec00000)
libmkl_gnu_thread.so.2 => /home/sahanp/.conda/envs/pytorch-3.10/lib/libmkl_gnu_thread.so.2 (0x00007f537ce00000)
libmkl_core.so.2 => /home/sahanp/.conda/envs/pytorch-3.10/lib/libmkl_core.so.2 (0x00007f5378800000)
libomp.so => /home/sahanp/.conda/envs/pytorch-3.10/lib/libomp.so (0x00007f539e707000)
libcupti.so.12 => /usr/local/cuda/lib64/libcupti.so.12 (0x00007f5377e00000)
libcudart.so.12 => /usr/local/cuda/lib64/libcudart.so.12 (0x00007f5377a00000)
libc10_cuda.so => /home/sahanp/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/libtorch/lib/libc10_cuda.so (0x00007f539ea6a000)
libcusparse.so.12 => /usr/local/cuda/lib64/libcusparse.so.12 (0x00007f5368400000)
libcufft.so.11 => /usr/local/cuda/lib64/libcufft.so.11 (0x00007f535ee00000)
libcusolver.so.11 => /usr/local/cuda/lib64/libcusolver.so.11 (0x00007f534c800000)
libcurand.so.10 => /usr/local/cuda/lib64/libcurand.so.10 (0x00007f5346200000)
libcublas.so.12 => /usr/local/cuda/lib64/libcublas.so.12 (0x00007f533f800000)
libcublasLt.so.12 => /usr/local/cuda/lib64/libcublasLt.so.12 (0x00007f531e800000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007f539ea63000)
libnvJitLink.so.12 => /usr/local/cuda/lib64/libnvJitLink.so.12 (0x00007f531b800000)
sahanp@devgpu086 ~/pytorch (detached HEAD|REBASE-i 3/5)> ldd build/lib/libtorch_global_deps.so (pytorch-3.10)
linux-vdso.so.1 (0x00007ffc265df000)
libmkl_intel_lp64.so.2 => /home/sahanp/.conda/envs/pytorch-3.10/lib/libmkl_intel_lp64.so.2 (0x00007fa93fc00000)
libmkl_gnu_thread.so.2 => /home/sahanp/.conda/envs/pytorch-3.10/lib/libmkl_gnu_thread.so.2 (0x00007fa93de00000)
libmkl_core.so.2 => /home/sahanp/.conda/envs/pytorch-3.10/lib/libmkl_core.so.2 (0x00007fa939800000)
libm.so.6 => /lib64/libm.so.6 (0x00007fa940f05000)
libcudart.so.12 => /usr/local/cuda/lib64/libcudart.so.12 (0x00007fa939400000)
libnvToolsExt.so.1 => /usr/local/cuda/lib64/libnvToolsExt.so.1 (0x00007fa939000000)
libgomp.so.1 => /home/sahanp/.conda/envs/pytorch-3.10/lib/libgomp.so.1 (0x00007fa93fb07000)
libc.so.6 => /lib64/libc.so.6 (0x00007fa938c00000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007fa940efe000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fa940ef9000)
/lib64/ld-linux-x86-64.so.2 (0x00007fa940ff5000)
librt.so.1 => /lib64/librt.so.1 (0x00007fa940ef2000)
libstdc++.so.6 => /home/sahanp/.conda/envs/pytorch-3.10/lib/libstdc++.so.6 (0x00007fa93921d000)
libgcc_s.so.1 => /home/sahanp/.conda/envs/pytorch-3.10/lib/libgcc_s.so.1 (0x00007fa93faec000)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126328
Approved by: https://github.com/atalman
2024-05-29 04:33:56 +00:00
cdbb2c9acc
Revert "[Submodule] Remove deprecated USE_TBB option and TBB submodule ( #127051 )"
...
This reverts commit 4fdbaa794f9d5af2f171f772a51cb710c51c925f.
Reverted https://github.com/pytorch/pytorch/pull/127051 on behalf of https://github.com/PaliC due to This PR needs to be synced using the import button as there is a bug in our diff train ([comment](https://github.com/pytorch/pytorch/pull/127051#issuecomment-2136428735 ))
2024-05-29 03:02:35 +00:00
4fdbaa794f
[Submodule] Remove deprecated USE_TBB option and TBB submodule ( #127051 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127051
Approved by: https://github.com/cpuhrsch , https://github.com/malfet
2024-05-27 03:54:03 +00:00
7428fd19fe
Remove outdated options from setup.py ( #125988 )
...
These options are outdated since the recent removal of the Caffe2 files.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125988
Approved by: https://github.com/ezyang
2024-05-21 18:48:23 +00:00
4ed93d6e0c
[Submodule] Remove zstd dependency ( #126485 )
...
After searching the codebase, it seems that zstd is no longer in use.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126485
Approved by: https://github.com/ezyang
2024-05-17 12:49:23 +00:00
b950217f19
Support third-party devices emit a range for each autograd operator ( #125822 )
...
Fixes #125752
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125822
Approved by: https://github.com/aaronenyeshi
2024-05-15 05:06:24 +00:00
b9e7b35912
Remove caffe2 from more build files ( #125898 )
...
Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com >
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125898
Approved by: https://github.com/Skylion007
2024-05-13 18:37:59 +00:00
33fae4fcf4
Revert "Use recursive blob for package data ( #119257 )"
...
This reverts commit f20e3ae0c36146c962a5665018e9ad662a7cf211.
Reverted https://github.com/pytorch/pytorch/pull/119257 on behalf of https://github.com/malfet due to This likely caused https://github.com/pytorch/pytorch/issues/124941 , not sure why warning about recursive grep was ignored ([comment](https://github.com/pytorch/pytorch/pull/119257#issuecomment-2078312309 ))
2024-04-25 23:08:22 +00:00