Smoke test: disable PyPI package validation for binaries that bundle CUDA libs. These binaries do not install their CUDA dependencies via PyPI.
This should resolve the following failure from `linux-binary-manywheel / manywheel-py3_11-cuda12_6-full-test / test`:
```
Traceback (most recent call last):
  File "/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 468, in <module>
    main()
  File "/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 462, in main
    smoke_test_cuda(
  File "/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 274, in smoke_test_cuda
    compare_pypi_to_torch_versions(
  File "/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 220, in compare_pypi_to_torch_versions
    raise RuntimeError(f"Can't find {package} in PyPI for Torch: {torch_version}")
RuntimeError: Can't find cudnn in PyPI for Torch: 9.5.1
```
Link: https://github.com/pytorch/pytorch/actions/runs/14101221665/job/39505479587#step:15:982
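A minimal sketch of the guard this change adds, assuming the skip is controlled by a `--pypi-pkg-check` style flag (the flag name and plumbing here are hypothetical; only `smoke_test_cuda` and `compare_pypi_to_torch_versions` are taken from the traceback above):
```
# Hypothetical sketch: only consult PyPI when the wheel actually installs its
# CUDA libraries as pip packages; the "-full" binaries bundle them instead.
import argparse


def compare_pypi_to_torch_versions(package, torch_version):
    # Stand-in for the real helper in smoke_test.py, which queries PyPI and
    # raises RuntimeError(f"Can't find {package} in PyPI for Torch: {torch_version}")
    # when the package is not published there.
    print(f"checking {package}=={torch_version} against PyPI")


def smoke_test_cuda(pypi_pkg_check):
    torch_cudnn_version = "9.5.1"  # normally read from torch.backends.cudnn.version()
    if pypi_pkg_check:
        compare_pypi_to_torch_versions("cudnn", torch_cudnn_version)
    else:
        print("wheel bundles its own CUDA libs; skipping PyPI validation")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--pypi-pkg-check", action="store_true")
    smoke_test_cuda(parser.parse_args().pypi_pkg_check)
```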
Pull Request resolved: https://github.com/pytorch/pytorch/pull/150194
Approved by: https://github.com/ZainRizvi
------
This is the opposite of #130836: pin `sympy >= 1.13.0` for Python >= 3.9 and `sympy == 1.12.1` for Python 3.8.
- #130836
See the PR description of #130836 for more details.
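A hypothetical sketch of how such a conditional pin can be expressed with PEP 508 environment markers (the `install_requires` placement is illustrative; the actual change touches PyTorch's requirements files):
```
# Illustrative only: a version-conditional sympy pin using environment markers.
install_requires = [
    'sympy==1.12.1 ; python_version < "3.9"',
    'sympy>=1.13.0 ; python_version >= "3.9"',
]
```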
`sympy` 1.13.0 introduces some breaking changes which break our tests. More specifically:
- Ref [Backwards compatibility breaks and deprecations](https://github.com/sympy/sympy/wiki/release-notes-for-1.13.0#backwards-compatibility-breaks-and-deprecations)
> BREAKING CHANGE: Float and Integer/Rational no longer compare equal with a == b. From now on Float(2.0) != Integer(2). Previously expressions involving Float would compare unequal e.g. x*2.0 != x*2 but an individual Float would compare equal to an Integer. In SymPy 1.7 a Float will always compare unequal to an Integer even if they have the same "value". Use sympy.numbers.int_valued(number) to test if a number is a concrete number with no decimal part. ([#25614](https://github.com/sympy/sympy/pull/25614) by [@smichr](https://github.com/smichr))
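A small illustration of the quoted behavior change (requires sympy; the printed results depend on the installed version):
```
# Illustrates the sympy 1.13.0 break quoted above: a bare Float no longer
# compares equal to an Integer with the same value.
from sympy import Float, Integer, Symbol

x = Symbol("x")

print(Float(2.0) == Integer(2))  # True on sympy 1.12.x, False on sympy >= 1.13.0
print(x * 2.0 == x * 2)          # False on both: such expressions already compared unequal
# The release notes above recommend int_valued(number) for testing whether a
# number is a concrete value with no decimal part.
```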
`sympy >= 1.13.0` is required to enable Python 3.13 support. This should be part of #130689.
- #130689
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130895
Approved by: https://github.com/ezyang
Adding workflows for building aarch64 Linux PyTorch pip wheels
Updates:
* Created aarch64 template for generated workflows
* Updated generate_ci_workflows.py to include aarch64
* Generated the aarch64 wheel workflow
* Added _binary-build-aarch64.yml for building the aarch64 wheel
* Added _binary-test-aarch64.yml for a sanity check of the aarch64 wheel
* Updated binary_linux_test.sh to use --extra-index-url for aarch64 until the needed aarch64 dependencies are available at https://download.pytorch.org/whl/nightly/cpu (see the sketch after this list)
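A hypothetical sketch of the `--extra-index-url` fallback mentioned above (the package name, index URLs, and invocation are illustrative assumptions, not the exact command in binary_linux_test.sh):
```
# Illustrative only: install from the PyTorch nightly index, but let pip fall
# back to an extra index for dependencies that are not mirrored there yet.
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "--index-url", "https://download.pytorch.org/whl/nightly/cpu",
    "--extra-index-url", "https://pypi.org/simple",
    "torch",
])
```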
NOTES:
* The build and test workflows are using arm64v8/alpine and quay.io/pypa/manylinux2014_aarch64:latest docker images at this time.
* The conda-generated workflow is not included at this time and is still being worked on.
Workflows were successfully tested at https://github.com/xncqr/pytorch/actions/runs/5351891068
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104109
Approved by: https://github.com/malfet, https://github.com/atalman
This PR is almost a no-op, as most of the logic resides in the builder repo, namely:
6342242c508f361d91e1
Remove the `conda-forge` channel dependency for the test job, but add the `malfet` channel for 3.11 testing (as numpy is not in the default channel yet).
Build and upload the following dependencies to the `pytorch-nightly` channel:
```
anaconda copy --to-owner pytorch-nightly malfet/numpy/1.23.5
anaconda copy --to-owner pytorch-nightly malfet/numpy-base/1.23.5
anaconda copy --to-owner pytorch-nightly malfet/mkl-service/2.4.0
anaconda copy --to-owner pytorch-nightly malfet/mkl_random/1.2.2
anaconda copy --to-owner pytorch-nightly malfet/mkl_fft/1.3.1
anaconda copy --to-owner pytorch-nightly malfet/sympy/1.11.1
anaconda copy --to-owner pytorch-nightly malfet/mpmath/1.2.1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93186
Approved by: https://github.com/atalman, https://github.com/ZainRizvi
Remove some old requirements that are no longer necessary (`dataclasses` and `future`) from a few places we previously missed. Testing to see if all the CIs still pass. We no longer need the `dataclasses` backport now that we are on Python >= 3.7.
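A small illustration of why the backport is redundant (the `BuildConfig` class is just an example name):
```
# dataclasses has been part of the standard library since Python 3.7, so no
# third-party `dataclasses` package (or Python-2-era `future`) is required.
from dataclasses import dataclass


@dataclass
class BuildConfig:
    cuda_version: str
    python_version: str


print(BuildConfig("11.6", "3.9"))
```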
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92763
Approved by: https://github.com/ezyang
Summary:
Adding CUDA 11.6 workflows.
Please note we still depend on conda-forge for CUDA 11.6.
Issue created to remove the conda-forge dependency: [#75532](https://github.com/pytorch/pytorch/issues/75532)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75518
Reviewed By: janeyx99
Differential Revision: D35516057
Pulled By: atalman
fbshipit-source-id: 44a3a0f8954d98adca2280b2e9f203267ebe98cd
(cherry picked from commit 97a4e52ecee8540453e2871714275796dc1c4abb)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68388
Updates the GPU architectures and adds an on_pull_request trigger for the binary build workflows so that we can iterate on this later.
TODO:
* Create a follow-up PR to enable nightly Linux GHA builds / disable CircleCI nightly Linux builds
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: janeyx99
Differential Revision: D33462294
Pulled By: seemethere
fbshipit-source-id: 5fa30517550d36f504b491cf6c1e5c9da56d8191
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62290
No longer needed.
Fixes nightly failures that we're observing as well:
```
Jul 27 07:33:02 Found conflicts! Looking for incompatible packages.
Jul 27 07:33:02 This can take several minutes. Press CTRL-C to abort.
Jul 27 07:33:02 failed
Jul 27 07:33:02
Jul 27 07:33:02 UnsatisfiableError: The following specifications were found
Jul 27 07:33:02 to be incompatible with the existing python installation in your environment:
Jul 27 07:33:02
Jul 27 07:33:02 Specifications:
Jul 27 07:33:02
Jul 27 07:33:02 - conda-package-handling=1.6.0 -> python[version='>=2.7,<2.8.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0']
Jul 27 07:33:02
Jul 27 07:33:02 Your python: python=3.9
```
From: https://app.circleci.com/pipelines/github/pytorch/pytorch/356478/workflows/2102acf1-c92a-4a59-919c-61d32d3bcd71/jobs/15027876
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: driazati
Differential Revision: D29946501
Pulled By: seemethere
fbshipit-source-id: 3e9182f4cbcf2aab185dbbc21b7a6171746e2281
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58685
This moves debug packages out of the artifacts dir before running tests (as a counterpart to https://github.com/pytorch/builder/pull/770). Doing it this way allows us to keep the CI configs simple since there's one directory to use for artifacts / upload to S3.
See #58684 for actual CI signals (the ones on this PR are all cancelled since it depends on the builder branch set in the next PR up the stack)
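A hypothetical sketch of the idea (directory names and the file pattern are illustrative assumptions; the real move happens in the builder scripts referenced above):
```
# Illustrative only: park debug packages outside the artifacts directory before
# tests run, so the artifacts directory stays the single source for S3 uploads.
import shutil
from pathlib import Path

artifacts_dir = Path("/final_pkgs")   # assumed artifacts directory
debug_dir = Path("/debug_pkgs")       # assumed destination for debug packages
debug_dir.mkdir(parents=True, exist_ok=True)

for pkg in artifacts_dir.glob("*debug*"):
    shutil.move(str(pkg), str(debug_dir / pkg.name))
```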
Test Plan: Imported from OSS
Reviewed By: nikithamalgifb
Differential Revision: D28646995
Pulled By: driazati
fbshipit-source-id: 965265861968906770a6e6eeecfe7c9458631b5a
Summary:
Replacing CUDA 11.0 with 11.2 in our nightlies.
(I am slightly uncertain why the manywheel Linux tests worked before we added the GPU driver for 11.2.)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51611
Reviewed By: malfet, seemethere, zhangguanheng66
Differential Revision: D26282829
Pulled By: janeyx99
fbshipit-source-id: b15380e5c44a957e6a85e4f5fb9691ab9c6103a5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50505
Even with `+u` set for the conda install, it still seems to fail
with an unbound variable error. Let's try giving it a default value
instead.
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: pbelevich
Differential Revision: D25913692
Pulled By: seemethere
fbshipit-source-id: 4b898f56bff25c7523f10b4933ea6cd17a57df80
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50053
For some reason conda likes to re-activate the conda environment when attempting this install,
which means that a deactivate is run and some variables might not exist when that happens
(namely CONDA_MKL_INTERFACE_LAYER_BACKUP from libblas), so let's just ignore unbound variables
when it comes to the conda installation commands.
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: samestep
Differential Revision: D25760737
Pulled By: seemethere
fbshipit-source-id: 9e7720eb8a4f8028dbaa7bcfc304e5c1ca73ad08