[BE] Remove the default TORCH_CUDA_ARCH_LIST in CI Docker image (#161137)

It doesn't make sense to have this default to Maxwell, which is too old. All other places in CI/CD need to overwrite this value anyway. IMO, it makes more sense not to set this at all and let CI/CD jobs set it for their own use cases instead. This default is partly responsible for the build failure in https://github.com/pytorch/pytorch/issues/160988
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161137
Approved by: https://github.com/msaroufim
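The change moves arch-list selection into the jobs themselves. A minimal sketch of what a CI job might do, modeled on the torchaudio helper below (the fallback list and variable handling here are illustrative assumptions, not the repository's actual scripts):

```shell
# Hypothetical per-job setup: detect the GPU's compute capability when a
# GPU is present, otherwise fall back to a list the job chooses explicitly.
# The fallback values "8.0;9.0" are an illustrative assumption.
if command -v nvidia-smi >/dev/null 2>&1; then
  # Query the compute capability of the available GPU (e.g. "9.0" for H100)
  TORCH_CUDA_ARCH_LIST=$(nvidia-smi --query-gpu=compute_cap --format=csv | tail -n 1)
else
  # CPU-only builder: pick the architectures this job actually targets
  TORCH_CUDA_ARCH_LIST="8.0;9.0"
fi
export TORCH_CUDA_ARCH_LIST
echo "TORCH_CUDA_ARCH_LIST=${TORCH_CUDA_ARCH_LIST}"
```

With the image-wide `ENV TORCH_CUDA_ARCH_LIST Maxwell` removed, a job that forgets this step fails fast instead of silently building for a long-unsupported architecture.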
This commit is contained in:
Huy Do
2025-08-22 06:03:11 +00:00
committed by PyTorch MergeBot
parent 0dea191ff7
commit bc7eaa0d8a
2 changed files with 0 additions and 3 deletions

@@ -181,7 +181,6 @@ COPY --from=pytorch/llvm:9.0.1 /opt/llvm /opt/llvm
 RUN if [ -n "${SKIP_LLVM_SRC_BUILD_INSTALL}" ]; then set -eu; rm -rf /opt/llvm; fi
 # AWS specific CUDA build guidance
-ENV TORCH_CUDA_ARCH_LIST Maxwell
 ENV TORCH_NVCC_FLAGS "-Xfatbin -compress-all"
 ENV CUDA_PATH /usr/local/cuda

@@ -152,8 +152,6 @@ function get_pinned_commit() {
 function install_torchaudio() {
   local commit
   commit=$(get_pinned_commit audio)
-  # TODO (huydhn): PyTorch CI docker image set the default TORCH_CUDA_ARCH_LIST
-  # to Maxwell. This default doesn't make sense anymore and should be cleaned up
   if [[ "${BUILD_ENVIRONMENT}" == *cuda* ]] && command -v nvidia-smi; then
     TORCH_CUDA_ARCH_LIST=$(nvidia-smi --query-gpu=compute_cap --format=csv | tail -n 1)
     export TORCH_CUDA_ARCH_LIST