Files
pytorch/.github/scripts/upload_aws_ossci.sh
Andrea Frittoli c1c94cb0be Build magma binary tarballs for various cuda (#139888)
This is a first step towards removing the builds' dependency on conda.

Currently we build magma as a conda package in a pytorch conda channel; the implementation lives in a1b372dbda/magma.

This commit adapts the logic from pytorch/builder as follows (a sketch of the packaging step follows the list):
- use pytorch/manylinux-cuda<cuda-version> as the base image
- apply the patches and invoke the build.sh script directly (no longer through conda build)
- store the license and build files alongside the built artifact, in an info subfolder
- create a tarball that resembles the one produced by conda, but without any conda-specific metadata
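
A minimal sketch of the packaging step, assuming hypothetical paths and names (`COPYRIGHT` for the license, `magma/install` for the build output, the conda-style tarball name) rather than the exact ones the build uses:

```bash
# Hypothetical packaging sketch -- paths and names are illustrative.
mkdir -p package/info
cp COPYRIGHT package/info/          # license kept with the artifact
cp build.sh package/info/           # build recipe kept with the artifact
cp -r magma/install/* package/      # built libraries and headers
# conda-style file name, but no conda-specific metadata inside
tar -cjf "magma-cuda${CUDA_VERSION//./}-${MAGMA_VERSION}-1.tar.bz2" -C package .
```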

A new matrix workflow is added; it runs the build for each supported CUDA version and uploads the resulting binaries to the PyTorch S3 bucket (a rough Bash equivalent of the matrix is sketched below).
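
Conceptually, the matrix expands to one containerized build per CUDA version. A rough Bash equivalent, where the version list and the build entrypoint are illustrative assumptions rather than the exact matrix contents:

```bash
# Illustrative only: the real CUDA versions come from the workflow matrix.
for cuda_version in 11.8 12.1 12.4; do
  docker run --rm -v "${PWD}:/builder" \
    "pytorch/manylinux-cuda${cuda_version//./}" \
    /builder/magma/build_magma.sh   # hypothetical entrypoint
done
```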

For the upload, an upload_aws_ossci.sh script is defined; the magma Windows job will use it as well to upload to `s3://ossci-*` buckets.

The build runs on both PRs and pushes; on PRs, the upload runs in DRY_RUN mode (see the example below).
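
For reference, a hypothetical push-job invocation of the script might look like this (the bucket, subfolder, and package directory values are illustrative):

```bash
# On PRs, DRY_RUN keeps its default ("enabled"), so aws s3 cp --dryrun
# only logs what would be uploaded. A push job disables it:
DRY_RUN=disabled \
UPLOAD_BUCKET=s3://ossci-linux \
UPLOAD_SUBFOLDER=magma \
PKG_DIR=/tmp/workspace/artifacts \
  bash .github/scripts/upload_aws_ossci.sh
```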

Fixes #139397

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139888
Approved by: https://github.com/atalman, https://github.com/malfet, https://github.com/seemethere
2024-11-08 13:28:27 +00:00


#!/usr/bin/env bash
# Upload binaries to an S3 bucket; supports a dry-run mode
set -euo pipefail
# Optional inputs. By default upload to s3://ossci-linux
TARGET_OS=${TARGET_OS:-linux}
UPLOAD_BUCKET=${UPLOAD_BUCKET:-s3://ossci-${TARGET_OS}}
UPLOAD_SUBFOLDER=${UPLOAD_SUBFOLDER:-}
# In CI, download artifacts to ${{ runner.temp }}/artifacts to match this default
PKG_DIR=${PKG_DIR:-/tmp/workspace/artifacts}
# Optional package include.
# By default looks for and uploads *.tar.bz2 files only
PKG_INCLUDE=${PKG_INCLUDE:-'*.tar.bz2'}
# Dry-run logs the upload command without actually executing it.
# Dry-run is enabled by default; set DRY_RUN=disabled to actually upload.
DRY_RUN=${DRY_RUN:-enabled}
# Don't actually do work unless explicit
AWS_S3_CP="aws s3 cp --dryrun"
if [[ "${DRY_RUN}" = "disabled" ]]; then
AWS_S3_CP="aws s3 cp"
fi
# Install dependencies (should be a no-op if previously installed)
pip install -q awscli
# Handle subfolders, if provided
s3_root_dir="${UPLOAD_BUCKET}"
if [[ -z ${UPLOAD_SUBFOLDER:-} ]]; then
  s3_upload_dir="${s3_root_dir}/"
else
  s3_upload_dir="${s3_root_dir}/${UPLOAD_SUBFOLDER}/"
fi
# Upload all packages that match PKG_INCLUDE within PKG_DIR and subdirs
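# The AWS CLI evaluates --exclude/--include filters in order, so
# --exclude="*" first drops every file and --include="${PKG_INCLUDE}"
# then re-adds only the matching packages.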
set -x
${AWS_S3_CP} --no-progress --acl public-read --exclude="*" --include="${PKG_INCLUDE}" --recursive "${PKG_DIR}" "${s3_upload_dir}"