mirror of https://github.com/pytorch/pytorch.git
synced 2025-10-26 08:34:52 +08:00

Compare commits: 59 commits, v1.7.1-rc2 ... release/1.
| SHA1 |
|---|
| 74044638f7 |
| 7f73f1d591 |
| ac15471de4 |
| 49364eb426 |
| bcf2d65446 |
| f7a33f1eef |
| bd584d52df |
| c697af4667 |
| 0f3f4ec64c |
| 509df600bb |
| 187101a88e |
| e011d4a16e |
| 8ada95e950 |
| 21c2481dfe |
| 398e8ba182 |
| 074b30cdcb |
| 319bd5d431 |
| 5a20bbd377 |
| fa59a9e190 |
| 143868c3df |
| 964929fcc2 |
| cd20ecb472 |
| 19d4fd4910 |
| a7d187baa4 |
| 0541546ac5 |
| 369ab73efd |
| 9f558e1ee6 |
| f0ddfff200 |
| 2de184b5a9 |
| e0eeddfc78 |
| 7727b57d08 |
| 9e7dc37f90 |
| 227017059f |
| aeeccc1486 |
| 0b91246cbd |
| 0856d6f53c |
| 336e0d2874 |
| 3b36f2068d |
| 6207945564 |
| aecae514ab |
| 27a2ecb0a5 |
| e36fd7b0ba |
| 799cb646a6 |
| f60c63155a |
| 954d9ea466 |
| 71185fb2a0 |
| a06f26560c |
| e4cec279c6 |
| b8b50aa909 |
| db686de13f |
| 288e463693 |
| 73783d1048 |
| 8891d4eeb1 |
| 2085a6f329 |
| 3eda9e7da2 |
| fb8aa0e98c |
| c79b79dadd |
| 21acca4528 |
| f710757557 |
**.bazelrc** (3 lines)

```diff
@@ -1,3 +0,0 @@
-build --copt=--std=c++14
-build --copt=-I.
-build --copt=-isystem --copt bazel-out/k8-fastbuild/bin
```

```diff
@@ -1 +0,0 @@
-3.1.0
```
```diff
@@ -71,9 +71,9 @@ A **binary configuration** is a collection of
 * release or nightly
     * releases are stable, nightlies are beta and built every night
 * python version
-    * linux: 3.5m, 3.6m 3.7m (mu is wide unicode or something like that. It usually doesn't matter but you should know that it exists)
-    * macos: 3.6, 3.7, 3.8
-    * windows: 3.6, 3.7, 3.8
+    * linux: 2.7m, 2.7mu, 3.5m, 3.6m 3.7m (mu is wide unicode or something like that. It usually doesn't matter but you should know that it exists)
+    * macos: 2.7, 3.5, 3.6, 3.7
+    * windows: 3.5, 3.6, 3.7
 * cpu version
     * cpu, cuda 9.0, cuda 10.0
     * The supported cuda versions occasionally change
```
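A binary configuration is the cross product of these dimensions, so the matrix can be enumerated mechanically. A rough sketch of that enumeration (the version lists here are illustrative placeholders, not the authoritative support matrix):

```python
# Rough sketch of the binary-configuration cross product described above.
# The version lists are illustrative, not the authoritative support matrix.
import itertools

channels = ["release", "nightly"]
pythons = ["3.6", "3.7", "3.8"]
cpu_versions = ["cpu", "cuda9.0", "cuda10.0"]

# Every combination is a candidate configuration, before OS-specific pruning.
configs = list(itertools.product(channels, pythons, cpu_versions))
print(len(configs))  # 18
```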
```diff
@@ -178,7 +178,8 @@ CircleCI creates a final yaml file by inlining every <<* segment, so if we were
 So, CircleCI has several executor types: macos, machine, and docker are the ones we use. The 'machine' executor gives you two cores on some linux vm. The 'docker' executor gives you considerably more cores (nproc was 32 instead of 2 back when I tried in February). Since the dockers are faster, we try to run everything that we can in dockers. Thus
 
 * linux build jobs use the docker executor. Running them on the docker executor was at least 2x faster than running them on the machine executor
-* linux test jobs use the machine executor in order for them to properly interface with GPUs since docker executors cannot execute with attached GPUs
+* linux test jobs use the machine executor and spin up their own docker. Why this nonsense? It's cause we run nvidia-docker for our GPU tests; any code that calls into the CUDA runtime needs to be run on nvidia-docker. To run a nvidia-docker you need to install some nvidia packages on the host machine and then call docker with the '—runtime nvidia' argument. CircleCI doesn't support this, so we have to do it ourself.
+    * This is not just a mere inconvenience. **This blocks all of our linux tests from using more than 2 cores.** But there is nothing that we can do about it, but wait for a fix on circleci's side. Right now, we only run some smoke tests (some simple imports) on the binaries, but this also affects non-binary test jobs.
 * linux upload jobs use the machine executor. The upload jobs are so short that it doesn't really matter what they use
 * linux smoke test jobs use the machine executor for the same reason as the linux test jobs
```
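The nvidia-docker workaround described above boils down to choosing the right docker invocation per job type. A small illustrative helper (a hypothetical sketch, not actual CI code):

```python
# Illustrative helper for the workaround described above: GPU test jobs must
# invoke docker with the nvidia runtime, everything else uses plain docker.
# Hypothetical sketch, not CircleCI's or PyTorch's actual CI code.

def docker_command(job_type, image):
    cmd = ["docker", "run"]
    if job_type == "gpu_test":
        # equivalent to invoking `nvidia-docker run`
        cmd += ["--runtime", "nvidia"]
    return cmd + [image]

print(" ".join(docker_command("gpu_test", "pytorch/manylinux-cuda100")))
# docker run --runtime nvidia pytorch/manylinux-cuda100
print(" ".join(docker_command("build", "pytorch/manylinux-cuda100")))
# docker run pytorch/manylinux-cuda100
```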
```diff
@@ -418,6 +419,8 @@ You can build Linux binaries locally easily using docker.
 # in the docker container then you will see path/to/foo/baz on your local
 # machine. You could also clone the pytorch and builder repos in the docker.
 #
+# If you're building a CUDA binary then use `nvidia-docker run` instead, see below.
+#
 # If you know how, add ccache as a volume too and speed up everything
 docker run \
     -v your/pytorch/repo:/pytorch \
```
```diff
@@ -441,7 +444,9 @@ export DESIRED_CUDA=cpu
 
 **Building CUDA binaries on docker**
 
-You can build CUDA binaries on CPU only machines, but you can only run CUDA binaries on CUDA machines. This means that you can build a CUDA binary on a docker on your laptop if you so choose (though it’s gonna take a long time).
-To build a CUDA binary you need to use `nvidia-docker run` instead of just `docker run` (or you can manually pass `--runtime=nvidia`). This adds some needed libraries and things to build CUDA stuff.
+You can build CUDA binaries on CPU only machines, but you can only run CUDA binaries on CUDA machines. This means that you can build a CUDA binary on a docker on your laptop if you so choose (though it’s gonna take a loong time).
 
 For Facebook employees, ask about beefy machines that have docker support and use those instead of your laptop; it will be 5x as fast.
 
```
```diff
@@ -461,7 +466,7 @@ But if you want to try, then I’d recommend
 # Always install miniconda 3, even if building for Python <3
 new_conda="~/my_new_conda"
 conda_sh="$new_conda/install_miniconda.sh"
-curl -o "$conda_sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
+curl -o "$conda_sh" https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
 chmod +x "$conda_sh"
 "$conda_sh" -b -p "$MINICONDA_ROOT"
 rm -f "$conda_sh"
```
||||
```diff
@@ -5,6 +5,9 @@ for "smoketest" builds.
 Each subclass of ConfigNode represents a layer of the configuration hierarchy.
 These tree nodes encapsulate the logic for whether a branch of the hierarchy
 should be "pruned".
+
+In addition to generating config.yml content, the tree is also traversed
+to produce a visualization of config dimensions.
 """
 
 from collections import OrderedDict
```
```diff
@@ -25,44 +28,33 @@ DEPS_INCLUSION_DIMENSIONS = [
 ]
 
 
-def get_processor_arch_name(gpu_version):
-    return "cpu" if not gpu_version else (
-        "cu" + gpu_version.strip("cuda") if gpu_version.startswith("cuda") else gpu_version
-    )
+def get_processor_arch_name(cuda_version):
+    return "cpu" if not cuda_version else "cu" + cuda_version
```
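The two variants above differ in how they map a GPU label to the wheel's arch tag. Re-stating both lines as standalone functions (copies for illustration only, outside their module) shows what each returns; note that `str.strip("cuda")` strips the *character set* {c, u, d, a} from both ends, so `"cuda102"` becomes `"102"`:

```python
# Standalone copies of the two get_processor_arch_name variants in the diff
# above, for illustration only.

def arch_name_new(gpu_version):
    # v1.7-side variant: handles "cuda*" and "rocm*" labels
    return "cpu" if not gpu_version else (
        "cu" + gpu_version.strip("cuda") if gpu_version.startswith("cuda") else gpu_version
    )

def arch_name_old(cuda_version):
    # older variant: bare CUDA version strings like "92", "100"
    return "cpu" if not cuda_version else "cu" + cuda_version

print(arch_name_new(None))        # cpu
print(arch_name_new("cuda102"))   # cu102  ("cuda102".strip("cuda") == "102")
print(arch_name_new("rocm3.7"))   # rocm3.7
print(arch_name_old("92"))        # cu92
```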
```diff
 
 
 LINUX_PACKAGE_VARIANTS = OrderedDict(
     manywheel=[
+        "2.7m",
+        "2.7mu",
+        "3.5m",
         "3.6m",
         "3.7m",
-        "3.8m",
-        "3.9m"
     ],
     conda=dimensions.STANDARD_PYTHON_VERSIONS,
     libtorch=[
         "3.7m",
+        "2.7m",
     ],
 )
 
 CONFIG_TREE_DATA = OrderedDict(
-    linux=(dimensions.GPU_VERSIONS, LINUX_PACKAGE_VARIANTS),
+    linux=(dimensions.CUDA_VERSIONS, LINUX_PACKAGE_VARIANTS),
     macos=([None], OrderedDict(
         wheel=dimensions.STANDARD_PYTHON_VERSIONS,
         conda=dimensions.STANDARD_PYTHON_VERSIONS,
         libtorch=[
             "3.7",
+            "2.7",
         ],
     )),
-    # Skip CUDA-9.2 builds on Windows
-    windows=(
-        [v for v in dimensions.GPU_VERSIONS if v not in ['cuda92'] + dimensions.ROCM_VERSION_LABELS],
-        OrderedDict(
-            wheel=dimensions.STANDARD_PYTHON_VERSIONS,
-            conda=dimensions.STANDARD_PYTHON_VERSIONS,
-            libtorch=[
-                "3.7",
-            ],
-        )
-    ),
 )
 
 # GCC config variants:
```
```diff
@@ -81,11 +73,6 @@ LINUX_GCC_CONFIG_VARIANTS = OrderedDict(
     ],
 )
 
-WINDOWS_LIBTORCH_CONFIG_VARIANTS = [
-    "debug",
-    "release",
-]
-
 
 class TopLevelNode(ConfigNode):
     def __init__(self, node_name, config_tree_data, smoke):
```
```diff
@@ -99,12 +86,12 @@ class TopLevelNode(ConfigNode):
 
 
 class OSConfigNode(ConfigNode):
-    def __init__(self, parent, os_name, gpu_versions, py_tree):
+    def __init__(self, parent, os_name, cuda_versions, py_tree):
         super(OSConfigNode, self).__init__(parent, os_name)
 
         self.py_tree = py_tree
         self.props["os_name"] = os_name
-        self.props["gpu_versions"] = gpu_versions
+        self.props["cuda_versions"] = cuda_versions
 
     def get_children(self):
         return [PackageFormatConfigNode(self, k, v) for k, v in self.py_tree.items()]
```
```diff
@@ -120,10 +107,8 @@ class PackageFormatConfigNode(ConfigNode):
     def get_children(self):
         if self.find_prop("os_name") == "linux":
             return [LinuxGccConfigNode(self, v) for v in LINUX_GCC_CONFIG_VARIANTS[self.find_prop("package_format")]]
-        elif self.find_prop("os_name") == "windows" and self.find_prop("package_format") == "libtorch":
-            return [WindowsLibtorchConfigNode(self, v) for v in WINDOWS_LIBTORCH_CONFIG_VARIANTS]
         else:
-            return [ArchConfigNode(self, v) for v in self.find_prop("gpu_versions")]
+            return [ArchConfigNode(self, v) for v in self.find_prop("cuda_versions")]
 
 
 class LinuxGccConfigNode(ConfigNode):
```
```diff
@@ -133,39 +118,21 @@ class LinuxGccConfigNode(ConfigNode):
         self.props["gcc_config_variant"] = gcc_config_variant
 
     def get_children(self):
-        gpu_versions = self.find_prop("gpu_versions")
+        cuda_versions = self.find_prop("cuda_versions")
 
         # XXX devtoolset7 on CUDA 9.0 is temporarily disabled
         # see https://github.com/pytorch/pytorch/issues/20066
         if self.find_prop("gcc_config_variant") == 'devtoolset7':
-            gpu_versions = filter(lambda x: x != "cuda_90", gpu_versions)
+            cuda_versions = filter(lambda x: x != "90", cuda_versions)
 
-        # XXX disabling conda rocm build since docker images are not there
-        if self.find_prop("package_format") == 'conda':
-            gpu_versions = filter(lambda x: x not in dimensions.ROCM_VERSION_LABELS, gpu_versions)
-
-        # XXX libtorch rocm build is temporarily disabled
-        if self.find_prop("package_format") == 'libtorch':
-            gpu_versions = filter(lambda x: x not in dimensions.ROCM_VERSION_LABELS, gpu_versions)
-
-        return [ArchConfigNode(self, v) for v in gpu_versions]
-
-
-class WindowsLibtorchConfigNode(ConfigNode):
-    def __init__(self, parent, libtorch_config_variant):
-        super(WindowsLibtorchConfigNode, self).__init__(parent, "LIBTORCH_CONFIG_VARIANT=" + str(libtorch_config_variant))
-
-        self.props["libtorch_config_variant"] = libtorch_config_variant
-
-    def get_children(self):
-        return [ArchConfigNode(self, v) for v in self.find_prop("gpu_versions")]
+        return [ArchConfigNode(self, v) for v in cuda_versions]
 
 
 class ArchConfigNode(ConfigNode):
-    def __init__(self, parent, gpu):
-        super(ArchConfigNode, self).__init__(parent, get_processor_arch_name(gpu))
+    def __init__(self, parent, cu):
+        super(ArchConfigNode, self).__init__(parent, get_processor_arch_name(cu))
 
-        self.props["gpu"] = gpu
+        self.props["cu"] = cu
 
     def get_children(self):
         return [PyVersionConfigNode(self, v) for v in self.find_prop("python_versions")]
```
```diff
@@ -178,6 +145,8 @@ class PyVersionConfigNode(ConfigNode):
         self.props["pyver"] = pyver
 
     def get_children(self):
+
         smoke = self.find_prop("smoke")
         package_format = self.find_prop("package_format")
         os_name = self.find_prop("os_name")
 
```
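The ConfigNode classes on both sides of this diff follow the same pattern: each layer of the hierarchy is a node holding props, nodes expand into children, and the concrete build configurations are the root-to-leaf paths. A minimal self-contained sketch of that pattern (hypothetical stand-in names, not the real `cimodel.lib.conf_tree` module):

```python
# Minimal sketch of the ConfigNode pattern used above: each hierarchy layer is
# a node with props; the concrete configurations are the root-to-leaf paths.
# Hypothetical stand-in, not the real cimodel.lib.conf_tree implementation.

class Node:
    def __init__(self, parent, name, children_spec):
        self.parent = parent
        self.props = {"name": name}
        self.children_spec = children_spec

    def find_prop(self, key):
        # walk up the tree, in the spirit of ConfigNode.find_prop
        node = self
        while node is not None:
            if key in node.props:
                return node.props[key]
            node = node.parent
        return None

    def get_children(self):
        return [Node(self, n, c) for n, c in self.children_spec]


def leaf_names(node):
    kids = node.get_children()
    if not kids:
        return [node.props["name"]]
    out = []
    for k in kids:
        out.extend(leaf_names(k))
    return out


root = Node(None, "linux", [("cpu", []), ("cu102", [("3.6m", []), ("3.7m", [])])])
print(leaf_names(root))  # ['cpu', '3.6m', '3.7m']
```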
```diff
@@ -1,33 +1,30 @@
 from collections import OrderedDict
 
-import cimodel.data.simple.util.branch_filters as branch_filters
 import cimodel.data.binary_build_data as binary_build_data
 import cimodel.lib.conf_tree as conf_tree
 import cimodel.lib.miniutils as miniutils
 
 
 class Conf(object):
-    def __init__(self, os, gpu_version, pydistro, parms, smoke, libtorch_variant, gcc_config_variant, libtorch_config_variant):
+    def __init__(self, os, cuda_version, pydistro, parms, smoke, libtorch_variant, gcc_config_variant):
 
         self.os = os
-        self.gpu_version = gpu_version
+        self.cuda_version = cuda_version
         self.pydistro = pydistro
         self.parms = parms
         self.smoke = smoke
         self.libtorch_variant = libtorch_variant
         self.gcc_config_variant = gcc_config_variant
-        self.libtorch_config_variant = libtorch_config_variant
 
     def gen_build_env_parms(self):
-        elems = [self.pydistro] + self.parms + [binary_build_data.get_processor_arch_name(self.gpu_version)]
+        elems = [self.pydistro] + self.parms + [binary_build_data.get_processor_arch_name(self.cuda_version)]
         if self.gcc_config_variant is not None:
             elems.append(str(self.gcc_config_variant))
-        if self.libtorch_config_variant is not None:
-            elems.append(str(self.libtorch_config_variant))
         return elems
 
     def gen_docker_image(self):
         if self.gcc_config_variant == 'gcc5.4_cxx11-abi':
-            return miniutils.quote("pytorch/pytorch-binary-docker-image-ubuntu16.04:latest")
+            return miniutils.quote("pytorch/conda-cuda-cxx11-ubuntu1604:latest")
 
         docker_word_substitution = {
             "manywheel": "manylinux",
```
```diff
@@ -36,13 +33,12 @@ class Conf(object):
 
         docker_distro_prefix = miniutils.override(self.pydistro, docker_word_substitution)
 
-        # The cpu nightlies are built on the pytorch/manylinux-cuda102 docker image
-        # TODO cuda images should consolidate into tag-base images similar to rocm
-        alt_docker_suffix = "cuda102" if not self.gpu_version else (
-            "rocm:" + self.gpu_version.strip("rocm") if self.gpu_version.startswith("rocm") else self.gpu_version)
-        docker_distro_suffix = alt_docker_suffix if self.pydistro != "conda" else (
-            "cuda" if alt_docker_suffix.startswith("cuda") else "rocm")
-        return miniutils.quote("pytorch/" + docker_distro_prefix + "-" + docker_distro_suffix)
+        # The cpu nightlies are built on the pytorch/manylinux-cuda100 docker image
+        alt_docker_suffix = self.cuda_version or "100"
+        docker_distro_suffix = "" if self.pydistro == "conda" else alt_docker_suffix
+        if self.cuda_version == "101":
+            return "soumith/manylinux-cuda101@sha256:5d62be90d5b7777121180e6137c7eed73d37aaf9f669c51b783611e37e0b4916"
+        return miniutils.quote("pytorch/" + docker_distro_prefix + "-cuda" + docker_distro_suffix)
 
     def get_name_prefix(self):
         return "smoke" if self.smoke else "binary"
```
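The right-hand side's image-name logic reduces to a small pure function. Re-stated standalone (illustration only: the `miniutils.quote` wrapping, the manywheel word substitution inlined as a dict, and the pinned cuda101 sha256 special case are simplified away):

```python
# Standalone restatement of the older gen_docker_image suffix logic shown
# above (illustration only; quoting and the cuda101 sha256 pin are omitted).

def docker_image(pydistro, cuda_version):
    prefix = {"manywheel": "manylinux"}.get(pydistro, pydistro)
    alt_suffix = cuda_version or "100"          # cpu nightlies build on cuda100
    suffix = "" if pydistro == "conda" else alt_suffix
    return "pytorch/" + prefix + "-cuda" + suffix

print(docker_image("manywheel", "92"))   # pytorch/manylinux-cuda92
print(docker_image("manywheel", None))   # pytorch/manylinux-cuda100
print(docker_image("conda", "100"))      # pytorch/conda-cuda
```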
```diff
@@ -67,82 +63,38 @@
         job_def = OrderedDict()
         job_def["name"] = self.gen_build_name(phase, nightly)
         job_def["build_environment"] = miniutils.quote(" ".join(self.gen_build_env_parms()))
+        job_def["requires"] = ["setup"]
         if self.smoke:
-            job_def["requires"] = [
-                "update_s3_htmls",
-            ]
-            job_def["filters"] = branch_filters.gen_filter_dict(
-                branches_list=["postnightly"],
-            )
+            job_def["requires"].append("update_s3_htmls_for_nightlies")
+            job_def["requires"].append("update_s3_htmls_for_nightlies_devtoolset7")
+            job_def["filters"] = {"branches": {"only": "postnightly"}}
         else:
-            filter_branch = r"/.*/"
-            job_def["filters"] = branch_filters.gen_filter_dict(
-                branches_list=[filter_branch],
-                tags_list=[branch_filters.RC_PATTERN],
-            )
+            job_def["filters"] = {"branches": {"only": "nightly"}}
         if self.libtorch_variant:
             job_def["libtorch_variant"] = miniutils.quote(self.libtorch_variant)
         if phase == "test":
-            if not self.smoke:
-                job_def["requires"] = [self.gen_build_name("build", nightly)]
-            if not (self.smoke and self.os == "macos") and self.os != "windows":
+            if not (self.smoke and self.os == "macos"):
+                job_def["requires"].append(self.gen_build_name("build", nightly))
                 job_def["docker_image"] = self.gen_docker_image()
 
-                # fix this. only works on cuda not rocm
-                if self.os != "windows" and self.gpu_version:
+                if self.cuda_version:
                     job_def["use_cuda_docker_runtime"] = miniutils.quote("1")
         else:
             if self.os == "linux" and phase != "upload":
                 job_def["docker_image"] = self.gen_docker_image()
 
         if phase == "test":
-            if self.gpu_version:
-                if self.os == "windows":
-                    job_def["executor"] = "windows-with-nvidia-gpu"
-                else:
-                    job_def["resource_class"] = "gpu.medium"
+            if self.cuda_version:
+                job_def["resource_class"] = "gpu.medium"
         if phase == "upload":
             job_def["context"] = "org-member"
+            job_def["requires"] = ["setup", self.gen_build_name(upload_phase_dependency, nightly)]
 
         os_name = miniutils.override(self.os, {"macos": "mac"})
         job_name = "_".join([self.get_name_prefix(), os_name, phase])
         return {job_name : job_def}
 
-    def gen_upload_job(self, phase, requires_dependency):
-        """Generate binary_upload job for configuration
-
-        Output looks similar to:
-
-        - binary_upload:
-            name: binary_linux_manywheel_3_7m_cu92_devtoolset7_nightly_upload
-            context: org-member
-            requires: binary_linux_manywheel_3_7m_cu92_devtoolset7_nightly_test
-            filters:
-              branches:
-                only:
-                  - nightly
-              tags:
-                only: /v[0-9]+(\\.[0-9]+)*-rc[0-9]+/
-            package_type: manywheel
-            upload_subfolder: cu92
-        """
-        return {
-            "binary_upload": OrderedDict({
-                "name": self.gen_build_name(phase, nightly=True),
-                "context": "org-member",
-                "requires": [self.gen_build_name(
-                    requires_dependency,
-                    nightly=True
-                )],
-                "filters": branch_filters.gen_filter_dict(
-                    branches_list=["nightly"],
-                    tags_list=[branch_filters.RC_PATTERN],
-                ),
-                "package_type": self.pydistro,
-                "upload_subfolder": binary_build_data.get_processor_arch_name(
-                    self.gpu_version,
-                ),
-            })
-        }
-
 def get_root(smoke, name):
 
     return binary_build_data.TopLevelNode(
```
```diff
@@ -161,47 +113,35 @@ def gen_build_env_list(smoke):
     for c in config_list:
         conf = Conf(
             c.find_prop("os_name"),
-            c.find_prop("gpu"),
+            c.find_prop("cu"),
             c.find_prop("package_format"),
             [c.find_prop("pyver")],
             c.find_prop("smoke"),
             c.find_prop("libtorch_variant"),
             c.find_prop("gcc_config_variant"),
-            c.find_prop("libtorch_config_variant"),
         )
         newlist.append(conf)
 
     return newlist
 
-def predicate_exclude_macos(config):
-    return config.os == "linux" or config.os == "windows"
+def predicate_exclude_nonlinux_and_libtorch(config):
+    return config.os == "linux"
 
 
 def get_nightly_uploads():
     configs = gen_build_env_list(False)
     mylist = []
     for conf in configs:
-        phase_dependency = "test" if predicate_exclude_macos(conf) else "build"
-        mylist.append(conf.gen_upload_job("upload", phase_dependency))
+        phase_dependency = "test" if predicate_exclude_nonlinux_and_libtorch(conf) else "build"
+        mylist.append(conf.gen_workflow_job("upload", phase_dependency, nightly=True))
 
     return mylist
 
-def get_post_upload_jobs():
-    return [
-        {
-            "update_s3_htmls": {
-                "name": "update_s3_htmls",
-                "context": "org-member",
-                "filters": branch_filters.gen_filter_dict(
-                    branches_list=["postnightly"],
-                ),
-            },
-        },
-    ]
 
 def get_nightly_tests():
 
     configs = gen_build_env_list(False)
-    filtered_configs = filter(predicate_exclude_macos, configs)
+    filtered_configs = filter(predicate_exclude_nonlinux_and_libtorch, configs)
 
     tests = []
     for conf_options in filtered_configs:
```
**.circleci/cimodel/data/caffe2_build_data.py** (new file, 81 lines)

```python
from cimodel.lib.conf_tree import ConfigNode, XImportant
from cimodel.lib.conf_tree import Ver


CONFIG_TREE_DATA = [
    (Ver("ubuntu", "16.04"), [
        ([Ver("gcc", "5")], [XImportant("onnx_py2")]),
        ([Ver("clang", "7")], [XImportant("onnx_py3.6")]),
    ]),
]


class TreeConfigNode(ConfigNode):
    def __init__(self, parent, node_name, subtree):
        super(TreeConfigNode, self).__init__(parent, self.modify_label(node_name))
        self.subtree = subtree
        self.init2(node_name)

    # noinspection PyMethodMayBeStatic
    def modify_label(self, label):
        return str(label)

    def init2(self, node_name):
        pass

    def get_children(self):
        return [self.child_constructor()(self, k, v) for (k, v) in self.subtree]

    def is_build_only(self):
        if str(self.find_prop("language_version")) == "onnx_py3.6":
            return False
        return set(str(c) for c in self.find_prop("compiler_version")).intersection({
            "clang3.8",
            "clang3.9",
            "clang7",
            "android",
        }) or self.find_prop("distro_version").name == "macos"


class TopLevelNode(TreeConfigNode):
    def __init__(self, node_name, subtree):
        super(TopLevelNode, self).__init__(None, node_name, subtree)

    # noinspection PyMethodMayBeStatic
    def child_constructor(self):
        return DistroConfigNode


class DistroConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["distro_version"] = node_name

    # noinspection PyMethodMayBeStatic
    def child_constructor(self):
        return CompilerConfigNode


class CompilerConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["compiler_version"] = node_name

    # noinspection PyMethodMayBeStatic
    def child_constructor(self):
        return LanguageConfigNode


class LanguageConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["language_version"] = node_name
        self.props["build_only"] = self.is_build_only()

    def child_constructor(self):
        return ImportantConfigNode


class ImportantConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["important"] = True

    def get_children(self):
        return []
```
**.circleci/cimodel/data/caffe2_build_definitions.py** (new file, 161 lines)

```python
from collections import OrderedDict

import cimodel.data.dimensions as dimensions
import cimodel.lib.conf_tree as conf_tree
from cimodel.lib.conf_tree import Ver
import cimodel.lib.miniutils as miniutils
from cimodel.data.caffe2_build_data import CONFIG_TREE_DATA, TopLevelNode


from dataclasses import dataclass


DOCKER_IMAGE_PATH_BASE = "308535385114.dkr.ecr.us-east-1.amazonaws.com/caffe2/"

DOCKER_IMAGE_VERSION = 345


@dataclass
class Conf:
    language: str
    distro: Ver
    # There could be multiple compiler versions configured (e.g. nvcc
    # for gpu files and host compiler (gcc/clang) for cpu files)
    compilers: [Ver]
    build_only: bool
    is_important: bool

    @property
    def compiler_names(self):
        return [c.name for c in self.compilers]

    # TODO: Eventually we can probably just remove the cudnn7 everywhere.
    def get_cudnn_insertion(self):

        omit = self.language == "onnx_py2" \
            or self.language == "onnx_py3.6" \
            or set(self.compiler_names).intersection({"android", "mkl", "clang"}) \
            or str(self.distro) in ["ubuntu14.04", "macos10.13"]

        return [] if omit else ["cudnn7"]

    def get_build_name_root_parts(self):
        return [
            "caffe2",
            self.language,
        ] + self.get_build_name_middle_parts()

    def get_build_name_middle_parts(self):
        return [str(c) for c in self.compilers] + self.get_cudnn_insertion() + [str(self.distro)]

    def construct_phase_name(self, phase):
        root_parts = self.get_build_name_root_parts()
        return "_".join(root_parts + [phase]).replace(".", "_")

    def get_platform(self):
        platform = self.distro.name
        if self.distro.name != "macos":
            platform = "linux"
        return platform

    def gen_docker_image(self):

        lang_substitutions = {
            "onnx_py2": "py2",
            "onnx_py3.6": "py3.6",
            "cmake": "py2",
        }

        lang = miniutils.override(self.language, lang_substitutions)
        parts = [lang] + self.get_build_name_middle_parts()
        return miniutils.quote(DOCKER_IMAGE_PATH_BASE + "-".join(parts) + ":" + str(DOCKER_IMAGE_VERSION))

    def gen_workflow_params(self, phase):
        parameters = OrderedDict()
        lang_substitutions = {
            "onnx_py2": "onnx-py2",
            "onnx_py3.6": "onnx-py3.6",
        }

        lang = miniutils.override(self.language, lang_substitutions)

        parts = [
            "caffe2",
            lang,
        ] + self.get_build_name_middle_parts() + [phase]

        build_env_name = "-".join(parts)
        parameters["build_environment"] = miniutils.quote(build_env_name)
        if "ios" in self.compiler_names:
            parameters["build_ios"] = miniutils.quote("1")
        if phase == "test":
            # TODO cuda should not be considered a compiler
            if "cuda" in self.compiler_names:
                parameters["use_cuda_docker_runtime"] = miniutils.quote("1")

        if self.distro.name != "macos":
            parameters["docker_image"] = self.gen_docker_image()
        if self.build_only:
            parameters["build_only"] = miniutils.quote("1")
        if phase == "test":
            resource_class = "large" if "cuda" not in self.compiler_names else "gpu.medium"
            parameters["resource_class"] = resource_class

        return parameters

    def gen_workflow_job(self, phase):
        job_def = OrderedDict()
        job_def["name"] = self.construct_phase_name(phase)
        job_def["requires"] = ["setup"]

        if phase == "test":
            job_def["requires"].append(self.construct_phase_name("build"))
            job_name = "caffe2_" + self.get_platform() + "_test"
        else:
            job_name = "caffe2_" + self.get_platform() + "_build"

        if not self.is_important:
            job_def["filters"] = {"branches": {"only": ["master", r"/ci-all\/.*/"]}}
        job_def.update(self.gen_workflow_params(phase))
        return {job_name : job_def}


def get_root():
    return TopLevelNode("Caffe2 Builds", CONFIG_TREE_DATA)


def instantiate_configs():

    config_list = []

    root = get_root()
    found_configs = conf_tree.dfs(root)
    for fc in found_configs:
        c = Conf(
            language=fc.find_prop("language_version"),
            distro=fc.find_prop("distro_version"),
            compilers=fc.find_prop("compiler_version"),
            build_only=fc.find_prop("build_only"),
            is_important=fc.find_prop("important"),
        )

        config_list.append(c)

    return config_list


def get_workflow_jobs():

    configs = instantiate_configs()

    x = []
    for conf_options in configs:

        phases = ["build"]
        if not conf_options.build_only:
            phases = dimensions.PHASES

        for phase in phases:
            x.append(conf_options.gen_workflow_job(phase))

    return x
```
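`construct_phase_name` above just joins the name parts with underscores and de-dots the result. Re-computed by hand for one sample config (assuming `Ver("gcc", "5")` renders as `"gcc5"`, and using `onnx_py2`, where `get_cudnn_insertion` omits `cudnn7`):

```python
# Restatement of construct_phase_name's string logic for one sample config
# (onnx_py2 omits the cudnn7 insertion per get_cudnn_insertion above; the
# "gcc5" rendering of Ver("gcc", "5") is assumed).
root_parts = ["caffe2", "onnx_py2", "gcc5", "ubuntu16.04"]
phase = "build"
name = "_".join(root_parts + [phase]).replace(".", "_")
print(name)  # caffe2_onnx_py2_gcc5_ubuntu16_04_build
```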
```diff
@@ -1,24 +1,15 @@
 PHASES = ["build", "test"]
 
 CUDA_VERSIONS = [
+    None,  # cpu build
     "92",
+    "100",
     "101",
-    "102",
-    "110",
 ]
 
-ROCM_VERSIONS = [
-    "3.7",
-    "3.8",
-]
-
-ROCM_VERSION_LABELS = ["rocm" + v for v in ROCM_VERSIONS]
-
-GPU_VERSIONS = [None] + ["cuda" + v for v in CUDA_VERSIONS] + ROCM_VERSION_LABELS
-
 STANDARD_PYTHON_VERSIONS = [
+    "2.7",
+    "3.5",
     "3.6",
     "3.7",
     "3.8",
-    "3.9"
 ]
```
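The left-hand side derives its GPU labels from these lists with the two comprehensions shown in the diff. Evaluated standalone (using the reconstructed left-side version lists), the labels come out as:

```python
# Copies of the label-building lines from the left-hand side of the diff
# above, using the reconstructed left-side version lists.
CUDA_VERSIONS = ["92", "101", "102", "110"]
ROCM_VERSIONS = ["3.7", "3.8"]

ROCM_VERSION_LABELS = ["rocm" + v for v in ROCM_VERSIONS]
GPU_VERSIONS = [None] + ["cuda" + v for v in CUDA_VERSIONS] + ROCM_VERSION_LABELS

print(ROCM_VERSION_LABELS)  # ['rocm3.7', 'rocm3.8']
print(GPU_VERSIONS)
# [None, 'cuda92', 'cuda101', 'cuda102', 'cuda110', 'rocm3.7', 'rocm3.8']
```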
```diff
@@ -3,13 +3,18 @@ from cimodel.lib.conf_tree import ConfigNode, X, XImportant
 
 CONFIG_TREE_DATA = [
     ("xenial", [
         (None, [
+            XImportant("2.7.9"),
+            X("2.7"),
+            XImportant("3.5"),  # Not run on all PRs, but should be included on [test all]
+            X("nightly"),
         ]),
         ("gcc", [
             ("5.4", [  # All this subtree rebases to master and then build
                 XImportant("3.6"),
                 ("3.6", [
-                    ("important", [X(True)]),
-                    ("parallel_tbb", [X(True)]),
-                    ("parallel_native", [X(True)]),
-                    ("pure_torch", [X(True)]),
+                    ("parallel_tbb", [XImportant(True)]),
+                    ("parallel_native", [XImportant(True)]),
                 ]),
             ]),
             # TODO: bring back libtorch test
```
```diff
@@ -17,70 +22,40 @@ CONFIG_TREE_DATA = [
         ]),
         ("clang", [
             ("5", [
                 ("3.6", [
                     ("asan", [XImportant(True)]),
                 ]),
             ]),
             ("7", [
                 ("3.6", [
                     ("onnx", [XImportant(True)]),
                 ]),
                 XImportant("3.6"),  # This is actually the ASAN build
             ]),
             # ("7", [
             #     ("3.6", [
             #         ("xla", [XImportant(True)]),
             #     ]),
             # ]),
         ]),
         ("cuda", [
             ("9.2", [
             ("9", [
                 # Note there are magic strings here
                 # https://github.com/pytorch/pytorch/blob/master/.jenkins/pytorch/build.sh#L21
                 # and
                 # https://github.com/pytorch/pytorch/blob/master/.jenkins/pytorch/build.sh#L143
                 # and
                 # https://github.com/pytorch/pytorch/blob/master/.jenkins/pytorch/build.sh#L153
                 # (from https://github.com/pytorch/pytorch/pull/17323#discussion_r259453144)
                 XImportant("3.6"),
                 ("3.6", [
                     X(True),
                     ("cuda_gcc_override", [
                         ("gcc5.4", [
                             ('build_only', [XImportant(True)]),
                         ]),
                     ]),
                 ])
             ]),
             ("10.1", [
                 ("3.6", [
                     ('build_only', [X(True)]),
                 ]),
             ]),
             ("10.2", [
                 ("3.6", [
                     ("important", [X(True)]),
                     ("libtorch", [X(True)]),
                 ]),
             ]),
             ("11.0", [
                 ("3.8", [
                     X(True),
                     ("libtorch", [XImportant(True)])
                 ]),
             ]),
             ("9.2", [X("3.6")]),
             ("10", [X("3.6")]),
             ("10.1", [X("3.6")]),
         ]),
     ]),
     ("bionic", [
         ("clang", [
             ("9", [
                 XImportant("3.6"),
             ]),
             ("9", [
                 ("android", [
                     ("r19c", [
                         ("3.6", [
                             ("xla", [XImportant(True)]),
                             ("vulkan", [XImportant(True)]),
                         ]),
                     ]),
                 ]),
             ]),
         ]),
         ("gcc", [
             ("9", [
                 ("3.8", [
                     ("coverage", [XImportant(True)]),
                 ]),
             ]),
         ]),
         ("rocm", [
             ("3.7", [
                 ("3.6", [
                     ('build_only', [XImportant(True)]),
                 ]),
                 ("android_abi", [XImportant("x86_32")]),
                 ("android_abi", [X("x86_64")]),
                 ("android_abi", [X("arm-v7a")]),
                 ("android_abi", [X("arm-v8a")]),
             ])
         ]),
     ]),
 ]),
```
```diff
@@ -126,7 +101,6 @@ class DistroConfigNode(TreeConfigNode):
 
         next_nodes = {
             "xenial": XenialCompilerConfigNode,
-            "bionic": BionicCompilerConfigNode,
         }
         return next_nodes[distro]
 
```
@ -149,33 +123,16 @@ class ExperimentalFeatureConfigNode(TreeConfigNode):
|
||||
experimental_feature = self.find_prop("experimental_feature")
|
||||
|
||||
next_nodes = {
|
||||
"asan": AsanConfigNode,
|
||||
"xla": XlaConfigNode,
|
||||
"vulkan": VulkanConfigNode,
|
||||
"parallel_tbb": ParallelTBBConfigNode,
|
||||
"parallel_native": ParallelNativeConfigNode,
|
||||
"onnx": ONNXConfigNode,
|
||||
"libtorch": LibTorchConfigNode,
|
||||
"important": ImportantConfigNode,
|
||||
"build_only": BuildOnlyConfigNode,
|
||||
"cuda_gcc_override": CudaGccOverrideConfigNode,
|
||||
"coverage": CoverageConfigNode,
|
||||
"pure_torch": PureTorchConfigNode,
|
||||
"android_abi": AndroidAbiConfigNode,
|
||||
}
|
||||
return next_nodes[experimental_feature]
|
||||
|
||||
|
||||
class PureTorchConfigNode(TreeConfigNode):
|
||||
def modify_label(self, label):
|
||||
return "PURE_TORCH=" + str(label)
|
||||
|
||||
def init2(self, node_name):
|
||||
self.props["is_pure_torch"] = node_name
|
||||
|
||||
def child_constructor(self):
|
||||
return ImportantConfigNode
|
||||
|
||||
|
||||
class XlaConfigNode(TreeConfigNode):
|
||||
def modify_label(self, label):
|
||||
return "XLA=" + str(label)
|
||||
@ -186,40 +143,6 @@ class XlaConfigNode(TreeConfigNode):
|
||||
def child_constructor(self):
|
||||
return ImportantConfigNode
|
||||
|
||||
|
||||
class AsanConfigNode(TreeConfigNode):
|
||||
def modify_label(self, label):
|
||||
return "Asan=" + str(label)
|
||||
|
||||
def init2(self, node_name):
|
||||
self.props["is_asan"] = node_name
|
||||
|
||||
def child_constructor(self):
|
||||
return ImportantConfigNode
|
||||
|
||||
|
||||
class ONNXConfigNode(TreeConfigNode):
|
||||
def modify_label(self, label):
|
||||
return "Onnx=" + str(label)
|
||||
|
||||
def init2(self, node_name):
|
||||
self.props["is_onnx"] = node_name
|
||||
|
||||
def child_constructor(self):
|
||||
return ImportantConfigNode
|
||||
|
||||
|
||||
class VulkanConfigNode(TreeConfigNode):
|
||||
def modify_label(self, label):
|
||||
return "Vulkan=" + str(label)
|
||||
|
||||
def init2(self, node_name):
|
||||
self.props["is_vulkan"] = node_name
|
||||
|
||||
def child_constructor(self):
|
||||
return ImportantConfigNode
|
||||
|
||||
|
||||
class ParallelTBBConfigNode(TreeConfigNode):
|
||||
def modify_label(self, label):
|
||||
return "PARALLELTBB=" + str(label)
|
||||
@ -230,7 +153,6 @@ class ParallelTBBConfigNode(TreeConfigNode):
|
||||
def child_constructor(self):
|
||||
return ImportantConfigNode
|
||||
|
||||
|
||||
class ParallelNativeConfigNode(TreeConfigNode):
|
||||
def modify_label(self, label):
|
||||
return "PARALLELNATIVE=" + str(label)
|
||||
@ -241,7 +163,6 @@ class ParallelNativeConfigNode(TreeConfigNode):
|
||||
def child_constructor(self):
|
||||
return ImportantConfigNode
|
||||
|
||||
|
||||
class LibTorchConfigNode(TreeConfigNode):
|
||||
def modify_label(self, label):
|
||||
return "BUILD_TEST_LIBTORCH=" + str(label)
|
||||
@ -252,31 +173,13 @@ class LibTorchConfigNode(TreeConfigNode):
|
||||
def child_constructor(self):
|
||||
return ImportantConfigNode
|
||||
|
||||
|
||||
class CudaGccOverrideConfigNode(TreeConfigNode):
|
||||
def init2(self, node_name):
|
||||
self.props["cuda_gcc_override"] = node_name
|
||||
|
||||
def child_constructor(self):
|
||||
return ExperimentalFeatureConfigNode
|
||||
|
||||
class BuildOnlyConfigNode(TreeConfigNode):
|
||||
class AndroidAbiConfigNode(TreeConfigNode):
|
||||
|
||||
def init2(self, node_name):
|
||||
self.props["build_only"] = node_name
|
||||
self.props["android_abi"] = node_name
|
||||
|
||||
def child_constructor(self):
|
||||
return ExperimentalFeatureConfigNode
|
||||
|
||||
|
||||
class CoverageConfigNode(TreeConfigNode):
|
||||
|
||||
def init2(self, node_name):
|
||||
self.props["is_coverage"] = node_name
|
||||
|
||||
def child_constructor(self):
|
||||
return ExperimentalFeatureConfigNode
|
||||
|
||||
return ImportantConfigNode
|
||||
|
||||
class ImportantConfigNode(TreeConfigNode):
|
||||
def modify_label(self, label):
|
||||
@ -303,20 +206,6 @@ class XenialCompilerConfigNode(TreeConfigNode):
|
||||
return XenialCompilerVersionConfigNode if self.props["compiler_name"] else PyVerConfigNode
|
||||
|
||||
|
||||
class BionicCompilerConfigNode(TreeConfigNode):
|
||||
|
||||
def modify_label(self, label):
|
||||
return label or "<unspecified>"
|
||||
|
||||
def init2(self, node_name):
|
||||
self.props["compiler_name"] = node_name
|
||||
|
||||
# noinspection PyMethodMayBeStatic
|
||||
def child_constructor(self):
|
||||
|
||||
return BionicCompilerVersionConfigNode if self.props["compiler_name"] else PyVerConfigNode
|
||||
|
||||
|
||||
class XenialCompilerVersionConfigNode(TreeConfigNode):
|
||||
def init2(self, node_name):
|
||||
self.props["compiler_version"] = node_name
|
||||
@ -324,12 +213,3 @@ class XenialCompilerVersionConfigNode(TreeConfigNode):
|
||||
# noinspection PyMethodMayBeStatic
|
||||
def child_constructor(self):
|
||||
return PyVerConfigNode
|
||||
|
||||
|
||||
class BionicCompilerVersionConfigNode(TreeConfigNode):
|
||||
def init2(self, node_name):
|
||||
self.props["compiler_version"] = node_name
|
||||
|
||||
# noinspection PyMethodMayBeStatic
|
||||
def child_constructor(self):
|
||||
return PyVerConfigNode
|
||||
|
||||
@ -1,13 +1,19 @@
|
||||
from collections import OrderedDict
|
||||
from dataclasses import dataclass, field
|
||||
from typing import List, Optional
|
||||
|
||||
from cimodel.data.pytorch_build_data import TopLevelNode, CONFIG_TREE_DATA
|
||||
import cimodel.data.dimensions as dimensions
|
||||
import cimodel.lib.conf_tree as conf_tree
|
||||
import cimodel.lib.miniutils as miniutils
|
||||
from cimodel.data.pytorch_build_data import CONFIG_TREE_DATA, TopLevelNode
|
||||
from cimodel.data.simple.util.branch_filters import gen_filter_dict, RC_PATTERN
|
||||
from cimodel.data.simple.util.docker_constants import gen_docker_image
|
||||
|
||||
from dataclasses import dataclass, field
|
||||
from typing import List, Optional
|
||||
|
||||
|
||||
DOCKER_IMAGE_PATH_BASE = "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/"
|
||||
|
||||
# ARE YOU EDITING THIS NUMBER? MAKE SURE YOU READ THE GUIDANCE AT THE
|
||||
# TOP OF .circleci/config.yml
|
||||
DOCKER_IMAGE_VERSION = 405
|
||||
|
||||
|
||||
@dataclass
|
||||
@ -17,25 +23,18 @@ class Conf:
|
||||
parms_list_ignored_for_docker_image: Optional[List[str]] = None
|
||||
pyver: Optional[str] = None
|
||||
cuda_version: Optional[str] = None
|
||||
rocm_version: Optional[str] = None
|
||||
# TODO expand this to cover all the USE_* that we want to test for
|
||||
# tesnrorrt, leveldb, lmdb, redis, opencv, mkldnn, ideep, etc.
|
||||
# (from https://github.com/pytorch/pytorch/pull/17323#discussion_r259453608)
|
||||
is_xla: bool = False
|
||||
is_vulkan: bool = False
|
||||
is_pure_torch: bool = False
|
||||
restrict_phases: Optional[List[str]] = None
|
||||
gpu_resource: Optional[str] = None
|
||||
dependent_tests: List = field(default_factory=list)
|
||||
parent_build: Optional["Conf"] = None
|
||||
parent_build: Optional['Conf'] = None
|
||||
is_libtorch: bool = False
|
||||
is_important: bool = False
|
||||
parallel_backend: Optional[str] = None
|
||||
|
||||
@staticmethod
|
||||
def is_test_phase(phase):
|
||||
return "test" in phase
|
||||
|
||||
# TODO: Eliminate the special casing for docker paths
|
||||
# In the short term, we *will* need to support special casing as docker images are merged for caffe2 and pytorch
|
||||
def get_parms(self, for_docker):
|
||||
@ -47,47 +46,31 @@ class Conf:
|
||||
leading.append("pytorch")
|
||||
if self.is_xla and not for_docker:
|
||||
leading.append("xla")
|
||||
if self.is_vulkan and not for_docker:
|
||||
leading.append("vulkan")
|
||||
if self.is_libtorch and not for_docker:
|
||||
leading.append("libtorch")
|
||||
if self.is_pure_torch and not for_docker:
|
||||
leading.append("pure_torch")
|
||||
if self.parallel_backend is not None and not for_docker:
|
||||
leading.append(self.parallel_backend)
|
||||
|
||||
cuda_parms = []
|
||||
if self.cuda_version:
|
||||
cudnn = "cudnn8" if self.cuda_version.startswith("11.") else "cudnn7"
|
||||
cuda_parms.extend(["cuda" + self.cuda_version, cudnn])
|
||||
if self.rocm_version:
|
||||
cuda_parms.extend([f"rocm{self.rocm_version}"])
|
||||
cuda_parms.extend(["cuda" + self.cuda_version, "cudnn7"])
|
||||
result = leading + ["linux", self.distro] + cuda_parms + self.parms
|
||||
if not for_docker and self.parms_list_ignored_for_docker_image is not None:
|
||||
if (not for_docker and self.parms_list_ignored_for_docker_image is not None):
|
||||
result = result + self.parms_list_ignored_for_docker_image
|
||||
return result
|
||||
|
||||
def gen_docker_image_path(self):
|
||||
parms_source = self.parent_build or self
|
||||
base_build_env_name = "-".join(parms_source.get_parms(True))
|
||||
image_name, _ = gen_docker_image(base_build_env_name)
|
||||
return miniutils.quote(image_name)
|
||||
|
||||
def gen_docker_image_requires(self):
|
||||
parms_source = self.parent_build or self
|
||||
base_build_env_name = "-".join(parms_source.get_parms(True))
|
||||
_, requires = gen_docker_image(base_build_env_name)
|
||||
return miniutils.quote(requires)
|
||||
|
||||
return miniutils.quote(DOCKER_IMAGE_PATH_BASE + base_build_env_name + ":" + str(DOCKER_IMAGE_VERSION))
|
||||
|
||||
def get_build_job_name_pieces(self, build_or_test):
|
||||
return self.get_parms(False) + [build_or_test]
|
||||
|
||||
def gen_build_name(self, build_or_test):
|
||||
return (
|
||||
("_".join(map(str, self.get_build_job_name_pieces(build_or_test))))
|
||||
.replace(".", "_")
|
||||
.replace("-", "_")
|
||||
)
|
||||
return ("_".join(map(str, self.get_build_job_name_pieces(build_or_test)))).replace(".", "_").replace("-", "_")
|
||||
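The refactored `gen_build_name` above performs the same mangling as the old one-liner: the job-name pieces are joined with `_`, and `.` / `-` are replaced so the result is a valid CircleCI job identifier. A standalone illustration (the pieces list here is illustrative, not taken from a real config):

```python
# Sketch of the name mangling in Conf.gen_build_name: pieces are joined
# with "_", then "." and "-" are replaced, so "cuda10.2" -> "cuda10_2".
pieces = ["pytorch", "linux", "xenial", "cuda10.2", "cudnn7", "py3.6", "gcc7", "build"]
name = "_".join(map(str, pieces)).replace(".", "_").replace("-", "_")
print(name)
```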

    def get_dependents(self):
        return self.dependent_tests or []
@@ -99,26 +82,22 @@ class Conf:
        build_env_name = "-".join(map(str, build_job_name_pieces))
        parameters["build_environment"] = miniutils.quote(build_env_name)
        parameters["docker_image"] = self.gen_docker_image_path()
        if Conf.is_test_phase(phase) and self.gpu_resource:
        if phase == "test" and self.gpu_resource:
            parameters["use_cuda_docker_runtime"] = miniutils.quote("1")
        if Conf.is_test_phase(phase):
        if phase == "test":
            resource_class = "large"
            if self.gpu_resource:
                resource_class = "gpu." + self.gpu_resource
            if self.rocm_version is not None:
                resource_class = "pytorch/amd-gpu"
            parameters["resource_class"] = resource_class
        if phase == "build" and self.rocm_version is not None:
            parameters["resource_class"] = "xlarge"
        if hasattr(self, 'filters'):
            parameters['filters'] = self.filters
        return parameters

    def gen_workflow_job(self, phase):
        # All jobs require the setup job
        job_def = OrderedDict()
        job_def["name"] = self.gen_build_name(phase)
        job_def["requires"] = ["setup"]

        if Conf.is_test_phase(phase):
        if phase == "test":

            # TODO When merging the caffe2 and pytorch jobs, it might be convenient for a while to make a
            # caffe2 test job dependent on a pytorch build job. This way we could quickly dedup the repeated
@@ -126,63 +105,43 @@ class Conf:
            # pytorch build job (from https://github.com/pytorch/pytorch/pull/17323#discussion_r259452641)

            dependency_build = self.parent_build or self
            job_def["requires"] = [dependency_build.gen_build_name("build")]
            job_def["requires"].append(dependency_build.gen_build_name("build"))
            job_name = "pytorch_linux_test"
        else:
            job_name = "pytorch_linux_build"
            job_def["requires"] = [self.gen_docker_image_requires()]


        if not self.is_important:
            job_def["filters"] = gen_filter_dict()
            # If you update this, update
            # caffe2_build_definitions.py too
            job_def["filters"] = {"branches": {"only": ["master", r"/ci-all\/.*/"]}}
        job_def.update(self.gen_workflow_params(phase))

        return {job_name: job_def}
        return {job_name : job_def}


# TODO This is a hack to special case some configs just for the workflow list
class HiddenConf(object):
    def __init__(self, name, parent_build=None, filters=None):
    def __init__(self, name, parent_build=None):
        self.name = name
        self.parent_build = parent_build
        self.filters = filters

    def gen_workflow_job(self, phase):
        return {
            self.gen_build_name(phase): {
                "requires": [self.parent_build.gen_build_name("build")],
                "filters": self.filters,
            }
        }
        return {self.gen_build_name(phase): {"requires": [self.parent_build.gen_build_name("build")]}}

    def gen_build_name(self, _):
        return self.name

class DocPushConf(object):
    def __init__(self, name, parent_build=None, branch="master"):
        self.name = name
        self.parent_build = parent_build
        self.branch = branch

    def gen_workflow_job(self, phase):
        return {
            "pytorch_doc_push": {
                "name": self.name,
                "branch": self.branch,
                "requires": [self.parent_build],
                "context": "org-member",
                "filters": gen_filter_dict(branches_list=["nightly"],
                                           tags_list=RC_PATTERN)
            }
        }

# TODO Convert these to graph nodes
def gen_dependent_configs(xenial_parent_config):

    extra_parms = [
        (["multigpu"], "large"),
        (["nogpu", "NO_AVX2"], None),
        (["nogpu", "NO_AVX"], None),
        (["NO_AVX2"], "medium"),
        (["NO_AVX", "NO_AVX2"], "medium"),
        (["slow"], "medium"),
        (["nogpu"], None),
    ]

    configs = []
@@ -191,60 +150,19 @@ def gen_dependent_configs(xenial_parent_config):
        c = Conf(
            xenial_parent_config.distro,
            ["py3"] + parms,
            pyver=xenial_parent_config.pyver,
            pyver="3.6",
            cuda_version=xenial_parent_config.cuda_version,
            restrict_phases=["test"],
            gpu_resource=gpu,
            parent_build=xenial_parent_config,
            is_important=False,
            is_important=xenial_parent_config.is_important,
        )

        configs.append(c)

    return configs
    for x in ["pytorch_python_doc_push", "pytorch_cpp_doc_push"]:
        configs.append(HiddenConf(x, parent_build=xenial_parent_config))


def gen_docs_configs(xenial_parent_config):
    configs = []

    configs.append(
        HiddenConf(
            "pytorch_python_doc_build",
            parent_build=xenial_parent_config,
            filters=gen_filter_dict(branches_list=r"/.*/",
                                    tags_list=RC_PATTERN),
        )
    )
    configs.append(
        DocPushConf(
            "pytorch_python_doc_push",
            parent_build="pytorch_python_doc_build",
            branch="site",
        )
    )

    configs.append(
        HiddenConf(
            "pytorch_cpp_doc_build",
            parent_build=xenial_parent_config,
            filters=gen_filter_dict(branches_list=r"/.*/",
                                    tags_list=RC_PATTERN),
        )
    )
    configs.append(
        DocPushConf(
            "pytorch_cpp_doc_push",
            parent_build="pytorch_cpp_doc_build",
            branch="master",
        )
    )

    configs.append(
        HiddenConf(
            "pytorch_doc_test",
            parent_build=xenial_parent_config
        )
    )
    return configs


@@ -264,17 +182,13 @@ def instantiate_configs():

    root = get_root()
    found_configs = conf_tree.dfs(root)
    restrict_phases = None
    for fc in found_configs:

        restrict_phases = None
        distro_name = fc.find_prop("distro_name")
        compiler_name = fc.find_prop("compiler_name")
        compiler_version = fc.find_prop("compiler_version")
        is_xla = fc.find_prop("is_xla") or False
        is_asan = fc.find_prop("is_asan") or False
        is_onnx = fc.find_prop("is_onnx") or False
        is_pure_torch = fc.find_prop("is_pure_torch") or False
        is_vulkan = fc.find_prop("is_vulkan") or False
        parms_list_ignored_for_docker_image = []

        python_version = None
@@ -285,14 +199,9 @@ def instantiate_configs():
            parms_list = ["py" + fc.find_prop("pyver")]

        cuda_version = None
        rocm_version = None
        if compiler_name == "cuda":
            cuda_version = fc.find_prop("compiler_version")

        elif compiler_name == "rocm":
            rocm_version = fc.find_prop("compiler_version")
            restrict_phases = ["build", "test1", "test2", "caffe2_test"]

        elif compiler_name == "android":
            android_ndk_version = fc.find_prop("compiler_version")
            # TODO: do we need clang to compile host binaries like protoc?
@@ -301,38 +210,25 @@ def instantiate_configs():
            android_abi = fc.find_prop("android_abi")
            parms_list_ignored_for_docker_image.append(android_abi)
            restrict_phases = ["build"]
            fc.props["is_important"] = True

        elif compiler_name:
            gcc_version = compiler_name + (fc.find_prop("compiler_version") or "")
            parms_list.append(gcc_version)

        if is_asan:
            parms_list.append("asan")
            python_version = fc.find_prop("pyver")
            parms_list[0] = fc.find_prop("abbreviated_pyver")
            restrict_phases = ["build", "test1", "test2"]
            # TODO: This is a nasty special case
            if compiler_name == "clang" and not is_xla:
                parms_list.append("asan")
                python_version = fc.find_prop("pyver")
                parms_list[0] = fc.find_prop("abbreviated_pyver")

        if is_onnx:
            parms_list.append("onnx")
            python_version = fc.find_prop("pyver")
            parms_list[0] = fc.find_prop("abbreviated_pyver")
            restrict_phases = ["build", "ort_test1", "ort_test2"]

        if cuda_version:
            cuda_gcc_version = fc.find_prop("cuda_gcc_override") or "gcc7"
            parms_list.append(cuda_gcc_version)
            if cuda_version in ["9.2", "10", "10.1"]:
                # TODO The gcc version is orthogonal to CUDA version?
                parms_list.append("gcc7")

        is_libtorch = fc.find_prop("is_libtorch") or False
        is_important = fc.find_prop("is_important") or False
        parallel_backend = fc.find_prop("parallel_backend") or None
        build_only = fc.find_prop("build_only") or False
        is_coverage = fc.find_prop("is_coverage") or False
        # TODO: fix pure_torch python test packaging issue.
        if build_only or is_pure_torch:
            restrict_phases = ["build"]
        if is_coverage and restrict_phases is None:
            restrict_phases = ["build", "coverage_test"]


        gpu_resource = None
        if cuda_version and cuda_version != "10":
@@ -344,10 +240,7 @@ def instantiate_configs():
            parms_list_ignored_for_docker_image,
            python_version,
            cuda_version,
            rocm_version,
            is_xla,
            is_vulkan,
            is_pure_torch,
            restrict_phases,
            gpu_resource,
            is_libtorch=is_libtorch,
@@ -355,35 +248,13 @@ def instantiate_configs():
            parallel_backend=parallel_backend,
        )

        # run docs builds on "pytorch-linux-xenial-py3.6-gcc5.4". Docs builds
        # should run on a CPU-only build that runs on all PRs.
        # XXX should this be updated to a more modern build? Projects are
        # beginning to drop python3.6
        if (
            distro_name == "xenial"
            and fc.find_prop("pyver") == "3.6"
            and cuda_version is None
            and parallel_backend is None
            and not is_vulkan
            and not is_pure_torch
            and compiler_name == "gcc"
            and fc.find_prop("compiler_version") == "5.4"
        ):
            c.filters = gen_filter_dict(branches_list=r"/.*/",
                                        tags_list=RC_PATTERN)
            c.dependent_tests = gen_docs_configs(c)

        if cuda_version == "10.2" and python_version == "3.6" and not is_libtorch:
        if cuda_version == "9" and python_version == "3.6" and not is_libtorch:
            c.dependent_tests = gen_dependent_configs(c)

        if (
            compiler_name == "gcc"
            and compiler_version == "5.4"
            and not is_libtorch
            and not is_vulkan
            and not is_pure_torch
            and parallel_backend is None
        ):
        if (compiler_name == "gcc"
                and compiler_version == "5.4"
                and not is_libtorch
                and parallel_backend is None):
            bc_breaking_check = Conf(
                "backward-compatibility-check",
                [],
@@ -404,7 +275,7 @@ def get_workflow_jobs():

    config_list = instantiate_configs()

    x = []
    x = ["setup"]
    for conf_options in config_list:

        phases = conf_options.restrict_phases or dimensions.PHASES
@@ -412,7 +283,7 @@ def get_workflow_jobs():
        for phase in phases:

            # TODO why does this not have a test?
            if Conf.is_test_phase(phase) and conf_options.cuda_version == "10":
            if phase == "test" and conf_options.cuda_version == "10":
                continue

            x.append(conf_options.gen_workflow_job(phase))

@@ -1,28 +0,0 @@
from collections import OrderedDict

from cimodel.data.simple.util.branch_filters import gen_filter_dict
from cimodel.lib.miniutils import quote


CHANNELS_TO_PRUNE = ["pytorch-nightly", "pytorch-test"]
PACKAGES_TO_PRUNE = "pytorch torchvision torchaudio torchtext ignite torchcsprng"


def gen_workflow_job(channel: str):
    return OrderedDict(
        {
            "anaconda_prune": OrderedDict(
                {
                    "name": f"anaconda-prune-{channel}",
                    "context": quote("org-member"),
                    "packages": quote(PACKAGES_TO_PRUNE),
                    "channel": channel,
                    "filters": gen_filter_dict(branches_list=["postnightly"]),
                }
            )
        }
    )


def get_workflow_jobs():
    return [gen_workflow_job(channel) for channel in CHANNELS_TO_PRUNE]
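For a feel of the structure `gen_workflow_job` above emits, here is a trimmed stand-alone sketch, with a stand-in `quote()` assumed to behave like `cimodel.lib.miniutils.quote` (wrapping its argument in double quotes):

```python
# Trimmed sketch of the anaconda_prune job dict produced above.
# quote() here is a stand-in assumption for cimodel.lib.miniutils.quote.
from collections import OrderedDict

def quote(s):
    return '"' + s + '"'

channel = "pytorch-nightly"
job = OrderedDict({
    "anaconda_prune": OrderedDict({
        "name": f"anaconda-prune-{channel}",
        "context": quote("org-member"),
        "channel": channel,
    })
})
print(job["anaconda_prune"]["name"])
```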
@@ -1,106 +0,0 @@
import cimodel.data.simple.util.branch_filters as branch_filters
from cimodel.data.simple.util.docker_constants import (
    DOCKER_IMAGE_NDK, DOCKER_REQUIREMENT_NDK
)


class AndroidJob:
    def __init__(self,
                 variant,
                 template_name,
                 is_master_only=True):

        self.variant = variant
        self.template_name = template_name
        self.is_master_only = is_master_only

    def gen_tree(self):

        base_name_parts = [
            "pytorch",
            "linux",
            "xenial",
            "py3",
            "clang5",
            "android",
            "ndk",
            "r19c",
        ] + self.variant + [
            "build",
        ]

        full_job_name = "_".join(base_name_parts)
        build_env_name = "-".join(base_name_parts)

        props_dict = {
            "name": full_job_name,
            "build_environment": "\"{}\"".format(build_env_name),
            "docker_image": "\"{}\"".format(DOCKER_IMAGE_NDK),
            "requires": [DOCKER_REQUIREMENT_NDK]
        }

        if self.is_master_only:
            props_dict["filters"] = branch_filters.gen_filter_dict(branch_filters.NON_PR_BRANCH_LIST)

        return [{self.template_name: props_dict}]


class AndroidGradleJob:
    def __init__(self,
                 job_name,
                 template_name,
                 dependencies,
                 is_master_only=True,
                 is_pr_only=False):

        self.job_name = job_name
        self.template_name = template_name
        self.dependencies = dependencies
        self.is_master_only = is_master_only
        self.is_pr_only = is_pr_only

    def gen_tree(self):

        props_dict = {
            "name": self.job_name,
            "requires": self.dependencies,
        }

        if self.is_master_only:
            props_dict["filters"] = branch_filters.gen_filter_dict(branch_filters.NON_PR_BRANCH_LIST)
        elif self.is_pr_only:
            props_dict["filters"] = branch_filters.gen_filter_dict(branch_filters.PR_BRANCH_LIST)

        return [{self.template_name: props_dict}]


WORKFLOW_DATA = [
    AndroidJob(["x86_32"], "pytorch_linux_build", is_master_only=False),
    AndroidJob(["x86_64"], "pytorch_linux_build"),
    AndroidJob(["arm", "v7a"], "pytorch_linux_build"),
    AndroidJob(["arm", "v8a"], "pytorch_linux_build"),
    AndroidJob(["vulkan", "x86_32"], "pytorch_linux_build", is_master_only=False),
    AndroidGradleJob(
        "pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-build-x86_32",
        "pytorch_android_gradle_build-x86_32",
        ["pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_32_build"],
        is_master_only=False,
        is_pr_only=True),
    AndroidGradleJob(
        "pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single",
        "pytorch_android_gradle_custom_build_single",
        [DOCKER_REQUIREMENT_NDK],
        is_master_only=False,
        is_pr_only=True),
    AndroidGradleJob(
        "pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-build",
        "pytorch_android_gradle_build",
        ["pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_32_build",
         "pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_64_build",
         "pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v7a_build",
         "pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v8a_build"]),
]


def get_workflow_jobs():
    return [item.gen_tree() for item in WORKFLOW_DATA]
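A detail worth noting in `AndroidJob.gen_tree` above: the same parts list drives both identifiers, joined with `_` for the CircleCI job name and with `-` for the build environment string. A minimal sketch for the `x86_32` variant:

```python
# The same parts list yields both identifiers in AndroidJob.gen_tree:
# "_" join -> job name, "-" join -> build environment.
base_name_parts = ["pytorch", "linux", "xenial", "py3", "clang5",
                   "android", "ndk", "r19c", "x86_32", "build"]
full_job_name = "_".join(base_name_parts)
build_env_name = "-".join(base_name_parts)
print(full_job_name)
print(build_env_name)
```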
@@ -1,69 +0,0 @@
from cimodel.data.simple.util.docker_constants import (
    DOCKER_IMAGE_GCC7,
    DOCKER_REQUIREMENT_GCC7
)


def gen_job_name(phase):
    job_name_parts = [
        "pytorch",
        "bazel",
        phase,
    ]

    return "_".join(job_name_parts)


class BazelJob:
    def __init__(self, phase, extra_props=None):
        self.phase = phase
        self.extra_props = extra_props or {}

    def gen_tree(self):

        template_parts = [
            "pytorch",
            "linux",
            "bazel",
            self.phase,
        ]

        build_env_parts = [
            "pytorch",
            "linux",
            "xenial",
            "py3.6",
            "gcc7",
            "bazel",
            self.phase,
        ]

        full_job_name = gen_job_name(self.phase)
        build_env_name = "-".join(build_env_parts)

        extra_requires = (
            [gen_job_name("build")] if self.phase == "test" else
            [DOCKER_REQUIREMENT_GCC7]
        )

        props_dict = {
            "build_environment": build_env_name,
            "docker_image": DOCKER_IMAGE_GCC7,
            "name": full_job_name,
            "requires": extra_requires,
        }

        props_dict.update(self.extra_props)

        template_name = "_".join(template_parts)
        return [{template_name: props_dict}]


WORKFLOW_DATA = [
    BazelJob("build", {"resource_class": "large"}),
    BazelJob("test"),
]


def get_workflow_jobs():
    return [item.gen_tree() for item in WORKFLOW_DATA]
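The dependency wiring in `BazelJob.gen_tree` above can be sketched in isolation: the `test` phase requires the generated `build` job name, while `build` would instead require the Docker image job (`DOCKER_REQUIREMENT_GCC7` in the real file; `"docker_build_job"` below is a placeholder assumption):

```python
# Sketch of BazelJob's requires wiring; "docker_build_job" stands in for
# the real DOCKER_REQUIREMENT_GCC7 dependency of the build phase.
def gen_job_name(phase):
    return "_".join(["pytorch", "bazel", phase])

requires = {
    phase: [gen_job_name("build")] if phase == "test" else ["docker_build_job"]
    for phase in ("build", "test")
}
print(requires)
```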
@ -1,193 +0,0 @@
|
||||
"""
|
||||
TODO: Refactor circleci/cimodel/data/binary_build_data.py to generate this file
|
||||
instead of doing one offs here
|
||||
Binary builds (subset, to smoke test that they'll work)
|
||||
|
||||
NB: If you modify this file, you need to also modify
|
||||
the binary_and_smoke_tests_on_pr variable in
|
||||
pytorch-ci-hud to adjust the allowed build list
|
||||
at https://github.com/ezyang/pytorch-ci-hud/blob/master/src/BuildHistoryDisplay.js
|
||||
|
||||
Note:
|
||||
This binary build is currently broken, see https://github_com/pytorch/pytorch/issues/16710
|
||||
- binary_linux_conda_3_6_cu90_devtoolset7_build
|
||||
- binary_linux_conda_3_6_cu90_devtoolset7_test
|
||||
|
||||
TODO
|
||||
we should test a libtorch cuda build, but they take too long
|
||||
- binary_linux_libtorch_3_6m_cu90_devtoolset7_static-without-deps_build
|
||||
"""
|
||||
|
||||
import cimodel.lib.miniutils as miniutils
|
||||
import cimodel.data.simple.util.branch_filters
|
||||
|
||||
|
||||
class SmoketestJob:
|
||||
def __init__(self,
|
||||
template_name,
|
||||
build_env_parts,
|
||||
docker_image,
|
||||
job_name,
|
||||
is_master_only=False,
|
||||
requires=None,
|
||||
has_libtorch_variant=False,
|
||||
extra_props=None):
|
||||
|
||||
self.template_name = template_name
|
||||
self.build_env_parts = build_env_parts
|
||||
self.docker_image = docker_image
|
||||
self.job_name = job_name
|
||||
self.is_master_only = is_master_only
|
||||
self.requires = requires or []
|
||||
self.has_libtorch_variant = has_libtorch_variant
|
||||
self.extra_props = extra_props or {}
|
||||
|
||||
def gen_tree(self):
|
||||
|
||||
props_dict = {
|
||||
"build_environment": " ".join(self.build_env_parts),
|
||||
"name": self.job_name,
|
||||
"requires": self.requires,
|
||||
}
|
||||
|
||||
if self.docker_image:
|
||||
props_dict["docker_image"] = self.docker_image
|
||||
|
||||
if self.is_master_only:
|
||||
props_dict["filters"] = cimodel.data.simple.util.branch_filters.gen_filter_dict()
|
||||
|
||||
if self.has_libtorch_variant:
|
||||
props_dict["libtorch_variant"] = "shared-with-deps"
|
||||
|
||||
props_dict.update(self.extra_props)
|
||||
|
||||
return [{self.template_name: props_dict}]
|
||||
|
||||
|
||||
WORKFLOW_DATA = [
|
||||
SmoketestJob(
|
||||
"binary_linux_build",
|
||||
["manywheel", "3.7m", "cu102", "devtoolset7"],
|
||||
"pytorch/manylinux-cuda102",
|
||||
"binary_linux_manywheel_3_7m_cu102_devtoolset7_build",
|
||||
is_master_only=True,
|
||||
),
|
||||
SmoketestJob(
|
||||
"binary_linux_build",
|
||||
["libtorch", "3.7m", "cpu", "devtoolset7"],
|
||||
"pytorch/manylinux-cuda102",
|
||||
"binary_linux_libtorch_3_7m_cpu_devtoolset7_shared-with-deps_build",
|
||||
is_master_only=False,
|
||||
        has_libtorch_variant=True,
    ),
    SmoketestJob(
        "binary_linux_build",
        ["libtorch", "3.7m", "cpu", "gcc5.4_cxx11-abi"],
        "pytorch/pytorch-binary-docker-image-ubuntu16.04:latest",
        "binary_linux_libtorch_3_7m_cpu_gcc5_4_cxx11-abi_shared-with-deps_build",
        is_master_only=False,
        has_libtorch_variant=True,
    ),
    SmoketestJob(
        "binary_mac_build",
        ["wheel", "3.7", "cpu"],
        None,
        "binary_macos_wheel_3_7_cpu_build",
        is_master_only=True,
    ),
    # This job has an average run time of 3 hours o.O
    # Now only running this on master to reduce overhead
    SmoketestJob(
        "binary_mac_build",
        ["libtorch", "3.7", "cpu"],
        None,
        "binary_macos_libtorch_3_7_cpu_build",
        is_master_only=True,
    ),
    SmoketestJob(
        "binary_windows_build",
        ["libtorch", "3.7", "cpu", "debug"],
        None,
        "binary_windows_libtorch_3_7_cpu_debug_build",
        is_master_only=False,
    ),
    SmoketestJob(
        "binary_windows_build",
        ["libtorch", "3.7", "cpu", "release"],
        None,
        "binary_windows_libtorch_3_7_cpu_release_build",
        is_master_only=False,
    ),
    SmoketestJob(
        "binary_windows_build",
        ["wheel", "3.7", "cu102"],
        None,
        "binary_windows_wheel_3_7_cu102_build",
        is_master_only=True,
    ),

    SmoketestJob(
        "binary_windows_test",
        ["libtorch", "3.7", "cpu", "debug"],
        None,
        "binary_windows_libtorch_3_7_cpu_debug_test",
        is_master_only=False,
        requires=["binary_windows_libtorch_3_7_cpu_debug_build"],
    ),
    SmoketestJob(
        "binary_windows_test",
        ["libtorch", "3.7", "cpu", "release"],
        None,
        "binary_windows_libtorch_3_7_cpu_release_test",
        is_master_only=False,
        requires=["binary_windows_libtorch_3_7_cpu_release_build"],
    ),
    SmoketestJob(
        "binary_windows_test",
        ["wheel", "3.7", "cu102"],
        None,
        "binary_windows_wheel_3_7_cu102_test",
        is_master_only=True,
        requires=["binary_windows_wheel_3_7_cu102_build"],
        extra_props={
            "executor": "windows-with-nvidia-gpu",
        },
    ),

    SmoketestJob(
        "binary_linux_test",
        ["manywheel", "3.7m", "cu102", "devtoolset7"],
        "pytorch/manylinux-cuda102",
        "binary_linux_manywheel_3_7m_cu102_devtoolset7_test",
        is_master_only=True,
        requires=["binary_linux_manywheel_3_7m_cu102_devtoolset7_build"],
        extra_props={
            "resource_class": "gpu.medium",
            "use_cuda_docker_runtime": miniutils.quote(str(1)),
        },
    ),
    SmoketestJob(
        "binary_linux_test",
        ["libtorch", "3.7m", "cpu", "devtoolset7"],
        "pytorch/manylinux-cuda102",
        "binary_linux_libtorch_3_7m_cpu_devtoolset7_shared-with-deps_test",
        is_master_only=False,
        requires=["binary_linux_libtorch_3_7m_cpu_devtoolset7_shared-with-deps_build"],
        has_libtorch_variant=True,
    ),
    SmoketestJob(
        "binary_linux_test",
        ["libtorch", "3.7m", "cpu", "gcc5.4_cxx11-abi"],
        "pytorch/pytorch-binary-docker-image-ubuntu16.04:latest",
        "binary_linux_libtorch_3_7m_cpu_gcc5_4_cxx11-abi_shared-with-deps_test",
        is_master_only=False,
        requires=["binary_linux_libtorch_3_7m_cpu_gcc5_4_cxx11-abi_shared-with-deps_build"],
        has_libtorch_variant=True,
    ),
]


def get_workflow_jobs():
    return [item.gen_tree() for item in WORKFLOW_DATA]
@@ -1,54 +0,0 @@
from collections import OrderedDict

from cimodel.lib.miniutils import quote
from cimodel.data.simple.util.branch_filters import gen_filter_dict, RC_PATTERN


# TODO: make this generated from a matrix rather than just a static list
IMAGE_NAMES = [
    "pytorch-linux-bionic-cuda11.0-cudnn8-py3.6-gcc9",
    "pytorch-linux-bionic-cuda11.0-cudnn8-py3.8-gcc9",
    "pytorch-linux-bionic-cuda10.2-cudnn7-py3.8-gcc9",
    "pytorch-linux-bionic-py3.6-clang9",
    "pytorch-linux-bionic-cuda10.2-cudnn7-py3.6-clang9",
    "pytorch-linux-bionic-py3.8-gcc9",
    "pytorch-linux-bionic-rocm3.5.1-py3.6",
    "pytorch-linux-xenial-cuda10-cudnn7-py3-gcc7",
    "pytorch-linux-xenial-cuda10.1-cudnn7-py3-gcc7",
    "pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7",
    "pytorch-linux-xenial-cuda11.0-cudnn8-py3-gcc7",
    "pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc5.4",
    "pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc7",
    "pytorch-linux-xenial-py3-clang5-android-ndk-r19c",
    "pytorch-linux-xenial-py3-clang5-asan",
    "pytorch-linux-xenial-py3-clang7-onnx",
    "pytorch-linux-xenial-py3.8",
    "pytorch-linux-xenial-py3.6-clang7",
    "pytorch-linux-xenial-py3.6-gcc4.8",
    "pytorch-linux-xenial-py3.6-gcc5.4",  # this one is used in doc builds
    "pytorch-linux-xenial-py3.6-gcc7.2",
    "pytorch-linux-xenial-py3.6-gcc7",
    "pytorch-linux-bionic-rocm3.7-py3.6",
    "pytorch-linux-bionic-rocm3.8-py3.6",
]


def get_workflow_jobs():
    """Generates a list of docker image build definitions"""
    ret = []
    for image_name in IMAGE_NAMES:
        parameters = OrderedDict({
            "name": quote(f"docker-{image_name}"),
            "image_name": quote(image_name),
        })
        if image_name == "pytorch-linux-xenial-py3.6-gcc5.4":
            # pushing documentation on tags requires CircleCI to also
            # build all the dependencies on tags, including this docker image
            parameters['filters'] = gen_filter_dict(branches_list=r"/.*/",
                                                    tags_list=RC_PATTERN)
        ret.append(OrderedDict(
            {
                "docker_build_job": parameters
            }
        ))
    return ret
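The loop above turns each image name into a `docker_build_job` entry for the CircleCI workflow. A minimal standalone sketch of that transformation, with `quote` approximated as simple double-quoting (an assumption about `cimodel.lib.miniutils.quote`, which is not shown in this diff):

```python
from collections import OrderedDict

def quote(s):
    # Assumed behavior of cimodel.lib.miniutils.quote: wrap the value
    # in literal double quotes so YAML emits it as a string.
    return f'"{s}"'

def docker_jobs(image_names):
    # Build one docker_build_job entry per image name, mirroring the
    # structure produced by get_workflow_jobs() above.
    jobs = []
    for image_name in image_names:
        parameters = OrderedDict({
            "name": quote(f"docker-{image_name}"),
            "image_name": quote(image_name),
        })
        jobs.append(OrderedDict({"docker_build_job": parameters}))
    return jobs

print(docker_jobs(["pytorch-linux-xenial-py3.8"]))
```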
@@ -1,103 +0,0 @@
import cimodel.lib.miniutils as miniutils
from cimodel.data.simple.util.versions import MultiPartVersion, CudaVersion
from cimodel.data.simple.util.docker_constants import DOCKER_IMAGE_BASIC, DOCKER_IMAGE_CUDA_10_2


class GeConfigTestJob:
    def __init__(self,
                 py_version,
                 gcc_version,
                 cuda_version,
                 variant_parts,
                 extra_requires,
                 use_cuda_docker=False,
                 build_env_override=None):

        self.py_version = py_version
        self.gcc_version = gcc_version
        self.cuda_version = cuda_version
        self.variant_parts = variant_parts
        self.extra_requires = extra_requires
        self.use_cuda_docker = use_cuda_docker
        self.build_env_override = build_env_override

    def get_all_parts(self, with_dots):

        maybe_py_version = self.py_version.render_dots_or_parts(with_dots) if self.py_version else []
        maybe_gcc_version = self.gcc_version.render_dots_or_parts(with_dots) if self.gcc_version else []
        maybe_cuda_version = self.cuda_version.render_dots_or_parts(with_dots) if self.cuda_version else []

        common_parts = [
            "pytorch",
            "linux",
            "xenial",
        ] + maybe_cuda_version + maybe_py_version + maybe_gcc_version

        return common_parts + self.variant_parts

    def gen_tree(self):

        resource_class = "gpu.medium" if self.use_cuda_docker else "large"
        docker_image = DOCKER_IMAGE_CUDA_10_2 if self.use_cuda_docker else DOCKER_IMAGE_BASIC
        full_name = "_".join(self.get_all_parts(False))
        build_env = self.build_env_override or "-".join(self.get_all_parts(True))

        props_dict = {
            "name": full_name,
            "build_environment": build_env,
            "requires": self.extra_requires,
            "resource_class": resource_class,
            "docker_image": docker_image,
        }

        if self.use_cuda_docker:
            props_dict["use_cuda_docker_runtime"] = miniutils.quote(str(1))

        return [{"pytorch_linux_test": props_dict}]


WORKFLOW_DATA = [
    GeConfigTestJob(
        MultiPartVersion([3, 6], "py"),
        MultiPartVersion([5, 4], "gcc"),
        None,
        ["ge_config_legacy", "test"],
        ["pytorch_linux_xenial_py3_6_gcc5_4_build"]),
    GeConfigTestJob(
        MultiPartVersion([3, 6], "py"),
        MultiPartVersion([5, 4], "gcc"),
        None,
        ["ge_config_profiling", "test"],
        ["pytorch_linux_xenial_py3_6_gcc5_4_build"]),
    GeConfigTestJob(
        MultiPartVersion([3, 6], "py"),
        MultiPartVersion([5, 4], "gcc"),
        None,
        ["ge_config_simple", "test"],
        ["pytorch_linux_xenial_py3_6_gcc5_4_build"],
    ),
    GeConfigTestJob(
        None,
        None,
        CudaVersion(10, 2),
        ["cudnn7", "py3", "ge_config_legacy", "test"],
        ["pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_build"],
        use_cuda_docker=True,
        # TODO Why does the build environment specify cuda10.1, while the
        # job name is cuda10_2?
        build_env_override="pytorch-linux-xenial-cuda10.1-cudnn7-ge_config_legacy-test"),
    GeConfigTestJob(
        None,
        None,
        CudaVersion(10, 2),
        ["cudnn7", "py3", "ge_config_profiling", "test"],
        ["pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_build"],
        use_cuda_docker=True,
        # TODO Why does the build environment specify cuda10.1, while the
        # job name is cuda10_2?
        build_env_override="pytorch-linux-xenial-cuda10.1-cudnn7-ge_config_profiling-test"),
]


def get_workflow_jobs():
    return [item.gen_tree() for item in WORKFLOW_DATA]
@@ -1,71 +0,0 @@
from cimodel.data.simple.util.versions import MultiPartVersion


IOS_VERSION = MultiPartVersion([12, 0, 0])


class ArchVariant:
    def __init__(self, name, is_custom=False):
        self.name = name
        self.is_custom = is_custom

    def render(self):
        extra_parts = ["custom"] if self.is_custom else []
        return "_".join([self.name] + extra_parts)


def get_platform(arch_variant_name):
    return "SIMULATOR" if arch_variant_name == "x86_64" else "OS"


class IOSJob:
    def __init__(self, ios_version, arch_variant, is_org_member_context=True, extra_props=None):
        self.ios_version = ios_version
        self.arch_variant = arch_variant
        self.is_org_member_context = is_org_member_context
        self.extra_props = extra_props

    def gen_name_parts(self, with_version_dots):

        version_parts = self.ios_version.render_dots_or_parts(with_version_dots)
        build_variant_suffix = "_".join([self.arch_variant.render(), "build"])

        return [
            "pytorch",
            "ios",
        ] + version_parts + [
            build_variant_suffix,
        ]

    def gen_job_name(self):
        return "_".join(self.gen_name_parts(False))

    def gen_tree(self):

        platform_name = get_platform(self.arch_variant.name)

        props_dict = {
            "build_environment": "-".join(self.gen_name_parts(True)),
            "ios_arch": self.arch_variant.name,
            "ios_platform": platform_name,
            "name": self.gen_job_name(),
        }

        if self.is_org_member_context:
            props_dict["context"] = "org-member"

        if self.extra_props:
            props_dict.update(self.extra_props)

        return [{"pytorch_ios_build": props_dict}]


WORKFLOW_DATA = [
    IOSJob(IOS_VERSION, ArchVariant("x86_64"), is_org_member_context=False),
    IOSJob(IOS_VERSION, ArchVariant("arm64")),
    IOSJob(IOS_VERSION, ArchVariant("arm64", True), extra_props={"op_list": "mobilenetv2.yaml"}),
]


def get_workflow_jobs():
    return [item.gen_tree() for item in WORKFLOW_DATA]
@@ -1,28 +0,0 @@
class MacOsJob:
    def __init__(self, os_version, is_test=False):
        self.os_version = os_version
        self.is_test = is_test

    def gen_tree(self):
        non_phase_parts = ["pytorch", "macos", self.os_version, "py3"]

        phase_name = "test" if self.is_test else "build"

        full_job_name = "_".join(non_phase_parts + [phase_name])

        test_build_dependency = "_".join(non_phase_parts + ["build"])
        extra_dependencies = [test_build_dependency] if self.is_test else []
        job_dependencies = extra_dependencies

        # Yes we name the job after itself, it needs a non-empty value in here
        # for the YAML output to work.
        props_dict = {"requires": job_dependencies, "name": full_job_name}

        return [{full_job_name: props_dict}]


WORKFLOW_DATA = [MacOsJob("10_13"), MacOsJob("10_13", True)]


def get_workflow_jobs():
    return [item.gen_tree() for item in WORKFLOW_DATA]
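The macOS job class in the deleted file above is entirely self-contained, so its tree output can be checked directly. A sketch that restates the class verbatim so the snippet runs standalone, then shows what the test job renders to:

```python
class MacOsJob:
    def __init__(self, os_version, is_test=False):
        self.os_version = os_version
        self.is_test = is_test

    def gen_tree(self):
        # Base name pieces shared by build and test jobs.
        non_phase_parts = ["pytorch", "macos", self.os_version, "py3"]
        phase_name = "test" if self.is_test else "build"
        full_job_name = "_".join(non_phase_parts + [phase_name])

        # A test job depends on the corresponding build job.
        test_build_dependency = "_".join(non_phase_parts + ["build"])
        extra_dependencies = [test_build_dependency] if self.is_test else []
        props_dict = {"requires": extra_dependencies, "name": full_job_name}
        return [{full_job_name: props_dict}]

tree = MacOsJob("10_13", True).gen_tree()
print(tree)  # the test job requires pytorch_macos_10_13_py3_build
```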
@@ -1,80 +0,0 @@
"""
PyTorch Mobile PR builds (use linux host toolchain + mobile build options)
"""

import cimodel.lib.miniutils as miniutils
import cimodel.data.simple.util.branch_filters
from cimodel.data.simple.util.docker_constants import (
    DOCKER_IMAGE_ASAN,
    DOCKER_REQUIREMENT_ASAN,
    DOCKER_IMAGE_NDK,
    DOCKER_REQUIREMENT_NDK
)


class MobileJob:
    def __init__(
            self,
            docker_image,
            docker_requires,
            variant_parts,
            is_master_only=False):
        self.docker_image = docker_image
        self.docker_requires = docker_requires
        self.variant_parts = variant_parts
        self.is_master_only = is_master_only

    def gen_tree(self):
        non_phase_parts = [
            "pytorch",
            "linux",
            "xenial",
            "py3",
            "clang5",
            "mobile",
        ] + self.variant_parts

        full_job_name = "_".join(non_phase_parts)
        build_env_name = "-".join(non_phase_parts)

        props_dict = {
            "build_environment": build_env_name,
            "build_only": miniutils.quote(str(int(True))),
            "docker_image": self.docker_image,
            "requires": self.docker_requires,
            "name": full_job_name,
        }

        if self.is_master_only:
            props_dict["filters"] = cimodel.data.simple.util.branch_filters.gen_filter_dict()

        return [{"pytorch_linux_build": props_dict}]


WORKFLOW_DATA = [
    MobileJob(
        DOCKER_IMAGE_ASAN,
        [DOCKER_REQUIREMENT_ASAN],
        ["build"]
    ),

    # Use LLVM-DEV toolchain in android-ndk-r19c docker image
    MobileJob(
        DOCKER_IMAGE_NDK,
        [DOCKER_REQUIREMENT_NDK],
        ["custom", "build", "dynamic"]
    ),

    # Use LLVM-DEV toolchain in android-ndk-r19c docker image
    # Most of this CI is already covered by "mobile-custom-build-dynamic" job
    MobileJob(
        DOCKER_IMAGE_NDK,
        [DOCKER_REQUIREMENT_NDK],
        ["code", "analysis"],
        True
    ),
]


def get_workflow_jobs():
    return [item.gen_tree() for item in WORKFLOW_DATA]
@@ -1,77 +0,0 @@
from cimodel.data.simple.util.docker_constants import (
    DOCKER_IMAGE_NDK,
    DOCKER_REQUIREMENT_NDK
)


class AndroidNightlyJob:
    def __init__(self,
                 variant,
                 template_name,
                 extra_props=None,
                 with_docker=True,
                 requires=None,
                 no_build_suffix=False):

        self.variant = variant
        self.template_name = template_name
        self.extra_props = extra_props or {}
        self.with_docker = with_docker
        self.requires = requires
        self.no_build_suffix = no_build_suffix

    def gen_tree(self):

        base_name_parts = [
            "pytorch",
            "linux",
            "xenial",
            "py3",
            "clang5",
            "android",
            "ndk",
            "r19c",
        ] + self.variant

        build_suffix = [] if self.no_build_suffix else ["build"]
        full_job_name = "_".join(["nightly"] + base_name_parts + build_suffix)
        build_env_name = "-".join(base_name_parts)

        props_dict = {
            "name": full_job_name,
            "requires": self.requires,
            "filters": {"branches": {"only": "nightly"}},
        }

        props_dict.update(self.extra_props)

        if self.with_docker:
            props_dict["docker_image"] = DOCKER_IMAGE_NDK
            props_dict["build_environment"] = build_env_name

        return [{self.template_name: props_dict}]


BASE_REQUIRES = [DOCKER_REQUIREMENT_NDK]

WORKFLOW_DATA = [
    AndroidNightlyJob(["x86_32"], "pytorch_linux_build", requires=BASE_REQUIRES),
    AndroidNightlyJob(["x86_64"], "pytorch_linux_build", requires=BASE_REQUIRES),
    AndroidNightlyJob(["arm", "v7a"], "pytorch_linux_build", requires=BASE_REQUIRES),
    AndroidNightlyJob(["arm", "v8a"], "pytorch_linux_build", requires=BASE_REQUIRES),
    AndroidNightlyJob(["android_gradle"], "pytorch_android_gradle_build",
                      with_docker=False,
                      requires=[
                          "nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_32_build",
                          "nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_64_build",
                          "nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v7a_build",
                          "nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v8a_build"]),
    AndroidNightlyJob(["x86_32_android_publish_snapshot"], "pytorch_android_publish_snapshot",
                      extra_props={"context": "org-member"},
                      with_docker=False,
                      requires=["nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_android_gradle_build"],
                      no_build_suffix=True),
]


def get_workflow_jobs():
    return [item.gen_tree() for item in WORKFLOW_DATA]
@@ -1,68 +0,0 @@
import cimodel.data.simple.ios_definitions as ios_definitions


class IOSNightlyJob:
    def __init__(self,
                 variant,
                 is_upload=False):

        self.variant = variant
        self.is_upload = is_upload

    def get_phase_name(self):
        return "upload" if self.is_upload else "build"

    def get_common_name_pieces(self, with_version_dots):

        extra_name_suffix = [self.get_phase_name()] if self.is_upload else []

        common_name_pieces = [
            "ios",
        ] + ios_definitions.IOS_VERSION.render_dots_or_parts(with_version_dots) + [
            "nightly",
            self.variant,
            "build",
        ] + extra_name_suffix

        return common_name_pieces

    def gen_job_name(self):
        return "_".join(["pytorch"] + self.get_common_name_pieces(False))

    def gen_tree(self):
        extra_requires = [x.gen_job_name() for x in BUILD_CONFIGS] if self.is_upload else []

        props_dict = {
            "build_environment": "-".join(["libtorch"] + self.get_common_name_pieces(True)),
            "requires": extra_requires,
            "context": "org-member",
            "filters": {"branches": {"only": "nightly"}},
        }

        if not self.is_upload:
            props_dict["ios_arch"] = self.variant
            props_dict["ios_platform"] = ios_definitions.get_platform(self.variant)
            props_dict["name"] = self.gen_job_name()

        template_name = "_".join([
            "binary",
            "ios",
            self.get_phase_name(),
        ])

        return [{template_name: props_dict}]


BUILD_CONFIGS = [
    IOSNightlyJob("x86_64"),
    IOSNightlyJob("arm64"),
]


WORKFLOW_DATA = BUILD_CONFIGS + [
    IOSNightlyJob("binary", is_upload=True),
]


def get_workflow_jobs():
    return [item.gen_tree() for item in WORKFLOW_DATA]
@@ -1,27 +0,0 @@
NON_PR_BRANCH_LIST = [
    "master",
    r"/ci-all\/.*/",
    r"/release\/.*/",
]

PR_BRANCH_LIST = [
    r"/gh\/.*\/head/",
    r"/pull\/.*/",
]

RC_PATTERN = r"/v[0-9]+(\.[0-9]+)*-rc[0-9]+/"

def gen_filter_dict(
        branches_list=NON_PR_BRANCH_LIST,
        tags_list=None
):
    """Generates a filter dictionary for use with CircleCI's job filter"""
    filter_dict = {
        "branches": {
            "only": branches_list,
        },
    }

    if tags_list is not None:
        filter_dict["tags"] = {"only": tags_list}
    return filter_dict
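`gen_filter_dict` is what the other modules call to restrict jobs to master/release branches or to release-candidate tags. A standalone sketch, restating the module above so it runs on its own:

```python
NON_PR_BRANCH_LIST = [
    "master",
    r"/ci-all\/.*/",
    r"/release\/.*/",
]

RC_PATTERN = r"/v[0-9]+(\.[0-9]+)*-rc[0-9]+/"

def gen_filter_dict(branches_list=NON_PR_BRANCH_LIST, tags_list=None):
    """Generates a filter dictionary for use with CircleCI's job filter."""
    filter_dict = {"branches": {"only": branches_list}}
    # CircleCI ignores all tags unless a tag filter is present, so the
    # "tags" key is only emitted when a pattern is supplied.
    if tags_list is not None:
        filter_dict["tags"] = {"only": tags_list}
    return filter_dict

# Default: master-only jobs; with tags: also run on v*-rc* tags.
print(gen_filter_dict())
print(gen_filter_dict(branches_list=r"/.*/", tags_list=RC_PATTERN))
```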
@@ -1,33 +0,0 @@
AWS_DOCKER_HOST = "308535385114.dkr.ecr.us-east-1.amazonaws.com"

def gen_docker_image(container_type):
    return (
        "/".join([AWS_DOCKER_HOST, "pytorch", container_type]),
        f"docker-{container_type}",
    )

def gen_docker_image_requires(image_name):
    return [f"docker-{image_name}"]


DOCKER_IMAGE_BASIC, DOCKER_REQUIREMENT_BASE = gen_docker_image(
    "pytorch-linux-xenial-py3.6-gcc5.4"
)

DOCKER_IMAGE_CUDA_10_2, DOCKER_REQUIREMENT_CUDA_10_2 = gen_docker_image(
    "pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7"
)

DOCKER_IMAGE_GCC7, DOCKER_REQUIREMENT_GCC7 = gen_docker_image(
    "pytorch-linux-xenial-py3.6-gcc7"
)


def gen_mobile_docker(specifier):
    container_type = "pytorch-linux-xenial-py3-clang5-" + specifier
    return gen_docker_image(container_type)


DOCKER_IMAGE_ASAN, DOCKER_REQUIREMENT_ASAN = gen_mobile_docker("asan")

DOCKER_IMAGE_NDK, DOCKER_REQUIREMENT_NDK = gen_mobile_docker("android-ndk-r19c")
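Each constant pair above is an (ECR image URL, dependency-job name) tuple. A standalone sketch, restating `gen_docker_image` from the module so the pairing is visible:

```python
AWS_DOCKER_HOST = "308535385114.dkr.ecr.us-east-1.amazonaws.com"

def gen_docker_image(container_type):
    # Returns (full image URL in the pytorch ECR namespace,
    #          name of the docker build job this image depends on).
    return (
        "/".join([AWS_DOCKER_HOST, "pytorch", container_type]),
        f"docker-{container_type}",
    )

image, requirement = gen_docker_image("pytorch-linux-xenial-py3.6-gcc5.4")
print(image)        # ...amazonaws.com/pytorch/pytorch-linux-xenial-py3.6-gcc5.4
print(requirement)  # docker-pytorch-linux-xenial-py3.6-gcc5.4
```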
@@ -1,31 +0,0 @@
class MultiPartVersion:
    def __init__(self, parts, prefix=""):
        self.parts = parts
        self.prefix = prefix

    def prefixed_parts(self):
        """
        Prepends the first element of the version list
        with the prefix string.
        """
        if self.parts:
            return [self.prefix + str(self.parts[0])] + list(map(str, self.parts[1:]))
        else:
            return [self.prefix]

    def render_dots(self):
        return ".".join(self.prefixed_parts())

    def render_dots_or_parts(self, with_dots):
        if with_dots:
            return [self.render_dots()]
        else:
            return self.prefixed_parts()


class CudaVersion(MultiPartVersion):
    def __init__(self, major, minor):
        self.major = major
        self.minor = minor

        super().__init__([self.major, self.minor], "cuda")
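`render_dots_or_parts` is what lets the same version object produce both a dotted build-environment string and underscore-joined job-name pieces. Restating the classes from the deleted file above so the two renderings can be shown side by side:

```python
class MultiPartVersion:
    def __init__(self, parts, prefix=""):
        self.parts = parts
        self.prefix = prefix

    def prefixed_parts(self):
        # Prepend the prefix to the first version component only.
        if self.parts:
            return [self.prefix + str(self.parts[0])] + list(map(str, self.parts[1:]))
        return [self.prefix]

    def render_dots(self):
        return ".".join(self.prefixed_parts())

    def render_dots_or_parts(self, with_dots):
        # Dotted form for build-environment strings,
        # part list for underscore-joined job names.
        return [self.render_dots()] if with_dots else self.prefixed_parts()


class CudaVersion(MultiPartVersion):
    def __init__(self, major, minor):
        self.major = major
        self.minor = minor
        super().__init__([self.major, self.minor], "cuda")


v = CudaVersion(10, 2)
print(v.render_dots_or_parts(True))   # ['cuda10.2']
print(v.render_dots_or_parts(False))  # ['cuda10', '2']
```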
@@ -1,147 +0,0 @@
import cimodel.data.simple.util.branch_filters
import cimodel.lib.miniutils as miniutils
from cimodel.data.simple.util.versions import CudaVersion


class WindowsJob:
    def __init__(
        self,
        test_index,
        vscode_spec,
        cuda_version,
        force_on_cpu=False,
        master_only_pred=lambda job: job.vscode_spec.year != 2019,
    ):
        self.test_index = test_index
        self.vscode_spec = vscode_spec
        self.cuda_version = cuda_version
        self.force_on_cpu = force_on_cpu
        self.master_only_pred = master_only_pred

    def gen_tree(self):

        base_phase = "build" if self.test_index is None else "test"
        numbered_phase = (
            base_phase if self.test_index is None else base_phase + str(self.test_index)
        )

        key_name = "_".join(["pytorch", "windows", base_phase])

        cpu_forcing_name_parts = ["on", "cpu"] if self.force_on_cpu else []

        target_arch = self.cuda_version.render_dots() if self.cuda_version else "cpu"

        base_name_parts = [
            "pytorch",
            "windows",
            self.vscode_spec.render(),
            "py36",
            target_arch,
        ]

        prerequisite_jobs = []
        if base_phase == "test":
            prerequisite_jobs.append("_".join(base_name_parts + ["build"]))

        if self.cuda_version:
            self.cudnn_version = 8 if self.cuda_version.major == 11 else 7

        arch_env_elements = (
            ["cuda" + str(self.cuda_version.major), "cudnn" + str(self.cudnn_version)]
            if self.cuda_version
            else ["cpu"]
        )

        build_environment_string = "-".join(
            ["pytorch", "win"]
            + self.vscode_spec.get_elements()
            + arch_env_elements
            + ["py3"]
        )

        is_running_on_cuda = bool(self.cuda_version) and not self.force_on_cpu

        props_dict = {
            "build_environment": build_environment_string,
            "python_version": miniutils.quote("3.6"),
            "vc_version": miniutils.quote(self.vscode_spec.dotted_version()),
            "vc_year": miniutils.quote(str(self.vscode_spec.year)),
            "vc_product": self.vscode_spec.get_product(),
            "use_cuda": miniutils.quote(str(int(is_running_on_cuda))),
            "requires": prerequisite_jobs,
        }

        if self.master_only_pred(self):
            props_dict[
                "filters"
            ] = cimodel.data.simple.util.branch_filters.gen_filter_dict()

        name_parts = base_name_parts + cpu_forcing_name_parts + [numbered_phase]

        if base_phase == "test":
            test_name = "-".join(["pytorch", "windows", numbered_phase])
            props_dict["test_name"] = test_name

            if is_running_on_cuda:
                props_dict["executor"] = "windows-with-nvidia-gpu"

        props_dict["cuda_version"] = (
            miniutils.quote(str(self.cuda_version.major))
            if self.cuda_version
            else "cpu"
        )
        props_dict["name"] = "_".join(name_parts)

        return [{key_name: props_dict}]


class VcSpec:
    def __init__(self, year, version_elements=None, hide_version=False):
        self.year = year
        self.version_elements = version_elements or []
        self.hide_version = hide_version

    def get_elements(self):
        if self.hide_version:
            return [self.prefixed_year()]
        return [self.prefixed_year()] + self.version_elements

    def get_product(self):
        return "Community" if self.year == 2019 else "BuildTools"

    def dotted_version(self):
        return ".".join(self.version_elements)

    def prefixed_year(self):
        return "vs" + str(self.year)

    def render(self):
        return "_".join(self.get_elements())

def FalsePred(_):
    return False

def TruePred(_):
    return True

_VC2019 = VcSpec(2019)

WORKFLOW_DATA = [
    # VS2019 CUDA-10.1
    WindowsJob(None, _VC2019, CudaVersion(10, 1)),
    WindowsJob(1, _VC2019, CudaVersion(10, 1)),
    WindowsJob(2, _VC2019, CudaVersion(10, 1)),
    # VS2019 CUDA-11.0
    WindowsJob(None, _VC2019, CudaVersion(11, 0)),
    WindowsJob(1, _VC2019, CudaVersion(11, 0), master_only_pred=TruePred),
    WindowsJob(2, _VC2019, CudaVersion(11, 0), master_only_pred=TruePred),
    # VS2019 CPU-only
    WindowsJob(None, _VC2019, None),
    WindowsJob(1, _VC2019, None, master_only_pred=TruePred),
    WindowsJob(2, _VC2019, None, master_only_pred=TruePred),
    WindowsJob(1, _VC2019, CudaVersion(10, 1), force_on_cpu=True, master_only_pred=TruePred),
]


def get_windows_workflows():
    return [item.gen_tree() for item in WORKFLOW_DATA]
@@ -1,7 +1,5 @@
from collections import OrderedDict

import cimodel.lib.miniutils as miniutils


LIST_MARKER = "- "
INDENTATION_WIDTH = 2
@@ -31,8 +29,7 @@ def render(fh, data, depth, is_list_member=False):
    tuples.sort()

    for i, (k, v) in enumerate(tuples):
        if not v:
            continue

        # If this dict is itself a list member, the first key gets prefixed with a list marker
        list_marker_prefix = LIST_MARKER if is_list_member and not i else ""

@@ -46,7 +43,5 @@ def render(fh, data, depth, is_list_member=False):
        render(fh, v, depth, True)

    else:
        # use empty quotes to denote an empty string value instead of blank space
        modified_data = miniutils.quote(data) if data == "" else data
        list_member_prefix = indentation + LIST_MARKER if is_list_member else ""
        fh.write(list_member_prefix + str(modified_data) + "\n")
        fh.write(list_member_prefix + str(data) + "\n")
84
.circleci/cimodel/lib/visualization.py
Normal file
@ -0,0 +1,84 @@
"""
This module encapsulates dependencies on pygraphviz
"""

import colorsys

import cimodel.lib.conf_tree as conf_tree


def rgb2hex(rgb_tuple):
    def to_hex(f):
        return "%02x" % int(f * 255)

    return "#" + "".join(map(to_hex, list(rgb_tuple)))


def handle_missing_graphviz(f):
    """
    If the user has not installed pygraphviz, this causes
    calls to the draw() method of the returned object to do nothing.
    """
    try:
        import pygraphviz  # noqa: F401
        return f

    except ModuleNotFoundError:

        class FakeGraph:
            def draw(self, *args, **kwargs):
                pass

        return lambda _: FakeGraph()


@handle_missing_graphviz
def generate_graph(toplevel_config_node):
    """
    Traverses the graph once first just to find the max depth
    """

    config_list = conf_tree.dfs(toplevel_config_node)

    max_depth = 0
    for config in config_list:
        max_depth = max(max_depth, config.get_depth())

    # color the nodes using the max depth

    from pygraphviz import AGraph
    dot = AGraph()

    def node_discovery_callback(node, sibling_index, sibling_count):
        depth = node.get_depth()

        sat_min, sat_max = 0.1, 0.6
        sat_range = sat_max - sat_min

        saturation_fraction = sibling_index / float(sibling_count - 1) if sibling_count > 1 else 1
        saturation = sat_min + sat_range * saturation_fraction

        # TODO Use a hash of the node label to determine the color
        hue = depth / float(max_depth + 1)

        rgb_tuple = colorsys.hsv_to_rgb(hue, saturation, 1)

        this_node_key = node.get_node_key()

        dot.add_node(
            this_node_key,
            label=node.get_label(),
            style="filled",
            # fillcolor=hex_color + ":orange",
            fillcolor=rgb2hex(rgb_tuple),
            penwidth=3,
            color=rgb2hex(colorsys.hsv_to_rgb(hue, saturation, 0.9))
        )

    def child_callback(node, child):
        this_node_key = node.get_node_key()
        child_node_key = child.get_node_key()
        dot.add_edge((this_node_key, child_node_key))

    conf_tree.dfs_recurse(toplevel_config_node, lambda x: None, node_discovery_callback, child_callback)
    return dot
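The `rgb2hex` helper in the new visualization module maps a float RGB triple (each component in [0, 1], as returned by `colorsys.hsv_to_rgb`) to a `#rrggbb` string. Restating it so its behavior can be checked in isolation:

```python
import colorsys

def rgb2hex(rgb_tuple):
    # Each component is a float in [0, 1]; scale to 0..255 and
    # format as a two-digit lowercase hex byte.
    def to_hex(f):
        return "%02x" % int(f * 255)

    return "#" + "".join(map(to_hex, list(rgb_tuple)))

print(rgb2hex((1, 1, 1)))                         # #ffffff
print(rgb2hex(colorsys.hsv_to_rgb(0.0, 1.0, 1.0)))  # pure red: #ff0000
```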
@@ -1,17 +0,0 @@
#!/bin/bash -xe


YAML_FILENAME=verbatim-sources/workflows-pytorch-ge-config-tests.yml
DIFF_TOOL=meld


# Allows this script to be invoked from any directory:
cd $(dirname "$0")

pushd ..


$DIFF_TOOL $YAML_FILENAME <(./codegen_validation/normalize_yaml_fragment.py < $YAML_FILENAME)


popd
@@ -1,24 +0,0 @@
#!/usr/bin/env python3

import os
import sys
import yaml

# Need to import modules that lie on an upward-relative path
sys.path.append(os.path.join(sys.path[0], '..'))

import cimodel.lib.miniyaml as miniyaml


def regurgitate(depth, use_pyyaml_formatter=False):
    data = yaml.safe_load(sys.stdin)

    if use_pyyaml_formatter:
        output = yaml.dump(data, sort_keys=True)
        sys.stdout.write(output)
    else:
        miniyaml.render(sys.stdout, data, depth)


if __name__ == "__main__":
    regurgitate(3)
@@ -1,15 +0,0 @@
#!/bin/bash -xe

YAML_FILENAME=$1

# Allows this script to be invoked from any directory:
cd $(dirname "$0")

pushd ..

TEMP_FILENAME=$(mktemp)

cat $YAML_FILENAME | ./codegen_validation/normalize_yaml_fragment.py > $TEMP_FILENAME
mv $TEMP_FILENAME $YAML_FILENAME

popd
12269
.circleci/config.yml
File diff suppressed because it is too large
@ -10,35 +10,12 @@ if [ -z "${image}" ]; then
|
||||
exit 1
|
||||
fi
|
||||
|
||||
function extract_version_from_image_name() {
|
||||
eval export $2=$(echo "${image}" | perl -n -e"/$1(\d+(\.\d+)?(\.\d+)?)/ && print \$1")
|
||||
if [ "x${!2}" = x ]; then
|
||||
echo "variable '$2' not correctly parsed from image='$image'"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
function extract_all_from_image_name() {
|
||||
# parts $image into array, splitting on '-'
|
||||
keep_IFS="$IFS"
|
||||
IFS="-"
|
||||
declare -a parts=($image)
|
||||
IFS="$keep_IFS"
|
||||
unset keep_IFS
|
||||
|
||||
for part in "${parts[@]}"; do
|
||||
name=$(echo "${part}" | perl -n -e"/([a-zA-Z]+)\d+(\.\d+)?(\.\d+)?/ && print \$1")
|
||||
vername="${name^^}_VERSION"
|
||||
# "py" is the odd one out, needs this special case
|
||||
if [ "x${name}" = xpy ]; then
|
||||
vername=ANACONDA_PYTHON_VERSION
|
||||
fi
|
||||
# skip non-conforming fields such as "pytorch", "linux" or "xenial" without version string
|
||||
if [ -n "${name}" ]; then
|
||||
extract_version_from_image_name "${name}" "${vername}"
|
||||
fi
|
||||
done
|
||||
}
# TODO: Generalize
OS="ubuntu"
DOCKERFILE="${OS}/Dockerfile"
if [[ "$image" == *-cuda* ]]; then
  DOCKERFILE="${OS}-cuda/Dockerfile"
fi

if [[ "$image" == *-trusty* ]]; then
  UBUNTU_VERSION=14.04
@@ -48,40 +25,32 @@ elif [[ "$image" == *-artful* ]]; then
  UBUNTU_VERSION=17.10
elif [[ "$image" == *-bionic* ]]; then
  UBUNTU_VERSION=18.04
elif [[ "$image" == *-focal* ]]; then
  UBUNTU_VERSION=20.04
elif [[ "$image" == *ubuntu* ]]; then
  extract_version_from_image_name ubuntu UBUNTU_VERSION
elif [[ "$image" == *centos* ]]; then
  extract_version_from_image_name centos CENTOS_VERSION
fi

if [ -n "${UBUNTU_VERSION}" ]; then
  OS="ubuntu"
elif [ -n "${CENTOS_VERSION}" ]; then
  OS="centos"
else
  echo "Unable to derive operating system base..."
  exit 1
fi

DOCKERFILE="${OS}/Dockerfile"
if [[ "$image" == *cuda* ]]; then
  DOCKERFILE="${OS}-cuda/Dockerfile"
elif [[ "$image" == *rocm* ]]; then
  DOCKERFILE="${OS}-rocm/Dockerfile"
fi

TRAVIS_DL_URL_PREFIX="https://s3.amazonaws.com/travis-python-archives/binaries/ubuntu/14.04/x86_64"

# It's annoying to rename jobs every time you want to rewrite a
# configuration, so we hardcode everything here rather than do it
# from scratch
case "$image" in
  pytorch-linux-xenial-py3.8)
    # TODO: This is a hack, get rid of this as soon as you get rid of the travis downloads
    TRAVIS_DL_URL_PREFIX="https://s3.amazonaws.com/travis-python-archives/binaries/ubuntu/16.04/x86_64"
    TRAVIS_PYTHON_VERSION=3.8
  pytorch-linux-bionic-clang9-thrift-llvmdev)
    CLANG_VERSION=9
    THRIFT=yes
    LLVMDEV=yes
    PROTOBUF=yes
    ;;
  pytorch-linux-xenial-py2.7.9)
    TRAVIS_PYTHON_VERSION=2.7.9
    GCC_VERSION=7
    # Do not install PROTOBUF, DB, and VISION as a test
    ;;
  pytorch-linux-xenial-py2.7)
    TRAVIS_PYTHON_VERSION=2.7
    GCC_VERSION=7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-py3.5)
    TRAVIS_PYTHON_VERSION=3.5
    GCC_VERSION=7
    # Do not install PROTOBUF, DB, and VISION as a test
    ;;
@@ -98,7 +67,6 @@ case "$image" in
    PROTOBUF=yes
    DB=yes
    VISION=yes
    KATEX=yes
    ;;
  pytorch-linux-xenial-py3.6-gcc7.2)
    ANACONDA_PYTHON_VERSION=3.6
@@ -112,15 +80,46 @@ case "$image" in
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc5.4)
    CUDA_VERSION=9.2
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=3.6
    GCC_VERSION=5
  pytorch-linux-xenial-pynightly)
    TRAVIS_PYTHON_VERSION=nightly
    GCC_VERSION=7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-cuda8-cudnn7-py2)
    CUDA_VERSION=8.0
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=2.7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-cuda8-cudnn7-py3)
    CUDA_VERSION=8.0
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=3.6
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-cuda9-cudnn7-py2)
    CUDA_VERSION=9.0
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=2.7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-cuda9-cudnn7-py3)
    CUDA_VERSION=9.0
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=3.6
    PROTOBUF=yes
    DB=yes
    VISION=yes
    KATEX=yes
    ;;
  pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc7)
    CUDA_VERSION=9.2
    CUDNN_VERSION=7
@@ -147,27 +146,6 @@ case "$image" in
    PROTOBUF=yes
    DB=yes
    VISION=yes
    KATEX=yes
    ;;
  pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7)
    CUDA_VERSION=10.2
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=3.6
    GCC_VERSION=7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    KATEX=yes
    ;;
  pytorch-linux-xenial-cuda11.0-cudnn8-py3-gcc7)
    CUDA_VERSION=11.0
    CUDNN_VERSION=8
    ANACONDA_PYTHON_VERSION=3.6
    GCC_VERSION=7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    KATEX=yes
    ;;
  pytorch-linux-xenial-py3-clang5-asan)
    ANACONDA_PYTHON_VERSION=3.6
@@ -176,17 +154,9 @@ case "$image" in
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-py3-clang7-onnx)
    ANACONDA_PYTHON_VERSION=3.6
    CLANG_VERSION=7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-py3-clang5-android-ndk-r19c)
    ANACONDA_PYTHON_VERSION=3.6
    CLANG_VERSION=5.0
    LLVMDEV=yes
    PROTOBUF=yes
    ANDROID=yes
    ANDROID_NDK_VERSION=r19c
@@ -201,106 +171,6 @@ case "$image" in
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-bionic-py3.6-clang9)
    ANACONDA_PYTHON_VERSION=3.6
    CLANG_VERSION=9
    PROTOBUF=yes
    DB=yes
    VISION=yes
    VULKAN_SDK_VERSION=1.2.148.0
    SWIFTSHADER=yes
    ;;
  pytorch-linux-bionic-py3.8-gcc9)
    ANACONDA_PYTHON_VERSION=3.8
    GCC_VERSION=9
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-bionic-cuda10.2-cudnn7-py3.6-clang9)
    CUDA_VERSION=10.2
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=3.6
    CLANG_VERSION=9
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-bionic-cuda10.2-cudnn7-py3.8-gcc9)
    CUDA_VERSION=10.2
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=3.8
    GCC_VERSION=9
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-bionic-cuda11.0-cudnn8-py3.6-gcc9)
    CUDA_VERSION=11.0
    CUDNN_VERSION=8
    ANACONDA_PYTHON_VERSION=3.6
    GCC_VERSION=9
    PROTOBUF=yes
    DB=yes
    VISION=yes
    KATEX=yes
    ;;
  pytorch-linux-bionic-cuda11.0-cudnn8-py3.8-gcc9)
    CUDA_VERSION=11.0
    CUDNN_VERSION=8
    ANACONDA_PYTHON_VERSION=3.8
    GCC_VERSION=9
    PROTOBUF=yes
    DB=yes
    VISION=yes
    KATEX=yes
    ;;
  pytorch-linux-bionic-rocm3.7-py3.6)
    ANACONDA_PYTHON_VERSION=3.6
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ROCM_VERSION=3.7
    ;;
  pytorch-linux-bionic-rocm3.8-py3.6)
    ANACONDA_PYTHON_VERSION=3.6
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ROCM_VERSION=3.8
    ;;
  *)
    # Catch-all for builds that are not hardcoded.
    PROTOBUF=yes
    DB=yes
    VISION=yes
    echo "image '$image' did not match an existing build configuration"
    if [[ "$image" == *py* ]]; then
      extract_version_from_image_name py ANACONDA_PYTHON_VERSION
    fi
    if [[ "$image" == *cuda* ]]; then
      extract_version_from_image_name cuda CUDA_VERSION
      extract_version_from_image_name cudnn CUDNN_VERSION
    fi
    if [[ "$image" == *rocm* ]]; then
      extract_version_from_image_name rocm ROCM_VERSION
    fi
    if [[ "$image" == *gcc* ]]; then
      extract_version_from_image_name gcc GCC_VERSION
    fi
    if [[ "$image" == *clang* ]]; then
      extract_version_from_image_name clang CLANG_VERSION
    fi
    if [[ "$image" == *devtoolset* ]]; then
      extract_version_from_image_name devtoolset DEVTOOLSET_VERSION
    fi
    if [[ "$image" == *glibc* ]]; then
      extract_version_from_image_name glibc GLIBC_VERSION
    fi
    if [[ "$image" == *cmake* ]]; then
      extract_version_from_image_name cmake CMAKE_VERSION
    fi
    ;;
esac

# Set Jenkins UID and GID if running Jenkins
@@ -312,12 +182,8 @@ fi
tmp_tag="tmp-$(cat /dev/urandom | tr -dc 'a-z' | fold -w 32 | head -n 1)"
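A sketch of the throwaway tag scheme above, runnable on its own: the tag is `tmp-` plus 32 random lowercase letters drawn from `/dev/urandom`, so concurrent builds get distinct image tags:

```shell
#!/bin/sh
# Generate a disposable Docker tag: "tmp-" + 32 random lowercase letters.
tmp_tag="tmp-$(cat /dev/urandom | tr -dc 'a-z' | fold -w 32 | head -n 1)"
echo "$tmp_tag"
```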

# Build image
# TODO: build-arg THRIFT is not turned on for any image, remove it once we confirm
# it's no longer needed.
docker build \
  --no-cache \
  --progress=plain \
  --build-arg "TRAVIS_DL_URL_PREFIX=${TRAVIS_DL_URL_PREFIX}" \
  --build-arg "BUILD_ENVIRONMENT=${image}" \
  --build-arg "PROTOBUF=${PROTOBUF:-}" \
  --build-arg "THRIFT=${THRIFT:-}" \
@@ -329,9 +195,6 @@ docker build \
  --build-arg "JENKINS_UID=${JENKINS_UID:-}" \
  --build-arg "JENKINS_GID=${JENKINS_GID:-}" \
  --build-arg "UBUNTU_VERSION=${UBUNTU_VERSION}" \
  --build-arg "CENTOS_VERSION=${CENTOS_VERSION}" \
  --build-arg "DEVTOOLSET_VERSION=${DEVTOOLSET_VERSION}" \
  --build-arg "GLIBC_VERSION=${GLIBC_VERSION}" \
  --build-arg "CLANG_VERSION=${CLANG_VERSION}" \
  --build-arg "ANACONDA_PYTHON_VERSION=${ANACONDA_PYTHON_VERSION}" \
  --build-arg "TRAVIS_PYTHON_VERSION=${TRAVIS_PYTHON_VERSION}" \
@@ -341,25 +204,14 @@ docker build \
  --build-arg "ANDROID=${ANDROID}" \
  --build-arg "ANDROID_NDK=${ANDROID_NDK_VERSION}" \
  --build-arg "GRADLE_VERSION=${GRADLE_VERSION}" \
  --build-arg "VULKAN_SDK_VERSION=${VULKAN_SDK_VERSION}" \
  --build-arg "SWIFTSHADER=${SWIFTSHADER}" \
  --build-arg "CMAKE_VERSION=${CMAKE_VERSION:-}" \
  --build-arg "NINJA_VERSION=${NINJA_VERSION:-}" \
  --build-arg "KATEX=${KATEX:-}" \
  --build-arg "ROCM_VERSION=${ROCM_VERSION:-}" \
  -f $(dirname ${DOCKERFILE})/Dockerfile \
  -t "$tmp_tag" \
  "$@" \
  .

# NVIDIA dockers for RC releases use tag names like `11.0-cudnn8-devel-ubuntu18.04-rc`,
# for this case we will set UBUNTU_VERSION to `18.04-rc` so that the Dockerfile could
# find the correct image. As a result, here we have to replace the
#   "$UBUNTU_VERSION" == "18.04-rc"
# with
#   "$UBUNTU_VERSION" == "18.04"
UBUNTU_VERSION=$(echo ${UBUNTU_VERSION} | sed 's/-rc$//')
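The `sed 's/-rc$//'` call above can also be written with pure POSIX parameter expansion, avoiding the subshell; a small sketch (the `18.04-rc` value is the example from the comment):

```shell
#!/bin/sh
# Strip a trailing "-rc" suffix from the Ubuntu version, if present.
UBUNTU_VERSION="18.04-rc"
UBUNTU_VERSION="${UBUNTU_VERSION%-rc}"   # "%" removes the shortest matching suffix
echo "$UBUNTU_VERSION"
```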

function drun() {
  docker run --rm "$tmp_tag" $*
}

@@ -13,7 +13,7 @@ retry () {

#until we find a way to reliably reuse previous build, this last_tag is not in use
# last_tag="$(( CIRCLE_BUILD_NUM - 1 ))"
tag="${DOCKER_TAG}"
tag="${CIRCLE_WORKFLOW_ID}"

registry="308535385114.dkr.ecr.us-east-1.amazonaws.com"
@@ -1,93 +0,0 @@
ARG CENTOS_VERSION

FROM centos:${CENTOS_VERSION}

ARG CENTOS_VERSION

# Install required packages to build Caffe2

# Install common dependencies (so that this step can be cached separately)
ARG EC2
ADD ./common/install_base.sh install_base.sh
RUN bash ./install_base.sh && rm install_base.sh

# Install devtoolset
ARG DEVTOOLSET_VERSION
ADD ./common/install_devtoolset.sh install_devtoolset.sh
RUN bash ./install_devtoolset.sh && rm install_devtoolset.sh
ENV BASH_ENV "/etc/profile"

# (optional) Install non-default glibc version
ARG GLIBC_VERSION
ADD ./common/install_glibc.sh install_glibc.sh
RUN if [ -n "${GLIBC_VERSION}" ]; then bash ./install_glibc.sh; fi
RUN rm install_glibc.sh

# Install user
ADD ./common/install_user.sh install_user.sh
RUN bash ./install_user.sh && rm install_user.sh

# Install conda
ENV PATH /opt/conda/bin:$PATH
ARG ANACONDA_PYTHON_VERSION
ADD ./common/install_conda.sh install_conda.sh
RUN bash ./install_conda.sh && rm install_conda.sh

# (optional) Install protobuf for ONNX
ARG PROTOBUF
ADD ./common/install_protobuf.sh install_protobuf.sh
RUN if [ -n "${PROTOBUF}" ]; then bash ./install_protobuf.sh; fi
RUN rm install_protobuf.sh
ENV INSTALLED_PROTOBUF ${PROTOBUF}

# (optional) Install database packages like LMDB and LevelDB
ARG DB
ADD ./common/install_db.sh install_db.sh
RUN if [ -n "${DB}" ]; then bash ./install_db.sh; fi
RUN rm install_db.sh
ENV INSTALLED_DB ${DB}

# (optional) Install vision packages like OpenCV and ffmpeg
ARG VISION
ADD ./common/install_vision.sh install_vision.sh
RUN if [ -n "${VISION}" ]; then bash ./install_vision.sh; fi
RUN rm install_vision.sh
ENV INSTALLED_VISION ${VISION}

# Install rocm
ARG ROCM_VERSION
ADD ./common/install_rocm.sh install_rocm.sh
RUN bash ./install_rocm.sh
RUN rm install_rocm.sh
ENV PATH /opt/rocm/bin:$PATH
ENV PATH /opt/rocm/hcc/bin:$PATH
ENV PATH /opt/rocm/hip/bin:$PATH
ENV PATH /opt/rocm/opencl/bin:$PATH
ENV PATH /opt/rocm/llvm/bin:$PATH
ENV HIP_PLATFORM hcc
ENV LANG en_US.utf8
ENV LC_ALL en_US.utf8

# (optional) Install non-default CMake version
ARG CMAKE_VERSION
ADD ./common/install_cmake.sh install_cmake.sh
RUN if [ -n "${CMAKE_VERSION}" ]; then bash ./install_cmake.sh; fi
RUN rm install_cmake.sh

# (optional) Install non-default Ninja version
ARG NINJA_VERSION
ADD ./common/install_ninja.sh install_ninja.sh
RUN if [ -n "${NINJA_VERSION}" ]; then bash ./install_ninja.sh; fi
RUN rm install_ninja.sh

# Install ccache/sccache (do this last, so we get priority in PATH)
ADD ./common/install_cache.sh install_cache.sh
ENV PATH /opt/cache/bin:$PATH
RUN bash ./install_cache.sh && rm install_cache.sh

# Include BUILD_ENVIRONMENT environment variable in image
ARG BUILD_ENVIRONMENT
ENV BUILD_ENVIRONMENT ${BUILD_ENVIRONMENT}

USER jenkins
CMD ["bash"]

@@ -4,15 +4,13 @@ set -ex

[ -n "${ANDROID_NDK}" ]

_https_amazon_aws=https://ossci-android.s3.amazonaws.com

apt-get update
apt-get install -y --no-install-recommends autotools-dev autoconf unzip
apt-get autoclean && apt-get clean
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

pushd /tmp
curl -Os --retry 3 $_https_amazon_aws/android-ndk-${ANDROID_NDK}-linux-x86_64.zip
curl -Os https://dl.google.com/android/repository/android-ndk-${ANDROID_NDK}-linux-x86_64.zip
popd
_ndk_dir=/opt/ndk
mkdir -p "$_ndk_dir"
@@ -47,22 +45,43 @@ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/
# Installing android sdk
# https://github.com/circleci/circleci-images/blob/staging/android/Dockerfile.m4

_tmp_sdk_zip=/tmp/android-sdk-linux.zip
_sdk_version=sdk-tools-linux-3859397.zip
_android_home=/opt/android/sdk

rm -rf $_android_home
sudo mkdir -p $_android_home
curl --silent --show-error --location --fail --retry 3 --output /tmp/android-sdk-linux.zip $_https_amazon_aws/android-sdk-linux-tools3859397-build-tools2803-2902-platforms28-29.zip
sudo unzip -q $_tmp_sdk_zip -d $_android_home
rm $_tmp_sdk_zip
curl --silent --show-error --location --fail --retry 3 --output /tmp/$_sdk_version https://dl.google.com/android/repository/$_sdk_version
sudo unzip -q /tmp/$_sdk_version -d $_android_home
rm /tmp/$_sdk_version

sudo chmod -R 777 $_android_home

export ANDROID_HOME=$_android_home
export ADB_INSTALL_TIMEOUT=120

export PATH="${ANDROID_HOME}/tools:${ANDROID_HOME}/tools/bin:${ANDROID_HOME}/platform-tools:${PATH}"
export PATH="${ANDROID_HOME}/emulator:${ANDROID_HOME}/tools:${ANDROID_HOME}/tools/bin:${ANDROID_HOME}/platform-tools:${PATH}"
echo "PATH:${PATH}"
alias sdkmanager="$ANDROID_HOME/tools/bin/sdkmanager"

sudo mkdir ~/.android && sudo echo '### User Sources for Android SDK Manager' > ~/.android/repositories.cfg
sudo chmod -R 777 ~/.android

yes | sdkmanager --licenses
yes | sdkmanager --update

sdkmanager \
  "tools" \
  "platform-tools" \
  "emulator"

sdkmanager \
  "build-tools;28.0.3" \
  "build-tools;29.0.2"

sdkmanager \
  "platforms;android-28" \
  "platforms;android-29"
sdkmanager --list

# Installing Gradle
echo "GRADLE_VERSION:${GRADLE_VERSION}"
@@ -70,7 +89,8 @@ _gradle_home=/opt/gradle
sudo rm -rf $gradle_home
sudo mkdir -p $_gradle_home

curl --silent --output /tmp/gradle.zip --retry 3 $_https_amazon_aws/gradle-${GRADLE_VERSION}-bin.zip
wget --no-verbose --output-document=/tmp/gradle.zip \
  "https://services.gradle.org/distributions/gradle-${GRADLE_VERSION}-bin.zip"

sudo unzip -q /tmp/gradle.zip -d $_gradle_home
rm /tmp/gradle.zip
@@ -2,132 +2,74 @@

set -ex

install_ubuntu() {
  # NVIDIA dockers for RC releases use tag names like `11.0-cudnn8-devel-ubuntu18.04-rc`,
  # for this case we will set UBUNTU_VERSION to `18.04-rc` so that the Dockerfile could
  # find the correct image. As a result, here we have to check for
  #   "$UBUNTU_VERSION" == "18.04"*
  # instead of
  #   "$UBUNTU_VERSION" == "18.04"
  if [[ "$UBUNTU_VERSION" == "18.04"* ]]; then
    cmake3="cmake=3.10*"
  else
    cmake3="cmake=3.5*"
  fi
  if [[ "$UBUNTU_VERSION" == "14.04" ]]; then
    # cmake 2 is too old
    cmake3=cmake3
  else
    cmake3=cmake
  fi

  # Install common dependencies
  apt-get update
  # TODO: Some of these may not be necessary
  # TODO: libiomp also gets installed by conda, aka there's a conflict
  ccache_deps="asciidoc docbook-xml docbook-xsl xsltproc"
  numpy_deps="gfortran"
  apt-get install -y --no-install-recommends \
    $ccache_deps \
    $numpy_deps \
    ${cmake3} \
    apt-transport-https \
    autoconf \
    automake \
    build-essential \
    ca-certificates \
    curl \
    git \
    libatlas-base-dev \
    libc6-dbg \
    libiomp-dev \
    libyaml-dev \
    libz-dev \
    libjpeg-dev \
    libasound2-dev \
    libsndfile-dev \
    python \
    python-dev \
    python-setuptools \
    python-wheel \
    software-properties-common \
    sudo \
    wget \
    vim
  if [[ "$UBUNTU_VERSION" == "18.04" ]]; then
    cmake3="cmake=3.10*"
  else
    cmake3="${cmake3}=3.5*"
  fi

  # TODO: THIS IS A HACK!!!
  # distributed nccl(2) tests are a bit busted, see https://github.com/pytorch/pytorch/issues/5877
  if dpkg -s libnccl-dev; then
    apt-get remove -y libnccl-dev libnccl2 --allow-change-held-packages
  fi

  # Cleanup package manager
  apt-get autoclean && apt-get clean
  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
}

install_centos() {
  # Need EPEL for many packages we depend on.
  # See http://fedoraproject.org/wiki/EPEL
  yum --enablerepo=extras install -y epel-release

  ccache_deps="asciidoc docbook-dtds docbook-style-xsl libxslt"
  numpy_deps="gcc-gfortran"
  # Note: protobuf-c-{compiler,devel} on CentOS are too old to be used
  # for Caffe2. That said, we still install them to make sure the build
  # system opts to build/use protoc and libprotobuf from third-party.
  yum install -y \
    $ccache_deps \
    $numpy_deps \
    autoconf \
    automake \
    bzip2 \
    cmake \
    cmake3 \
    curl \
    gcc \
    gcc-c++ \
    gflags-devel \
    git \
    glibc-devel \
    glibc-headers \
    glog-devel \
    hiredis-devel \
    libstdc++-devel \
    make \
    opencv-devel \
    sudo \
    wget \
    vim

  # Cleanup
  yum clean all
  rm -rf /var/cache/yum
  rm -rf /var/lib/yum/yumdb
  rm -rf /var/lib/yum/history
}

# Install base packages depending on the base OS
ID=$(grep -oP '(?<=^ID=).+' /etc/os-release | tr -d '"')
case "$ID" in
  ubuntu)
    install_ubuntu
    ;;
  centos)
    install_centos
    ;;
  *)
    echo "Unable to determine OS..."
    exit 1
    ;;
esac
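A runnable sketch of the `ID`-based dispatch above, pointed at a fake `os-release` file instead of `/etc/os-release` (the `sed` line is a portable equivalent of the script's GNU `grep -oP` lookbehind):

```shell
#!/bin/sh
# Read ID= from an os-release-style file and dispatch on it.
osrel=$(mktemp)
printf 'NAME="Ubuntu"\nID=ubuntu\nVERSION_ID="18.04"\n' > "$osrel"
ID=$(sed -n 's/^ID=//p' "$osrel" | tr -d '"')
case "$ID" in
  ubuntu) pkg_cmd="apt-get" ;;
  centos) pkg_cmd="yum" ;;
  *)      pkg_cmd="unknown" ;;
esac
echo "$pkg_cmd"
rm -f "$osrel"
```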
# Install common dependencies
apt-get update
# TODO: Some of these may not be necessary
# TODO: libiomp also gets installed by conda, aka there's a conflict
ccache_deps="asciidoc docbook-xml docbook-xsl xsltproc"
numpy_deps="gfortran"
apt-get install -y --no-install-recommends \
  $ccache_deps \
  $numpy_deps \
  ${cmake3} \
  apt-transport-https \
  autoconf \
  automake \
  build-essential \
  ca-certificates \
  curl \
  git \
  libatlas-base-dev \
  libc6-dbg \
  libiomp-dev \
  libyaml-dev \
  libz-dev \
  libjpeg-dev \
  libasound2-dev \
  libsndfile-dev \
  python \
  python-dev \
  python-setuptools \
  python-wheel \
  software-properties-common \
  sudo \
  wget \
  vim

# Install Valgrind separately since the apt-get version is too old.
mkdir valgrind_build && cd valgrind_build
VALGRIND_VERSION=3.16.1
if ! wget http://valgrind.org/downloads/valgrind-${VALGRIND_VERSION}.tar.bz2
if ! wget http://valgrind.org/downloads/valgrind-3.14.0.tar.bz2
then
  wget https://sourceware.org/ftp/valgrind/valgrind-${VALGRIND_VERSION}.tar.bz2
  wget https://sourceware.org/ftp/valgrind/valgrind-3.14.0.tar.bz2
fi
tar -xjf valgrind-${VALGRIND_VERSION}.tar.bz2
cd valgrind-${VALGRIND_VERSION}
tar -xjf valgrind-3.14.0.tar.bz2
cd valgrind-3.14.0
./configure --prefix=/usr/local
make -j 4
make
sudo make install
cd ../../
rm -rf valgrind_build
alias valgrind="/usr/local/bin/valgrind"

# TODO: THIS IS A HACK!!!
# distributed nccl(2) tests are a bit busted, see https://github.com/pytorch/pytorch/issues/5877
if dpkg -s libnccl-dev; then
  apt-get remove -y libnccl-dev libnccl2 --allow-change-held-packages
fi

# Cleanup package manager
apt-get autoclean && apt-get clean
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
@@ -8,11 +8,7 @@ sed -e 's|PATH="\(.*\)"|PATH="/opt/cache/bin:\1"|g' -i /etc/environment
export PATH="/opt/cache/bin:$PATH"

# Setup compiler cache
if [ -n "$ROCM_VERSION" ]; then
  curl --retry 3 http://repo.radeon.com/misc/.sccache_amd/sccache -o /opt/cache/bin/sccache
else
  curl --retry 3 https://s3.amazonaws.com/ossci-linux/sccache -o /opt/cache/bin/sccache
fi
curl https://s3.amazonaws.com/ossci-linux/sccache -o /opt/cache/bin/sccache
chmod a+x /opt/cache/bin/sccache

function write_sccache_stub() {
@@ -24,12 +20,8 @@ write_sccache_stub cc
write_sccache_stub c++
write_sccache_stub gcc
write_sccache_stub g++

# NOTE: See specific ROCM_VERSION case below.
if [ "x$ROCM_VERSION" = x ]; then
  write_sccache_stub clang
  write_sccache_stub clang++
fi
write_sccache_stub clang
write_sccache_stub clang++

if [ -n "$CUDA_VERSION" ]; then
  # TODO: This is a workaround for the fact that PyTorch's FindCUDA
@@ -41,47 +33,3 @@ if [ -n "$CUDA_VERSION" ]; then
  printf "#!/bin/sh\nexec sccache $(which nvcc) \"\$@\"" > /opt/cache/lib/nvcc
  chmod a+x /opt/cache/lib/nvcc
fi
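A minimal sketch of the stub trick used above: a two-line shim that re-invokes the real compiler through sccache. It is written to a temp directory here rather than `/opt/cache/bin`, and the shim is only created, never executed, so sccache need not be installed to try it:

```shell
#!/bin/sh
# Create an sccache wrapper shim for a compiler in a scratch directory.
stub_dir=$(mktemp -d)
cat > "$stub_dir/cc" <<'EOF'
#!/bin/sh
exec sccache /usr/bin/cc "$@"
EOF
chmod a+x "$stub_dir/cc"
head -n 1 "$stub_dir/cc"
```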

if [ -n "$ROCM_VERSION" ]; then
  # ROCm compiler is hcc or clang. However, it is commonly invoked via hipcc wrapper.
  # hipcc will call either hcc or clang using an absolute path starting with /opt/rocm,
  # causing the /opt/cache/bin to be skipped. We must create the sccache wrappers
  # directly under /opt/rocm while also preserving the original compiler names.
  # Note symlinks will chain as follows: [hcc or clang++] -> clang -> clang-??
  # Final link in symlink chain must point back to original directory.

  # Original compiler is moved one directory deeper. Wrapper replaces it.
  function write_sccache_stub_rocm() {
    OLDCOMP=$1
    COMPNAME=$(basename $OLDCOMP)
    TOPDIR=$(dirname $OLDCOMP)
    WRAPPED="$TOPDIR/original/$COMPNAME"
    mv "$OLDCOMP" "$WRAPPED"
    printf "#!/bin/sh\nexec sccache $WRAPPED \$*" > "$OLDCOMP"
    chmod a+x "$1"
  }

  if [[ -e "/opt/rocm/hcc/bin/hcc" ]]; then
    # ROCm 3.3 or earlier.
    mkdir /opt/rocm/hcc/bin/original
    write_sccache_stub_rocm /opt/rocm/hcc/bin/hcc
    write_sccache_stub_rocm /opt/rocm/hcc/bin/clang
    write_sccache_stub_rocm /opt/rocm/hcc/bin/clang++
    # Fix last link in symlink chain, clang points to versioned clang in prior dir
    pushd /opt/rocm/hcc/bin/original
    ln -s ../$(readlink clang)
    popd
  elif [[ -e "/opt/rocm/llvm/bin/clang" ]]; then
    # ROCm 3.5 and beyond.
    mkdir /opt/rocm/llvm/bin/original
    write_sccache_stub_rocm /opt/rocm/llvm/bin/clang
    write_sccache_stub_rocm /opt/rocm/llvm/bin/clang++
    # Fix last link in symlink chain, clang points to versioned clang in prior dir
    pushd /opt/rocm/llvm/bin/original
    ln -s ../$(readlink clang)
    popd
  else
    echo "Cannot find ROCm compiler."
    exit 1
  fi
fi
@@ -10,7 +10,7 @@ file="cmake-${CMAKE_VERSION}-Linux-x86_64.tar.gz"

# Download and install specific CMake version in /usr/local
pushd /tmp
curl -Os --retry 3 "https://cmake.org/files/${path}/${file}"
curl -Os "https://cmake.org/files/${path}/${file}"
tar -C /usr/local --strip-components 1 --no-same-owner -zxf cmake-*.tar.gz
rm -f cmake-*.tar.gz
popd
@@ -4,7 +4,7 @@ set -ex

# Optionally install conda
if [ -n "$ANACONDA_PYTHON_VERSION" ]; then
  BASE_URL="https://repo.anaconda.com/miniconda"
  BASE_URL="https://repo.continuum.io/miniconda"

  MAJOR_PYTHON_VERSION=$(echo "$ANACONDA_PYTHON_VERSION" | cut -d . -f 1)

@@ -24,20 +24,13 @@ if [ -n "$ANACONDA_PYTHON_VERSION" ]; then
  mkdir /opt/conda
  chown jenkins:jenkins /opt/conda

  # Work around bug where devtoolset replaces sudo and breaks it.
  if [ -n "$DEVTOOLSET_VERSION" ]; then
    SUDO=/bin/sudo
  else
    SUDO=sudo
  fi

  as_jenkins() {
    # NB: unsetting the environment variables works around a conda bug
    # https://github.com/conda/conda/issues/6576
    # NB: Pass on PATH and LD_LIBRARY_PATH to sudo invocation
    # NB: This must be run from a directory that jenkins has access to,
    # works around https://github.com/conda/conda-package-handling/pull/34
    $SUDO -H -u jenkins env -u SUDO_UID -u SUDO_GID -u SUDO_COMMAND -u SUDO_USER env "PATH=$PATH" "LD_LIBRARY_PATH=$LD_LIBRARY_PATH" $*
    sudo -H -u jenkins env -u SUDO_UID -u SUDO_GID -u SUDO_COMMAND -u SUDO_USER env "PATH=$PATH" "LD_LIBRARY_PATH=$LD_LIBRARY_PATH" $*
  }

  pushd /tmp
@@ -56,10 +49,10 @@ if [ -n "$ANACONDA_PYTHON_VERSION" ]; then
  pushd /opt/conda

  # Track latest conda update
  as_jenkins conda update -y -n base conda
  as_jenkins conda update -n base conda

  # Install correct Python version
  as_jenkins conda install -y python="$ANACONDA_PYTHON_VERSION"
  as_jenkins conda install python="$ANACONDA_PYTHON_VERSION"

  conda_install() {
    # Ensure that the install command don't upgrade/downgrade Python
@@ -71,23 +64,19 @@ if [ -n "$ANACONDA_PYTHON_VERSION" ]; then
  # Install PyTorch conda deps, as per https://github.com/pytorch/pytorch README
  # DO NOT install cmake here as it would install a version newer than 3.5, but
  # we want to pin to version 3.5.
  if [ "$ANACONDA_PYTHON_VERSION" = "3.8" ]; then
    # DO NOT install typing if installing python-3.8, since its part of python-3.8 core packages
    # Install llvm-8 as it is required to compile llvmlite-0.30.0 from source
    conda_install numpy=1.18.5 pyyaml mkl mkl-include setuptools cffi future six llvmdev=8.0.0 dataclasses
  else
    conda_install numpy=1.18.5 pyyaml mkl mkl-include setuptools cffi typing future six dataclasses
  fi
  if [[ "$CUDA_VERSION" == 9.2* ]]; then
  conda_install numpy pyyaml mkl mkl-include setuptools cffi typing future six
  if [[ "$CUDA_VERSION" == 8.0* ]]; then
    conda_install magma-cuda80 -c pytorch
  elif [[ "$CUDA_VERSION" == 9.0* ]]; then
    conda_install magma-cuda90 -c pytorch
  elif [[ "$CUDA_VERSION" == 9.1* ]]; then
    conda_install magma-cuda91 -c pytorch
  elif [[ "$CUDA_VERSION" == 9.2* ]]; then
    conda_install magma-cuda92 -c pytorch
  elif [[ "$CUDA_VERSION" == 10.0* ]]; then
    conda_install magma-cuda100 -c pytorch
  elif [[ "$CUDA_VERSION" == 10.1* ]]; then
    conda_install magma-cuda101 -c pytorch
  elif [[ "$CUDA_VERSION" == 10.2* ]]; then
    conda_install magma-cuda102 -c pytorch
  elif [[ "$CUDA_VERSION" == 11.0* ]]; then
    conda_install magma-cuda110 -c pytorch
  fi
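Every branch of the elif chain above follows the same pattern: the magma package name is `magma-cuda` plus the CUDA version with the dot removed. A hypothetical compact sketch of that mapping (illustrative only; the script itself spells out each branch):

```shell
#!/bin/sh
# Map a CUDA major.minor version to its magma conda package name
# by deleting the dot: 10.2 -> magma-cuda102.
CUDA_VERSION=10.2
MAGMA_PACKAGE="magma-cuda$(echo "$CUDA_VERSION" | tr -d .)"
echo "$MAGMA_PACKAGE"
```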

  # TODO: This isn't working atm
@@ -99,7 +88,7 @@ if [ -n "$ANACONDA_PYTHON_VERSION" ]; then
  # scikit-learn is pinned because of
  # https://github.com/scikit-learn/scikit-learn/issues/14485 (affects gcc 5.5
  # only)
  as_jenkins pip install --progress-bar off pytest scipy==1.1.0 scikit-learn==0.20.3 scikit-image librosa>=0.6.2 psutil numba==0.46.0 llvmlite==0.30.0
  as_jenkins pip install --progress-bar off pytest scipy==1.1.0 scikit-learn==0.20.3 scikit-image librosa>=0.6.2 psutil numba==0.43.1 llvmlite==0.28.0

  popd
fi

@@ -51,16 +51,11 @@ install_centos() {
}

# Install base packages depending on the base OS
ID=$(grep -oP '(?<=^ID=).+' /etc/os-release | tr -d '"')
case "$ID" in
  ubuntu)
    install_ubuntu
    ;;
  centos)
    install_centos
    ;;
  *)
    echo "Unable to determine OS..."
    exit 1
    ;;
esac
if [ -f /etc/lsb-release ]; then
  install_ubuntu
elif [ -f /etc/os-release ]; then
  install_centos
else
  echo "Unable to determine OS..."
  exit 1
fi

@@ -1,10 +0,0 @@
#!/bin/bash

set -ex

[ -n "$DEVTOOLSET_VERSION" ]

yum install -y centos-release-scl
yum install -y devtoolset-$DEVTOOLSET_VERSION

echo "source scl_source enable devtoolset-$DEVTOOLSET_VERSION" > "/etc/profile.d/devtoolset-$DEVTOOLSET_VERSION.sh"
@@ -7,11 +7,7 @@ if [ -n "$GCC_VERSION" ]; then
  # Need the official toolchain repo to get alternate packages
  add-apt-repository ppa:ubuntu-toolchain-r/test
  apt-get update
  if [ "$UBUNTU_VERSION" = "16.04" -a "$GCC_VERSION" = "5" ]; then
    apt-get install -y g++-5=5.4.0-6ubuntu1~16.04.12
  else
    apt-get install -y g++-$GCC_VERSION
  fi
  apt-get install -y g++-$GCC_VERSION

  update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-"$GCC_VERSION" 50
  update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-"$GCC_VERSION" 50

@@ -1,34 +0,0 @@
#!/bin/bash

set -ex

[ -n "$GLIBC_VERSION" ]
if [[ -n "$CENTOS_VERSION" ]]; then
  [ -n "$DEVTOOLSET_VERSION" ]
fi

yum install -y wget sed

mkdir -p /packages && cd /packages
wget -q http://ftp.gnu.org/gnu/glibc/glibc-$GLIBC_VERSION.tar.gz
tar xzf glibc-$GLIBC_VERSION.tar.gz
if [[ "$GLIBC_VERSION" == "2.26" ]]; then
  cd glibc-$GLIBC_VERSION
  sed -i 's/$name ne "nss_test1"/$name ne "nss_test1" \&\& $name ne "nss_test2"/' scripts/test-installation.pl
  cd ..
fi
mkdir -p glibc-$GLIBC_VERSION-build && cd glibc-$GLIBC_VERSION-build

if [[ -n "$CENTOS_VERSION" ]]; then
  export PATH=/opt/rh/devtoolset-$DEVTOOLSET_VERSION/root/usr/bin:$PATH
fi

../glibc-$GLIBC_VERSION/configure --prefix=/usr CFLAGS='-Wno-stringop-truncation -Wno-format-overflow -Wno-restrict -Wno-format-truncation -g -O2'
make -j$(nproc)
make install

# Cleanup
rm -rf /packages
rm -rf /var/cache/yum/*
rm -rf /var/lib/rpm/__db.*
yum clean all
@@ -46,16 +46,11 @@ install_centos() {
}

# Install base packages depending on the base OS
ID=$(grep -oP '(?<=^ID=).+' /etc/os-release | tr -d '"')
case "$ID" in
  ubuntu)
    install_ubuntu
    ;;
  centos)
    install_centos
    ;;
  *)
    echo "Unable to determine OS..."
    exit 1
    ;;
esac
if [ -f /etc/lsb-release ]; then
  install_ubuntu
elif [ -f /etc/os-release ]; then
  install_centos
else
  echo "Unable to determine OS..."
  exit 1
fi
@@ -1,105 +0,0 @@
#!/bin/bash

set -ex

install_ubuntu() {
  apt-get update
  if [[ $UBUNTU_VERSION == 18.04 ]]; then
    # gpg-agent is not available by default on 18.04
    apt-get install -y --no-install-recommends gpg-agent
  fi
  apt-get install -y kmod
  apt-get install -y wget
  apt-get install -y libopenblas-dev

  # Need the libc++1 and libc++abi1 libraries to allow torch._C to load at runtime
  apt-get install -y libc++1
  apt-get install -y libc++abi1

  DEB_ROCM_REPO=http://repo.radeon.com/rocm/apt/${ROCM_VERSION}
  # Add rocm repository
  wget -qO - $DEB_ROCM_REPO/rocm.gpg.key | apt-key add -
  echo "deb [arch=amd64] $DEB_ROCM_REPO xenial main" > /etc/apt/sources.list.d/rocm.list
  apt-get update --allow-insecure-repositories

  DEBIAN_FRONTEND=noninteractive apt-get install -y --allow-unauthenticated \
    rocm-dev \
    rocm-utils \
    rocfft \
    miopen-hip \
    rocblas \
    hipsparse \
    rocrand \
    hipcub \
    rocthrust \
    rccl \
    rocprofiler-dev \
    roctracer-dev

  # precompiled miopen kernels added in ROCm 3.5; search for all unversioned packages
  # if search fails it will abort this script; use true to avoid case where search fails
  MIOPENKERNELS=$(apt-cache search --names-only miopenkernels | awk '{print $1}' | grep -F -v . || true)
  if [[ "x${MIOPENKERNELS}" = x ]]; then
    echo "miopenkernels package not available"
  else
    DEBIAN_FRONTEND=noninteractive apt-get install -y --allow-unauthenticated ${MIOPENKERNELS}
  fi

  # Cleanup
  apt-get autoclean && apt-get clean
  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
}

install_centos() {

  yum update -y
  yum install -y kmod
  yum install -y wget
  yum install -y openblas-devel

  yum install -y epel-release
  yum install -y dkms kernel-headers-`uname -r` kernel-devel-`uname -r`

  echo "[ROCm]" > /etc/yum.repos.d/rocm.repo
  echo "name=ROCm" >> /etc/yum.repos.d/rocm.repo
  echo "baseurl=http://repo.radeon.com/rocm/yum/${ROCM_VERSION}" >> /etc/yum.repos.d/rocm.repo
  echo "enabled=1" >> /etc/yum.repos.d/rocm.repo
  echo "gpgcheck=0" >> /etc/yum.repos.d/rocm.repo

  yum update -y

  yum install -y \
    rocm-dev \
    rocm-utils \
    rocfft \
    miopen-hip \
    rocblas \
    hipsparse \
    rocrand \
    rccl \
    hipcub \
    rocthrust \
    rocprofiler-dev \
    roctracer-dev

  # Cleanup
  yum clean all
  rm -rf /var/cache/yum
  rm -rf /var/lib/yum/yumdb
  rm -rf /var/lib/yum/history
}

# Install Python packages depending on the base OS
ID=$(grep -oP '(?<=^ID=).+' /etc/os-release | tr -d '"')
case "$ID" in
  ubuntu)
    install_ubuntu
    ;;
  centos)
    install_centos
    ;;
  *)
    echo "Unable to determine OS..."
    exit 1
    ;;
esac
@@ -1,24 +0,0 @@
#!/bin/bash

set -ex

[ -n "${SWIFTSHADER}" ]

retry () {
  $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
}

_https_amazon_aws=https://ossci-android.s3.amazonaws.com

# SwiftShader
_swiftshader_dir=/var/lib/jenkins/swiftshader
_swiftshader_file_targz=swiftshader-abe07b943-prebuilt.tar.gz
mkdir -p $_swiftshader_dir
_tmp_swiftshader_targz="/tmp/${_swiftshader_file_targz}"

curl --silent --show-error --location --fail --retry 3 \
  --output "${_tmp_swiftshader_targz}" "$_https_amazon_aws/${_swiftshader_file_targz}"

tar -C "${_swiftshader_dir}" -xzf "${_tmp_swiftshader_targz}"

export VK_ICD_FILENAMES="${_swiftshader_dir}/build/Linux/vk_swiftshader_icd.json"
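The shell `retry` helper used throughout these install scripts re-runs a command up to five times with doubling sleeps (1, 2, 4, 8 seconds) between attempts. A minimal Python sketch of the same backoff pattern; the names and the injectable `sleep` parameter are illustrative, not part of the original scripts:

```python
import time

def retry(fn, delays=(1, 2, 4, 8), sleep=time.sleep):
    """Run fn; on failure, sleep and retry once per delay, like the shell helper."""
    for delay in delays:
        try:
            return fn()
        except Exception:
            sleep(delay)
    return fn()  # final attempt; its exception propagates

# Example: a function that succeeds on the third attempt.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky, sleep=lambda s: None))  # ok
```

As in the shell version, a command that keeps failing ultimately fails the script: the last attempt's error is not swallowed.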
@@ -14,7 +14,7 @@ if [ -n "$TRAVIS_PYTHON_VERSION" ]; then

  # Download Python binary from Travis
  pushd tmp
  as_jenkins wget --quiet ${TRAVIS_DL_URL_PREFIX}/python-$TRAVIS_PYTHON_VERSION.tar.bz2
  as_jenkins wget --quiet https://s3.amazonaws.com/travis-python-archives/binaries/ubuntu/14.04/x86_64/python-$TRAVIS_PYTHON_VERSION.tar.bz2
  # NB: The tarball also comes with /home/travis virtualenv that we
  # don't care about. (Maybe we should, but we've worked around the
  # "how do I install to python" issue by making this entire directory
@@ -49,7 +49,26 @@ if [ -n "$TRAVIS_PYTHON_VERSION" ]; then

  pip --version

  as_jenkins pip install numpy pyyaml
  if [[ "$TRAVIS_PYTHON_VERSION" == nightly ]]; then
    # These two packages have broken Cythonizations uploaded
    # to PyPi, see:
    #
    # - https://github.com/numpy/numpy/issues/10500
    # - https://github.com/yaml/pyyaml/issues/117
    #
    # Furthermore, the released version of Cython does not
    # have these issues fixed.
    #
    # While we are waiting on fixes for these, we build
    # from Git for now. Feel free to delete this conditional
    # branch if things start working again (you may need
    # to do this if these packages regress on Git HEAD.)
    as_jenkins pip install git+https://github.com/cython/cython.git
    as_jenkins pip install git+https://github.com/numpy/numpy.git
    as_jenkins pip install git+https://github.com/yaml/pyyaml.git
  else
    as_jenkins pip install numpy pyyaml
  fi

  as_jenkins pip install \
    future \
@@ -57,8 +76,7 @@ if [ -n "$TRAVIS_PYTHON_VERSION" ]; then
    protobuf \
    pytest \
    pillow \
    typing \
    dataclasses
    typing

  as_jenkins pip install mkl mkl-devel

@@ -70,9 +88,6 @@ if [ -n "$TRAVIS_PYTHON_VERSION" ]; then
  # Install psutil for dataloader tests
  as_jenkins pip install psutil

  # Install dill for serialization tests
  as_jenkins pip install "dill>=0.3.1"

  # Cleanup package manager
  apt-get autoclean && apt-get clean
  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
@@ -47,16 +47,11 @@ install_centos() {
}

# Install base packages depending on the base OS
ID=$(grep -oP '(?<=^ID=).+' /etc/os-release | tr -d '"')
case "$ID" in
  ubuntu)
    install_ubuntu
    ;;
  centos)
    install_centos
    ;;
  *)
    echo "Unable to determine OS..."
    exit 1
    ;;
esac
if [ -f /etc/lsb-release ]; then
  install_ubuntu
elif [ -f /etc/os-release ]; then
  install_centos
else
  echo "Unable to determine OS..."
  exit 1
fi
@@ -1,23 +0,0 @@
#!/bin/bash

set -ex

[ -n "${VULKAN_SDK_VERSION}" ]

retry () {
  $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
}

_https_amazon_aws=https://ossci-android.s3.amazonaws.com

_vulkansdk_dir=/var/lib/jenkins/vulkansdk
mkdir -p $_vulkansdk_dir
_tmp_vulkansdk_targz=/tmp/vulkansdk.tar.gz
curl --silent --show-error --location --fail --retry 3 \
  --output "$_tmp_vulkansdk_targz" "$_https_amazon_aws/vulkansdk-linux-x86_64-${VULKAN_SDK_VERSION}.tar.gz"

tar -C "$_vulkansdk_dir" -xzf "$_tmp_vulkansdk_targz" --strip-components 1

export VULKAN_SDK="$_vulkansdk_dir/"

rm "$_tmp_vulkansdk_targz"
@@ -35,11 +35,6 @@ ARG GCC_VERSION
ADD ./common/install_gcc.sh install_gcc.sh
RUN bash ./install_gcc.sh && rm install_gcc.sh

# Install clang
ARG CLANG_VERSION
ADD ./common/install_clang.sh install_clang.sh
RUN bash ./install_clang.sh && rm install_clang.sh

# Install non-standard Python versions (via Travis binaries)
ARG TRAVIS_PYTHON_VERSION
ENV PATH /opt/python/$TRAVIS_PYTHON_VERSION/bin:$PATH
@@ -86,8 +81,5 @@ ENV BUILD_ENVIRONMENT ${BUILD_ENVIRONMENT}
ENV TORCH_CUDA_ARCH_LIST Maxwell
ENV TORCH_NVCC_FLAGS "-Xfatbin -compress-all"

# Install LLVM dev version (Defined in the pytorch/builder github repository)
COPY --from=pytorch/llvm:9.0.1 /opt/llvm /opt/llvm

USER jenkins
CMD ["bash"]
.circleci/docker/ubuntu-rocm/.gitignore (vendored, 1 line changed)
@@ -1 +0,0 @@
*.sh
@@ -1,87 +0,0 @@
ARG UBUNTU_VERSION

FROM ubuntu:${UBUNTU_VERSION}

ARG UBUNTU_VERSION

ENV DEBIAN_FRONTEND noninteractive

# Install common dependencies (so that this step can be cached separately)
ARG EC2
ADD ./common/install_base.sh install_base.sh
RUN bash ./install_base.sh && rm install_base.sh

# Install clang
ARG LLVMDEV
ARG CLANG_VERSION
ADD ./common/install_clang.sh install_clang.sh
RUN bash ./install_clang.sh && rm install_clang.sh

# Install user
ADD ./common/install_user.sh install_user.sh
RUN bash ./install_user.sh && rm install_user.sh

# Install conda
ENV PATH /opt/conda/bin:$PATH
ARG ANACONDA_PYTHON_VERSION
ADD ./common/install_conda.sh install_conda.sh
RUN bash ./install_conda.sh && rm install_conda.sh

# (optional) Install protobuf for ONNX
ARG PROTOBUF
ADD ./common/install_protobuf.sh install_protobuf.sh
RUN if [ -n "${PROTOBUF}" ]; then bash ./install_protobuf.sh; fi
RUN rm install_protobuf.sh
ENV INSTALLED_PROTOBUF ${PROTOBUF}

# (optional) Install database packages like LMDB and LevelDB
ARG DB
ADD ./common/install_db.sh install_db.sh
RUN if [ -n "${DB}" ]; then bash ./install_db.sh; fi
RUN rm install_db.sh
ENV INSTALLED_DB ${DB}

# (optional) Install vision packages like OpenCV and ffmpeg
ARG VISION
ADD ./common/install_vision.sh install_vision.sh
RUN if [ -n "${VISION}" ]; then bash ./install_vision.sh; fi
RUN rm install_vision.sh
ENV INSTALLED_VISION ${VISION}

# Install rocm
ARG ROCM_VERSION
ADD ./common/install_rocm.sh install_rocm.sh
RUN bash ./install_rocm.sh
RUN rm install_rocm.sh
ENV PATH /opt/rocm/bin:$PATH
ENV PATH /opt/rocm/hcc/bin:$PATH
ENV PATH /opt/rocm/hip/bin:$PATH
ENV PATH /opt/rocm/opencl/bin:$PATH
ENV PATH /opt/rocm/llvm/bin:$PATH
ENV HIP_PLATFORM hcc
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8

# (optional) Install non-default CMake version
ARG CMAKE_VERSION
ADD ./common/install_cmake.sh install_cmake.sh
RUN if [ -n "${CMAKE_VERSION}" ]; then bash ./install_cmake.sh; fi
RUN rm install_cmake.sh

# (optional) Install non-default Ninja version
ARG NINJA_VERSION
ADD ./common/install_ninja.sh install_ninja.sh
RUN if [ -n "${NINJA_VERSION}" ]; then bash ./install_ninja.sh; fi
RUN rm install_ninja.sh

# Install ccache/sccache (do this last, so we get priority in PATH)
ADD ./common/install_cache.sh install_cache.sh
ENV PATH /opt/cache/bin:$PATH
RUN bash ./install_cache.sh && rm install_cache.sh

# Include BUILD_ENVIRONMENT environment variable in image
ARG BUILD_ENVIRONMENT
ENV BUILD_ENVIRONMENT ${BUILD_ENVIRONMENT}

USER jenkins
CMD ["bash"]
@@ -46,7 +46,6 @@ RUN bash ./install_gcc.sh && rm install_gcc.sh

# Install non-standard Python versions (via Travis binaries)
ARG TRAVIS_PYTHON_VERSION
ARG TRAVIS_DL_URL_PREFIX
ENV PATH /opt/python/$TRAVIS_PYTHON_VERSION/bin:$PATH
ADD ./common/install_travis_python.sh install_travis_python.sh
RUN bash ./install_travis_python.sh && rm install_travis_python.sh
@@ -85,18 +84,6 @@ RUN rm AndroidManifest.xml
RUN rm build.gradle
ENV INSTALLED_ANDROID ${ANDROID}

# (optional) Install Vulkan SDK
ARG VULKAN_SDK_VERSION
ADD ./common/install_vulkan_sdk.sh install_vulkan_sdk.sh
RUN if [ -n "${VULKAN_SDK_VERSION}" ]; then bash ./install_vulkan_sdk.sh; fi
RUN rm install_vulkan_sdk.sh

# (optional) Install swiftshader
ARG SWIFTSHADER
ADD ./common/install_swiftshader.sh install_swiftshader.sh
RUN if [ -n "${SWIFTSHADER}" ]; then bash ./install_swiftshader.sh; fi
RUN rm install_swiftshader.sh

# (optional) Install non-default CMake version
ARG CMAKE_VERSION
ADD ./common/install_cmake.sh install_cmake.sh
@@ -123,8 +110,5 @@ RUN bash ./install_jni.sh && rm install_jni.sh
ARG BUILD_ENVIRONMENT
ENV BUILD_ENVIRONMENT ${BUILD_ENVIRONMENT}

# Install LLVM dev version (Defined in the pytorch/builder github repository)
COPY --from=pytorch/llvm:9.0.1 /opt/llvm /opt/llvm

USER jenkins
CMD ["bash"]
@@ -1,13 +0,0 @@
FROM ubuntu:16.04

RUN apt-get update && apt-get install -y python-pip git && rm -rf /var/lib/apt/lists/* /var/log/dpkg.log

ADD requirements.txt /requirements.txt

RUN pip install -r /requirements.txt

ADD gc.py /usr/bin/gc.py

ADD docker_hub.py /usr/bin/docker_hub.py

ENTRYPOINT ["/usr/bin/gc.py"]
@@ -1,125 +0,0 @@
#!/usr/bin/env python

from collections import namedtuple

import boto3
import requests
import os


IMAGE_INFO = namedtuple(
    "IMAGE_INFO", ("repo", "tag", "size", "last_updated_at", "last_updated_by")
)


def build_access_token(username, password):
    r = requests.post(
        "https://hub.docker.com/v2/users/login/",
        data={"username": username, "password": password},
    )
    r.raise_for_status()
    token = r.json().get("token")
    return {"Authorization": "JWT " + token}


def list_repos(user, token):
    r = requests.get("https://hub.docker.com/v2/repositories/" + user, headers=token)
    r.raise_for_status()
    ret = sorted(
        repo["user"] + "/" + repo["name"] for repo in r.json().get("results", [])
    )
    if ret:
        print("repos found:")
        print("".join("\n\t" + r for r in ret))
    return ret


def list_tags(repo, token):
    r = requests.get(
        "https://hub.docker.com/v2/repositories/" + repo + "/tags", headers=token
    )
    r.raise_for_status()
    return [
        IMAGE_INFO(
            repo=repo,
            tag=t["name"],
            size=t["full_size"],
            last_updated_at=t["last_updated"],
            last_updated_by=t["last_updater_username"],
        )
        for t in r.json().get("results", [])
    ]


def save_to_s3(tags):
    table_content = ""
    client = boto3.client("s3")
    for t in tags:
        table_content += (
            "<tr><td>{repo}</td><td>{tag}</td><td>{size}</td>"
            "<td>{last_updated_at}</td><td>{last_updated_by}</td></tr>"
        ).format(
            repo=t.repo,
            tag=t.tag,
            size=t.size,
            last_updated_at=t.last_updated_at,
            last_updated_by=t.last_updated_by,
        )
    html_body = """
    <html>
      <head>
        <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css"
          integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh"
          crossorigin="anonymous">
        <link rel="stylesheet" type="text/css"
          href="https://cdn.datatables.net/1.10.20/css/jquery.dataTables.css">
        <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js">
        </script>
        <script type="text/javascript" charset="utf8"
          src="https://cdn.datatables.net/1.10.20/js/jquery.dataTables.js"></script>
        <title>docker image info</title>
      </head>
      <body>
        <table class="table table-striped table-hover" id="docker">
          <caption>Docker images on docker hub</caption>
          <thead class="thead-dark">
            <tr>
              <th scope="col">repo</th>
              <th scope="col">tag</th>
              <th scope="col">size</th>
              <th scope="col">last_updated_at</th>
              <th scope="col">last_updated_by</th>
            </tr>
          </thead>
          <tbody>
            {table_content}
          </tbody>
        </table>
      </body>
      <script>
        $(document).ready( function () {{
          $('#docker').DataTable({{paging: false}});
        }} );
      </script>
    </html>
    """.format(
        table_content=table_content
    )
    client.put_object(
        Bucket="docker.pytorch.org",
        ACL="public-read",
        Key="docker_hub.html",
        Body=html_body,
        ContentType="text/html",
    )


if __name__ == "__main__":
    username = os.environ.get("DOCKER_HUB_USERNAME")
    password = os.environ.get("DOCKER_HUB_PASSWORD")
    token = build_access_token(username, password)
    tags = []
    for repo in list_repos("pytorch", token):
        tags.extend(list_tags(repo, token))
    save_to_s3(tags)
@@ -1,214 +0,0 @@
#!/usr/bin/env python

import argparse
import datetime
import boto3
import pytz
import sys
import re


def save_to_s3(project, data):
    table_content = ""
    client = boto3.client("s3")
    for repo, tag, window, age, pushed in data:
        table_content += "<tr><td>{repo}</td><td>{tag}</td><td>{window}</td><td>{age}</td><td>{pushed}</td></tr>".format(
            repo=repo, tag=tag, window=window, age=age, pushed=pushed
        )
    html_body = """
    <html>
      <head>
        <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css"
          integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh"
          crossorigin="anonymous">
        <link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/1.10.20/css/jquery.dataTables.css">
        <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
        <script type="text/javascript" charset="utf8" src="https://cdn.datatables.net/1.10.20/js/jquery.dataTables.js"></script>
        <title>{project} nightly and permanent docker image info</title>
      </head>
      <body>
        <table class="table table-striped table-hover" id="docker">
          <thead class="thead-dark">
            <tr>
              <th scope="col">repo</th>
              <th scope="col">tag</th>
              <th scope="col">keep window</th>
              <th scope="col">age</th>
              <th scope="col">pushed at</th>
            </tr>
          </thead>
          <tbody>
            {table_content}
          </tbody>
        </table>
      </body>
      <script>
        $(document).ready( function () {{
          $('#docker').DataTable({{paging: false}});
        }} );
      </script>
    </html>
    """.format(
        project=project, table_content=table_content
    )

    # for pytorch, file can be found at
    # http://ossci-docker.s3-website.us-east-1.amazonaws.com/pytorch.html
    # and later one we can config docker.pytorch.org to point to the location

    client.put_object(
        Bucket="docker.pytorch.org",
        ACL="public-read",
        Key="{project}.html".format(project=project),
        Body=html_body,
        ContentType="text/html",
    )


def repos(client):
    paginator = client.get_paginator("describe_repositories")
    pages = paginator.paginate(registryId="308535385114")
    for page in pages:
        for repo in page["repositories"]:
            yield repo


def images(client, repository):
    paginator = client.get_paginator("describe_images")
    pages = paginator.paginate(
        registryId="308535385114", repositoryName=repository["repositoryName"]
    )
    for page in pages:
        for image in page["imageDetails"]:
            yield image


parser = argparse.ArgumentParser(description="Delete old Docker tags from registry")
parser.add_argument(
    "--dry-run", action="store_true", help="Dry run; print tags that would be deleted"
)
parser.add_argument(
    "--debug", action="store_true", help="Debug, print ignored / saved tags"
)
parser.add_argument(
    "--keep-stable-days",
    type=int,
    default=14,
    help="Days of stable Docker tags to keep (non per-build images)",
)
parser.add_argument(
    "--keep-unstable-days",
    type=int,
    default=1,
    help="Days of unstable Docker tags to keep (per-build images)",
)
parser.add_argument(
    "--filter-prefix",
    type=str,
    default="",
    help="Only run cleanup for repositories with this prefix",
)
parser.add_argument(
    "--ignore-tags",
    type=str,
    default="",
    help="Never cleanup these tags (comma separated)",
)
args = parser.parse_args()

if not args.ignore_tags or not args.filter_prefix:
    print(
        """
Missing required arguments --ignore-tags and --filter-prefix

You must specify --ignore-tags and --filter-prefix to avoid accidentally
pruning a stable Docker tag which is being actively used. This will
make you VERY SAD. So pay attention.

First, which filter-prefix do you want? The list of valid prefixes
is in jobs/private.groovy under the 'docker-registry-cleanup' job.
You probably want either pytorch or caffe2.

Second, which ignore-tags do you want? It should be whatever the most
up-to-date DockerVersion for the repository in question is. Follow
the imports of jobs/pytorch.groovy to find them.
"""
    )
    sys.exit(1)

client = boto3.client("ecr", region_name="us-east-1")
stable_window = datetime.timedelta(days=args.keep_stable_days)
unstable_window = datetime.timedelta(days=args.keep_unstable_days)
now = datetime.datetime.now(pytz.UTC)
ignore_tags = args.ignore_tags.split(",")


def chunks(chunkable, n):
    """Yield successive n-sized chunks from chunkable."""
    for i in range(0, len(chunkable), n):
        yield chunkable[i : i + n]


SHA_PATTERN = re.compile(r'^[0-9a-f]{40}$')


def looks_like_git_sha(tag):
    """Returns a boolean to check if a tag looks like a git sha

    For reference a sha1 is 40 characters with only 0-9a-f and contains no
    "-" characters
    """
    return re.match(SHA_PATTERN, tag) is not None


stable_window_tags = []
for repo in repos(client):
    repositoryName = repo["repositoryName"]
    if not repositoryName.startswith(args.filter_prefix):
        continue

    # Keep list of image digests to delete for this repository
    digest_to_delete = []

    for image in images(client, repo):
        tags = image.get("imageTags")
        if not isinstance(tags, (list,)) or len(tags) == 0:
            continue
        created = image["imagePushedAt"]
        age = now - created
        for tag in tags:
            if any([
                    looks_like_git_sha(tag),
                    tag.isdigit(),
                    tag.count("-") == 4,  # TODO: Remove, this no longer applies as tags are now built using a SHA1
                    tag in ignore_tags]):
                window = stable_window
                if tag in ignore_tags:
                    stable_window_tags.append((repositoryName, tag, "", age, created))
                elif age < window:
                    stable_window_tags.append((repositoryName, tag, window, age, created))
            else:
                window = unstable_window

            if tag in ignore_tags or age < window:
                if args.debug:
                    print("Ignoring {}:{} (age: {})".format(repositoryName, tag, age))
                break
        else:
            for tag in tags:
                print("{}Deleting {}:{} (age: {})".format("(dry run) " if args.dry_run else "", repositoryName, tag, age))
            digest_to_delete.append(image["imageDigest"])

    if args.dry_run:
        if args.debug:
            print("Skipping actual deletion, moving on...")
    else:
        # Issue batch delete for all images to delete for this repository
        # Note that as of 2018-07-25, the maximum number of images you can
        # delete in a single batch is 100, so chunk our list into batches of
        # 100
        for c in chunks(digest_to_delete, 100):
            client.batch_delete_image(
                registryId="308535385114",
                repositoryName=repositoryName,
                imageIds=[{"imageDigest": digest} for digest in c],
            )

save_to_s3(args.filter_prefix, stable_window_tags)
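The `chunks` and `looks_like_git_sha` helpers in `gc.py` above drive the 100-image batch-delete limit and the tag classification. A short standalone usage sketch of just those two pieces (the digest strings below are made up for illustration):

```python
import re

SHA_PATTERN = re.compile(r"^[0-9a-f]{40}$")

def looks_like_git_sha(tag):
    # A sha1 tag is exactly 40 lowercase hex characters.
    return re.match(SHA_PATTERN, tag) is not None

def chunks(chunkable, n):
    # Yield successive n-sized chunks, as used for batch_delete_image.
    for i in range(0, len(chunkable), n):
        yield chunkable[i:i + n]

# 250 fake digests split into ECR's 100-per-call batches.
digests = ["sha256:%03d" % i for i in range(250)]
print([len(batch) for batch in chunks(digests, 100)])  # [100, 100, 50]

print(looks_like_git_sha("a" * 40))   # True
print(looks_like_git_sha("v1.7.1"))   # False
```

This is why the script chunks before calling `client.batch_delete_image`: ECR rejects batches larger than 100 image IDs.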
@@ -1,3 +0,0 @@
boto3
pytz
requests
@@ -6,24 +6,13 @@ Please see README.md in this directory for details.
"""

import os
import shutil
import sys
from collections import namedtuple
import shutil
from collections import namedtuple, OrderedDict

import cimodel.data.binary_build_definitions as binary_build_definitions
import cimodel.data.pytorch_build_definitions as pytorch_build_definitions
import cimodel.data.simple.android_definitions
import cimodel.data.simple.bazel_definitions
import cimodel.data.simple.binary_smoketest
import cimodel.data.simple.docker_definitions
import cimodel.data.simple.ge_config_tests
import cimodel.data.simple.ios_definitions
import cimodel.data.simple.macos_definitions
import cimodel.data.simple.mobile_definitions
import cimodel.data.simple.nightly_android
import cimodel.data.simple.nightly_ios
import cimodel.data.simple.anaconda_prune_defintions
import cimodel.data.windows_build_definitions as windows_build_definitions
import cimodel.data.binary_build_definitions as binary_build_definitions
import cimodel.data.caffe2_build_definitions as caffe2_build_definitions
import cimodel.lib.miniutils as miniutils
import cimodel.lib.miniyaml as miniyaml

@@ -32,7 +21,6 @@ class File(object):
    """
    Verbatim copy the contents of a file into config.yml
    """

    def __init__(self, filename):
        self.filename = filename

@@ -41,7 +29,7 @@ class File(object):
            shutil.copyfileobj(fh, output_filehandle)


class FunctionGen(namedtuple("FunctionGen", "function depth")):
class FunctionGen(namedtuple('FunctionGen', 'function depth')):
    __slots__ = ()


@@ -51,14 +39,15 @@ class Treegen(FunctionGen):
    """

    def write(self, output_filehandle):
        miniyaml.render(output_filehandle, self.function(), self.depth)
        build_dict = OrderedDict()
        self.function(build_dict)
        miniyaml.render(output_filehandle, build_dict, self.depth)


class Listgen(FunctionGen):
    """
    Insert the content of a YAML list into config.yml
    """

    def write(self, output_filehandle):
        miniyaml.render(output_filehandle, self.function(), self.depth)

@@ -68,6 +57,7 @@ def horizontal_rule():


class Header(object):

    def __init__(self, title, summary=None):
        self.title = title
        self.summary_lines = summary or []
@@ -81,63 +71,43 @@ class Header(object):
            output_filehandle.write(line + "\n")


def gen_build_workflows_tree():
    build_workflows_functions = [
        cimodel.data.simple.docker_definitions.get_workflow_jobs,
        pytorch_build_definitions.get_workflow_jobs,
        cimodel.data.simple.macos_definitions.get_workflow_jobs,
        cimodel.data.simple.android_definitions.get_workflow_jobs,
        cimodel.data.simple.ios_definitions.get_workflow_jobs,
        cimodel.data.simple.mobile_definitions.get_workflow_jobs,
        cimodel.data.simple.ge_config_tests.get_workflow_jobs,
        cimodel.data.simple.bazel_definitions.get_workflow_jobs,
        cimodel.data.simple.binary_smoketest.get_workflow_jobs,
        cimodel.data.simple.nightly_ios.get_workflow_jobs,
        cimodel.data.simple.nightly_android.get_workflow_jobs,
        cimodel.data.simple.anaconda_prune_defintions.get_workflow_jobs,
        windows_build_definitions.get_windows_workflows,
        binary_build_definitions.get_post_upload_jobs,
        binary_build_definitions.get_binary_smoke_test_jobs,
    ]

    binary_build_functions = [
        binary_build_definitions.get_binary_build_jobs,
        binary_build_definitions.get_nightly_tests,
        binary_build_definitions.get_nightly_uploads,
    ]

    return {
        "workflows": {
            "binary_builds": {
                "when": r"<< pipeline.parameters.run_binary_tests >>",
                "jobs": [f() for f in binary_build_functions],
            },
            "build": {"jobs": [f() for f in build_workflows_functions]},
        }
    }


# Order of this list matters to the generated config.yml.
YAML_SOURCES = [
    File("header-section.yml"),
    File("commands.yml"),
    File("nightly-binary-build-defaults.yml"),
    Header("Build parameters"),
    File("build-parameters/pytorch-build-params.yml"),
    File("build-parameters/binary-build-params.yml"),
    File("build-parameters/promote-build-params.yml"),
    File("pytorch-build-params.yml"),
    File("caffe2-build-params.yml"),
    File("binary-build-params.yml"),
    Header("Job specs"),
    File("job-specs/pytorch-job-specs.yml"),
    File("job-specs/binary-job-specs.yml"),
    File("job-specs/job-specs-custom.yml"),
    File("job-specs/job-specs-promote.yml"),
    File("job-specs/binary_update_htmls.yml"),
    File("job-specs/binary-build-tests.yml"),
    File("job-specs/docker_jobs.yml"),
    Header("Workflows"),
    Treegen(gen_build_workflows_tree, 0),
    File("workflows/workflows-ecr-gc.yml"),
    File("workflows/workflows-promote.yml"),
    File("pytorch-job-specs.yml"),
    File("caffe2-job-specs.yml"),
    File("binary-job-specs.yml"),
    File("job-specs-setup.yml"),
    File("job-specs-custom.yml"),
    File("binary_update_htmls.yml"),
    File("binary-build-tests.yml"),
    File("docker_build_job.yml"),
    File("workflows.yml"),
    Listgen(pytorch_build_definitions.get_workflow_jobs, 3),
    File("workflows-pytorch-macos-builds.yml"),
    File("workflows-pytorch-android-gradle-build.yml"),
    File("workflows-pytorch-ios-builds.yml"),
    File("workflows-pytorch-mobile-builds.yml"),
    File("workflows-pytorch-ge-config-tests.yml"),
    Listgen(caffe2_build_definitions.get_workflow_jobs, 3),
    File("workflows-binary-builds-smoke-subset.yml"),
    Listgen(binary_build_definitions.get_binary_smoke_test_jobs, 3),
    Listgen(binary_build_definitions.get_binary_build_jobs, 3),
    File("workflows-nightly-ios-binary-builds.yml"),
    File("workflows-nightly-android-binary-builds.yml"),
    Header("Nightly tests"),
    Listgen(binary_build_definitions.get_nightly_tests, 3),
    File("workflows-nightly-uploads-header.yml"),
    Listgen(binary_build_definitions.get_nightly_uploads, 3),
    File("workflows-s3-html.yml"),
    File("workflows-docker-builder.yml")
]
@@ -1,20 +1,9 @@
#!/bin/bash
set -eux -o pipefail

retry () {
    $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
}


# This step runs on multiple executors with different envfile locations
if [[ "$(uname)" == Darwin ]]; then
  # macos executor (builds and tests)
  workdir="/Users/distiller/project"
elif [[ "$OSTYPE" == "msys" ]]; then
  # windows executor (builds and tests)
  rm -rf /c/w
  ln -s "/c/Users/circleci/project" /c/w
  workdir="/c/w"
elif [[ -d "/home/circleci/project" ]]; then
  # machine executor (binary tests)
  workdir="/home/circleci/project"
@@ -24,17 +13,11 @@ else
fi

# It is very important that this stays in sync with binary_populate_env.sh
if [[ "$OSTYPE" == "msys" ]]; then
  # We need to make the paths as short as possible on Windows
  export PYTORCH_ROOT="$workdir/p"
  export BUILDER_ROOT="$workdir/b"
else
  export PYTORCH_ROOT="$workdir/pytorch"
  export BUILDER_ROOT="$workdir/builder"
fi
export PYTORCH_ROOT="$workdir/pytorch"
export BUILDER_ROOT="$workdir/builder"

# Clone the Pytorch branch
retry git clone https://github.com/pytorch/pytorch.git "$PYTORCH_ROOT"
git clone https://github.com/pytorch/pytorch.git "$PYTORCH_ROOT"
pushd "$PYTORCH_ROOT"
if [[ -n "${CIRCLE_PR_NUMBER:-}" ]]; then
  # "smoke" binary build on PRs
@@ -50,13 +33,13 @@ else
  echo "Can't tell what to checkout"
  exit 1
fi
retry git submodule update --init --recursive
git submodule update --init --recursive --quiet
echo "Using Pytorch from "
git --no-pager log --max-count 1
popd

# Clone the Builder master repo
retry git clone -q https://github.com/pytorch/builder.git "$BUILDER_ROOT"
git clone -q https://github.com/pytorch/builder.git "$BUILDER_ROOT"
pushd "$BUILDER_ROOT"
echo "Using builder from "
git --no-pager log --max-count 1
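The `retry` helper that these checkout hunks rely on simply re-runs a failing command with increasing sleeps (1s, 2s, 4s, 8s), giving up after five attempts. A standalone sketch of the same pattern; `flaky` is a hypothetical stand-in that fails twice before succeeding, counting attempts in a temp file because each fallback invocation runs in a subshell:

```shell
#!/bin/bash
# Same retry pattern as the script above: re-run on failure with
# increasing sleeps. flaky() fails twice, then succeeds; it counts
# attempts in a file since the (...) fallbacks are subshells.
retry () {
  $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
}

attempts_file=$(mktemp)
echo 0 > "$attempts_file"
flaky () {
  local n=$(( $(cat "$attempts_file") + 1 ))
  echo "$n" > "$attempts_file"
  [ "$n" -ge 3 ]
}

retry flaky
echo "succeeded after $(cat "$attempts_file") attempts"
```

Because the fallbacks chain with `||`, a success at any stage short-circuits the remaining sleeps.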
@@ -31,9 +31,9 @@ fi

conda_sh="$workdir/install_miniconda.sh"
if [[ "$(uname)" == Darwin ]]; then
  curl --retry 3 -o "$conda_sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
  retry curl -o "$conda_sh" https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
else
  curl --retry 3 -o "$conda_sh" https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
  retry curl -o "$conda_sh" https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
fi
chmod +x "$conda_sh"
"$conda_sh" -b -p "$MINICONDA_ROOT"
@@ -5,25 +5,20 @@ echo ""
echo "DIR: $(pwd)"
WORKSPACE=/Users/distiller/workspace
PROJ_ROOT=/Users/distiller/project
export TCLLIBPATH="/usr/local/lib"

export TCLLIBPATH="/usr/local/lib"
# Install conda
curl --retry 3 -o ~/conda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
chmod +x ~/conda.sh
/bin/bash ~/conda.sh -b -p ~/anaconda
curl -o ~/Downloads/conda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
chmod +x ~/Downloads/conda.sh
/bin/bash ~/Downloads/conda.sh -b -p ~/anaconda
export PATH="~/anaconda/bin:${PATH}"
source ~/anaconda/bin/activate

# Install dependencies
conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing requests --yes
conda install -c conda-forge valgrind --yes
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}

# sync submodules
cd ${PROJ_ROOT}
git submodule sync
git submodule update --init --recursive

# run build script
chmod a+x ${PROJ_ROOT}/scripts/build_ios.sh
echo "########################################################"
@@ -31,13 +26,13 @@ cat ${PROJ_ROOT}/scripts/build_ios.sh
echo "########################################################"
echo "IOS_ARCH: ${IOS_ARCH}"
echo "IOS_PLATFORM: ${IOS_PLATFORM}"
export BUILD_PYTORCH_MOBILE=1
export IOS_ARCH=${IOS_ARCH}
export IOS_PLATFORM=${IOS_PLATFORM}
unbuffer ${PROJ_ROOT}/scripts/build_ios.sh 2>&1 | ts

#store the binary
cd ${WORKSPACE}
DEST_DIR=${WORKSPACE}/ios
mkdir -p ${DEST_DIR}
cp -R ${PROJ_ROOT}/build_ios/install ${DEST_DIR}
mv ${DEST_DIR}/install ${DEST_DIR}/${IOS_ARCH}
@@ -13,7 +13,7 @@ base64 --decode cert.txt -o Certificates.p12
rm cert.txt
bundle exec fastlane install_cert
# install the provisioning profile
PROFILE=PyTorch_CI_2021.mobileprovision
PROFILE=TestApp_CI.mobileprovision
PROVISIONING_PROFILES=~/Library/MobileDevice/Provisioning\ Profiles
mkdir -pv "${PROVISIONING_PROFILES}"
cd "${PROVISIONING_PROFILES}"
@@ -25,5 +25,5 @@ if ! [ -x "$(command -v xcodebuild)" ]; then
echo 'Error: xcodebuild is not installed.'
exit 1
fi
PROFILE=PyTorch_CI_2021
PROFILE=TestApp_CI
ruby ${PROJ_ROOT}/scripts/xcode_build.rb -i ${PROJ_ROOT}/build_ios/install -x ${PROJ_ROOT}/ios/TestApp/TestApp.xcodeproj -p ${IOS_PLATFORM} -c ${PROFILE} -t ${IOS_DEV_TEAM_ID}
@@ -14,14 +14,14 @@ mkdir -p ${ZIP_DIR}/src
cp -R ${ARTIFACTS_DIR}/arm64/include ${ZIP_DIR}/install/
# build a FAT binary
cd ${ZIP_DIR}/install/lib
target_libs=(libc10.a libclog.a libcpuinfo.a libeigen_blas.a libpthreadpool.a libpytorch_qnnpack.a libtorch_cpu.a libtorch.a libXNNPACK.a)
target_libs=(libc10.a libclog.a libcpuinfo.a libeigen_blas.a libpytorch_qnnpack.a libtorch.a)
for lib in ${target_libs[*]}
do
  if [ -f "${ARTIFACTS_DIR}/x86_64/lib/${lib}" ] && [ -f "${ARTIFACTS_DIR}/arm64/lib/${lib}" ]; then
    libs=("${ARTIFACTS_DIR}/x86_64/lib/${lib}" "${ARTIFACTS_DIR}/arm64/lib/${lib}")
    lipo -create "${libs[@]}" -o ${ZIP_DIR}/install/lib/${lib}
  fi
  libs=(${ARTIFACTS_DIR}/x86_64/lib/${lib} ${ARTIFACTS_DIR}/arm64/lib/${lib})
  lipo -create "${libs[@]}" -o ${ZIP_DIR}/install/lib/${lib}
done
# for nnpack, we only support arm64 build
cp ${ARTIFACTS_DIR}/arm64/lib/libnnpack.a ./
lipo -i ${ZIP_DIR}/install/lib/*.a
# copy the umbrella header and license
cp ${PROJ_ROOT}/ios/LibTorch.h ${ZIP_DIR}/src/
@@ -5,18 +5,26 @@ set -eux -o pipefail
source /env

# Defaults here so they can be changed in one place
export MAX_JOBS=${MAX_JOBS:-$(( $(nproc) - 2 ))}
export MAX_JOBS=12

# Parse the parameters
if [[ "$PACKAGE_TYPE" == 'conda' ]]; then
  build_script='conda/build_pytorch.sh'
elif [[ "$DESIRED_CUDA" == cpu ]]; then
  build_script='manywheel/build_cpu.sh'
elif [[ "$DESIRED_CUDA" == *"rocm"* ]]; then
  build_script='manywheel/build_rocm.sh'
else
  build_script='manywheel/build.sh'
fi

# We want to call unbuffer, which calls tclsh which finds the expect
# package. The expect was installed by yum into /usr/bin so we want to
# find /usr/bin/tclsh, but this is shadowed by /opt/conda/bin/tclsh in
# the conda docker images, so we prepend it to the path here.
if [[ "$PACKAGE_TYPE" == 'conda' ]]; then
  mkdir /just_tclsh_bin
  ln -s /usr/bin/tclsh /just_tclsh_bin/tclsh
  export PATH=/just_tclsh_bin:$PATH
fi

# Build the package
SKIP_ALL_TESTS=1 "/builder/$build_script"
SKIP_ALL_TESTS=1 unbuffer "/builder/$build_script" | ts
@@ -5,25 +5,17 @@ cat >/home/circleci/project/ci_test_script.sh <<EOL
# =================== The following code will be executed inside Docker container ===================
set -eux -o pipefail

python_nodot="\$(echo $DESIRED_PYTHON | tr -d m.u)"

# Set up Python
if [[ "$PACKAGE_TYPE" == conda ]]; then
  retry conda create -qyn testenv python="$DESIRED_PYTHON"
  source activate testenv >/dev/null
elif [[ "$DESIRED_PYTHON" == 2.7mu ]]; then
  export PATH="/opt/python/cp27-cp27mu/bin:\$PATH"
elif [[ "$DESIRED_PYTHON" == 3.8m ]]; then
  export PATH="/opt/python/cp38-cp38/bin:\$PATH"
elif [[ "$PACKAGE_TYPE" != libtorch ]]; then
  python_path="/opt/python/cp\$python_nodot-cp\${python_nodot}"
  # Prior to Python 3.8 paths were suffixed with an 'm'
  if [[ -d "\${python_path}/bin" ]]; then
    export PATH="\${python_path}/bin:\$PATH"
  elif [[ -d "\${python_path}m/bin" ]]; then
    export PATH="\${python_path}m/bin:\$PATH"
  fi
fi

EXTRA_CONDA_FLAGS=""
if [[ "\$python_nodot" = *39* ]]; then
  EXTRA_CONDA_FLAGS="-c=conda-forge"
  python_nodot="\$(echo $DESIRED_PYTHON | tr -d m.u)"
  export PATH="/opt/python/cp\$python_nodot-cp\${python_nodot}m/bin:\$PATH"
fi

# Install the package
@@ -34,19 +26,19 @@ fi
# conda build scripts themselves. These should really be consolidated
pkg="/final_pkgs/\$(ls /final_pkgs)"
if [[ "$PACKAGE_TYPE" == conda ]]; then
  conda install \${EXTRA_CONDA_FLAGS} -y "\$pkg" --offline
  conda install -y "\$pkg" --offline
  if [[ "$DESIRED_CUDA" == 'cpu' ]]; then
    retry conda install \${EXTRA_CONDA_FLAGS} -y cpuonly -c pytorch
    conda install -y cpuonly -c pytorch
  fi
  retry conda install \${EXTRA_CONDA_FLAGS} -yq future numpy protobuf six
  retry conda install -yq future numpy protobuf six
  if [[ "$DESIRED_CUDA" != 'cpu' ]]; then
    # DESIRED_CUDA is in format cu90 or cu102
    # DESIRED_CUDA is in format cu90 or cu100
    if [[ "${#DESIRED_CUDA}" == 4 ]]; then
      cu_ver="${DESIRED_CUDA:2:1}.${DESIRED_CUDA:3}"
    else
      cu_ver="${DESIRED_CUDA:2:2}.${DESIRED_CUDA:4}"
    fi
    retry conda install \${EXTRA_CONDA_FLAGS} -yq -c nvidia -c pytorch "cudatoolkit=\${cu_ver}"
    retry conda install -yq -c pytorch "cudatoolkit=\${cu_ver}"
  fi
elif [[ "$PACKAGE_TYPE" != libtorch ]]; then
  pip install "\$pkg"
@@ -60,7 +52,6 @@ fi

# Test the package
/builder/check_binary.sh

# =================== The above code will be executed inside Docker container ===================
EOL
echo
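The `cu_ver` derivation in this hunk turns a `DESIRED_CUDA` string like `cu92` or `cu102` into a dotted version, keyed on the string's length. A minimal sketch of just that substring logic (the function name is illustrative, not part of the script):

```shell
#!/bin/bash
# Mirrors the cu_ver parsing above: "cuXY" -> "X.Y" when the string is
# 4 characters long, "cuXYZ" -> "XY.Z" otherwise.
parse_cu_ver() {
  local DESIRED_CUDA=$1
  if [[ "${#DESIRED_CUDA}" == 4 ]]; then
    # e.g. cu92 -> 9.2
    echo "${DESIRED_CUDA:2:1}.${DESIRED_CUDA:3}"
  else
    # e.g. cu102 -> 10.2
    echo "${DESIRED_CUDA:2:2}.${DESIRED_CUDA:4}"
  fi
}

parse_cu_ver cu92    # 9.2
parse_cu_ver cu102   # 10.2
```

`${var:offset:length}` is plain bash substring expansion, so no external tools are needed.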
40
.circleci/scripts/binary_linux_upload.sh
Executable file
@@ -0,0 +1,40 @@
#!/bin/bash
# Do NOT set -x
source /home/circleci/project/env
set -eu -o pipefail
set +x
declare -x "AWS_ACCESS_KEY_ID=${PYTORCH_BINARY_AWS_ACCESS_KEY_ID}"
declare -x "AWS_SECRET_ACCESS_KEY=${PYTORCH_BINARY_AWS_SECRET_ACCESS_KEY}"
cat >/home/circleci/project/login_to_anaconda.sh <<EOL
set +x
echo "Trying to login to Anaconda"
yes | anaconda login \
    --username "$PYTORCH_BINARY_PJH5_CONDA_USERNAME" \
    --password "$PYTORCH_BINARY_PJH5_CONDA_PASSWORD"
set -x
EOL
chmod +x /home/circleci/project/login_to_anaconda.sh

#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!
# DO NOT TURN -x ON BEFORE THIS LINE
#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!
set -eux -o pipefail
export PATH="$MINICONDA_ROOT/bin:$PATH"

# Upload the package to the final location
pushd /home/circleci/project/final_pkgs
if [[ "$PACKAGE_TYPE" == conda ]]; then
  retry conda install -yq anaconda-client
  retry timeout 30 /home/circleci/project/login_to_anaconda.sh
  anaconda upload "$(ls)" -u pytorch-nightly --label main --no-progress --force
elif [[ "$PACKAGE_TYPE" == libtorch ]]; then
  retry pip install -q awscli
  s3_dir="s3://pytorch/libtorch/${PIP_UPLOAD_FOLDER}${DESIRED_CUDA}/"
  for pkg in $(ls); do
    retry aws s3 cp "$pkg" "$s3_dir" --acl public-read
  done
else
  retry pip install -q awscli
  s3_dir="s3://pytorch/whl/${PIP_UPLOAD_FOLDER}${DESIRED_CUDA}/"
  retry aws s3 cp "$(ls)" "$s3_dir" --acl public-read
fi
@@ -20,9 +20,9 @@ if [[ "$PACKAGE_TYPE" == libtorch ]]; then
  unzip "$pkg" -d /tmp
  cd /tmp/libtorch
elif [[ "$PACKAGE_TYPE" == conda ]]; then
  conda install -y "$pkg"
  conda install -y "$pkg" --offline
else
  pip install "$pkg" -v
  pip install "$pkg" --no-index --no-dependencies -v
fi

# Test
40
.circleci/scripts/binary_macos_upload.sh
Executable file
@@ -0,0 +1,40 @@
#!/bin/bash
# Do NOT set -x
set -eu -o pipefail
set +x
export AWS_ACCESS_KEY_ID="${PYTORCH_BINARY_AWS_ACCESS_KEY_ID}"
export AWS_SECRET_ACCESS_KEY="${PYTORCH_BINARY_AWS_SECRET_ACCESS_KEY}"
cat >/Users/distiller/project/login_to_anaconda.sh <<EOL
set +x
echo "Trying to login to Anaconda"
yes | anaconda login \
    --username "$PYTORCH_BINARY_PJH5_CONDA_USERNAME" \
    --password "$PYTORCH_BINARY_PJH5_CONDA_PASSWORD"
set -x
EOL
chmod +x /Users/distiller/project/login_to_anaconda.sh

#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!
# DO NOT TURN -x ON BEFORE THIS LINE
#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!
set -eux -o pipefail

source "/Users/distiller/project/env"
export "PATH=$workdir/miniconda/bin:$PATH"

pushd "$workdir/final_pkgs"
if [[ "$PACKAGE_TYPE" == conda ]]; then
  retry conda install -yq anaconda-client
  retry /Users/distiller/project/login_to_anaconda.sh
  retry anaconda upload "$(ls)" -u pytorch-nightly --label main --no-progress --force
elif [[ "$PACKAGE_TYPE" == libtorch ]]; then
  retry pip install -q awscli
  s3_dir="s3://pytorch/libtorch/${PIP_UPLOAD_FOLDER}${DESIRED_CUDA}/"
  for pkg in $(ls); do
    retry aws s3 cp "$pkg" "$s3_dir" --acl public-read
  done
else
  retry pip install -q awscli
  s3_dir="s3://pytorch/whl/${PIP_UPLOAD_FOLDER}${DESIRED_CUDA}/"
  retry aws s3 cp "$(ls)" "$s3_dir" --acl public-read
fi
@@ -2,31 +2,11 @@
set -eux -o pipefail
export TZ=UTC

tagged_version() {
  # Grabs version from either the env variable CIRCLE_TAG
  # or the pytorch git described version
  if [[ "$OSTYPE" == "msys" ]]; then
    GIT_DESCRIBE="git --git-dir ${workdir}/p/.git describe"
  else
    GIT_DESCRIBE="git --git-dir ${workdir}/pytorch/.git describe"
  fi
  if [[ -n "${CIRCLE_TAG:-}" ]]; then
    echo "${CIRCLE_TAG}"
  elif ${GIT_DESCRIBE} --exact --tags >/dev/null; then
    ${GIT_DESCRIBE} --tags
  else
    return 1
  fi
}

# We need to write an envfile to persist these variables to following
# steps, but the location of the envfile depends on the circleci executor
if [[ "$(uname)" == Darwin ]]; then
  # macos executor (builds and tests)
  workdir="/Users/distiller/project"
elif [[ "$OSTYPE" == "msys" ]]; then
  # windows executor (builds and tests)
  workdir="/c/w"
elif [[ -d "/home/circleci/project" ]]; then
  # machine executor (binary tests)
  workdir="/home/circleci/project"
@@ -43,15 +23,7 @@ configs=($BUILD_ENVIRONMENT)
export PACKAGE_TYPE="${configs[0]}"
export DESIRED_PYTHON="${configs[1]}"
export DESIRED_CUDA="${configs[2]}"
if [[ "${BUILD_FOR_SYSTEM:-}" == "windows" ]]; then
  export DESIRED_DEVTOOLSET=""
  export LIBTORCH_CONFIG="${configs[3]:-}"
  if [[ "$LIBTORCH_CONFIG" == 'debug' ]]; then
    export DEBUG=1
  fi
else
  export DESIRED_DEVTOOLSET="${configs[3]:-}"
fi
export DESIRED_DEVTOOLSET="${configs[3]:-}"
if [[ "$PACKAGE_TYPE" == 'libtorch' ]]; then
  export BUILD_PYTHONLESS=1
fi
@@ -68,27 +40,25 @@ if [[ -z "$DOCKER_IMAGE" ]]; then
  fi
fi

# Default to nightly, since that's where this normally uploads to
PIP_UPLOAD_FOLDER='nightly/'
# Upload to parallel folder for devtoolsets
# All nightlies used to be devtoolset3, then devtoolset7 was added as a build
# option, so the upload was redirected to nightly/devtoolset7 to avoid
# conflicts with other binaries (there shouldn't be any conflicts). Now we are
# making devtoolset7 the default.
if [[ "$DESIRED_DEVTOOLSET" == 'devtoolset7' || "$DESIRED_DEVTOOLSET" == *"cxx11-abi"* || "$(uname)" == 'Darwin' ]]; then
  export PIP_UPLOAD_FOLDER='nightly/'
else
  # On linux machines, this shouldn't actually be called anymore. This is just
  # here for extra safety.
  export PIP_UPLOAD_FOLDER='nightly/devtoolset3/'
fi

# We put this here so that OVERRIDE_PACKAGE_VERSION below can read from it
export DATE="$(date -u +%Y%m%d)"
#TODO: We should be pulling semver version from the base version.txt
BASE_BUILD_VERSION="1.7.0.dev$DATE"
# Change BASE_BUILD_VERSION to git tag when on a git tag
# Use 'git -C' to make doubly sure we're in the correct directory for checking
# the git tag
if tagged_version >/dev/null; then
  # Switch upload folder to 'test/' if we are on a tag
  PIP_UPLOAD_FOLDER='test/'
  # Grab git tag, remove prefixed v and remove everything after -
  # Used to clean up tags that are for release candidates like v1.6.0-rc1
  # Turns tag v1.6.0-rc1 -> v1.6.0
  BASE_BUILD_VERSION="$(tagged_version | sed -e 's/^v//' -e 's/-.*$//')"
fi
if [[ "$(uname)" == 'Darwin' ]] || [[ "$DESIRED_CUDA" == "cu102" ]] || [[ "$PACKAGE_TYPE" == conda ]]; then
  export PYTORCH_BUILD_VERSION="${BASE_BUILD_VERSION}"
if [[ "$(uname)" == 'Darwin' ]] || [[ "$DESIRED_CUDA" == "cu101" ]] || [[ "$PACKAGE_TYPE" == conda ]]; then
  export PYTORCH_BUILD_VERSION="1.4.0.dev$DATE"
else
  export PYTORCH_BUILD_VERSION="${BASE_BUILD_VERSION}+$DESIRED_CUDA"
  export PYTORCH_BUILD_VERSION="1.4.0.dev$DATE+$DESIRED_CUDA"
fi
export PYTORCH_BUILD_NUMBER=1

@@ -124,13 +94,9 @@ export DESIRED_CUDA="$DESIRED_CUDA"
export LIBTORCH_VARIANT="${LIBTORCH_VARIANT:-}"
export BUILD_PYTHONLESS="${BUILD_PYTHONLESS:-}"
export DESIRED_DEVTOOLSET="$DESIRED_DEVTOOLSET"
if [[ "${BUILD_FOR_SYSTEM:-}" == "windows" ]]; then
  export LIBTORCH_CONFIG="${LIBTORCH_CONFIG:-}"
  export DEBUG="${DEBUG:-}"
fi

export DATE="$DATE"
export NIGHTLIES_DATE_PREAMBLE=1.7.0.dev
export NIGHTLIES_DATE_PREAMBLE=1.4.0.dev
export PYTORCH_BUILD_VERSION="$PYTORCH_BUILD_VERSION"
export PYTORCH_BUILD_NUMBER="$PYTORCH_BUILD_NUMBER"
export OVERRIDE_PACKAGE_VERSION="$PYTORCH_BUILD_VERSION"
@@ -147,13 +113,8 @@ export DOCKER_IMAGE="$DOCKER_IMAGE"

export workdir="$workdir"
export MAC_PACKAGE_WORK_DIR="$workdir"
if [[ "$OSTYPE" == "msys" ]]; then
  export PYTORCH_ROOT="$workdir/p"
  export BUILDER_ROOT="$workdir/b"
else
  export PYTORCH_ROOT="$workdir/pytorch"
  export BUILDER_ROOT="$workdir/builder"
fi
export PYTORCH_ROOT="$workdir/pytorch"
export BUILDER_ROOT="$workdir/builder"
export MINICONDA_ROOT="$workdir/miniconda"
export PYTORCH_FINAL_PACKAGE_DIR="$workdir/final_pkgs"
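The tag-to-version cleanup in this file (strip the leading `v`, then drop everything from the first `-` onward, so a release-candidate tag like `v1.6.0-rc1` becomes `1.6.0`) can be exercised on its own. A small sketch; the `clean_tag` wrapper is illustrative, only the `sed` pipeline comes from the script:

```shell
#!/bin/bash
# Mirrors the BASE_BUILD_VERSION cleanup above: remove the "v" prefix
# and anything after the first "-" (release-candidate suffixes).
clean_tag() {
  echo "$1" | sed -e 's/^v//' -e 's/-.*$//'
}

clean_tag v1.6.0-rc1   # 1.6.0
clean_tag v1.7.1       # 1.7.1
```

Because the two `-e` expressions run in order on each line, a tag without either decoration passes through unchanged.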
@@ -16,12 +16,31 @@ set -eux -o pipefail
# Expect actual code to be written to this file
chmod +x /home/circleci/project/ci_test_script.sh

VOLUME_MOUNTS="-v /home/circleci/project/:/circleci_stuff -v /home/circleci/project/final_pkgs:/final_pkgs -v ${PYTORCH_ROOT}:/pytorch -v ${BUILDER_ROOT}:/builder"
# Run the docker
if [ -n "${USE_CUDA_DOCKER_RUNTIME:-}" ]; then
  export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --gpus all ${VOLUME_MOUNTS} -t -d "${DOCKER_IMAGE}")
  export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --runtime=nvidia -t -d "${DOCKER_IMAGE}")
else
  export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined ${VOLUME_MOUNTS} -t -d "${DOCKER_IMAGE}")
  export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d "${DOCKER_IMAGE}")
fi

# Copy the envfile and script with all the code to run into the docker.
docker cp /home/circleci/project/. "$id:/circleci_stuff"

# Copy built packages into the docker to test. This should only exist on the
# binary test jobs. The package should've been created from a binary build job,
# which persisted the package to a CircleCI workspace, which this job then
# copies into a GPU enabled docker for testing
if [[ -d "/home/circleci/project/final_pkgs" ]]; then
  docker cp /home/circleci/project/final_pkgs "$id:/final_pkgs"
fi

# Copy the needed repos into the docker. These do not exist in the smoke test
# jobs, since the smoke test jobs do not need the Pytorch source code.
if [[ -d "$PYTORCH_ROOT" ]]; then
  docker cp "$PYTORCH_ROOT" "$id:/pytorch"
fi
if [[ -d "$BUILDER_ROOT" ]]; then
  docker cp "$BUILDER_ROOT" "$id:/builder"
fi

# Execute the test script that was populated by an earlier section
@@ -1,98 +0,0 @@
#!/usr/bin/env bash

set -euo pipefail

PACKAGE_TYPE=${PACKAGE_TYPE:-conda}

PKG_DIR=${PKG_DIR:-/tmp/workspace/final_pkgs}

# Designates whether to submit as a release candidate or a nightly build
# Value should be `test` when uploading release candidates
# currently set within `designate_upload_channel`
UPLOAD_CHANNEL=${UPLOAD_CHANNEL:-nightly}
# Designates what subfolder to put packages into
UPLOAD_SUBFOLDER=${UPLOAD_SUBFOLDER:-cpu}
UPLOAD_BUCKET="s3://pytorch"
BACKUP_BUCKET="s3://pytorch-backup"

DRY_RUN=${DRY_RUN:-enabled}
# Don't actually do work unless explicit
ANACONDA="true anaconda"
AWS_S3_CP="aws s3 cp --dryrun"
if [[ "${DRY_RUN}" = "disabled" ]]; then
  ANACONDA="anaconda"
  AWS_S3_CP="aws s3 cp"
fi

do_backup() {
  local backup_dir
  backup_dir=$1
  (
    pushd /tmp/workspace
    set -x
    ${AWS_S3_CP} --recursive . "${BACKUP_BUCKET}/${CIRCLE_TAG}/${backup_dir}/"
  )
}

conda_upload() {
  (
    set -x
    ${ANACONDA} \
      upload \
      ${PKG_DIR}/*.tar.bz2 \
      -u "pytorch-${UPLOAD_CHANNEL}" \
      --label main \
      --no-progress \
      --force
  )
}

s3_upload() {
  local extension
  local pkg_type
  extension="$1"
  pkg_type="$2"
  s3_dir="${UPLOAD_BUCKET}/${pkg_type}/${UPLOAD_CHANNEL}/${UPLOAD_SUBFOLDER}/"
  (
    for pkg in ${PKG_DIR}/*.${extension}; do
      (
        set -x
        ${AWS_S3_CP} --no-progress --acl public-read "${pkg}" "${s3_dir}"
      )
    done
  )
}

case "${PACKAGE_TYPE}" in
  conda)
    conda_upload
    # Fetch platform (eg. win-64, linux-64, etc.) from index file
    # Because there's no actual conda command to read this
    subdir=$(\
      tar -xOf ${PKG_DIR}/*.bz2 info/index.json \
        | grep subdir \
        | cut -d ':' -f2 \
        | sed -e 's/[[:space:]]//' -e 's/"//g' -e 's/,//' \
    )
    BACKUP_DIR="conda/${subdir}"
    ;;
  libtorch)
    s3_upload "zip" "libtorch"
    BACKUP_DIR="libtorch/${UPLOAD_CHANNEL}/${UPLOAD_SUBFOLDER}"
    ;;
  # wheel can either refer to wheel/manywheel
  *wheel)
    s3_upload "whl" "whl"
    BACKUP_DIR="whl/${UPLOAD_CHANNEL}/${UPLOAD_SUBFOLDER}"
    ;;
  *)
    echo "ERROR: unknown package type: ${PACKAGE_TYPE}"
    exit 1
    ;;
esac

# CIRCLE_TAG is defined by upstream circleci,
# this can be changed to recognize tagged versions
if [[ -n "${CIRCLE_TAG:-}" ]]; then
  do_backup "${BACKUP_DIR}"
fi
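The `subdir` extraction in the conda branch above pulls the platform string out of the package's `info/index.json` with a `grep | cut | sed` pipeline, since there is no conda command for reading it. A sketch of just that pipeline, run against an inline JSON line instead of a real package tarball (the `line` variable is an illustrative stand-in for the `tar -xOf` output):

```shell
#!/bin/bash
# Mirrors the subdir extraction above: take the "subdir" line of
# info/index.json, split on ":", then strip whitespace, quotes, and
# the trailing comma.
line='  "subdir": "linux-64",'
subdir=$(echo "$line" \
  | grep subdir \
  | cut -d ':' -f2 \
  | sed -e 's/[[:space:]]//' -e 's/"//g' -e 's/,//')
echo "$subdir"   # linux-64
```

Note the first `sed` expression has no `g` flag, so it only removes the single leading space that `cut` leaves in front of the value.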
@@ -1,41 +0,0 @@
#!/bin/bash
set -eux -o pipefail

source "/c/w/env"
mkdir -p "$PYTORCH_FINAL_PACKAGE_DIR"

export CUDA_VERSION="${DESIRED_CUDA/cu/}"
export USE_SCCACHE=1
export SCCACHE_BUCKET=ossci-compiler-cache-windows
export NIGHTLIES_PYTORCH_ROOT="$PYTORCH_ROOT"

if [[ "$CUDA_VERSION" == "92" || "$CUDA_VERSION" == "100" ]]; then
  export VC_YEAR=2017
else
  export VC_YEAR=2019
fi

set +x
export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_S3_BUCKET_V4:-}
export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_S3_BUCKET_V4:-}
set -x

if [[ "$CIRCLECI" == 'true' && -d "C:\\ProgramData\\Microsoft\\VisualStudio\\Packages\\_Instances" ]]; then
  mv "C:\\ProgramData\\Microsoft\\VisualStudio\\Packages\\_Instances" .
  rm -rf "C:\\ProgramData\\Microsoft\\VisualStudio\\Packages"
  mkdir -p "C:\\ProgramData\\Microsoft\\VisualStudio\\Packages"
  mv _Instances "C:\\ProgramData\\Microsoft\\VisualStudio\\Packages"
fi

echo "Free space on filesystem before build:"
df -h

pushd "$BUILDER_ROOT"
if [[ "$PACKAGE_TYPE" == 'conda' ]]; then
  ./windows/internal/build_conda.bat
elif [[ "$PACKAGE_TYPE" == 'wheel' || "$PACKAGE_TYPE" == 'libtorch' ]]; then
  ./windows/internal/build_wheels.bat
fi

echo "Free space on filesystem after build:"
df -h
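The `CUDA_VERSION` derivation used by the Windows scripts relies on bash pattern substitution: `${DESIRED_CUDA/cu/}` deletes the first occurrence of `cu`, leaving just the digits that the `VC_YEAR` check then compares against. A minimal sketch with an illustrative value:

```shell
#!/bin/bash
# Mirrors the CUDA_VERSION derivation above: ${var/pattern/} with an
# empty replacement deletes the first match of the pattern.
DESIRED_CUDA=cu102
CUDA_VERSION="${DESIRED_CUDA/cu/}"
echo "$CUDA_VERSION"   # 102
```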
@@ -1,19 +0,0 @@
#!/bin/bash
set -eux -o pipefail

source "/c/w/env"

export CUDA_VERSION="${DESIRED_CUDA/cu/}"
export VC_YEAR=2017

if [[ "$CUDA_VERSION" == "92" || "$CUDA_VERSION" == "100" ]]; then
  export VC_YEAR=2017
else
  export VC_YEAR=2019
fi

pushd "$BUILDER_ROOT"

./windows/internal/smoke_test.bat

popd
@@ -1,11 +1,7 @@
#!/usr/bin/env bash
set -eux -o pipefail

env
echo "BUILD_ENVIRONMENT:$BUILD_ENVIRONMENT"

export ANDROID_NDK_HOME=/opt/ndk
export ANDROID_NDK=/opt/ndk
export ANDROID_HOME=/opt/android/sdk

# Must be in sync with GRADLE_VERSION in docker image for android
@@ -14,31 +10,6 @@ export GRADLE_VERSION=4.10.3
export GRADLE_HOME=/opt/gradle/gradle-$GRADLE_VERSION
export GRADLE_PATH=$GRADLE_HOME/bin/gradle

# touch gradle cache files to prevent expiration
while IFS= read -r -d '' file
do
  touch "$file" || true
done < <(find /var/lib/jenkins/.gradle -type f -print0)

export GRADLE_LOCAL_PROPERTIES=~/workspace/android/local.properties
rm -f $GRADLE_LOCAL_PROPERTIES
echo "sdk.dir=/opt/android/sdk" >> $GRADLE_LOCAL_PROPERTIES
echo "ndk.dir=/opt/ndk" >> $GRADLE_LOCAL_PROPERTIES
echo "cmake.dir=/usr/local" >> $GRADLE_LOCAL_PROPERTIES

retry () {
  $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
}

# Run custom build script
if [[ "${BUILD_ENVIRONMENT}" == *-gradle-custom-build* ]]; then
  # Install torch & torchvision - used to download & dump used ops from test model.
  retry pip install torch torchvision --progress-bar off

  exec "$(dirname "${BASH_SOURCE[0]}")/../../android/build_test_app_custom.sh" armeabi-v7a
fi

# Run default build
BUILD_ANDROID_INCLUDE_DIR_x86=~/workspace/build_android/install/include
BUILD_ANDROID_LIB_DIR_x86=~/workspace/build_android/install/lib

@@ -73,6 +44,9 @@ ln -s ${BUILD_ANDROID_INCLUDE_DIR_arm_v8a} ${JNI_INCLUDE_DIR}/arm64-v8a
ln -s ${BUILD_ANDROID_LIB_DIR_arm_v8a} ${JNI_LIBS_DIR}/arm64-v8a
fi

env
echo "BUILD_ENVIRONMENT:$BUILD_ENVIRONMENT"

GRADLE_PARAMS="-p android assembleRelease --debug --stacktrace"
if [[ "${BUILD_ENVIRONMENT}" == *-gradle-build-only-x86_32* ]]; then
  GRADLE_PARAMS+=" -PABI_FILTERS=x86"
@@ -82,6 +56,20 @@ if [ -n "{GRADLE_OFFLINE:-}" ]; then
  GRADLE_PARAMS+=" --offline"
fi

# touch gradle cache files to prevent expiration
while IFS= read -r -d '' file
do
  touch "$file" || true
done < <(find /var/lib/jenkins/.gradle -type f -print0)

env

export GRADLE_LOCAL_PROPERTIES=~/workspace/android/local.properties
rm -f $GRADLE_LOCAL_PROPERTIES
echo "sdk.dir=/opt/android/sdk" >> $GRADLE_LOCAL_PROPERTIES
echo "ndk.dir=/opt/ndk" >> $GRADLE_LOCAL_PROPERTIES
echo "cmake.dir=/usr/local" >> $GRADLE_LOCAL_PROPERTIES

$GRADLE_PATH $GRADLE_PARAMS

find . -type f -name "*.a" -exec ls -lh {} \;
@@ -30,7 +30,13 @@ if [ "$version" == "master" ]; then
  is_master_doc=true
fi

echo "install_path: $install_path version: $version"
# Argument 3: (optional) If present, we will NOT do any pushing. Used for testing.
dry_run=false
if [ "$3" != "" ]; then
  dry_run=true
fi

echo "install_path: $install_path version: $version dry_run: $dry_run"

# ======================== Building PyTorch C++ API Docs ========================

@@ -47,11 +53,17 @@ sudo apt-get -y install doxygen
# Generate ATen files
pushd "${pt_checkout}"
pip install -r requirements.txt
time python -m tools.codegen.gen \
time python aten/src/ATen/gen.py \
  -s aten/src/ATen \
  -d build/aten/src/ATen
  -d build/aten/src/ATen \
  aten/src/ATen/Declarations.cwrap \
  aten/src/THNN/generic/THNN.h \
  aten/src/THCUNN/generic/THCUNN.h \
  aten/src/ATen/nn.yaml \
  aten/src/ATen/native/native_functions.yaml

# Copy some required files
cp aten/src/ATen/common_with_cwrap.py tools/shared/cwrap_common.py
cp torch/_utils_internal.py tools/shared

# Generate PyTorch files
@@ -61,7 +73,12 @@ time python tools/setup_helpers/generate_code.py \

# Build the docs
pushd docs/cpp
pip install -r requirements.txt
pip install breathe==4.11.1 bs4 lxml six
pip install --no-cache-dir -e "git+https://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme"
pip install "exhale>=0.2.1"
pip install sphinx==1.8.5
# Uncomment once it is fixed
# pip install -r requirements.txt
time make VERBOSE=1 html -j

popd
@@ -90,5 +107,21 @@ git config user.name "pytorchbot"
git commit -m "Automatic sync on $(date)" || true
git status

if [ "$dry_run" = false ]; then
  echo "Pushing to https://github.com/pytorch/cppdocs"
  set +x
  /usr/bin/expect <<DONE
spawn git push -u origin master
expect "Username*"
send "pytorchbot\n"
expect "Password*"
send "$::env(GITHUB_PYTORCHBOT_TOKEN)\n"
expect eof
DONE
  set -x
else
  echo "Skipping push due to dry_run"
fi

popd
# =================== The above code **should** be executed inside Docker container ===================
@@ -1,8 +0,0 @@
set "DRIVER_DOWNLOAD_LINK=https://s3.amazonaws.com/ossci-windows/451.82-tesla-desktop-winserver-2019-2016-international.exe"
curl --retry 3 -kL %DRIVER_DOWNLOAD_LINK% --output 451.82-tesla-desktop-winserver-2019-2016-international.exe
if errorlevel 1 exit /b 1

start /wait 451.82-tesla-desktop-winserver-2019-2016-international.exe -s -noreboot
if errorlevel 1 exit /b 1

del 451.82-tesla-desktop-winserver-2019-2016-international.exe || ver > NUL

@@ -7,8 +7,6 @@ sudo apt-get -y install expect-dev
# This is where the local pytorch install in the docker image is located
pt_checkout="/var/lib/jenkins/workspace"

source "$pt_checkout/.jenkins/pytorch/common_utils.sh"

echo "python_doc_push_script.sh: Invoked with $*"

set -ex
@@ -40,7 +38,13 @@ echo "error: python_doc_push_script.sh: branch (arg3) not specified"
  exit 1
fi

echo "install_path: $install_path version: $version"
# Argument 4: (optional) If present, we will NOT do any pushing. Used for testing.
dry_run=false
if [ "$4" != "" ]; then
  dry_run=true
fi

echo "install_path: $install_path version: $version dry_run: $dry_run"

git clone https://github.com/pytorch/pytorch.github.io -b $branch
pushd pytorch.github.io
@@ -50,38 +54,25 @@ export PATH=/opt/conda/bin:$PATH

rm -rf pytorch || true

# Install TensorBoard in python 3 so torch.utils.tensorboard classes render
pip install -q https://s3.amazonaws.com/ossci-linux/wheels/tensorboard-1.14.0a0-py3-none-any.whl

# Get all the documentation sources, put them in one place
pushd "$pt_checkout"
checkout_install_torchvision
git clone https://github.com/pytorch/vision
pushd vision
conda install -q pillow
time python setup.py install
popd
pushd docs
rm -rf source/torchvision
cp -a ../vision/docs/source source/torchvision

# Build the docs
pip -q install -r requirements.txt
pip -q install -r requirements.txt || true
if [ "$is_master_doc" = true ]; then
  make html
  make coverage
  # Now we have the coverage report, we need to make sure it is empty.
  # Count the number of lines in the file and turn that number into a variable
  # $lines. The `cut -f1 ...` is to only parse the number, not the filename
  # Skip the report header by subtracting 2: the header will be output even if
  # there are no undocumented items.
  #
  # Also: see docs/source/conf.py for "coverage_ignore*" items, which should
  # be documented then removed from there.
  lines=$(wc -l build/coverage/python.txt 2>/dev/null |cut -f1 -d' ')
  undocumented=$(($lines - 2))
  if [ $undocumented -lt 0 ]; then
    echo coverage output not found
    exit 1
  elif [ $undocumented -gt 0 ]; then
    echo undocumented objects found:
    cat build/coverage/python.txt
    exit 1
  fi
else
  # Don't fail the build on coverage problems
  make html-stable
fi

@@ -99,12 +90,6 @@ else
  find "$install_path" -name "*.html" -print0 | xargs -0 perl -pi -w -e "s@master\s+\((\d\.\d\.[A-Fa-f0-9]+\+[A-Fa-f0-9]+)\s+\)@<a href='http://pytorch.org/docs/versions.html'>$version \▼</a>@g"
fi

# Prevent Google from indexing $install_path/_modules. This folder contains
# generated source files.
# NB: the following only works on gnu sed. The sed shipped with mac os is different.
# One can `brew install gnu-sed` on a mac and then use "gsed" instead of "sed".
find "$install_path/_modules" -name "*.html" -print0 | xargs -0 sed -i '/<head>/a \ \ <meta name="robots" content="noindex">'

git add "$install_path" || true
git status
git config user.email "soumith+bot@pytorch.org"
@@ -113,5 +98,21 @@ git config user.name "pytorchbot"
git commit -m "auto-generating sphinx docs" || true
git status

if [ "$dry_run" = false ]; then
  echo "Pushing to pytorch.github.io:$branch"
  set +x
  /usr/bin/expect <<DONE
  spawn git push origin $branch
  expect "Username*"
  send "pytorchbot\n"
  expect "Password*"
  send "$::env(GITHUB_PYTORCHBOT_TOKEN)\n"
  expect eof
DONE
  set -x
else
  echo "Skipping push due to dry_run"
fi

popd
# =================== The above code **should** be executed inside Docker container ===================

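The coverage gate in the doc-push script subtracts the two header lines that sphinx's coverage report always emits, then fails the build if anything remains. A minimal Python sketch of that check (the function name is hypothetical; the shell script does this with `undocumented=$(($lines - 2))`):

```python
def check_coverage(report_line_count: int) -> str:
    # The sphinx coverage report has a two-line header, so a report with
    # exactly 2 lines means every object is documented.
    undocumented = report_line_count - 2
    if undocumented < 0:
        # wc -l returned fewer lines than the header itself: no report file
        return "coverage output not found"
    if undocumented > 0:
        return "undocumented objects found"
    return "ok"


print(check_coverage(2))
```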
@@ -1,90 +1,81 @@
#!/usr/bin/env bash
set -ex -o pipefail

# Set up NVIDIA docker repo
curl -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
echo "deb https://nvidia.github.io/libnvidia-container/ubuntu16.04/amd64 /" | sudo tee -a /etc/apt/sources.list.d/nvidia-docker.list
echo "deb https://nvidia.github.io/nvidia-container-runtime/ubuntu16.04/amd64 /" | sudo tee -a /etc/apt/sources.list.d/nvidia-docker.list
echo "deb https://nvidia.github.io/nvidia-docker/ubuntu16.04/amd64 /" | sudo tee -a /etc/apt/sources.list.d/nvidia-docker.list

# Remove unnecessary sources
sudo rm -f /etc/apt/sources.list.d/google-chrome.list
sudo rm -f /etc/apt/heroku.list
sudo rm -f /etc/apt/openjdk-r-ubuntu-ppa-xenial.list
sudo rm -f /etc/apt/partner.list

retry () {
  $* || $* || $* || $* || $*
}

# Method adapted from here: https://askubuntu.com/questions/875213/apt-get-to-retry-downloading
# (with use of tee to avoid permissions problems)
# This is better than retrying the whole apt-get command
echo "APT::Acquire::Retries \"3\";" | sudo tee /etc/apt/apt.conf.d/80-retries

retry sudo apt-get update -qq
retry sudo apt-get -y install \
sudo apt-get -y update
sudo apt-get -y remove linux-image-generic linux-headers-generic linux-generic docker-ce
# WARNING: Docker version is hardcoded here; you must update the
# version number below for docker-ce and nvidia-docker2 to get newer
# versions of Docker. We hardcode these numbers because we kept
# getting broken CI when Docker would update their docker version,
# and nvidia-docker2 would be out of date for a day until they
# released a newer version of their package.
#
# How to figure out what the correct versions of these packages are?
# My preferred method is to start a Docker instance of the correct
# Ubuntu version (e.g., docker run -it ubuntu:16.04) and then ask
# apt what the packages you need are. Note that the CircleCI image
# comes with Docker.
sudo apt-get -y install \
  linux-headers-$(uname -r) \
  linux-image-generic \
  moreutils \
  docker-ce=5:18.09.4~3-0~ubuntu-xenial \
  nvidia-container-runtime=2.0.0+docker18.09.4-1 \
  nvidia-docker2=2.0.3+docker18.09.4-1 \
  expect-dev

echo "== DOCKER VERSION =="
docker version
sudo pkill -SIGHUP dockerd

retry () {
  $* || $* || $* || $* || $*
}

retry sudo pip -q install awscli==1.16.35

if [ -n "${USE_CUDA_DOCKER_RUNTIME:-}" ]; then
  DRIVER_FN="NVIDIA-Linux-x86_64-450.51.06.run"
  DRIVER_FN="NVIDIA-Linux-x86_64-430.40.run"
  wget "https://s3.amazonaws.com/ossci-linux/nvidia_driver/$DRIVER_FN"
  sudo /bin/bash "$DRIVER_FN" -s --no-drm || (sudo cat /var/log/nvidia-installer.log && false)
  nvidia-smi

  # Taken directly from https://github.com/NVIDIA/nvidia-docker
  # Add the package repositories
  distribution=$(. /etc/os-release;echo "$ID$VERSION_ID")
  curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
  curl -s -L "https://nvidia.github.io/nvidia-docker/${distribution}/nvidia-docker.list" | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

  sudo apt-get update -qq
  # Necessary to get the `--gpus` flag to function within docker
  sudo apt-get install -y nvidia-container-toolkit
  sudo systemctl restart docker
else
  # Explicitly remove nvidia docker apt repositories if not building for cuda
  sudo rm -rf /etc/apt/sources.list.d/nvidia-docker.list
fi

add_to_env_file() {
  local content
  content=$1
  # BASH_ENV should be set by CircleCI
  echo "${content}" >> "${BASH_ENV:-/tmp/env}"
}

add_to_env_file "IN_CIRCLECI=1"
add_to_env_file "COMMIT_SOURCE=${CIRCLE_BRANCH:-}"
add_to_env_file "BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}"
add_to_env_file "CIRCLE_PULL_REQUEST=${CIRCLE_PULL_REQUEST}"


if [[ "${BUILD_ENVIRONMENT}" == *-build ]]; then
  add_to_env_file "SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2"

  SCCACHE_MAX_JOBS=$(( $(nproc) - 1 ))
  MEMORY_LIMIT_MAX_JOBS=8 # the "large" resource class on CircleCI has 32 CPU cores, if we use all of them we'll OOM
  MAX_JOBS=$(( ${SCCACHE_MAX_JOBS} > ${MEMORY_LIMIT_MAX_JOBS} ? ${MEMORY_LIMIT_MAX_JOBS} : ${SCCACHE_MAX_JOBS} ))
  add_to_env_file "MAX_JOBS=${MAX_JOBS}"

  echo "declare -x IN_CIRCLECI=1" > /home/circleci/project/env
  echo "declare -x COMMIT_SOURCE=${CIRCLE_BRANCH:-}" >> /home/circleci/project/env
  echo "declare -x SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> /home/circleci/project/env
  if [ -n "${USE_CUDA_DOCKER_RUNTIME:-}" ]; then
    add_to_env_file "TORCH_CUDA_ARCH_LIST=5.2"
    echo "declare -x TORCH_CUDA_ARCH_LIST=5.2" >> /home/circleci/project/env
  fi
  export SCCACHE_MAX_JOBS=`expr $(nproc) - 1`
  export MEMORY_LIMIT_MAX_JOBS=8 # the "large" resource class on CircleCI has 32 CPU cores, if we use all of them we'll OOM
  export MAX_JOBS=$(( ${SCCACHE_MAX_JOBS} > ${MEMORY_LIMIT_MAX_JOBS} ? ${MEMORY_LIMIT_MAX_JOBS} : ${SCCACHE_MAX_JOBS} ))
  echo "declare -x MAX_JOBS=${MAX_JOBS}" >> /home/circleci/project/env

  if [[ "${BUILD_ENVIRONMENT}" == *xla* ]]; then
    # This IAM user allows write access to S3 bucket for sccache & bazels3cache
    set +x
    add_to_env_file "XLA_CLANG_CACHE_S3_BUCKET_NAME=${XLA_CLANG_CACHE_S3_BUCKET_NAME:-}"
    add_to_env_file "AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_AND_XLA_BAZEL_S3_BUCKET_V2:-}"
    add_to_env_file "AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_AND_XLA_BAZEL_S3_BUCKET_V2:-}"
    echo "declare -x XLA_CLANG_CACHE_S3_BUCKET_NAME=${XLA_CLANG_CACHE_S3_BUCKET_NAME:-}" >> /home/circleci/project/env
    echo "declare -x AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_AND_XLA_BAZEL_S3_BUCKET_V2:-}" >> /home/circleci/project/env
    echo "declare -x AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_AND_XLA_BAZEL_S3_BUCKET_V2:-}" >> /home/circleci/project/env
    set -x
  else
    # This IAM user allows write access to S3 bucket for sccache
    set +x
    add_to_env_file "XLA_CLANG_CACHE_S3_BUCKET_NAME=${XLA_CLANG_CACHE_S3_BUCKET_NAME:-}"
    add_to_env_file "AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_S3_BUCKET_V4:-}"
    add_to_env_file "AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_S3_BUCKET_V4:-}"
    echo "declare -x XLA_CLANG_CACHE_S3_BUCKET_NAME=${XLA_CLANG_CACHE_S3_BUCKET_NAME:-}" >> /home/circleci/project/env
    echo "declare -x AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_S3_BUCKET_V4:-}" >> /home/circleci/project/env
    echo "declare -x AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_S3_BUCKET_V4:-}" >> /home/circleci/project/env
    set -x
  fi
fi
@@ -93,5 +84,5 @@ fi
set +x
export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_ECR_READ_WRITE_V4:-}
export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_ECR_READ_WRITE_V4:-}
eval "$(aws ecr get-login --region us-east-1 --no-include-email)"
eval $(aws ecr get-login --region us-east-1 --no-include-email)
set -x

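The `MAX_JOBS` lines above use bash arithmetic's C-style ternary, `$(( a > b ? b : a ))`, to clamp build parallelism to the smaller of the sccache job count and a memory-safety cap. The same computation in Python, for reference (the function name is hypothetical):

```python
def compute_max_jobs(nproc: int, memory_limit_max_jobs: int = 8) -> int:
    # sccache builds leave one core free; the CircleCI "large" resource
    # class has 32 cores, but using all of them can OOM, so the result
    # is clamped to memory_limit_max_jobs.
    sccache_max_jobs = nproc - 1
    return min(sccache_max_jobs, memory_limit_max_jobs)


print(compute_max_jobs(32))
```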
@@ -2,7 +2,7 @@
set -eux -o pipefail

# Set up CircleCI GPG keys for apt, if needed
curl --retry 3 -s -L https://packagecloud.io/circleci/trusty/gpgkey | sudo apt-key add -
curl -L https://packagecloud.io/circleci/trusty/gpgkey | sudo apt-key add -

# Stop background apt updates. Hypothetically, the kill should not
# be necessary, because stop is supposed to send a kill signal to
@@ -33,7 +33,7 @@ systemctl list-units --all | cat
sudo pkill apt-get || true

# For even better luck, purge unattended-upgrades
sudo apt-get purge -y unattended-upgrades || true
sudo apt-get purge -y unattended-upgrades

cat /etc/apt/sources.list

.circleci/scripts/should_run_job.py (140 lines, new file)
@@ -0,0 +1,140 @@
import argparse
import re
import sys

# Modify this variable if you want to change the set of default jobs
# which are run on all pull requests.
#
# WARNING: Actually, this is a lie; we're currently also controlling
# the set of jobs to run via the Workflows filters in CircleCI config.

default_set = set([
    # PyTorch CPU
    # Selected oldest Python 2 version to ensure Python 2 coverage
    'pytorch-linux-xenial-py2.7.9',
    # PyTorch CUDA
    'pytorch-linux-xenial-cuda9-cudnn7-py3',
    # PyTorch ASAN
    'pytorch-linux-xenial-py3-clang5-asan',
    # PyTorch DEBUG
    'pytorch-linux-xenial-py3.6-gcc5.4',
    # LibTorch
    'pytorch-libtorch-linux-xenial-cuda9-cudnn7-py3',

    # Caffe2 CPU
    'caffe2-py2-mkl-ubuntu16.04',
    # Caffe2 CUDA
    'caffe2-py3.5-cuda10.1-cudnn7-ubuntu16.04',
    # Caffe2 ONNX
    'caffe2-onnx-py2-gcc5-ubuntu16.04',
    'caffe2-onnx-py3.6-clang7-ubuntu16.04',
    # Caffe2 Clang
    'caffe2-py2-clang7-ubuntu16.04',
    # Caffe2 CMake
    'caffe2-cmake-cuda9.0-cudnn7-ubuntu16.04',
    # Caffe2 CentOS
    'caffe2-py3.6-devtoolset7-cuda9.0-cudnn7-centos7',

    # Binaries
    'manywheel 2.7mu cpu devtoolset7',
    'libtorch 2.7m cpu devtoolset7',
    'libtorch 2.7m cpu gcc5.4_cxx11-abi',
    'libtorch 2.7 cpu',
    'libtorch-ios-11.2.1-nightly-x86_64-build',
    'libtorch-ios-11.2.1-nightly-arm64-build',
    'libtorch-ios-11.2.1-nightly-binary-build-upload',

    # Caffe2 Android
    'caffe2-py2-android-ubuntu16.04',
    # Caffe2 OSX
    'caffe2-py2-system-macos10.13',
    # PyTorch OSX
    'pytorch-macos-10.13-py3',
    'pytorch-macos-10.13-cuda9.2-cudnn7-py3',
    # PyTorch Android
    'pytorch-linux-xenial-py3-clang5-android-ndk-r19c-x86_32-build',
    'pytorch-linux-xenial-py3-clang5-android-ndk-r19',
    # PyTorch Android gradle
    'pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-build-only-x86_32',

    # Pytorch iOS builds
    'pytorch-ios-11.2.1-x86_64_build',
    'pytorch-ios-11.2.1-arm64_build',
    # PyTorch Mobile builds
    'pytorch-linux-xenial-py3-clang5-mobile-build',

    # Pytorch backward compatibility check
    'pytorch-linux-backward-compatibility-check-test',

    # XLA
    'pytorch-xla-linux-xenial-py3.6-clang7',

    # GraphExecutor config jobs
    'pytorch-linux-xenial-py3.6-gcc5.4-ge_config_simple-test',
    'pytorch-linux-xenial-py3.6-gcc5.4-ge_config_legacy-test',

    # Other checks
    'pytorch-short-perf-test-gpu',
    'pytorch-python-doc-push',
    'pytorch-cpp-doc-push',
])

# Collection of jobs that are *temporarily* excluded from running on PRs.
# Use this if there is a long-running job breakage that we can't fix with a
# single revert.
skip_override = {
    # example entry:
    # 'pytorch-cpp-doc-push': "https://github.com/pytorch/pytorch/issues/<related issue>"
}

# Takes in commit message to analyze via stdin
#
# This script will query Git and attempt to determine if we should
# run the current CI job under question
#
# NB: Try to avoid hard-coding names here, so there's less place to update when jobs
# are updated/renamed
#
# Semantics in the presence of multiple tags:
# - Let D be the set of default builds
# - Let S be the set of explicitly specified builds
# - Let O be the set of temporarily skipped builds
# - Run S \/ (D - O)

parser = argparse.ArgumentParser()
parser.add_argument('build_environment')
args = parser.parse_args()

commit_msg = sys.stdin.read()

# Matches anything that looks like [foo ci] or [ci foo] or [foo test]
# or [test foo]
RE_MARKER = re.compile(r'\[(?:([^ \[\]]+) )?(?:ci|test)(?: ([^ \[\]]+))?\]')

markers = RE_MARKER.finditer(commit_msg)

for m in markers:
    if m.group(1) and m.group(2):
        print("Unrecognized marker: {}".format(m.group(0)))
        continue
    spec = m.group(1) or m.group(2)
    if spec is None:
        print("Unrecognized marker: {}".format(m.group(0)))
        continue
    if spec in args.build_environment or spec == 'all':
        print("Accepting {} due to commit marker {}".format(args.build_environment, m.group(0)))
        sys.exit(0)

skip_override_set = set(skip_override.keys())
should_run_set = default_set - skip_override_set
for spec in should_run_set:
    if spec in args.build_environment:
        print("Accepting {} as part of default set".format(args.build_environment))
        sys.exit(0)

print("Rejecting {}".format(args.build_environment))
for spec, issue in skip_override.items():
    if spec in args.build_environment:
        print("This job is temporarily excluded from running on PRs. Reason: {}".format(issue))
        break
sys.exit(1)

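The marker grammar accepted by should_run_job.py can be exercised directly. A small sketch (the commit messages are hypothetical; the pattern is the one from the new file):

```python
import re

# Same pattern as in should_run_job.py: matches [foo ci], [ci foo],
# [foo test] and [test foo], capturing the job spec on either side.
RE_MARKER = re.compile(r'\[(?:([^ \[\]]+) )?(?:ci|test)(?: ([^ \[\]]+))?\]')

m = RE_MARKER.search("Fix bug [xla ci]")
assert m.group(1) == "xla" and m.group(2) is None

m = RE_MARKER.search("Fix bug [ci all]")
assert m.group(1) is None and m.group(2) == "all"

# "[ci]" alone matches but captures no spec, so the script treats it
# as an unrecognized marker and falls through to the default set.
m = RE_MARKER.search("Update README [ci]")
assert m is not None and m.group(1) is None and m.group(2) is None
```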
.circleci/scripts/should_run_job.sh (29 lines, new executable file)
@@ -0,0 +1,29 @@
#!/usr/bin/env bash
set -exu -o pipefail

SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

# Check if we should actually run
echo "BUILD_ENVIRONMENT: ${BUILD_ENVIRONMENT:-}"
echo "CIRCLE_PULL_REQUEST: ${CIRCLE_PULL_REQUEST:-}"
if [ -z "${BUILD_ENVIRONMENT:-}" ]; then
  echo "Cannot run should_run_job.sh if BUILD_ENVIRONMENT is not defined!"
  echo "CircleCI scripts are probably misconfigured."
  exit 1
fi
if ! [ -e "$SCRIPT_DIR/COMMIT_MSG" ]; then
  echo "Cannot run should_run_job.sh if you don't have COMMIT_MSG"
  echo "written out. Are you perhaps running the wrong copy of this script?"
  echo "You should be running the copy in ~/workspace; SCRIPT_DIR=$SCRIPT_DIR"
  exit 1
fi
if [ -n "${CIRCLE_PULL_REQUEST:-}" ]; then
  if [[ $CIRCLE_BRANCH != "ci-all/"* ]] && [[ $CIRCLE_BRANCH != "nightly" ]] && [[ $CIRCLE_BRANCH != "postnightly" ]] ; then
    # Don't swallow "script doesn't exist"
    [ -e "$SCRIPT_DIR/should_run_job.py" ]
    if ! python "$SCRIPT_DIR/should_run_job.py" "${BUILD_ENVIRONMENT:-}" < "$SCRIPT_DIR/COMMIT_MSG" ; then
      circleci step halt
      exit
    fi
  fi
fi

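The shell wrapper only consults should_run_job.py for pull-request builds outside the `ci-all/`, `nightly` and `postnightly` branches. A sketch of that gating logic in Python (function name hypothetical; it mirrors the `[[ ... ]]` checks above, not an exact port):

```python
def should_consult_marker_script(circle_branch: str, circle_pull_request: str) -> bool:
    # No PR URL set: this is a push build, always run the job.
    if not circle_pull_request:
        return False
    # ci-all/ branches run the full job matrix unconditionally.
    if circle_branch.startswith("ci-all/"):
        return False
    # nightly/postnightly builds also bypass the marker check.
    return circle_branch not in ("nightly", "postnightly")


print(should_consult_marker_script("my-feature", "https://github.com/pytorch/pytorch/pull/1"))
```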
@@ -1,147 +0,0 @@
import glob
import json
import logging
import os
import os.path
import pathlib
import re
import sys
import time
import zipfile

import requests


def get_size(file_dir):
    try:
        # we should only expect one file, if no, something is wrong
        file_name = glob.glob(os.path.join(file_dir, "*"))[0]
        return os.stat(file_name).st_size
    except:
        logging.exception(f"error getting file from: {file_dir}")
        return 0


def build_message(size):
    pkg_type, py_ver, cu_ver, *_ = os.environ.get("BUILD_ENVIRONMENT", "").split() + [
        None,
        None,
        None,
    ]
    os_name = os.uname()[0].lower()
    if os_name == "darwin":
        os_name = "macos"
    return {
        "normal": {
            "os": os_name,
            "pkg_type": pkg_type,
            "py_ver": py_ver,
            "cu_ver": cu_ver,
            "pr": os.environ.get("CIRCLE_PR_NUMBER"),
            "build_num": os.environ.get("CIRCLE_BUILD_NUM"),
            "sha1": os.environ.get("CIRCLE_SHA1"),
            "branch": os.environ.get("CIRCLE_BRANCH"),
        },
        "int": {
            "time": int(time.time()),
            "size": size,
            "commit_time": int(os.environ.get("COMMIT_TIME", "0")),
            "run_duration": int(time.time() - os.path.getmtime(os.path.realpath(__file__))),
        },
    }


def send_message(messages):
    access_token = os.environ.get("SCRIBE_GRAPHQL_ACCESS_TOKEN")
    if not access_token:
        raise ValueError("Can't find access token from environment variable")
    url = "https://graph.facebook.com/scribe_logs"
    r = requests.post(
        url,
        data={
            "access_token": access_token,
            "logs": json.dumps(
                [
                    {
                        "category": "perfpipe_pytorch_binary_size",
                        "message": json.dumps(message),
                        "line_escape": False,
                    }
                    for message in messages
                ]
            ),
        },
    )
    print(r.text)
    r.raise_for_status()


def report_android_sizes(file_dir):
    def gen_sizes():
        # we should only expect one file, if no, something is wrong
        aar_files = list(pathlib.Path(file_dir).rglob("pytorch_android-*.aar"))
        if len(aar_files) != 1:
            logging.exception(f"error getting aar files from: {file_dir} / {aar_files}")
            return

        aar_file = aar_files[0]
        zf = zipfile.ZipFile(aar_file)
        for info in zf.infolist():
            # Scan ".so" libs in `jni` folder. Examples:
            # jni/arm64-v8a/libfbjni.so
            # jni/arm64-v8a/libpytorch_jni.so
            m = re.match(r"^jni/([^/]+)/(.*\.so)$", info.filename)
            if not m:
                continue
            arch, lib = m.groups()
            # report per architecture library size
            yield [arch, lib, info.compress_size, info.file_size]

        # report whole package size
        yield ["aar", aar_file.name, os.stat(aar_file).st_size, 0]

    def gen_messages():
        android_build_type = os.environ.get("ANDROID_BUILD_TYPE")
        for arch, lib, comp_size, uncomp_size in gen_sizes():
            print(android_build_type, arch, lib, comp_size, uncomp_size)
            yield {
                "normal": {
                    "os": "android",
                    # TODO: create dedicated columns
                    "pkg_type": "{}/{}/{}".format(android_build_type, arch, lib),
                    "cu_ver": "",  # dummy value for derived field `build_name`
                    "py_ver": "",  # dummy value for derived field `build_name`
                    "pr": os.environ.get("CIRCLE_PR_NUMBER"),
                    "build_num": os.environ.get("CIRCLE_BUILD_NUM"),
                    "sha1": os.environ.get("CIRCLE_SHA1"),
                    "branch": os.environ.get("CIRCLE_BRANCH"),
                },
                "int": {
                    "time": int(time.time()),
                    "commit_time": int(os.environ.get("COMMIT_TIME", "0")),
                    "run_duration": int(time.time() - os.path.getmtime(os.path.realpath(__file__))),
                    "size": comp_size,
                    "raw_size": uncomp_size,
                },
            }

    send_message(list(gen_messages()))


if __name__ == "__main__":
    file_dir = os.environ.get(
        "PYTORCH_FINAL_PACKAGE_DIR", "/home/circleci/project/final_pkgs"
    )
    if len(sys.argv) == 2:
        file_dir = sys.argv[1]
    print("checking dir: " + file_dir)

    if "-android" in os.environ.get("BUILD_ENVIRONMENT", ""):
        report_android_sizes(file_dir)
    else:
        size = get_size(file_dir)
        if size != 0:
            try:
                send_message([build_message(size)])
            except:
                logging.exception("can't send message")

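The deleted size-reporting script picked per-architecture `.so` libraries out of the Android `.aar` archive listing with a single regex. The pattern can be checked in isolation (file names below are the examples from the script's own comments):

```python
import re

# Same pattern the deleted script used on each zip entry name.
JNI_SO = re.compile(r"^jni/([^/]+)/(.*\.so)$")

m = JNI_SO.match("jni/arm64-v8a/libpytorch_jni.so")
assert m is not None
arch, lib = m.groups()
assert (arch, lib) == ("arm64-v8a", "libpytorch_jni.so")

# Non-library entries in the archive are skipped.
assert JNI_SO.match("res/values/strings.xml") is None
```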
@@ -1,34 +0,0 @@
$VS_DOWNLOAD_LINK = "https://aka.ms/vs/15/release/vs_buildtools.exe"
$COLLECT_DOWNLOAD_LINK = "https://aka.ms/vscollect.exe"
$VS_INSTALL_ARGS = @("--nocache","--quiet","--wait", "--add Microsoft.VisualStudio.Workload.VCTools",
                    "--add Microsoft.VisualStudio.Component.VC.Tools.14.13",
                    "--add Microsoft.Component.MSBuild",
                    "--add Microsoft.VisualStudio.Component.Roslyn.Compiler",
                    "--add Microsoft.VisualStudio.Component.TextTemplating",
                    "--add Microsoft.VisualStudio.Component.VC.CoreIde",
                    "--add Microsoft.VisualStudio.Component.VC.Redist.14.Latest",
                    "--add Microsoft.VisualStudio.ComponentGroup.NativeDesktop.Core",
                    "--add Microsoft.VisualStudio.Component.VC.Tools.x86.x64",
                    "--add Microsoft.VisualStudio.ComponentGroup.NativeDesktop.Win81")

curl.exe --retry 3 -kL $VS_DOWNLOAD_LINK --output vs_installer.exe
if ($LASTEXITCODE -ne 0) {
    echo "Download of the VS 2017 installer failed"
    exit 1
}

$process = Start-Process "${PWD}\vs_installer.exe" -ArgumentList $VS_INSTALL_ARGS -NoNewWindow -Wait -PassThru
Remove-Item -Path vs_installer.exe -Force
$exitCode = $process.ExitCode
if (($exitCode -ne 0) -and ($exitCode -ne 3010)) {
    echo "VS 2017 installer exited with code $exitCode, which should be one of [0, 3010]."
    curl.exe --retry 3 -kL $COLLECT_DOWNLOAD_LINK --output Collect.exe
    if ($LASTEXITCODE -ne 0) {
        echo "Download of the VS Collect tool failed."
        exit 1
    }
    Start-Process "${PWD}\Collect.exe" -NoNewWindow -Wait -PassThru
    New-Item -Path "C:\w\build-results" -ItemType "directory" -Force
    Copy-Item -Path "C:\Users\circleci\AppData\Local\Temp\vslogs.zip" -Destination "C:\w\build-results\"
    exit 1
}

@@ -1,57 +0,0 @@
#!/bin/bash
set -eux -o pipefail

if [[ "$CUDA_VERSION" == "10" ]]; then
  cuda_complete_version="10.1"
  cuda_installer_name="cuda_10.1.243_426.00_win10"
  msbuild_project_dir="CUDAVisualStudioIntegration/extras/visual_studio_integration/MSBuildExtensions"
  cuda_install_packages="nvcc_10.1 cuobjdump_10.1 nvprune_10.1 cupti_10.1 cublas_10.1 cublas_dev_10.1 cudart_10.1 cufft_10.1 cufft_dev_10.1 curand_10.1 curand_dev_10.1 cusolver_10.1 cusolver_dev_10.1 cusparse_10.1 cusparse_dev_10.1 nvgraph_10.1 nvgraph_dev_10.1 npp_10.1 npp_dev_10.1 nvrtc_10.1 nvrtc_dev_10.1 nvml_dev_10.1"
elif [[ "$CUDA_VERSION" == "11" ]]; then
  cuda_complete_version="11.0"
  cuda_installer_name="cuda_11.0.2_451.48_win10"
  msbuild_project_dir="visual_studio_integration/CUDAVisualStudioIntegration/extras/visual_studio_integration/MSBuildExtensions"
  cuda_install_packages="nvcc_11.0 cuobjdump_11.0 nvprune_11.0 nvprof_11.0 cupti_11.0 cublas_11.0 cublas_dev_11.0 cudart_11.0 cufft_11.0 cufft_dev_11.0 curand_11.0 curand_dev_11.0 cusolver_11.0 cusolver_dev_11.0 cusparse_11.0 cusparse_dev_11.0 npp_11.0 npp_dev_11.0 nvrtc_11.0 nvrtc_dev_11.0 nvml_dev_11.0"
else
  echo "CUDA_VERSION $CUDA_VERSION is not supported yet"
  exit 1
fi

cuda_installer_link="https://ossci-windows.s3.amazonaws.com/${cuda_installer_name}.exe"

curl --retry 3 -kLO $cuda_installer_link
7z x ${cuda_installer_name}.exe -o${cuda_installer_name}
cd ${cuda_installer_name}
mkdir cuda_install_logs

set +e

./setup.exe -s ${cuda_install_packages} -loglevel:6 -log:"$(pwd -W)/cuda_install_logs"

set -e

if [[ "${VC_YEAR}" == "2017" ]]; then
  cp -r ${msbuild_project_dir}/* "C:/Program Files (x86)/Microsoft Visual Studio/2017/${VC_PRODUCT}/Common7/IDE/VC/VCTargets/BuildCustomizations/"
else
  cp -r ${msbuild_project_dir}/* "C:/Program Files (x86)/Microsoft Visual Studio/2019/${VC_PRODUCT}/MSBuild/Microsoft/VC/v160/BuildCustomizations/"
fi

if ! ls "/c/Program Files/NVIDIA Corporation/NvToolsExt/bin/x64/nvToolsExt64_1.dll"
then
  curl --retry 3 -kLO https://ossci-windows.s3.amazonaws.com/NvToolsExt.7z
  7z x NvToolsExt.7z -oNvToolsExt
  mkdir -p "C:/Program Files/NVIDIA Corporation/NvToolsExt"
  cp -r NvToolsExt/* "C:/Program Files/NVIDIA Corporation/NvToolsExt/"
  export NVTOOLSEXT_PATH="C:\\Program Files\\NVIDIA Corporation\\NvToolsExt\\"
fi

if ! ls "/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v${cuda_complete_version}/bin/nvcc.exe"
then
  echo "CUDA installation failed"
  mkdir -p /c/w/build-results
  7z a "c:\\w\\build-results\\cuda_install_logs.7z" cuda_install_logs
  exit 1
fi

cd ..
rm -rf ./${cuda_installer_name}
rm -f ./${cuda_installer_name}.exe

@@ -1,21 +0,0 @@
#!/bin/bash
set -eux -o pipefail

if [[ "$CUDA_VERSION" == "10" ]]; then
  cuda_complete_version="10.1"
  cudnn_installer_name="cudnn-10.1-windows10-x64-v7.6.4.38"
elif [[ "$CUDA_VERSION" == "11" ]]; then
  cuda_complete_version="11.0"
  cudnn_installer_name="cudnn-11.0-windows-x64-v8.0.2.39"
else
  echo "CUDNN for CUDA_VERSION $CUDA_VERSION is not supported yet"
  exit 1
fi

cudnn_installer_link="https://ossci-windows.s3.amazonaws.com/${cudnn_installer_name}.zip"

curl --retry 3 -O $cudnn_installer_link
7z x ${cudnn_installer_name}.zip -ocudnn
cp -r cudnn/cuda/* "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v${cuda_complete_version}/"
rm -rf cudnn
rm -f ${cudnn_installer_name}.zip

44
.circleci/validate-docker-version.py
Executable file
44
.circleci/validate-docker-version.py
Executable file
@ -0,0 +1,44 @@
#!/usr/bin/env python3

import urllib.request
import re

import cimodel.data.pytorch_build_definitions as pytorch_build_definitions
import cimodel.data.caffe2_build_definitions as caffe2_build_definitions

RE_VERSION = re.compile(r'allDeployedVersions = "([0-9,]+)"')

URL_TEMPLATE = (
    "https://raw.githubusercontent.com/pytorch/ossci-job-dsl/"
    "master/src/main/groovy/ossci/{}/DockerVersion.groovy"
)

def check_version(job, expected_version):
    url = URL_TEMPLATE.format(job)
    with urllib.request.urlopen(url) as f:
        contents = f.read().decode('utf-8')
    m = RE_VERSION.search(contents)
    if not m:
        raise RuntimeError(
            "Unbelievable! I could not find the variable allDeployedVersions in "
            "{}; did the organization of ossci-job-dsl change?\n\nFull contents:\n{}"
            .format(url, contents)
        )
    valid_versions = [int(v) for v in m.group(1).split(',')]
    if expected_version not in valid_versions:
        raise RuntimeError(
            "We configured {} to use Docker version {}; but this "
            "version is not deployed in {}. Non-deployed versions will be "
            "garbage collected two weeks after they are created. DO NOT LAND "
            "THIS TO MASTER without also updating ossci-job-dsl with this version."
            "\n\nDeployed versions: {}"
            .format(job, expected_version, url, m.group(1))
        )

def validate_docker_version():
    check_version('pytorch', pytorch_build_definitions.DOCKER_IMAGE_VERSION)
    check_version('caffe2', caffe2_build_definitions.DOCKER_IMAGE_VERSION)


if __name__ == "__main__":
    validate_docker_version()
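The core of `validate-docker-version.py` above is a single regex applied to the fetched Groovy file. A standalone sketch of that check, run against an in-memory string instead of the live ossci-job-dsl URL; the `deployed_versions` helper name and the sample version numbers are illustrative only:

```python
import re

# Same pattern as in validate-docker-version.py above.
RE_VERSION = re.compile(r'allDeployedVersions = "([0-9,]+)"')

def deployed_versions(contents):
    """Extract the comma-separated deployed version list as ints."""
    m = RE_VERSION.search(contents)
    if not m:
        raise RuntimeError("could not find allDeployedVersions")
    return [int(v) for v in m.group(1).split(',')]

# Hypothetical excerpt of a DockerVersion.groovy file.
sample = 'allDeployedVersions = "282,291,300"'
print(deployed_versions(sample))  # -> [282, 291, 300]
```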
@ -52,15 +52,3 @@ binary_mac_params: &binary_mac_params
  environment:
    BUILD_ENVIRONMENT: << parameters.build_environment >>

binary_windows_params: &binary_windows_params
  parameters:
    build_environment:
      type: string
      default: ""
    executor:
      type: string
      default: "windows-xlarge-cpu-with-nvidia-cuda"
  environment:
    BUILD_ENVIRONMENT: << parameters.build_environment >>
    BUILD_FOR_SYSTEM: windows
    JOB_EXECUTOR: <<parameters.executor>>
20 .circleci/verbatim-sources/binary-build-tests.yml Normal file
@ -0,0 +1,20 @@

# There is currently no testing for libtorch TODO
# binary_linux_libtorch_2.7m_cpu_test:
#   environment:
#     BUILD_ENVIRONMENT: "libtorch 2.7m cpu"
#   resource_class: gpu.medium
#   <<: *binary_linux_test
#
# binary_linux_libtorch_2.7m_cu90_test:
#   environment:
#     BUILD_ENVIRONMENT: "libtorch 2.7m cu90"
#   resource_class: gpu.medium
#   <<: *binary_linux_test
#
# binary_linux_libtorch_2.7m_cu100_test:
#   environment:
#     BUILD_ENVIRONMENT: "libtorch 2.7m cu100"
#   resource_class: gpu.medium
#   <<: *binary_linux_test
@ -1,42 +1,50 @@
binary_linux_build:
  <<: *binary_linux_build_params
  steps:
  - checkout
  - calculate_docker_image_tag
  # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
  - should_run_job
  - run:
      <<: *binary_checkout
  - run:
      <<: *binary_populate_env
  - run:
      name: Install unbuffer and ts
      command: |
        set -eux -o pipefail
        source /env
        OS_NAME=`awk -F= '/^NAME/{print $2}' /etc/os-release`
        if [[ "$OS_NAME" == *"CentOS Linux"* ]]; then
          retry yum -q -y install epel-release
          retry yum -q -y install expect moreutils
        elif [[ "$OS_NAME" == *"Ubuntu"* ]]; then
          retry apt-get update
          retry apt-get -y install expect moreutils
          conda install -y -c eumetsat expect
          conda install -y cmake
        fi
  - run:
      name: Update compiler to devtoolset7
      command: |
        set -eux -o pipefail
        source /env
        if [[ "$DESIRED_DEVTOOLSET" == 'devtoolset7' ]]; then
          source "/builder/update_compiler.sh"

          # Env variables are not persisted into the next step
          echo "export PATH=$PATH" >> /env
          echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH" >> /env
        else
          echo "Not updating compiler"
        fi
  - run:
      name: Build
      no_output_timeout: "1h"
      command: |
        source "/pytorch/.circleci/scripts/binary_linux_build.sh"
        # Preserve build log
        if [ -f /pytorch/build/.ninja_log ]; then
          cp /pytorch/build/.ninja_log /final_pkgs
        fi
  - run:
      name: Output binary sizes
      no_output_timeout: "1m"
      command: |
        ls -lah /final_pkgs
  - run:
      name: save binary size
      no_output_timeout: "5m"
      command: |
        source /env
        cd /pytorch && export COMMIT_TIME=$(git log --max-count=1 --format=%ct || echo 0)
        python3 -mpip install requests && \
        SCRIBE_GRAPHQL_ACCESS_TOKEN=${SCRIBE_GRAPHQL_ACCESS_TOKEN} \
        python3 /pytorch/.circleci/scripts/upload_binary_size_to_scuba.py || exit 0
  - persist_to_workspace:
      root: /
      paths: final_pkgs

  - store_artifacts:
      path: /final_pkgs

# This should really just be another step of the binary_linux_build job above.
# This isn't possible right now b/c the build job uses the docker executor
# (otherwise they'd be really really slow) but this one uses the machine
@ -45,10 +53,11 @@
binary_linux_test:
  <<: *binary_linux_test_upload_params
  machine:
    image: ubuntu-1604:202007-01
    image: ubuntu-1604:201903-01
  steps:
  # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
  - checkout
  - should_run_job
  # TODO: We shouldn't attach the workspace multiple times
  - attach_workspace:
      at: /home/circleci/project
  - setup_linux_system_environment
@ -60,45 +69,29 @@
  - run:
      name: Prepare test code
      no_output_timeout: "1h"
      command: .circleci/scripts/binary_linux_test.sh
      command: ~/workspace/.circleci/scripts/binary_linux_test.sh
  - run:
      <<: *binary_run_in_docker

binary_upload:
  parameters:
    package_type:
      type: string
      description: "What type of package we are uploading (eg. wheel, libtorch, conda)"
      default: "wheel"
    upload_subfolder:
      type: string
      description: "What subfolder to put our package into (eg. cpu, cudaX.Y, etc.)"
      default: "cpu"
  docker:
  - image: continuumio/miniconda3
  environment:
  - DRY_RUN: disabled
  - PACKAGE_TYPE: "<< parameters.package_type >>"
  - UPLOAD_SUBFOLDER: "<< parameters.upload_subfolder >>"
binary_linux_upload:
  <<: *binary_linux_test_upload_params
  machine:
    image: ubuntu-1604:201903-01
  steps:
  - attach_workspace:
      at: /tmp/workspace
  - checkout
  - designate_upload_channel
  - run:
      name: Install dependencies
      no_output_timeout: "1h"
      command: |
        conda install -yq anaconda-client
        pip install -q awscli
  - run:
      name: Do upload
      no_output_timeout: "1h"
      command: |
        AWS_ACCESS_KEY_ID="${PYTORCH_BINARY_AWS_ACCESS_KEY_ID}" \
        AWS_SECRET_ACCESS_KEY="${PYTORCH_BINARY_AWS_SECRET_ACCESS_KEY}" \
        ANACONDA_API_TOKEN="${CONDA_PYTORCHBOT_TOKEN}" \
        .circleci/scripts/binary_upload.sh
  # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
  - should_run_job
  - setup_linux_system_environment
  - setup_ci_environment
  - attach_workspace:
      at: /home/circleci/project
  - run:
      <<: *binary_populate_env
  - run:
      <<: *binary_install_miniconda
  - run:
      name: Upload
      no_output_timeout: "1h"
      command: ~/workspace/.circleci/scripts/binary_linux_upload.sh

# Nightly build smoke tests defaults
# These are the second-round smoke tests. These make sure that the binaries are
@ -108,10 +101,12 @@
smoke_linux_test:
  <<: *binary_linux_test_upload_params
  machine:
    image: ubuntu-1604:202007-01
    image: ubuntu-1604:201903-01
  steps:
  - checkout
  - calculate_docker_image_tag
  - attach_workspace:
      at: ~/workspace
  - attach_workspace:
      at: /home/circleci/project
  - setup_linux_system_environment
  - setup_ci_environment
  - run:
@ -135,9 +130,12 @@
smoke_mac_test:
  <<: *binary_linux_test_upload_params
  macos:
    xcode: "11.2.1"
    xcode: "9.0"
  steps:
  - checkout
  - attach_workspace:
      at: ~/workspace
  - attach_workspace: # TODO - we can `cp` from ~/workspace
      at: /Users/distiller/project
  - run:
      <<: *binary_checkout
  - run:
@ -160,10 +158,10 @@
binary_mac_build:
  <<: *binary_mac_params
  macos:
    xcode: "11.2.1"
    xcode: "9.0"
  steps:
  # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
  - checkout
  - should_run_job
  - run:
      <<: *binary_checkout
  - run:
@ -198,13 +196,38 @@
      root: /Users/distiller/project
      paths: final_pkgs

binary_mac_upload: &binary_mac_upload
  <<: *binary_mac_params
  macos:
    xcode: "9.0"
  steps:
  # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
  - should_run_job
  - run:
      <<: *binary_checkout
  - run:
      <<: *binary_populate_env
  - brew_update
  - run:
      <<: *binary_install_miniconda
  - attach_workspace: # TODO - we can `cp` from ~/workspace
      at: /Users/distiller/project
  - run:
      name: Upload
      no_output_timeout: "10m"
      command: |
        script="/Users/distiller/project/pytorch/.circleci/scripts/binary_macos_upload.sh"
        cat "$script"
        source "$script"

binary_ios_build:
  <<: *pytorch_ios_params
  macos:
    xcode: "12.0"
    xcode: "11.2.1"
  steps:
  - attach_workspace:
      at: ~/workspace
  - should_run_job
  - checkout
  - run_brew_for_ios_build
  - run:
@ -224,14 +247,15 @@
  - persist_to_workspace:
      root: /Users/distiller/workspace/
      paths: ios

binary_ios_upload:

binary_ios_upload:
  <<: *pytorch_ios_params
  macos:
    xcode: "12.0"
    xcode: "11.2.1"
  steps:
  - attach_workspace:
      at: ~/workspace
  - should_run_job
  - checkout
  - run_brew_for_ios_build
  - run:
@ -241,115 +265,3 @@
        script="/Users/distiller/project/.circleci/scripts/binary_ios_upload.sh"
        cat "$script"
        source "$script"

binary_windows_build:
  <<: *binary_windows_params
  parameters:
    build_environment:
      type: string
      default: ""
    executor:
      type: string
      default: "windows-xlarge-cpu-with-nvidia-cuda"
  executor: <<parameters.executor>>
  steps:
  # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
  - checkout
  - run:
      <<: *binary_checkout
  - run:
      <<: *binary_populate_env
  - run:
      name: Build
      no_output_timeout: "1h"
      command: |
        set -eux -o pipefail
        script="/c/w/p/.circleci/scripts/binary_windows_build.sh"
        cat "$script"
        source "$script"
  - persist_to_workspace:
      root: "C:/w"
      paths: final_pkgs

binary_windows_test:
  <<: *binary_windows_params
  parameters:
    build_environment:
      type: string
      default: ""
    executor:
      type: string
      default: "windows-medium-cpu-with-nvidia-cuda"
  executor: <<parameters.executor>>
  steps:
  - checkout
  - attach_workspace:
      at: c:/users/circleci/project
  - run:
      <<: *binary_checkout
  - run:
      <<: *binary_populate_env
  - run:
      name: Test
      no_output_timeout: "1h"
      command: |
        set -eux -o pipefail
        script="/c/w/p/.circleci/scripts/binary_windows_test.sh"
        cat "$script"
        source "$script"

smoke_windows_test:
  <<: *binary_windows_params
  parameters:
    build_environment:
      type: string
      default: ""
    executor:
      type: string
      default: "windows-medium-cpu-with-nvidia-cuda"
  executor: <<parameters.executor>>
  steps:
  - checkout
  - run:
      <<: *binary_checkout
  - run:
      <<: *binary_populate_env
  - run:
      name: Test
      no_output_timeout: "1h"
      command: |
        set -eux -o pipefail
        export TEST_NIGHTLY_PACKAGE=1
        script="/c/w/p/.circleci/scripts/binary_windows_test.sh"
        cat "$script"
        source "$script"

anaconda_prune:
  parameters:
    packages:
      type: string
      description: "What packages are we pruning? (quoted, space-separated string. eg. 'pytorch', 'torchvision torchaudio', etc.)"
      default: "pytorch"
    channel:
      type: string
      description: "What channel are we pruning? (eg. pytorch-nightly)"
      default: "pytorch-nightly"
  docker:
  - image: continuumio/miniconda3
  environment:
  - PACKAGES: "<< parameters.packages >>"
  - CHANNEL: "<< parameters.channel >>"
  steps:
  - checkout
  - run:
      name: Install dependencies
      no_output_timeout: "1h"
      command: |
        conda install -yq anaconda-client
  - run:
      name: Prune packages
      no_output_timeout: "1h"
      command: |
        ANACONDA_API_TOKEN="${CONDA_PYTORCHBOT_TOKEN}" \
        scripts/release/anaconda-prune/run.sh

@ -8,10 +8,10 @@
# then install the one with the most recent version.
update_s3_htmls: &update_s3_htmls
  machine:
    image: ubuntu-1604:202007-01
    resource_class: medium
    image: ubuntu-1604:201903-01
  steps:
  - checkout
  - attach_workspace:
      at: ~/workspace
  - setup_linux_system_environment
  - run:
      <<: *binary_checkout
@ -28,15 +28,6 @@
  # make sure it has the same upload folder as the job it's attached to. This
  # function is idempotent, so it won't hurt anything; it's just a little
  # unnecessary"
  - run:
      name: define PIP_UPLOAD_FOLDER
      command: |
        our_upload_folder=nightly/
        # On tags upload to test instead
        if [[ -n "${CIRCLE_TAG}" ]]; then
          our_upload_folder=test/
        fi
        echo "export PIP_UPLOAD_FOLDER=${our_upload_folder}" >> ${BASH_ENV}
  - run:
      name: Update s3 htmls
      no_output_timeout: "1h"
@ -51,3 +42,55 @@
        }
        retry pip install awscli==1.6
        "/home/circleci/project/builder/cron/update_s3_htmls.sh"

# Update s3 htmls for the nightlies
update_s3_htmls_for_nightlies:
  environment:
    PIP_UPLOAD_FOLDER: "nightly/"
  <<: *update_s3_htmls

# Update s3 htmls for the nightlies for devtoolset7
update_s3_htmls_for_nightlies_devtoolset7:
  environment:
    PIP_UPLOAD_FOLDER: "nightly/devtoolset7/"
  <<: *update_s3_htmls


# upload_binary_logs job
# The builder hud at pytorch.org/builder shows the sizes of all the binaries
# over time. It gets this info from html files stored in S3, which this job
# populates every day.
upload_binary_sizes: &upload_binary_sizes
  machine:
    image: ubuntu-1604:201903-01
  steps:
  - attach_workspace:
      at: ~/workspace
  - setup_linux_system_environment
  - run:
      <<: *binary_checkout
  - run:
      <<: *binary_install_miniconda
  - run:
      name: Upload binary sizes
      no_output_timeout: "1h"
      command: |
        set +x
        echo "declare -x \"AWS_ACCESS_KEY_ID=${PYTORCH_BINARY_AWS_ACCESS_KEY_ID}\"" > /home/circleci/project/env
        echo "declare -x \"AWS_SECRET_ACCESS_KEY=${PYTORCH_BINARY_AWS_SECRET_ACCESS_KEY}\"" >> /home/circleci/project/env
        export DATE="$(date -u +%Y_%m_%d)"
        retry () {
          $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
        }
        source /home/circleci/project/env
        set -eux -o pipefail

        # This is hardcoded to match binary_install_miniconda.sh
        export PATH="/home/circleci/project/miniconda/bin:$PATH"
        # Not any awscli will work. Most won't. This one will work
        retry conda create -qyn aws36 python=3.6
        source activate aws36
        pip install awscli==1.16.46

        "/home/circleci/project/builder/cron/upload_binary_sizes.sh"

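Several jobs above inline the same `retry` helper: up to five attempts with 1/2/4/8-second sleeps between them. A standalone sketch of that helper with a deliberately flaky demo command; the `flaky` function and the marker path are illustrative only:

```shell
# Exponential-backoff retry helper, as used in the jobs above.
retry () {
  $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
}

# Hypothetical demo: fails until it has been invoked three times.
flaky() {
  echo attempt >> /tmp/retry_demo_marker
  [ "$(wc -l < /tmp/retry_demo_marker)" -ge 3 ]
}

rm -f /tmp/retry_demo_marker
retry flaky && echo "succeeded after retries"
```

Note that `$*` re-splits its arguments, so this helper is only safe for simple commands without quoted spaces, which is how the CI scripts use it.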
@ -1,14 +0,0 @@

promote_common: &promote_common
  docker:
  - image: pytorch/release
  parameters:
    package_name:
      description: "package name to promote"
      type: string
      default: ""
  environment:
    PACKAGE_NAME: << parameters.package_name >>
    ANACONDA_API_TOKEN: ${CONDA_PYTORCHBOT_TOKEN}
    AWS_ACCESS_KEY_ID: ${PYTORCH_BINARY_AWS_ACCESS_KEY_ID}
    AWS_SECRET_ACCESS_KEY: ${PYTORCH_BINARY_AWS_SECRET_ACCESS_KEY}
@ -1,85 +0,0 @@
pytorch_params: &pytorch_params
  parameters:
    build_environment:
      type: string
      default: ""
    docker_image:
      type: string
      default: ""
    resource_class:
      type: string
      default: "large"
    use_cuda_docker_runtime:
      type: string
      default: ""
    build_only:
      type: string
      default: ""
  environment:
    BUILD_ENVIRONMENT: << parameters.build_environment >>
    DOCKER_IMAGE: << parameters.docker_image >>
    USE_CUDA_DOCKER_RUNTIME: << parameters.use_cuda_docker_runtime >>
    BUILD_ONLY: << parameters.build_only >>
  resource_class: << parameters.resource_class >>

pytorch_ios_params: &pytorch_ios_params
  parameters:
    build_environment:
      type: string
      default: ""
    ios_arch:
      type: string
      default: ""
    ios_platform:
      type: string
      default: ""
    op_list:
      type: string
      default: ""
  environment:
    BUILD_ENVIRONMENT: << parameters.build_environment >>
    IOS_ARCH: << parameters.ios_arch >>
    IOS_PLATFORM: << parameters.ios_platform >>
    SELECTED_OP_LIST: << parameters.op_list >>

pytorch_windows_params: &pytorch_windows_params
  parameters:
    executor:
      type: string
      default: "windows-xlarge-cpu-with-nvidia-cuda"
    build_environment:
      type: string
      default: ""
    test_name:
      type: string
      default: ""
    cuda_version:
      type: string
      default: "10"
    python_version:
      type: string
      default: "3.6"
    vc_version:
      type: string
      default: "14.16"
    vc_year:
      type: string
      default: "2019"
    vc_product:
      type: string
      default: "BuildTools"
    use_cuda:
      type: string
      default: ""
  environment:
    BUILD_ENVIRONMENT: <<parameters.build_environment>>
    SCCACHE_BUCKET: "ossci-compiler-cache"
    CUDA_VERSION: <<parameters.cuda_version>>
    PYTHON_VERSION: <<parameters.python_version>>
    VC_VERSION: <<parameters.vc_version>>
    VC_YEAR: <<parameters.vc_year>>
    VC_PRODUCT: <<parameters.vc_product>>
    USE_CUDA: <<parameters.use_cuda>>
    TORCH_CUDA_ARCH_LIST: "7.5"
    JOB_BASE_NAME: <<parameters.test_name>>
    JOB_EXECUTOR: <<parameters.executor>>
28 .circleci/verbatim-sources/caffe2-build-params.yml Normal file
@ -0,0 +1,28 @@
caffe2_params: &caffe2_params
  parameters:
    build_environment:
      type: string
      default: ""
    build_ios:
      type: string
      default: ""
    docker_image:
      type: string
      default: ""
    use_cuda_docker_runtime:
      type: string
      default: ""
    build_only:
      type: string
      default: ""
    resource_class:
      type: string
      default: "large"
  environment:
    BUILD_ENVIRONMENT: << parameters.build_environment >>
    BUILD_IOS: << parameters.build_ios >>
    USE_CUDA_DOCKER_RUNTIME: << parameters.use_cuda_docker_runtime >>
    DOCKER_IMAGE: << parameters.docker_image >>
    BUILD_ONLY: << parameters.build_only >>
  resource_class: << parameters.resource_class >>

200 .circleci/verbatim-sources/caffe2-job-specs.yml Normal file
@ -0,0 +1,200 @@
caffe2_linux_build:
  <<: *caffe2_params
  machine:
    image: ubuntu-1604:201903-01
  steps:
  # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
  - should_run_job
  - setup_linux_system_environment
  - checkout
  - setup_ci_environment
  - run:
      name: Build
      no_output_timeout: "1h"
      command: |
        set -e
        cat >/home/circleci/project/ci_build_script.sh \<<EOL
        # =================== The following code will be executed inside Docker container ===================
        set -ex
        export BUILD_ENVIRONMENT="$BUILD_ENVIRONMENT"

        # Reinitialize submodules
        git submodule sync && git submodule update -q --init --recursive

        # conda must be added to the path for Anaconda builds (this location must be
        # the same as that in install_anaconda.sh used to build the docker image)
        if [[ "${BUILD_ENVIRONMENT}" == conda* ]]; then
          export PATH=/opt/conda/bin:$PATH
          sudo chown -R jenkins:jenkins '/opt/conda'
        fi

        # Build
        ./.jenkins/caffe2/build.sh

        # Show sccache stats if it is running
        if pgrep sccache > /dev/null; then
          sccache --show-stats
        fi
        # =================== The above code will be executed inside Docker container ===================
        EOL
        chmod +x /home/circleci/project/ci_build_script.sh

        echo "DOCKER_IMAGE: "${DOCKER_IMAGE}
        time docker pull ${DOCKER_IMAGE} >/dev/null
        export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${DOCKER_IMAGE})
        docker cp /home/circleci/project/. $id:/var/lib/jenkins/workspace

        export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && ./ci_build_script.sh") | docker exec -u jenkins -i "$id" bash) 2>&1'
        echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

        # Push intermediate Docker image for next phase to use
        if [ -z "${BUILD_ONLY}" ]; then
          if [[ "$BUILD_ENVIRONMENT" == *cmake* ]]; then
            export COMMIT_DOCKER_IMAGE=${DOCKER_IMAGE}-cmake-${CIRCLE_SHA1}
          else
            export COMMIT_DOCKER_IMAGE=${DOCKER_IMAGE}-${CIRCLE_SHA1}
          fi
          docker commit "$id" ${COMMIT_DOCKER_IMAGE}
          time docker push ${COMMIT_DOCKER_IMAGE}
        fi

caffe2_linux_test:
  <<: *caffe2_params
  machine:
    image: ubuntu-1604:201903-01
  steps:
  # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
  - should_run_job
  - setup_linux_system_environment
  - setup_ci_environment
  - run:
      name: Test
      no_output_timeout: "1h"
      command: |
        set -e
        # TODO: merge this into Caffe2 test.sh
        cat >/home/circleci/project/ci_test_script.sh \<<EOL
        # =================== The following code will be executed inside Docker container ===================
        set -ex

        export BUILD_ENVIRONMENT="$BUILD_ENVIRONMENT"

        # libdc1394 (dependency of OpenCV) expects /dev/raw1394 to exist...
        sudo ln /dev/null /dev/raw1394

        # conda must be added to the path for Anaconda builds (this location must be
        # the same as that in install_anaconda.sh used to build the docker image)
        if [[ "${BUILD_ENVIRONMENT}" == conda* ]]; then
          export PATH=/opt/conda/bin:$PATH
        fi

        # Upgrade SSL module to avoid old SSL warnings
        pip -q install --user --upgrade pyOpenSSL ndg-httpsclient pyasn1

        pip -q install --user -b /tmp/pip_install_onnx "file:///var/lib/jenkins/workspace/third_party/onnx#egg=onnx"

        # Build
        ./.jenkins/caffe2/test.sh

        # Remove benign core dumps.
        # These are tests for signal handling (including SIGABRT).
        rm -f ./crash/core.fatal_signal_as.*
        rm -f ./crash/core.logging_test.*
        # =================== The above code will be executed inside Docker container ===================
        EOL
        chmod +x /home/circleci/project/ci_test_script.sh

        if [[ "$BUILD_ENVIRONMENT" == *cmake* ]]; then
          export COMMIT_DOCKER_IMAGE=${DOCKER_IMAGE}-cmake-${CIRCLE_SHA1}
        else
          export COMMIT_DOCKER_IMAGE=${DOCKER_IMAGE}-${CIRCLE_SHA1}
        fi
        echo "DOCKER_IMAGE: "${COMMIT_DOCKER_IMAGE}
        time docker pull ${COMMIT_DOCKER_IMAGE} >/dev/null
        if [ -n "${USE_CUDA_DOCKER_RUNTIME}" ]; then
          export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --runtime=nvidia -t -d -w /var/lib/jenkins ${COMMIT_DOCKER_IMAGE})
        else
          export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${COMMIT_DOCKER_IMAGE})
        fi
        docker cp /home/circleci/project/. "$id:/var/lib/jenkins/workspace"

        export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && ./ci_test_script.sh") | docker exec -u jenkins -i "$id" bash) 2>&1'
        echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

caffe2_macos_build:
  <<: *caffe2_params
  macos:
    xcode: "9.0"
  steps:
  # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
  - should_run_job
  - checkout
  - run_brew_for_macos_build
  - run:
      name: Build
      no_output_timeout: "1h"
      command: |
        set -e

        export IN_CIRCLECI=1

        brew install cmake

        # Reinitialize submodules
        git submodule sync && git submodule update -q --init --recursive

        # Reinitialize path (see man page for path_helper(8))
        eval `/usr/libexec/path_helper -s`

        export PATH=/usr/local/opt/python/libexec/bin:/usr/local/bin:$PATH

        # Install Anaconda if we need to
        if [ -n "${CAFFE2_USE_ANACONDA}" ]; then
          rm -rf ${TMPDIR}/anaconda
          curl -o ${TMPDIR}/conda.sh https://repo.continuum.io/miniconda/Miniconda${ANACONDA_VERSION}-latest-MacOSX-x86_64.sh
          chmod +x ${TMPDIR}/conda.sh
          /bin/bash ${TMPDIR}/conda.sh -b -p ${TMPDIR}/anaconda
          rm -f ${TMPDIR}/conda.sh
          export PATH="${TMPDIR}/anaconda/bin:${PATH}"
          source ${TMPDIR}/anaconda/bin/activate
        fi

        pip -q install numpy

        # Install sccache
        sudo curl https://s3.amazonaws.com/ossci-macos/sccache --output /usr/local/bin/sccache
        sudo chmod +x /usr/local/bin/sccache
        export SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2

        # This IAM user allows write access to S3 bucket for sccache
        set +x
        export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_S3_BUCKET_V4}
        export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_S3_BUCKET_V4}
        set -x

        export SCCACHE_BIN=${PWD}/sccache_bin
        mkdir -p ${SCCACHE_BIN}
        if which sccache > /dev/null; then
          printf "#!/bin/sh\nexec sccache $(which clang++) \$*" > "${SCCACHE_BIN}/clang++"
          chmod a+x "${SCCACHE_BIN}/clang++"

          printf "#!/bin/sh\nexec sccache $(which clang) \$*" > "${SCCACHE_BIN}/clang"
          chmod a+x "${SCCACHE_BIN}/clang"

          export PATH="${SCCACHE_BIN}:$PATH"
        fi

        # Build
        if [ "${BUILD_IOS:-0}" -eq 1 ]; then
          unbuffer scripts/build_ios.sh 2>&1 | ts
        elif [ -n "${CAFFE2_USE_ANACONDA}" ]; then
          # All conda build logic should be in scripts/build_anaconda.sh
          unbuffer scripts/build_anaconda.sh 2>&1 | ts
        else
          unbuffer scripts/build_local.sh 2>&1 | ts
        fi

        # Show sccache stats if it is running
        if which sccache > /dev/null; then
          sccache --show-stats
        fi
@ -1,26 +1,18 @@
|
||||
commands:
|
||||
|
||||
calculate_docker_image_tag:
|
||||
description: "Calculates the docker image tag"
|
||||
# NB: This command must be run as the first command in a job. It
|
||||
# attaches the workspace at ~/workspace; this workspace is generated
|
||||
# by the setup job. Note that ~/workspace is not the default working
|
||||
# directory (that's ~/project).
|
||||
should_run_job:
|
||||
description: "Test if the job should run or not"
|
||||
steps:
|
||||
- attach_workspace:
|
||||
name: Attaching workspace
|
||||
at: ~/workspace
|
||||
- run:
|
||||
name: "Calculate docker image hash"
|
||||
command: |
|
||||
DOCKER_TAG=$(git rev-parse HEAD:.circleci/docker)
|
||||
echo "DOCKER_TAG=${DOCKER_TAG}" >> "${BASH_ENV}"
|
||||
|
||||
designate_upload_channel:
|
||||
description: "inserts the correct upload channel into ${BASH_ENV}"
|
||||
steps:
|
||||
- run:
|
||||
name: adding UPLOAD_CHANNEL to BASH_ENV
|
||||
command: |
|
||||
our_upload_channel=nightly
|
||||
# On tags upload to test instead
|
||||
if [[ -n "${CIRCLE_TAG}" ]]; then
|
||||
our_upload_channel=test
|
||||
fi
|
||||
echo "export UPLOAD_CHANNEL=${our_upload_channel}" >> ${BASH_ENV}
|
||||
name: Should run job
|
||||
no_output_timeout: "2m"
|
||||
command: ~/workspace/.circleci/scripts/should_run_job.sh
|
||||
|
||||
# This system setup script is meant to run before the CI-related scripts, e.g.,
|
||||
# installing Git client, checking out code, setting up CI env, and
|
||||
@ -30,14 +22,14 @@ commands:
|
||||
- run:
|
||||
name: Set Up System Environment
|
||||
no_output_timeout: "1h"
|
||||
command: .circleci/scripts/setup_linux_system_environment.sh
|
||||
command: ~/workspace/.circleci/scripts/setup_linux_system_environment.sh
|
||||
|
||||
setup_ci_environment:
|
||||
steps:
|
||||
- run:
|
||||
name: Set Up CI Environment After attach_workspace
|
||||
no_output_timeout: "1h"
|
||||
command: .circleci/scripts/setup_ci_environment.sh
|
||||
command: ~/workspace/.circleci/scripts/setup_ci_environment.sh
|
||||
|
||||
brew_update:
|
||||
description: "Update Homebrew and install base formulae"
|
||||
@@ -96,79 +88,3 @@ commands:
      - brew_update
      - brew_install:
          formulae: libtool

  optional_merge_target_branch:
    steps:
      - run:
          name: (Optional) Merge target branch
          no_output_timeout: "10m"
          command: |
            if [ -n "$CIRCLE_PULL_REQUEST" ]; then
              PR_NUM=$(basename $CIRCLE_PULL_REQUEST)
              CIRCLE_PR_BASE_BRANCH=$(curl -s https://api.github.com/repos/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME/pulls/$PR_NUM | jq -r '.base.ref')
              if [[ "${BUILD_ENVIRONMENT}" == *"xla"* || "${BUILD_ENVIRONMENT}" == *"gcc5"* ]] ; then
                set -x
                git config --global user.email "circleci.ossci@gmail.com"
                git config --global user.name "CircleCI"
                git config remote.origin.url https://github.com/pytorch/pytorch.git
                git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master
                git fetch --tags --progress https://github.com/pytorch/pytorch.git +refs/heads/master:refs/remotes/origin/master --depth=100 --quiet
                # PRs generated from ghstack have the format CIRCLE_PR_BASE_BRANCH=gh/xxx/1234/base
                if [[ "${CIRCLE_PR_BASE_BRANCH}" == "gh/"* ]]; then
                  CIRCLE_PR_BASE_BRANCH=master
                fi
                export GIT_MERGE_TARGET=`git log -n 1 --pretty=format:"%H" origin/$CIRCLE_PR_BASE_BRANCH`
                echo "GIT_MERGE_TARGET: " ${GIT_MERGE_TARGET}
                export GIT_COMMIT=${CIRCLE_SHA1}
                echo "GIT_COMMIT: " ${GIT_COMMIT}
                git checkout -f ${GIT_COMMIT}
                git reset --hard ${GIT_COMMIT}
                git merge --allow-unrelated-histories --no-edit --no-ff ${GIT_MERGE_TARGET}
                echo "Merged $CIRCLE_PR_BASE_BRANCH branch before building in environment $BUILD_ENVIRONMENT"
                set +x
              else
                echo "No need to merge with $CIRCLE_PR_BASE_BRANCH, skipping..."
              fi
            else
              echo "This is not a pull request, skipping..."
            fi
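The branch check above special-cases ghstack PRs, whose base branches are named `gh/<user>/<number>/base` and have no long-lived remote counterpart, by falling back to `master` before computing the merge target. A minimal sketch of that rule in isolation (the helper name is ours, not from the config):

```shell
#!/bin/sh
# Map a PR base branch to the branch the merge target is computed from:
# ghstack-generated bases (gh/<user>/<number>/base) collapse to master,
# everything else passes through unchanged.
normalize_base_branch () {
  case "$1" in
    gh/*) echo "master" ;;
    *)    echo "$1" ;;
  esac
}

normalize_base_branch "gh/xxx/1234/base"   # prints: master
normalize_base_branch "release/1.7"        # prints: release/1.7
```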
  upload_binary_size_for_android_build:
    description: "Upload binary size data for Android build"
    parameters:
      build_type:
        type: string
        default: ""
      artifacts:
        type: string
        default: ""
    steps:
      - run:
          name: "Binary Size - Install Dependencies"
          no_output_timeout: "5m"
          command: |
            retry () {
              $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
            }
            retry pip3 install requests
      - run:
          name: "Binary Size - Untar Artifacts"
          no_output_timeout: "5m"
          command: |
            # The artifact file is created inside the docker container and contains the result
            # binaries. Unpack it into the project folder; the subsequent script scans the
            # project folder to locate the result binaries and report their sizes.
            # If no artifact file is provided, the project folder is assumed to have been
            # mounted into docker during the build and to already contain the result binaries,
            # so this step can be skipped.
            export ARTIFACTS="<< parameters.artifacts >>"
            if [ -n "${ARTIFACTS}" ]; then
              tar xf "${ARTIFACTS}" -C ~/project
            fi
      - run:
          name: "Binary Size - Upload << parameters.build_type >>"
          no_output_timeout: "5m"
          command: |
            cd ~/project
            export ANDROID_BUILD_TYPE="<< parameters.build_type >>"
            export COMMIT_TIME=$(git log --max-count=1 --format=%ct || echo 0)
            python3 .circleci/scripts/upload_binary_size_to_scuba.py android
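The `retry` helper in the "Binary Size - Install Dependencies" step above is a compact exponential-backoff loop: five attempts with 1, 2, 4, 8 second pauses between them. A standalone sketch of the same pattern (the `flaky` command and counter file are hypothetical, added only to demonstrate the behavior; we use `"$@"` rather than the original's `$*` so arguments with spaces survive intact):

```shell
#!/bin/sh
# Retry a command up to five times, sleeping 1, 2, 4, 8 seconds between attempts.
retry () {
  "$@" || (sleep 1 && "$@") || (sleep 2 && "$@") || (sleep 4 && "$@") || (sleep 8 && "$@")
}

# Hypothetical flaky command: fails on its first two invocations, then succeeds.
COUNTER_FILE=$(mktemp)
echo 0 > "$COUNTER_FILE"
flaky () {
  n=$(cat "$COUNTER_FILE")
  echo $((n + 1)) > "$COUNTER_FILE"
  [ "$n" -ge 2 ]
}

retry flaky && echo "succeeded after $(cat "$COUNTER_FILE") attempts"
# prints: succeeded after 3 attempts
```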
21
.circleci/verbatim-sources/docker_build_job.yml
Normal file
@@ -0,0 +1,21 @@
  docker_build_job:
    parameters:
      image_name:
        type: string
        default: ""
    machine:
      image: ubuntu-1604:201903-01
    resource_class: large
    environment:
      IMAGE_NAME: << parameters.image_name >>
    steps:
      - checkout
      - run:
          name: build_docker_image_<< parameters.image_name >>
          no_output_timeout: "1h"
          command: |
            set +x
            export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_DOCKER_BUILDER_V1}
            export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_DOCKER_BUILDER_V1}
            set -x
            cd .circleci/docker && ./build_docker.sh
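The `set +x` / `set -x` bracket around the AWS exports above is the standard way to keep credentials out of a traced CI log: xtrace is suspended exactly while the secret-bearing lines run, then resumed. A minimal sketch under that assumption (the secret value is a placeholder, not anything from the config):

```shell
#!/bin/sh
set -x                        # trace commands to stderr, as CI logs do
echo "building docker image"  # this command is traced
set +x                        # stop tracing before handling credentials
SECRET="placeholder-value"    # this assignment is NOT echoed to the log
set -x                        # resume tracing for the rest of the script
echo "secret length: ${#SECRET}"
# prints: secret length: 17
```

Note that the `set +x` line itself is still traced (tracing stops only after it runs), which is harmless since it carries no secret.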
@@ -1,39 +1,21 @@
# WARNING: DO NOT EDIT THIS FILE DIRECTLY!!!
# See the README.md in this directory.

# IMPORTANT: To update Docker image version, please follow
# the instructions at
# https://github.com/pytorch/pytorch/wiki/Docker-image-build-on-CircleCI
# IMPORTANT: To update Docker image version, please first update
# https://github.com/pytorch/ossci-job-dsl/blob/master/src/main/groovy/ossci/pytorch/DockerVersion.groovy and
# https://github.com/pytorch/ossci-job-dsl/blob/master/src/main/groovy/ossci/caffe2/DockerVersion.groovy,
# and then update DOCKER_IMAGE_VERSION at the top of the following files:
# * cimodel/data/pytorch_build_definitions.py
# * cimodel/data/caffe2_build_definitions.py
# And the inline copies of the variable in
# * verbatim-sources/job-specs-custom.yml
# (grep for DOCKER_IMAGE)

version: 2.1

parameters:
  run_binary_tests:
    type: boolean
    default: false

docker_config_defaults: &docker_config_defaults
  user: jenkins
  aws_auth:
    # This IAM user only allows read-write access to ECR
    aws_access_key_id: ${CIRCLECI_AWS_ACCESS_KEY_FOR_ECR_READ_WRITE_V4}
    aws_secret_access_key: ${CIRCLECI_AWS_SECRET_KEY_FOR_ECR_READ_WRITE_V4}

executors:
  windows-with-nvidia-gpu:
    machine:
      resource_class: windows.gpu.nvidia.medium
      image: windows-server-2019-nvidia:stable
      shell: bash.exe

  windows-xlarge-cpu-with-nvidia-cuda:
    machine:
      resource_class: windows.xlarge
      image: windows-server-2019-vs2019:stable
      shell: bash.exe

  windows-medium-cpu-with-nvidia-cuda:
    machine:
      resource_class: windows.medium
      image: windows-server-2019-vs2019:stable
      shell: bash.exe
474
.circleci/verbatim-sources/job-specs-custom.yml
Normal file
@@ -0,0 +1,474 @@
  pytorch_python_doc_push:
    environment:
      BUILD_ENVIRONMENT: pytorch-python-doc-push
      # TODO: stop hardcoding this
      DOCKER_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda9-cudnn7-py3:405"
    resource_class: large
    machine:
      image: ubuntu-1604:201903-01
    steps:
      # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
      - should_run_job
      - setup_linux_system_environment
      - setup_ci_environment
      - run:
          name: Doc Build and Push
          no_output_timeout: "1h"
          command: |
            set -ex
            export COMMIT_DOCKER_IMAGE=${DOCKER_IMAGE}-${CIRCLE_SHA1}
            echo "DOCKER_IMAGE: "${COMMIT_DOCKER_IMAGE}
            time docker pull ${COMMIT_DOCKER_IMAGE} >/dev/null
            export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${COMMIT_DOCKER_IMAGE})

            # master branch docs push
            if [[ "${CIRCLE_BRANCH}" == "master" ]]; then
              export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GITHUB_PYTORCHBOT_TOKEN=${GITHUB_PYTORCHBOT_TOKEN}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && . ./.circleci/scripts/python_doc_push_script.sh docs/master master site") | docker exec -u jenkins -i "$id" bash) 2>&1'

            # stable release docs push. Due to some circleci limitations, we keep
            # an eternal PR open for merging v1.2.0 -> master for this job.
            # XXX: The following code is only run on the v1.2.0 branch, which might
            # not be exactly the same as what you see here.
            elif [[ "${CIRCLE_BRANCH}" == "v1.2.0" ]]; then
              export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GITHUB_PYTORCHBOT_TOKEN=${GITHUB_PYTORCHBOT_TOKEN}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && . ./.circleci/scripts/python_doc_push_script.sh docs/stable 1.2.0 site dry_run") | docker exec -u jenkins -i "$id" bash) 2>&1'

            # For open PRs: Do a dry_run of the docs build, don't push build
            else
              export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GITHUB_PYTORCHBOT_TOKEN=${GITHUB_PYTORCHBOT_TOKEN}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && . ./.circleci/scripts/python_doc_push_script.sh docs/master master site dry_run") | docker exec -u jenkins -i "$id" bash) 2>&1'
            fi

            echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

            # Save the docs build so we can debug any problems
            export DEBUG_COMMIT_DOCKER_IMAGE=${COMMIT_DOCKER_IMAGE}-debug
            docker commit "$id" ${DEBUG_COMMIT_DOCKER_IMAGE}
            time docker push ${DEBUG_COMMIT_DOCKER_IMAGE}
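The doc jobs above assemble the script to run in the container by `echo`ing one line at a time into `docker exec -u jenkins -i "$id" bash`: double-quoted parts are expanded on the host so their values are baked into the stream, while single-quoted parts stay literal and are evaluated by the receiving shell. The same mechanism without docker, piping into a plain `sh` (the `BUILD_ENVIRONMENT` value is just a stand-in for the CI variable):

```shell
#!/bin/sh
# Host-side variable that should be visible inside the piped shell session.
BUILD_ENVIRONMENT="pytorch-python-doc-push"

# First echo is double-quoted: expanded here, so the value is baked in.
# Second echo is single-quoted: sent literally, expanded by the receiving sh.
(echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && \
 echo 'echo "building docs for ${BUILD_ENVIRONMENT}"') | sh
# prints: building docs for pytorch-python-doc-push
```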
  pytorch_cpp_doc_push:
    environment:
      BUILD_ENVIRONMENT: pytorch-cpp-doc-push
      DOCKER_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda9-cudnn7-py3:405"
    resource_class: large
    machine:
      image: ubuntu-1604:201903-01
    steps:
      # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
      - should_run_job
      - setup_linux_system_environment
      - setup_ci_environment
      - run:
          name: Doc Build and Push
          no_output_timeout: "1h"
          command: |
            set -ex
            export COMMIT_DOCKER_IMAGE=${DOCKER_IMAGE}-${CIRCLE_SHA1}
            echo "DOCKER_IMAGE: "${COMMIT_DOCKER_IMAGE}
            time docker pull ${COMMIT_DOCKER_IMAGE} >/dev/null
            export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${COMMIT_DOCKER_IMAGE})

            # master branch docs push
            if [[ "${CIRCLE_BRANCH}" == "master" ]]; then
              export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GITHUB_PYTORCHBOT_TOKEN=${GITHUB_PYTORCHBOT_TOKEN}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && . ./.circleci/scripts/cpp_doc_push_script.sh docs/master master") | docker exec -u jenkins -i "$id" bash) 2>&1'

            # stable release docs push. Due to some circleci limitations, we keep
            # an eternal PR open (#16502) for merging v1.0.1 -> master for this job.
            # XXX: The following code is only run on the v1.0.1 branch, which might
            # not be exactly the same as what you see here.
            elif [[ "${CIRCLE_BRANCH}" == "v1.0.1" ]]; then
              export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GITHUB_PYTORCHBOT_TOKEN=${GITHUB_PYTORCHBOT_TOKEN}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && . ./.circleci/scripts/cpp_doc_push_script.sh docs/stable 1.0.1") | docker exec -u jenkins -i "$id" bash) 2>&1'

            # For open PRs: Do a dry_run of the docs build, don't push build
            else
              export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GITHUB_PYTORCHBOT_TOKEN=${GITHUB_PYTORCHBOT_TOKEN}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && . ./.circleci/scripts/cpp_doc_push_script.sh docs/master master dry_run") | docker exec -u jenkins -i "$id" bash) 2>&1'
            fi

            echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

            # Save the docs build so we can debug any problems
            export DEBUG_COMMIT_DOCKER_IMAGE=${COMMIT_DOCKER_IMAGE}-debug
            docker commit "$id" ${DEBUG_COMMIT_DOCKER_IMAGE}
            time docker push ${DEBUG_COMMIT_DOCKER_IMAGE}
  pytorch_macos_10_13_py3_build:
    environment:
      BUILD_ENVIRONMENT: pytorch-macos-10.13-py3-build
    macos:
      xcode: "9.0"
    steps:
      # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
      - should_run_job
      - checkout
      - run_brew_for_macos_build
      - run:
          name: Build
          no_output_timeout: "1h"
          command: |
            set -e
            export IN_CIRCLECI=1

            # Install sccache
            sudo curl https://s3.amazonaws.com/ossci-macos/sccache --output /usr/local/bin/sccache
            sudo chmod +x /usr/local/bin/sccache
            export SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2

            # This IAM user allows write access to S3 bucket for sccache
            set +x
            export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_S3_BUCKET_V4}
            export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_S3_BUCKET_V4}
            set -x

            chmod a+x .jenkins/pytorch/macos-build.sh
            unbuffer .jenkins/pytorch/macos-build.sh 2>&1 | ts

            # copy with -a to preserve relative structure (e.g., symlinks), and be recursive
            cp -a ~/project ~/workspace

      - persist_to_workspace:
          root: ~/workspace
          paths:
            - miniconda3
            - project
  pytorch_macos_10_13_py3_test:
    environment:
      BUILD_ENVIRONMENT: pytorch-macos-10.13-py3-test
    macos:
      xcode: "9.0"
    steps:
      # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
      # This workspace also carries binaries from the build job
      - should_run_job
      - run_brew_for_macos_build
      - run:
          name: Test
          no_output_timeout: "1h"
          command: |
            set -e
            export IN_CIRCLECI=1

            # copy with -a to preserve relative structure (e.g., symlinks), and be recursive
            cp -a ~/workspace/project/. ~/project

            chmod a+x .jenkins/pytorch/macos-test.sh
            unbuffer .jenkins/pytorch/macos-test.sh 2>&1 | ts
      - store_test_results:
          path: test/test-reports
  pytorch_macos_10_13_cuda9_2_cudnn7_py3_build:
    environment:
      BUILD_ENVIRONMENT: pytorch-macos-10.13-cuda9.2-cudnn7-py3-build
    macos:
      xcode: "9.0"
    steps:
      # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
      - should_run_job
      - checkout
      - run_brew_for_macos_build
      - run:
          name: Build
          no_output_timeout: "1h"
          command: |
            set -e

            export IN_CIRCLECI=1

            # Install CUDA 9.2
            sudo rm -rf ~/cuda_9.2.64_mac_installer.app || true
            curl https://s3.amazonaws.com/ossci-macos/cuda_9.2.64_mac_installer.zip -o ~/cuda_9.2.64_mac_installer.zip
            unzip ~/cuda_9.2.64_mac_installer.zip -d ~/
            sudo ~/cuda_9.2.64_mac_installer.app/Contents/MacOS/CUDAMacOSXInstaller --accept-eula --no-window
            sudo cp /usr/local/cuda/lib/libcuda.dylib /Developer/NVIDIA/CUDA-9.2/lib/libcuda.dylib
            sudo rm -rf /usr/local/cuda || true

            # Install cuDNN 7.1 for CUDA 9.2
            curl https://s3.amazonaws.com/ossci-macos/cudnn-9.2-osx-x64-v7.1.tgz -o ~/cudnn-9.2-osx-x64-v7.1.tgz
            rm -rf ~/cudnn-9.2-osx-x64-v7.1 && mkdir ~/cudnn-9.2-osx-x64-v7.1
            tar -xzvf ~/cudnn-9.2-osx-x64-v7.1.tgz -C ~/cudnn-9.2-osx-x64-v7.1
            sudo cp ~/cudnn-9.2-osx-x64-v7.1/cuda/include/cudnn.h /Developer/NVIDIA/CUDA-9.2/include/
            sudo cp ~/cudnn-9.2-osx-x64-v7.1/cuda/lib/libcudnn* /Developer/NVIDIA/CUDA-9.2/lib/
            sudo chmod a+r /Developer/NVIDIA/CUDA-9.2/include/cudnn.h /Developer/NVIDIA/CUDA-9.2/lib/libcudnn*

            # Install sccache
            sudo curl https://s3.amazonaws.com/ossci-macos/sccache --output /usr/local/bin/sccache
            sudo chmod +x /usr/local/bin/sccache
            export SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2
            # This IAM user allows write access to S3 bucket for sccache
            set +x
            export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_S3_BUCKET_V4}
            export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_S3_BUCKET_V4}
            set -x

            git submodule sync && git submodule update -q --init --recursive
            chmod a+x .jenkins/pytorch/macos-build.sh
            unbuffer .jenkins/pytorch/macos-build.sh 2>&1 | ts
  pytorch_android_gradle_build:
    environment:
      BUILD_ENVIRONMENT: pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-build
      DOCKER_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c:405"
      PYTHON_VERSION: "3.6"
    resource_class: large
    machine:
      image: ubuntu-1604:201903-01
    steps:
      - should_run_job
      - setup_linux_system_environment
      - checkout
      - setup_ci_environment
      - run:
          name: pytorch android gradle build
          no_output_timeout: "1h"
          command: |
            set -eux
            docker_image_commit=${DOCKER_IMAGE}-${CIRCLE_SHA1}

            docker_image_libtorch_android_x86_32=${docker_image_commit}-android-x86_32
            docker_image_libtorch_android_x86_64=${docker_image_commit}-android-x86_64
            docker_image_libtorch_android_arm_v7a=${docker_image_commit}-android-arm-v7a
            docker_image_libtorch_android_arm_v8a=${docker_image_commit}-android-arm-v8a

            echo "docker_image_commit: "${docker_image_commit}
            echo "docker_image_libtorch_android_x86_32: "${docker_image_libtorch_android_x86_32}
            echo "docker_image_libtorch_android_x86_64: "${docker_image_libtorch_android_x86_64}
            echo "docker_image_libtorch_android_arm_v7a: "${docker_image_libtorch_android_arm_v7a}
            echo "docker_image_libtorch_android_arm_v8a: "${docker_image_libtorch_android_arm_v8a}

            # x86_32
            time docker pull ${docker_image_libtorch_android_x86_32} >/dev/null
            export id_x86_32=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${docker_image_libtorch_android_x86_32})

            export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace") | docker exec -u jenkins -i "$id_x86_32" bash) 2>&1'
            echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

            # arm-v7a
            time docker pull ${docker_image_libtorch_android_arm_v7a} >/dev/null
            export id_arm_v7a=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${docker_image_libtorch_android_arm_v7a})

            export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace") | docker exec -u jenkins -i "$id_arm_v7a" bash) 2>&1'
            echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

            mkdir ~/workspace/build_android_install_arm_v7a
            docker cp $id_arm_v7a:/var/lib/jenkins/workspace/build_android/install ~/workspace/build_android_install_arm_v7a

            # x86_64
            time docker pull ${docker_image_libtorch_android_x86_64} >/dev/null
            export id_x86_64=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${docker_image_libtorch_android_x86_64})

            export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace") | docker exec -u jenkins -i "$id_x86_64" bash) 2>&1'
            echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

            mkdir ~/workspace/build_android_install_x86_64
            docker cp $id_x86_64:/var/lib/jenkins/workspace/build_android/install ~/workspace/build_android_install_x86_64

            # arm-v8a
            time docker pull ${docker_image_libtorch_android_arm_v8a} >/dev/null
            export id_arm_v8a=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${docker_image_libtorch_android_arm_v8a})

            export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace") | docker exec -u jenkins -i "$id_arm_v8a" bash) 2>&1'
            echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

            mkdir ~/workspace/build_android_install_arm_v8a
            docker cp $id_arm_v8a:/var/lib/jenkins/workspace/build_android/install ~/workspace/build_android_install_arm_v8a

            docker cp ~/workspace/build_android_install_arm_v7a $id_x86_32:/var/lib/jenkins/workspace/build_android_install_arm_v7a
            docker cp ~/workspace/build_android_install_x86_64 $id_x86_32:/var/lib/jenkins/workspace/build_android_install_x86_64
            docker cp ~/workspace/build_android_install_arm_v8a $id_x86_32:/var/lib/jenkins/workspace/build_android_install_arm_v8a

            # run gradle buildRelease
            export COMMAND='((echo "source ./workspace/env" && echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GRADLE_OFFLINE=1" && echo "sudo chown -R jenkins workspace && cd workspace && ./.circleci/scripts/build_android_gradle.sh") | docker exec -u jenkins -i "$id_x86_32" bash) 2>&1'
            echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

            mkdir -p ~/workspace/build_android_artifacts
            docker cp $id_x86_32:/var/lib/jenkins/workspace/android/artifacts.tgz ~/workspace/build_android_artifacts/

            output_image=$docker_image_libtorch_android_x86_32-gradle
            docker commit "$id_x86_32" ${output_image}
            time docker push ${output_image}
      - store_artifacts:
          path: ~/workspace/build_android_artifacts/artifacts.tgz
          destination: artifacts.tgz
  pytorch_android_publish_snapshot:
    environment:
      BUILD_ENVIRONMENT: pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-publish-snapshot
      DOCKER_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c:405"
      PYTHON_VERSION: "3.6"
    resource_class: large
    machine:
      image: ubuntu-1604:201903-01
    steps:
      - should_run_job
      - setup_linux_system_environment
      - checkout
      - setup_ci_environment
      - run:
          name: pytorch android gradle build
          no_output_timeout: "1h"
          command: |
            set -eux
            docker_image_commit=${DOCKER_IMAGE}-${CIRCLE_SHA1}

            docker_image_libtorch_android_x86_32_gradle=${docker_image_commit}-android-x86_32-gradle

            echo "docker_image_commit: "${docker_image_commit}
            echo "docker_image_libtorch_android_x86_32_gradle: "${docker_image_libtorch_android_x86_32_gradle}

            # x86_32
            time docker pull ${docker_image_libtorch_android_x86_32_gradle} >/dev/null
            export id_x86_32=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${docker_image_libtorch_android_x86_32_gradle})

            export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace" && echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export SONATYPE_NEXUS_USERNAME=${SONATYPE_NEXUS_USERNAME}" && echo "export SONATYPE_NEXUS_PASSWORD=${SONATYPE_NEXUS_PASSWORD}" && echo "export ANDROID_SIGN_KEY=${ANDROID_SIGN_KEY}" && echo "export ANDROID_SIGN_PASS=${ANDROID_SIGN_PASS}" && echo "sudo chown -R jenkins workspace && cd workspace && ./.circleci/scripts/publish_android_snapshot.sh") | docker exec -u jenkins -i "$id_x86_32" bash) 2>&1'
            echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

            output_image=${docker_image_libtorch_android_x86_32_gradle}-publish-snapshot
            docker commit "$id_x86_32" ${output_image}
            time docker push ${output_image}
  pytorch_android_gradle_build-x86_32:
    environment:
      BUILD_ENVIRONMENT: pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-build-only-x86_32
      DOCKER_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c:405"
      PYTHON_VERSION: "3.6"
    resource_class: large
    machine:
      image: ubuntu-1604:201903-01
    steps:
      - should_run_job
      - run:
          name: filter out not PR runs
          no_output_timeout: "5m"
          command: |
            echo "CIRCLE_PULL_REQUEST: ${CIRCLE_PULL_REQUEST:-}"
            if [ -z "${CIRCLE_PULL_REQUEST:-}" ]; then
              circleci step halt
            fi
      - setup_linux_system_environment
      - checkout
      - setup_ci_environment
      - run:
          name: pytorch android gradle build only x86_32 (for PR)
          no_output_timeout: "1h"
          command: |
            set -e
            docker_image_libtorch_android_x86_32=${DOCKER_IMAGE}-${CIRCLE_SHA1}-android-x86_32
            echo "docker_image_libtorch_android_x86_32: "${docker_image_libtorch_android_x86_32}

            # x86
            time docker pull ${docker_image_libtorch_android_x86_32} >/dev/null
            export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${docker_image_libtorch_android_x86_32})

            export COMMAND='((echo "source ./workspace/env" && echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GRADLE_OFFLINE=1" && echo "sudo chown -R jenkins workspace && cd workspace && ./.circleci/scripts/build_android_gradle.sh") | docker exec -u jenkins -i "$id" bash) 2>&1'
            echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

            mkdir -p ~/workspace/build_android_x86_32_artifacts
            docker cp $id:/var/lib/jenkins/workspace/android/artifacts.tgz ~/workspace/build_android_x86_32_artifacts/

            output_image=${docker_image_libtorch_android_x86_32}-gradle
            docker commit "$id" ${output_image}
            time docker push ${output_image}
      - store_artifacts:
          path: ~/workspace/build_android_x86_32_artifacts/artifacts.tgz
          destination: artifacts.tgz
  pytorch_ios_build:
    <<: *pytorch_ios_params
    macos:
      xcode: "11.2.1"
    steps:
      # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
      - should_run_job
      - checkout
      - run_brew_for_ios_build
      - run:
          name: Run Fastlane
          no_output_timeout: "1h"
          command: |
            set -e
            PROJ_ROOT=/Users/distiller/project
            cd ${PROJ_ROOT}/ios/TestApp
            # install fastlane
            sudo gem install bundler && bundle install
            # install certificates
            echo ${IOS_CERT_KEY} >> cert.txt
            base64 --decode cert.txt -o Certificates.p12
            rm cert.txt
            bundle exec fastlane install_cert
            # install the provisioning profile
            PROFILE=TestApp_CI.mobileprovision
            PROVISIONING_PROFILES=~/Library/MobileDevice/Provisioning\ Profiles
            mkdir -pv "${PROVISIONING_PROFILES}"
            cd "${PROVISIONING_PROFILES}"
            echo ${IOS_SIGN_KEY} >> cert.txt
            base64 --decode cert.txt -o ${PROFILE}
            rm cert.txt
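The certificate install above ships signing material as base64 text in an env var and decodes it into a binary file on the runner. A round trip with a dummy payload, under the assumption that redirecting stdout is used instead of macOS's `-o out` flag so the sketch runs on both GNU and BSD `base64` (the payload bytes are made up):

```shell
#!/bin/sh
# Encode a dummy payload the way the secret would be stored in CI settings.
IOS_CERT_KEY=$(printf 'dummy-cert-bytes' | base64)

# Decode it back into a file, mirroring the Certificates.p12 step above.
echo "${IOS_CERT_KEY}" > cert.txt
base64 --decode cert.txt > Certificates.p12
rm cert.txt

cat Certificates.p12   # prints: dummy-cert-bytes
```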
      - run:
          name: Build
          no_output_timeout: "1h"
          command: |
            set -e
            export IN_CIRCLECI=1
            WORKSPACE=/Users/distiller/workspace
            PROJ_ROOT=/Users/distiller/project
            export TCLLIBPATH="/usr/local/lib"
            # Install conda
            curl -o ~/Downloads/conda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
            chmod +x ~/Downloads/conda.sh
            /bin/bash ~/Downloads/conda.sh -b -p ~/anaconda
            export PATH="~/anaconda/bin:${PATH}"
            source ~/anaconda/bin/activate
            # Install dependencies
            conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing requests --yes
            # sync submodules
            cd ${PROJ_ROOT}
            git submodule sync
            git submodule update --init --recursive
            # export
            export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
            # run build script
            chmod a+x ${PROJ_ROOT}/scripts/build_ios.sh
            echo "IOS_ARCH: ${IOS_ARCH}"
            echo "IOS_PLATFORM: ${IOS_PLATFORM}"
            export BUILD_PYTORCH_MOBILE=1
            export IOS_ARCH=${IOS_ARCH}
            export IOS_PLATFORM=${IOS_PLATFORM}
            unbuffer ${PROJ_ROOT}/scripts/build_ios.sh 2>&1 | ts
      - run:
          name: Run Build Tests
          no_output_timeout: "30m"
          command: |
            set -e
            PROJ_ROOT=/Users/distiller/project
            PROFILE=TestApp_CI
            # run the ruby build script
            if ! [ -x "$(command -v xcodebuild)" ]; then
              echo 'Error: xcodebuild is not installed.'
              exit 1
            fi
            echo ${IOS_DEV_TEAM_ID}
            ruby ${PROJ_ROOT}/scripts/xcode_build.rb -i ${PROJ_ROOT}/build_ios/install -x ${PROJ_ROOT}/ios/TestApp/TestApp.xcodeproj -p ${IOS_PLATFORM} -c ${PROFILE} -t ${IOS_DEV_TEAM_ID}
            if ! [ "$?" -eq "0" ]; then
              echo 'xcodebuild failed!'
              exit 1
            fi
      - run:
          name: Run Simulator Tests
          no_output_timeout: "2h"
          command: |
            set -e
            if [ ${IOS_PLATFORM} != "SIMULATOR" ]; then
              echo "not a SIMULATOR build, skipping it."
              exit 0
            fi
            WORKSPACE=/Users/distiller/workspace
            PROJ_ROOT=/Users/distiller/project
            source ~/anaconda/bin/activate
            # install the latest version of PyTorch and TorchVision
            pip install torch torchvision
            # run unit tests
            cd ${PROJ_ROOT}/ios/TestApp/benchmark
            python trace_model.py
            ruby setup.rb
            cd ${PROJ_ROOT}/ios/TestApp
            instruments -s -devices
            fastlane scan
@@ -27,3 +27,4 @@
      - persist_to_workspace:
          root: .
          paths: .circleci/scripts
@@ -1,14 +0,0 @@

# There is currently no testing for libtorch TODO
# binary_linux_libtorch_3.6m_cpu_test:
#   environment:
#     BUILD_ENVIRONMENT: "libtorch 3.6m cpu"
#   resource_class: gpu.medium
#   <<: *binary_linux_test
#
# binary_linux_libtorch_3.6m_cu90_test:
#   environment:
#     BUILD_ENVIRONMENT: "libtorch 3.6m cu90"
#   resource_class: gpu.medium
#   <<: *binary_linux_test
#
Some files were not shown because too many files have changed in this diff.