.circleci/.gitignore (vendored)

@@ -1,2 +0,0 @@
*.svg
*.png
.circleci/README.md

@@ -1,504 +0,0 @@
Structure of CI
===============

setup job:
1. Does a git checkout.
2. Persists CircleCI scripts (everything in `.circleci`) into a workspace. Why?
   We don't always do a Git checkout on all subjobs, but we usually
   still want to be able to call scripts one way or another in a subjob.
   Persisting files this way lets us have access to them without doing a
   checkout. This workspace is conventionally mounted on `~/workspace`
   (this is distinguished from `~/project`, which is the conventional
   working directory that CircleCI will default to starting your jobs
   in).
3. Writes out the commit message to `.circleci/COMMIT_MSG`. This is so
   we can determine in subjobs whether we should actually run the jobs or
   not, even if there isn't a Git checkout.
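
For orientation, a subjob that attaches this workspace sees roughly the following layout (a sketch based on the conventions above, not an exhaustive listing):

```
~/workspace/            # the attached workspace, persisted by the setup job
    .circleci/          # the persisted CircleCI scripts
        COMMIT_MSG      # the commit message written out in step 3
~/project/              # CircleCI's default working directory; only
                        # populated if the subjob does its own checkout
```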

CircleCI configuration generator
================================

One may no longer make changes to the `.circleci/config.yml` file directly.
Instead, one must edit these Python scripts or files in the `verbatim-sources/` directory.


Usage
----------

1. Make changes to these scripts.
2. Run the `regenerate.sh` script in this directory and commit both the script changes and the resulting change to `config.yml`.

You'll see a build failure on TravisCI if the scripts don't agree with the checked-in version.
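
For example, a typical edit cycle looks roughly like this (a sketch; the file touched is just an example):

```
# edit the generator inputs
vim .circleci/verbatim-sources/nightly-binary-build-defaults.yml

# regenerate and inspect the resulting config
.circleci/regenerate.sh
git diff .circleci/config.yml

# commit the sources and the generated file together
git add .circleci
git commit -m "change binary build defaults"
```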

Motivation
----------

These scripts establish a single, authoritative source of documentation for the CircleCI configuration matrix.
The documentation, in the form of diagrams, is automatically generated and cannot drift out of sync with the YAML content.

Furthermore, consistency is enforced within the YAML config itself, by using a single source of data to generate
multiple parts of the file.

* Facilitates one-off culling/enabling of CI configs for testing PRs on special targets

Also see https://github.com/pytorch/pytorch/issues/17038


Future direction
----------------

### Declaring sparse config subsets

See comment [here](https://github.com/pytorch/pytorch/pull/17323#pullrequestreview-206945747):

In contrast with a full recursive tree traversal of configuration dimensions,
> in the future future I think we actually want to decrease our matrix somewhat and have only a few mostly-orthogonal builds that taste as many different features as possible on PRs, plus a more complete suite on every PR and maybe an almost full suite nightly/weekly (we don't have this yet). Specifying PR jobs in the future might be easier to read with an explicit list when we come to this.

----------------
----------------

# How do the binaries / nightlies / releases work?

### What is a binary?

A binary or package (the terms are used interchangeably) is a pre-built collection of c++ libraries, header files, python bits, and other files. We build these and distribute them so that users do not need to install from source.

A **binary configuration** is a collection of

* release or nightly
    * releases are stable, nightlies are beta and built every night
* python version
    * linux: 2.7m, 2.7mu, 3.5m, 3.6m, 3.7m (mu is wide unicode or something like that. It usually doesn't matter, but you should know that it exists)
    * macos: 2.7, 3.5, 3.6, 3.7
    * windows: 3.5, 3.6, 3.7
* cpu version
    * cpu, cuda 9.0, cuda 10.0
    * The supported cuda versions occasionally change
* operating system
    * Linux - these are all built on CentOS. There haven't been any problems in the past building on CentOS and using on Ubuntu
    * MacOS
    * Windows - these are built on Azure pipelines
* devtoolset version (gcc compiler version)
    * This only matters on Linux, because only Linux uses gcc. The tl;dr is that gcc made a backwards-incompatible change from gcc 4.8 to gcc 5, because it had to change how it implemented std::vector and std::string
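
Each choice along these dimensions identifies one binary configuration, and the CI job names are formed by joining the choices together. As a rough illustration (the naming matches the job names discussed below; the exact set of configurations is generated by the Python scripts mentioned above):

```
# illustrative: enumerate the linux conda configurations
for pyver in 2.7 3.5 3.6 3.7; do
    for cu in cpu cu90 cu100; do
        echo "linux_conda_${pyver}_${cu}"
    done
done
```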

### Where are the binaries?

The binaries are built in CircleCI. There are nightly binaries built every night at 9pm PST (midnight EST) and release binaries corresponding to PyTorch releases, usually every few months.

We have 3 types of binary packages

* pip packages - nightlies are stored on s3 (pip install -f <a s3 url>). releases are stored in a pip repo (pip install torch) (ask Soumith about this)
* conda packages - nightlies and releases are both stored in a conda repo. Nightly packages have a '_nightly' suffix
* libtorch packages - these are zips of all the c++ libraries, header files, and sometimes dependencies. These are c++ only
    * shared with dependencies (the only supported option for Windows)
    * static with dependencies
    * shared without dependencies
    * static without dependencies

All binaries are built in CircleCI workflows except Windows. There are checked-in workflows (committed into the .circleci/config.yml) to build the nightlies every night. Releases are built by manually pushing a PR that builds the suite of release binaries (overwriting the config.yml to build the release).

# CircleCI structure of the binaries

Some quick vocab:

* A **workflow** is a CircleCI concept; it is a DAG of '**jobs**'. ctrl-f 'workflows' on https://github.com/pytorch/pytorch/blob/master/.circleci/config.yml to see the workflows.
* **jobs** are a sequence of '**steps**'
* **steps** are usually just a bash script or a builtin CircleCI command. *All steps run in new environments; environment variables declared in one step DO NOT persist to following steps.*
* CircleCI has a **workspace**, which is essentially a cache between steps of the *same job* in which you can store artifacts between steps.

## How are the workflows structured?

The nightly binaries have 3 workflows. We have one job (actually 3 jobs: build, test, and upload) per binary configuration

1. binarybuilds
    1. every day midnight EST
    2. linux: https://github.com/pytorch/pytorch/blob/master/.circleci/verbatim-sources/linux-binary-build-defaults.yml
    3. macos: https://github.com/pytorch/pytorch/blob/master/.circleci/verbatim-sources/macos-binary-build-defaults.yml
    4. For each binary configuration, e.g. linux_conda_3.7_cpu there is a
        1. binary_linux_conda_3.7_cpu_build
            1. Builds the package. On linux jobs this uses the 'docker executor'.
            2. Persists the package to the workspace
        2. binary_linux_conda_3.7_cpu_test
            1. Loads the package from the workspace
            2. Spins up a docker image (on Linux), mapping the package and code repos into the docker
            3. Runs some smoke tests in the docker
            4. (Actually, for macos this is a step rather than a separate job)
        3. binary_linux_conda_3.7_cpu_upload
            1. Logs in to aws/conda
            2. Uploads the package
2. update_s3_htmls
    1. every day 5am EST
    2. https://github.com/pytorch/pytorch/blob/master/.circleci/verbatim-sources/binary_update_htmls.yml
    3. See below for what these are for and why they're needed
    4. Three jobs that each examine the current contents of aws and the conda repo and update some html files in s3
3. binarysmoketests
    1. every day
    2. https://github.com/pytorch/pytorch/blob/master/.circleci/verbatim-sources/nightly-build-smoke-tests-defaults.yml
    3. For each binary configuration, e.g. linux_conda_3.7_cpu there is a
        1. smoke_linux_conda_3.7_cpu
            1. Downloads the package from the cloud, e.g. using the official pip or conda instructions
            2. Runs the smoke tests

## How are the jobs structured?

The jobs are in https://github.com/pytorch/pytorch/tree/master/.circleci/verbatim-sources . Jobs are made of multiple steps. There are some shared steps used by all the binaries/smokes. Steps of these jobs are all delegated to scripts in https://github.com/pytorch/pytorch/tree/master/.circleci/scripts .

* Linux jobs: https://github.com/pytorch/pytorch/blob/master/.circleci/verbatim-sources/linux-binary-build-defaults.yml
    * binary_linux_build.sh
    * binary_linux_test.sh
    * binary_linux_upload.sh
* MacOS jobs: https://github.com/pytorch/pytorch/blob/master/.circleci/verbatim-sources/macos-binary-build-defaults.yml
    * binary_macos_build.sh
    * binary_macos_test.sh
    * binary_macos_upload.sh
* Update html jobs: https://github.com/pytorch/pytorch/blob/master/.circleci/verbatim-sources/binary_update_htmls.yml
    * These delegate to the pytorch/builder repo
    * https://github.com/pytorch/builder/blob/master/cron/update_s3_htmls.sh
    * https://github.com/pytorch/builder/blob/master/cron/upload_binary_sizes.sh
* Smoke jobs (both linux and macos): https://github.com/pytorch/pytorch/blob/master/.circleci/verbatim-sources/nightly-build-smoke-tests-defaults.yml
    * These delegate to the pytorch/builder repo
    * https://github.com/pytorch/builder/blob/master/run_tests.sh
    * https://github.com/pytorch/builder/blob/master/smoke_test.sh
    * https://github.com/pytorch/builder/blob/master/check_binary.sh
* Common shared code (shared across linux and macos): https://github.com/pytorch/pytorch/blob/master/.circleci/verbatim-sources/nightly-binary-build-defaults.yml
    * binary_checkout.sh - checks out the pytorch/builder repo. Right now this also checks out pytorch/pytorch, but it shouldn't; pytorch/pytorch should just be shared through the workspace. This can handle being run before binary_populate_env.sh
    * binary_populate_env.sh - parses BUILD_ENVIRONMENT into the separate env variables that make up a binary configuration (see the sketch after this list). Also sets lots of default values, the date, the version strings, the location of folders in s3, all sorts of things. This generally has to be run before other steps.
    * binary_install_miniconda.sh - Installs miniconda, cross platform. Also contains a hack for the update_binary_sizes job, which doesn't have the right env variables
    * binary_run_in_docker.sh - Takes a bash script file (the actual test code) from a hardcoded location, spins up a docker image, and runs the script inside the docker image
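
To make the binary_populate_env.sh step concrete, here is roughly what it produces (an illustrative sketch; the real script sets many more variables):

```
# BUILD_ENVIRONMENT describes one binary configuration in a single string,
# e.g. (illustrative value)
#   BUILD_ENVIRONMENT="conda 3.6 cpu devtoolset7"
# binary_populate_env.sh splits it into individual variables such as
export PACKAGE_TYPE=conda
export DESIRED_PYTHON=3.6
export DESIRED_CUDA=cpu
```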

### **Why do the steps all refer to scripts?**

CircleCI creates a final yaml file by inlining every `<<*` segment, so if we were to keep all the code in the config.yml itself then the config size would go over 4 MB and cause infra problems.
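
You can check how close the generated file is to that limit from a checkout:

```
wc -c .circleci/config.yml
```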

### **What is binary_run_in_docker for?**

So, CircleCI has several executor types: macos, machine, and docker are the ones we use. The 'machine' executor gives you two cores on some linux vm. The 'docker' executor gives you considerably more cores (nproc was 32 instead of 2 back when I tried in February). Since the docker executors are faster, we try to run everything that we can in dockers. Thus

* linux build jobs use the docker executor. Running them on the docker executor was at least 2x faster than running them on the machine executor
* linux test jobs use the machine executor and spin up their own docker. Why this nonsense? It's because we run nvidia-docker for our GPU tests; any code that calls into the CUDA runtime needs to be run on nvidia-docker. To run nvidia-docker you need to install some nvidia packages on the host machine and then call docker with the `--runtime=nvidia` argument. CircleCI doesn't support this, so we have to do it ourselves.
    * This is not a mere inconvenience. **This blocks all of our linux tests from using more than 2 cores.** There is nothing we can do about it but wait for a fix on circleci's side. Right now, we only run some smoke tests (some simple imports) on the binaries, but this also affects non-binary test jobs.
* linux upload jobs use the machine executor. The upload jobs are so short that it doesn't really matter what they use
* linux smoke test jobs use the machine executor for the same reason as the linux test jobs

binary_run_in_docker.sh is a way to share the docker start-up code between the binary test jobs and the binary smoke test jobs
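
The shape of that shared start-up code is roughly the following (a simplified sketch with made-up paths, not the real script):

```
# start a container, copy the code and the test script in, and run it
container_id=$(docker run -t -d "$DOCKER_IMAGE")
docker cp /home/circleci/project "$container_id:/pytorch"
docker cp /path/to/test_script.sh "$container_id:/test_script.sh"
docker exec -t "$container_id" bash /test_script.sh
```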

### **Why does binary_checkout also checkout pytorch? Why shouldn't it?**

We want all the nightly binary jobs to run on the exact same git commit, so we wrote our own checkout logic to ensure that the same commit was always picked. Later, CircleCI changed that to use a single pytorch checkout and persist it through the workspace. (They did this because our config file was too big, so they wanted to move a lot of the setup code into scripts; but the scripts needed the code repo to exist in order to be called, so they added a prerequisite step called 'setup' to check out the code and persist the needed scripts to the workspace.) The changes to the binary jobs were not properly tested, so they all broke because the pytorch code they relied on no longer existed. We hotfixed the problem by adding the pytorch checkout back to binary_checkout, so now there are two checkouts of pytorch on the binary jobs. This problem still needs to be fixed, but it takes careful tracing of which code is being called where.

# Azure Pipelines structure of the binaries

TODO: fill in stuff

## How are the workflows structured?

TODO: fill in stuff

## How are the jobs structured?

TODO: fill in stuff

# Code structure of the binaries (circleci agnostic)

## Overview

The code that runs the binaries lives in two places: in the normal [github.com/pytorch/pytorch](http://github.com/pytorch/pytorch), but also in [github.com/pytorch/builder](http://github.com/pytorch/builder), which is a repo that defines how all the binaries are built. The relevant code is

```
# All code needed to set-up environments for build code to run in,
# but only code that is specific to the current CI system
pytorch/pytorch
- .circleci/                # Folder that holds all circleci related stuff
  - config.yml              # GENERATED file that actually controls all circleci behavior
  - verbatim-sources        # Used to generate job/workflow sections in ^
  - scripts/                # Code needed to prepare circleci environments for binary build scripts

- setup.py                  # Builds pytorch. This is wrapped in pytorch/builder
- cmake files               # used in normal building of pytorch

# All code needed to prepare a binary build, given an environment
# with all the right variables/packages/paths.
pytorch/builder

# Given an installed binary and a proper python env, runs some checks
# to make sure the binary was built the proper way. Checks things like
# the library dependencies, symbols present, etc.
- check_binary.sh

# Given an installed binary, runs python tests to make sure everything
# is in order. These should be de-duped. Right now they both run smoke
# tests, but are called from different places. Usually they just call some
# import statements, but they also have overlap with check_binary.sh above
- run_tests.sh
- smoke_test.sh

# Folders that govern how packages are built. See paragraphs below

- conda/
  - build_pytorch.sh        # Entrypoint. Delegates to proper conda build folder
  - switch_cuda_version.sh  # Switches the active CUDA installation in Docker
  - pytorch-nightly/        # Build-folder
- manywheel/
  - build_cpu.sh            # Entrypoint for cpu builds
  - build.sh                # Entrypoint for CUDA builds
  - build_common.sh         # Actual build script that ^^ call into
- wheel/
  - build_wheel.sh          # Entrypoint for wheel builds
- windows/
  - build_pytorch.bat       # Entrypoint for wheel builds on Windows
```

Every type of package has an entrypoint build script that handles all the important logic.

## Conda

Linux, MacOS and Windows use the same code flow for the conda builds.

Conda packages are built with conda-build, see https://conda.io/projects/conda-build/en/latest/resources/commands/conda-build.html

Basically, you pass `conda build` a build folder (pytorch-nightly/ above) that contains a build script and a meta.yaml. The meta.yaml specifies which python environment to build the package in and what dependencies the resulting package should have, and the build script gets called in that env to build the thing.
The tl;dr on conda-build is

1. Creates a brand new conda environment, based off of deps in the meta.yaml
    1. Note that environment variables do not get passed into this build env unless they are specified in the meta.yaml
    2. If the build fails this environment will stick around. You can activate it for much easier debugging. The "General Python" section below explains what exactly a python "environment" is.
2. Calls build.sh in the environment
3. Copies the finished package to a new conda env, also specified by the meta.yaml
4. Runs some simple import tests (if specified in the meta.yaml)
5. Saves the finished package as a tarball
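
So, assuming the folder layout shown earlier, kicking off a conda build looks roughly like this (illustrative):

```
# from a checkout of pytorch/builder
conda build conda/pytorch-nightly
```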

The build.sh we use is essentially a wrapper around `python setup.py build`, but it also manually copies some of our dependent libraries into the resulting tarball and messes with some rpaths.
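
In spirit it does something like the following (a loose sketch with made-up paths, not the real build.sh; `$PREFIX` and `$PY_VER` are variables that conda-build provides to build scripts):

```
# build and install pytorch into the env conda-build created
python setup.py install

# manually copy our dependent shared libraries into the package
cp /path/to/deps/*.so "$PREFIX/lib/python$PY_VER/site-packages/torch/lib/"

# and mess with the rpaths so the copies are found at runtime (linux example)
patchelf --set-rpath '$ORIGIN' "$PREFIX/lib/python$PY_VER/site-packages/torch/lib/"*.so
```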

The entrypoint file `builder/conda/build_conda.sh` is complicated because

* It works for Linux, MacOS and Windows
    * The mac builds used to create their own environments, since they all used to be on the same machine. There's now a lot of extra logic to handle conda envs. This extra machinery could be removed
* It used to handle testing too, which adds more logic messing with python environments. This extra machinery could be removed.

## Manywheels (linux pip and libtorch packages)

Manywheels are pip packages for linux distros. Note that these manywheels are not actually manylinux compliant.

`builder/manywheel/build_cpu.sh` and `builder/manywheel/build.sh` (for CUDA builds) just set different env vars and then call into `builder/manywheel/build_common.sh`

The entrypoint file `builder/manywheel/build_common.sh` is really complicated because

* This used to handle building for several different python versions at the same time. The loops have been removed, but there are still unnecessary folders and movements here and there.
    * The script is never used this way anymore. This extra machinery could be removed.
* This used to handle testing the pip packages too. This is why there's testing code at the end that messes with python installations and stuff
    * The script is never used this way anymore. This extra machinery could be removed.
* This also builds libtorch packages
    * This should really be separate. libtorch packages are c++ only and have no python. They should not share infra with all the python specific stuff in this file.
* There is a lot of messing with rpaths. This is necessary, but could be made much simpler if the above issues were fixed.

## Wheels (MacOS pip and libtorch packages)

The entrypoint file `builder/wheel/build_wheel.sh` is complicated because

* The mac builds used to all run on one machine (we didn't have autoscaling mac machines till circleci). So this script handled siloing itself by setting up and tearing down its build env and siloing itself into its own build directory.
    * The script is never used this way anymore. This extra machinery could be removed.
* This also builds libtorch packages
    * Ditto the comment above. This should definitely be separated out.

Note that the MacOS Python wheels are still built in conda environments. Some of the dependencies present during build also come from conda.

## Windows Wheels (Windows pip and libtorch packages)

The entrypoint file `builder/windows/build_pytorch.bat` is complicated because

* This used to handle building for several different python versions at the same time. This is why there are loops everywhere
    * The script is never used this way anymore. This extra machinery could be removed.
* This used to handle testing the pip packages too. This is why there's testing code at the end that messes with python installations and stuff
    * The script is never used this way anymore. This extra machinery could be removed.
* This also builds libtorch packages
    * This should really be separate. libtorch packages are c++ only and have no python. They should not share infra with all the python specific stuff in this file.

Note that the Windows Python wheels are still built in conda environments. Some of the dependencies present during build also come from conda.

## General notes

### Note on run_tests.sh, smoke_test.sh, and check_binary.sh

* These should all be consolidated
* These must run on all OS types: MacOS, Linux, and Windows
* These all run smoke tests at the moment. They inspect the packages some, maybe run a few import statements. They DO NOT run the python tests nor the cpp tests. The idea is that python tests on master and PR merges will catch all breakages. All these tests have to do is make sure the special binary machinery didn't mess anything up.
* There are separate run_tests.sh and smoke_test.sh because one used to be called by the smoke jobs and one used to be called by the binary test jobs (see the circleci structure section above). This is actually still true, but these could be united into a single script that runs these checks, given an installed pytorch package.

### Note on libtorch

Libtorch packages are built in the wheel build scripts: manywheel/build_*.sh for linux and build_wheel.sh for mac. There are several things wrong with this

* It's confusing. Most of those scripts deal with python specifics.
* The extra conditionals everywhere severely complicate the wheel build scripts
* The process for building libtorch is different from the official instructions (a plain call to cmake, or a call to a script)

### Note on docker images / Dockerfiles

All linux builds occur in docker images. The docker images are

* pytorch/conda-cuda
    * Has ALL CUDA versions installed. The script pytorch/builder/conda/switch_cuda_version.sh sets /usr/local/cuda to a symlink to e.g. /usr/local/cuda-10.0 to enable different CUDA builds
    * Also used for cpu builds
* pytorch/manylinux-cuda90
* pytorch/manylinux-cuda92
* pytorch/manylinux-cuda100
    * Also used for cpu builds

The Dockerfiles are available in pytorch/builder, but there is no circleci job or script to build these docker images, and they cannot be run locally (unless you have the correct local packages/paths). Only Soumith can build them right now.

### General Python

* This is still a good explanation of python installations: https://caffe2.ai/docs/faq.html#why-do-i-get-import-errors-in-python-when-i-try-to-use-caffe2

# How to manually rebuild the binaries

tl;dr: make a PR that looks like https://github.com/pytorch/pytorch/pull/21159

Sometimes we want to push a change to master and then rebuild all of today's binaries after that change. As of May 30, 2019 there isn't a way to manually trigger a workflow in the UI. You can manually re-run a workflow, but it will use the exact same git commits as the first run and will not include any new changes. So we have to make a PR and then force circleci to run the binary workflow instead of the normal tests. The above PR is an example of how to do this; essentially you copy-paste the binarybuilds workflow steps into the default workflow steps. If you need to point the builder repo to a different commit then you'd need to change https://github.com/pytorch/pytorch/blob/master/.circleci/scripts/binary_checkout.sh#L42-L45 to checkout what you want.

## How to test changes to the binaries via .circleci

Writing PRs that test the binaries is annoying, since the default circleci jobs that run on PRs are not the jobs that you want to run. Likely, changes to the binaries will touch something under .circleci/ and require that .circleci/config.yml be regenerated (.circleci/config.yml controls all .circleci behavior, and is generated using `.circleci/regenerate.sh` in python 3.7). But you also need to manually hardcode the binary jobs that you want to test into the .circleci/config.yml workflow, so you should actually make at least two commits: one for your changes and one to temporarily hardcode jobs. See https://github.com/pytorch/pytorch/pull/22928 as an example of how to do this.

```
# Make your changes
touch .circleci/verbatim-sources/nightly-binary-build-defaults.yml

# Regenerate the yaml, has to be in python 3.7
.circleci/regenerate.sh

# Make a commit
git add .circleci *
git commit -m "My real changes"
git push origin my_branch

# Now hardcode the jobs that you want in the .circleci/config.yml workflows section
# Also eliminate ensure-consistency and should_run_job checks
# e.g. https://github.com/pytorch/pytorch/commit/2b3344bfed8772fe86e5210cc4ee915dee42b32d

# Make a commit you won't keep
git add .circleci
git commit -m "[DO NOT LAND] testing binaries for above changes"
git push origin my_branch

# Now you need to make some changes to the first commit.
git rebase -i HEAD~2 # mark the first commit as 'edit'

# Make the changes
touch .circleci/verbatim-sources/nightly-binary-build-defaults.yml
.circleci/regenerate.sh

# Amend the commit and continue the rebase
git add .circleci
git commit --amend
git rebase --continue

# Update the PR; you need to force-push since the commits are different now
git push origin my_branch --force
```

The advantage of this flow is that you can make new changes to the base commit and regenerate the .circleci without having to re-write which binary jobs you want to test on. The downside is that all updates will be force pushes.

## How to build a binary locally

### Linux

You can easily build Linux binaries locally using docker.

```
# Run the docker
# Use the correct docker image; pytorch/conda-cuda is used here as an example
#
# -v path/to/foo:path/to/bar makes path/to/foo on your local machine (the
# machine that you're running the command on) accessible to the docker
# container at path/to/bar. So if you then run `touch path/to/bar/baz`
# in the docker container then you will see path/to/foo/baz on your local
# machine. You could also clone the pytorch and builder repos in the docker.
#
# If you're building a CUDA binary then use `nvidia-docker run` instead, see below.
#
# If you know how, add ccache as a volume too and speed everything up
docker run \
    -v your/pytorch/repo:/pytorch \
    -v your/builder/repo:/builder \
    -v where/you/want/packages/to/appear:/final_pkgs \
    -it pytorch/conda-cuda /bin/bash

# Export whatever variables are important to you. All variables that you'd
# possibly need are in .circleci/scripts/binary_populate_env.sh
# You should probably always export at least these 3 variables
export PACKAGE_TYPE=conda
export DESIRED_PYTHON=3.6
export DESIRED_CUDA=cpu

# Call the entrypoint
# `|& tee foo.log` just copies all stdout and stderr output to foo.log
# The builds generate lots of output, so you probably need this when
# building locally.
/builder/conda/build_pytorch.sh |& tee build_output.log
```

**Building CUDA binaries on docker**

To build a CUDA binary you need to use `nvidia-docker run` instead of just `docker run` (or you can manually pass `--runtime=nvidia`). This adds some needed libraries and things to build CUDA stuff.
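
That is, these two invocations are equivalent ways to get a CUDA-capable container:

```
# with the nvidia-docker wrapper
nvidia-docker run -it pytorch/conda-cuda /bin/bash

# or with plain docker, passing the runtime explicitly
docker run --runtime=nvidia -it pytorch/conda-cuda /bin/bash
```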

You can build CUDA binaries on CPU-only machines, but you can only run CUDA binaries on CUDA machines. This means that you can build a CUDA binary in a docker on your laptop if you so choose (though it's going to take a long time).

For Facebook employees, ask about beefy machines that have docker support and use those instead of your laptop; it will be 5x as fast.

### MacOS

There's no easy way to generate reproducible hermetic MacOS environments. If you have a Mac laptop then you can try emulating the .circleci environments as much as possible, but you probably have packages in /usr/local/, possibly installed by brew, that will probably interfere with the build. If you're trying to repro an error on a Mac build in .circleci and you can't seem to repro locally, then my best advice is actually to iterate on .circleci :/

But if you want to try, then I'd recommend

```
# Create a new terminal
# Clear your LD_LIBRARY_PATH and trim as much out of your PATH as you
# know how to do

# Install a new miniconda
# First remove any other python or conda installation from your PATH
# Always install miniconda 3, even if building for Python <3
new_conda="$HOME/my_new_conda"
conda_sh="$new_conda/install_miniconda.sh"
curl -o "$conda_sh" https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
chmod +x "$conda_sh"
"$conda_sh" -b -p "$new_conda"
rm -f "$conda_sh"
export PATH="$new_conda/bin:$PATH"

# Create a clean python env
# All MacOS builds use conda to manage the python env and dependencies
# that are built with, even the pip packages
# (the version here should match DESIRED_PYTHON below)
conda create -yn binary python=3.6
conda activate binary

# Export whatever variables are important to you. All variables that you'd
# possibly need are in .circleci/scripts/binary_populate_env.sh
# You should probably always export at least these 3 variables
export PACKAGE_TYPE=conda
export DESIRED_PYTHON=3.6
export DESIRED_CUDA=cpu

# Call the entrypoint you want
path/to/builder/wheel/build_wheel.sh
```

N.B. installing a brand new miniconda is important. This has to do with how conda installations work. See the "General Python" section above, but the tl;dr is

1. You make the 'conda' command accessible by prepending `path/to/conda_root/bin` to your PATH.
2. You make a new env and activate it, which then also gets prepended to your PATH. Now you have `path/to/conda_root/envs/new_env/bin:path/to/conda_root/bin:$PATH`
3. Now say you (or some code that you ran) call a python executable `foo`
    1. if you installed `foo` in `new_env`, then `path/to/conda_root/envs/new_env/bin/foo` will get called, as expected.
    2. But if you forgot to install `foo` in `new_env` but happened to previously install it in your root conda env (called 'base'), then unix/linux will still find `path/to/conda_root/bin/foo`. This is dangerous, since `foo` can be a different version than you want; `foo` can even be for an incompatible python version!

Newer conda versions and proper python hygiene can prevent this, but just install a new miniconda to be safe.
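
You can check what will actually run with `which` (illustrative; `foo` stands in for any executable, as above):

```
conda activate new_env
which python   # path/to/conda_root/envs/new_env/bin/python, as expected
which foo      # if foo isn't installed in new_env, this can resolve to
               # path/to/conda_root/bin/foo -- the dangerous case above
```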

### Windows

TODO: fill in

cimodel/data/binary_build_data.py

@@ -1,171 +0,0 @@
"""
|
||||
This module models the tree of configuration variants
|
||||
for "smoketest" builds.
|
||||
|
||||
Each subclass of ConfigNode represents a layer of the configuration hierarchy.
|
||||
These tree nodes encapsulate the logic for whether a branch of the hierarchy
|
||||
should be "pruned".
|
||||
|
||||
In addition to generating config.yml content, the tree is also traversed
|
||||
to produce a visualization of config dimensions.
|
||||
"""
|
||||
|
||||
from collections import OrderedDict
|
||||
|
||||
from cimodel.lib.conf_tree import ConfigNode
|
||||
import cimodel.data.dimensions as dimensions
|
||||
|
||||
|
||||
LINKING_DIMENSIONS = [
|
||||
"shared",
|
||||
"static",
|
||||
]
|
||||
|
||||
|
||||
DEPS_INCLUSION_DIMENSIONS = [
|
||||
"with-deps",
|
||||
"without-deps",
|
||||
]
|
||||
|
||||
|
||||
def get_processor_arch_name(cuda_version):
|
||||
return "cpu" if not cuda_version else "cu" + cuda_version
|
||||
|
||||
|
||||
LINUX_PACKAGE_VARIANTS = OrderedDict(
|
||||
manywheel=[
|
||||
"2.7m",
|
||||
"2.7mu",
|
||||
"3.5m",
|
||||
"3.6m",
|
||||
"3.7m",
|
||||
],
|
||||
conda=dimensions.STANDARD_PYTHON_VERSIONS,
|
||||
libtorch=[
|
||||
"2.7m",
|
||||
],
|
||||
)
|
||||
|
||||
CONFIG_TREE_DATA = OrderedDict(
|
||||
linux=(dimensions.CUDA_VERSIONS, LINUX_PACKAGE_VARIANTS),
|
||||
macos=([None], OrderedDict(
|
||||
wheel=dimensions.STANDARD_PYTHON_VERSIONS,
|
||||
conda=dimensions.STANDARD_PYTHON_VERSIONS,
|
||||
libtorch=[
|
||||
"2.7",
|
||||
],
|
||||
)),
|
||||
)
|
||||
|
||||
# GCC config variants:
|
||||
#
|
||||
# All the nightlies (except libtorch with new gcc ABI) are built with devtoolset7,
|
||||
# which can only build with old gcc ABI. It is better than devtoolset3
|
||||
# because it understands avx512, which is needed for good fbgemm performance.
|
||||
#
|
||||
# Libtorch with new gcc ABI is built with gcc 5.4 on Ubuntu 16.04.
|
||||
LINUX_GCC_CONFIG_VARIANTS = OrderedDict(
|
||||
manywheel=['devtoolset7'],
|
||||
conda=['devtoolset7'],
|
||||
libtorch=[
|
||||
"devtoolset7",
|
||||
"gcc5.4_cxx11-abi",
|
||||
],
|
||||
)
|
||||
|
||||
|
||||
class TopLevelNode(ConfigNode):
|
||||
def __init__(self, node_name, config_tree_data, smoke):
|
||||
super(TopLevelNode, self).__init__(None, node_name)
|
||||
|
||||
self.config_tree_data = config_tree_data
|
||||
self.props["smoke"] = smoke
|
||||
|
||||
def get_children(self):
|
||||
return [OSConfigNode(self, x, c, p) for (x, (c, p)) in self.config_tree_data.items()]
|
||||
|
||||
|
||||
class OSConfigNode(ConfigNode):
|
||||
def __init__(self, parent, os_name, cuda_versions, py_tree):
|
||||
super(OSConfigNode, self).__init__(parent, os_name)
|
||||
|
||||
self.py_tree = py_tree
|
||||
self.props["os_name"] = os_name
|
||||
self.props["cuda_versions"] = cuda_versions
|
||||
|
||||
def get_children(self):
|
||||
return [PackageFormatConfigNode(self, k, v) for k, v in self.py_tree.items()]
|
||||
|
||||
|
||||
class PackageFormatConfigNode(ConfigNode):
|
||||
def __init__(self, parent, package_format, python_versions):
|
||||
super(PackageFormatConfigNode, self).__init__(parent, package_format)
|
||||
|
||||
self.props["python_versions"] = python_versions
|
||||
self.props["package_format"] = package_format
|
||||
|
||||
def get_children(self):
|
||||
if self.find_prop("os_name") == "linux":
|
||||
return [LinuxGccConfigNode(self, v) for v in LINUX_GCC_CONFIG_VARIANTS[self.find_prop("package_format")]]
|
||||
else:
|
||||
return [ArchConfigNode(self, v) for v in self.find_prop("cuda_versions")]
|
||||
|
||||
|
||||
class LinuxGccConfigNode(ConfigNode):
|
||||
def __init__(self, parent, gcc_config_variant):
|
||||
super(LinuxGccConfigNode, self).__init__(parent, "GCC_CONFIG_VARIANT=" + str(gcc_config_variant))
|
||||
|
||||
self.props["gcc_config_variant"] = gcc_config_variant
|
||||
|
||||
def get_children(self):
|
||||
cuda_versions = self.find_prop("cuda_versions")
|
||||
|
||||
# XXX devtoolset7 on CUDA 9.0 is temporarily disabled
|
||||
# see https://github.com/pytorch/pytorch/issues/20066
|
||||
if self.find_prop("gcc_config_variant") == 'devtoolset7':
|
||||
cuda_versions = filter(lambda x: x != "90", cuda_versions)
|
||||
|
||||
return [ArchConfigNode(self, v) for v in cuda_versions]
|
||||
|
||||
|
||||
class ArchConfigNode(ConfigNode):
|
||||
def __init__(self, parent, cu):
|
||||
super(ArchConfigNode, self).__init__(parent, get_processor_arch_name(cu))
|
||||
|
||||
self.props["cu"] = cu
|
||||
|
||||
def get_children(self):
|
||||
return [PyVersionConfigNode(self, v) for v in self.find_prop("python_versions")]
|
||||
|
||||
|
||||
class PyVersionConfigNode(ConfigNode):
|
||||
def __init__(self, parent, pyver):
|
||||
super(PyVersionConfigNode, self).__init__(parent, pyver)
|
||||
|
||||
self.props["pyver"] = pyver
|
||||
|
||||
def get_children(self):
|
||||
|
||||
smoke = self.find_prop("smoke")
|
||||
package_format = self.find_prop("package_format")
|
||||
os_name = self.find_prop("os_name")
|
||||
|
||||
has_libtorch_variants = package_format == "libtorch" and os_name == "linux"
|
||||
linking_variants = LINKING_DIMENSIONS if has_libtorch_variants else []
|
||||
|
||||
return [LinkingVariantConfigNode(self, v) for v in linking_variants]
|
||||
|
||||
|
||||
class LinkingVariantConfigNode(ConfigNode):
|
||||
def __init__(self, parent, linking_variant):
|
||||
super(LinkingVariantConfigNode, self).__init__(parent, linking_variant)
|
||||
|
||||
def get_children(self):
|
||||
return [DependencyInclusionConfigNode(self, v) for v in DEPS_INCLUSION_DIMENSIONS]
|
||||
|
||||
|
||||
class DependencyInclusionConfigNode(ConfigNode):
|
||||
def __init__(self, parent, deps_variant):
|
||||
super(DependencyInclusionConfigNode, self).__init__(parent, deps_variant)
|
||||
|
||||
self.props["libtorch_variant"] = "-".join([self.parent.get_label(), self.get_label()])
|
||||

cimodel/data/binary_build_definitions.py

@@ -1,169 +0,0 @@
from collections import OrderedDict

import cimodel.data.binary_build_data as binary_build_data
import cimodel.lib.conf_tree as conf_tree
import cimodel.lib.miniutils as miniutils


class Conf(object):
    def __init__(self, os, cuda_version, pydistro, parms, smoke, libtorch_variant, gcc_config_variant):

        self.os = os
        self.cuda_version = cuda_version
        self.pydistro = pydistro
        self.parms = parms
        self.smoke = smoke
        self.libtorch_variant = libtorch_variant
        self.gcc_config_variant = gcc_config_variant

    def gen_build_env_parms(self):
        elems = [self.pydistro] + self.parms + [binary_build_data.get_processor_arch_name(self.cuda_version)]
        if self.gcc_config_variant is not None:
            elems.append(str(self.gcc_config_variant))
        return elems

    def gen_docker_image(self):
        if self.gcc_config_variant == 'gcc5.4_cxx11-abi':
            return miniutils.quote("pytorch/conda-cuda-cxx11-ubuntu1604:latest")

        docker_word_substitution = {
            "manywheel": "manylinux",
            "libtorch": "manylinux",
        }

        docker_distro_prefix = miniutils.override(self.pydistro, docker_word_substitution)

        # The cpu nightlies are built on the pytorch/manylinux-cuda100 docker image
        alt_docker_suffix = self.cuda_version or "100"
        docker_distro_suffix = "" if self.pydistro == "conda" else alt_docker_suffix
        if self.cuda_version == "101":
            return "soumith/manylinux-cuda101@sha256:5d62be90d5b7777121180e6137c7eed73d37aaf9f669c51b783611e37e0b4916"
        return miniutils.quote("pytorch/" + docker_distro_prefix + "-cuda" + docker_distro_suffix)

    def get_name_prefix(self):
        return "smoke" if self.smoke else "binary"

    def gen_build_name(self, build_or_test, nightly):

        parts = [self.get_name_prefix(), self.os] + self.gen_build_env_parms()

        if nightly:
            parts.append("nightly")

        if self.libtorch_variant:
            parts.append(self.libtorch_variant)

        if not self.smoke:
            parts.append(build_or_test)

        joined = "_".join(parts)
        return joined.replace(".", "_")

    def gen_workflow_job(self, phase, upload_phase_dependency=None, nightly=False):
        job_def = OrderedDict()
        job_def["name"] = self.gen_build_name(phase, nightly)
        job_def["build_environment"] = miniutils.quote(" ".join(self.gen_build_env_parms()))
        job_def["requires"] = ["setup"]
        if self.smoke:
            job_def["requires"].append("update_s3_htmls_for_nightlies")
            job_def["requires"].append("update_s3_htmls_for_nightlies_devtoolset7")
            job_def["filters"] = {"branches": {"only": "postnightly"}}
        else:
            job_def["filters"] = {"branches": {"only": "nightly"}}
        if self.libtorch_variant:
            job_def["libtorch_variant"] = miniutils.quote(self.libtorch_variant)
        if phase == "test":
            if not self.smoke:
                job_def["requires"].append(self.gen_build_name("build", nightly))
            if not (self.smoke and self.os == "macos"):
                job_def["docker_image"] = self.gen_docker_image()

            if self.cuda_version:
                job_def["use_cuda_docker_runtime"] = miniutils.quote("1")
        else:
            if self.os == "linux" and phase != "upload":
                job_def["docker_image"] = self.gen_docker_image()

        if phase == "test":
            if self.cuda_version:
                job_def["resource_class"] = "gpu.medium"
        if phase == "upload":
            job_def["context"] = "org-member"
            job_def["requires"] = ["setup", self.gen_build_name(upload_phase_dependency, nightly)]

        os_name = miniutils.override(self.os, {"macos": "mac"})
        job_name = "_".join([self.get_name_prefix(), os_name, phase])
        return {job_name : job_def}


def get_root(smoke, name):

    return binary_build_data.TopLevelNode(
        name,
        binary_build_data.CONFIG_TREE_DATA,
        smoke,
    )


def gen_build_env_list(smoke):

    root = get_root(smoke, "N/A")
    config_list = conf_tree.dfs(root)

    newlist = []
    for c in config_list:
        conf = Conf(
            c.find_prop("os_name"),
            c.find_prop("cu"),
            c.find_prop("package_format"),
            [c.find_prop("pyver")],
            c.find_prop("smoke"),
            c.find_prop("libtorch_variant"),
            c.find_prop("gcc_config_variant"),
        )
        newlist.append(conf)

    return newlist


# NB: despite its name, this predicate currently only checks the OS; it does
# not exclude libtorch configs.
def predicate_exclude_nonlinux_and_libtorch(config):
    return config.os == "linux"


def get_nightly_uploads():
    configs = gen_build_env_list(False)
    mylist = []
    for conf in configs:
        phase_dependency = "test" if predicate_exclude_nonlinux_and_libtorch(conf) else "build"
        mylist.append(conf.gen_workflow_job("upload", phase_dependency, nightly=True))

    return mylist


def get_nightly_tests():

    configs = gen_build_env_list(False)
    filtered_configs = filter(predicate_exclude_nonlinux_and_libtorch, configs)

    tests = []
    for conf_options in filtered_configs:
        yaml_item = conf_options.gen_workflow_job("test", nightly=True)
        tests.append(yaml_item)

    return tests


def get_jobs(toplevel_key, smoke):
    jobs_list = []
    configs = gen_build_env_list(smoke)
    phase = "build" if toplevel_key == "binarybuilds" else "test"
    for build_config in configs:
        jobs_list.append(build_config.gen_workflow_job(phase, nightly=True))

    return jobs_list


def get_binary_build_jobs():
    return get_jobs("binarybuilds", False)


def get_binary_smoke_test_jobs():
    return get_jobs("binarysmoketests", True)

cimodel/data/caffe2_build_data.py

@@ -1,81 +0,0 @@
from cimodel.lib.conf_tree import ConfigNode, XImportant
from cimodel.lib.conf_tree import Ver


CONFIG_TREE_DATA = [
    (Ver("ubuntu", "16.04"), [
        ([Ver("gcc", "5")], [XImportant("onnx_py2")]),
        ([Ver("clang", "7")], [XImportant("onnx_py3.6")]),
    ]),
]


class TreeConfigNode(ConfigNode):
    def __init__(self, parent, node_name, subtree):
        super(TreeConfigNode, self).__init__(parent, self.modify_label(node_name))
        self.subtree = subtree
        self.init2(node_name)

    # noinspection PyMethodMayBeStatic
    def modify_label(self, label):
        return str(label)

    def init2(self, node_name):
        pass

    def get_children(self):
        return [self.child_constructor()(self, k, v) for (k, v) in self.subtree]

    def is_build_only(self):
        if str(self.find_prop("language_version")) == "onnx_py3.6":
            return False
        return set(str(c) for c in self.find_prop("compiler_version")).intersection({
            "clang3.8",
            "clang3.9",
            "clang7",
            "android",
        }) or self.find_prop("distro_version").name == "macos"


class TopLevelNode(TreeConfigNode):
    def __init__(self, node_name, subtree):
        super(TopLevelNode, self).__init__(None, node_name, subtree)

    # noinspection PyMethodMayBeStatic
    def child_constructor(self):
        return DistroConfigNode


class DistroConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["distro_version"] = node_name

    # noinspection PyMethodMayBeStatic
    def child_constructor(self):
        return CompilerConfigNode


class CompilerConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["compiler_version"] = node_name

    # noinspection PyMethodMayBeStatic
    def child_constructor(self):
        return LanguageConfigNode


class LanguageConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["language_version"] = node_name
        self.props["build_only"] = self.is_build_only()

    def child_constructor(self):
        return ImportantConfigNode


class ImportantConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["important"] = True

    def get_children(self):
        return []

cimodel/data/caffe2_build_definitions.py

@@ -1,161 +0,0 @@
from collections import OrderedDict

import cimodel.data.dimensions as dimensions
import cimodel.lib.conf_tree as conf_tree
from cimodel.lib.conf_tree import Ver
import cimodel.lib.miniutils as miniutils
from cimodel.data.caffe2_build_data import CONFIG_TREE_DATA, TopLevelNode


from dataclasses import dataclass
from typing import List


DOCKER_IMAGE_PATH_BASE = "308535385114.dkr.ecr.us-east-1.amazonaws.com/caffe2/"

DOCKER_IMAGE_VERSION = 345


@dataclass
class Conf:
    language: str
    distro: Ver
    # There could be multiple compiler versions configured (e.g. nvcc
    # for gpu files and host compiler (gcc/clang) for cpu files)
    compilers: List[Ver]
    build_only: bool
    is_important: bool

    @property
    def compiler_names(self):
        return [c.name for c in self.compilers]

    # TODO: Eventually we can probably just remove the cudnn7 everywhere.
    def get_cudnn_insertion(self):

        omit = self.language == "onnx_py2" \
            or self.language == "onnx_py3.6" \
            or set(self.compiler_names).intersection({"android", "mkl", "clang"}) \
            or str(self.distro) in ["ubuntu14.04", "macos10.13"]

        return [] if omit else ["cudnn7"]

    def get_build_name_root_parts(self):
        return [
            "caffe2",
            self.language,
        ] + self.get_build_name_middle_parts()

    def get_build_name_middle_parts(self):
        return [str(c) for c in self.compilers] + self.get_cudnn_insertion() + [str(self.distro)]

    def construct_phase_name(self, phase):
        root_parts = self.get_build_name_root_parts()
        return "_".join(root_parts + [phase]).replace(".", "_")

    def get_platform(self):
        platform = self.distro.name
        if self.distro.name != "macos":
            platform = "linux"
        return platform

    def gen_docker_image(self):

        lang_substitutions = {
            "onnx_py2": "py2",
            "onnx_py3.6": "py3.6",
            "cmake": "py2",
        }

        lang = miniutils.override(self.language, lang_substitutions)
        parts = [lang] + self.get_build_name_middle_parts()
        return miniutils.quote(DOCKER_IMAGE_PATH_BASE + "-".join(parts) + ":" + str(DOCKER_IMAGE_VERSION))

    def gen_workflow_params(self, phase):
        parameters = OrderedDict()
        lang_substitutions = {
            "onnx_py2": "onnx-py2",
            "onnx_py3.6": "onnx-py3.6",
        }

        lang = miniutils.override(self.language, lang_substitutions)

        parts = [
            "caffe2",
            lang,
        ] + self.get_build_name_middle_parts() + [phase]

        build_env_name = "-".join(parts)
        parameters["build_environment"] = miniutils.quote(build_env_name)
        if "ios" in self.compiler_names:
            parameters["build_ios"] = miniutils.quote("1")
        if phase == "test":
            # TODO cuda should not be considered a compiler
            if "cuda" in self.compiler_names:
                parameters["use_cuda_docker_runtime"] = miniutils.quote("1")

        if self.distro.name != "macos":
            parameters["docker_image"] = self.gen_docker_image()
            if self.build_only:
                parameters["build_only"] = miniutils.quote("1")
        if phase == "test":
            resource_class = "large" if "cuda" not in self.compiler_names else "gpu.medium"
            parameters["resource_class"] = resource_class

        return parameters

    def gen_workflow_job(self, phase):
        job_def = OrderedDict()
        job_def["name"] = self.construct_phase_name(phase)
        job_def["requires"] = ["setup"]

        if phase == "test":
            job_def["requires"].append(self.construct_phase_name("build"))
            job_name = "caffe2_" + self.get_platform() + "_test"
        else:
            job_name = "caffe2_" + self.get_platform() + "_build"

        if not self.is_important:
            job_def["filters"] = {"branches": {"only": ["master", r"/ci-all\/.*/"]}}
        job_def.update(self.gen_workflow_params(phase))
        return {job_name : job_def}


def get_root():
    return TopLevelNode("Caffe2 Builds", CONFIG_TREE_DATA)


def instantiate_configs():

    config_list = []

    root = get_root()
    found_configs = conf_tree.dfs(root)
    for fc in found_configs:
        c = Conf(
            language=fc.find_prop("language_version"),
            distro=fc.find_prop("distro_version"),
            compilers=fc.find_prop("compiler_version"),
            build_only=fc.find_prop("build_only"),
            is_important=fc.find_prop("important"),
        )

        config_list.append(c)

    return config_list


def get_workflow_jobs():

    configs = instantiate_configs()

    x = []
    for conf_options in configs:

        phases = ["build"]
        if not conf_options.build_only:
            phases = dimensions.PHASES

        for phase in phases:
            x.append(conf_options.gen_workflow_job(phase))

    return x

cimodel/data/dimensions.py

@@ -1,15 +0,0 @@
PHASES = ["build", "test"]

CUDA_VERSIONS = [
    None,  # cpu build
    "92",
    "100",
    "101",
]

STANDARD_PYTHON_VERSIONS = [
    "2.7",
    "3.5",
    "3.6",
    "3.7",
]

cimodel/data/pytorch_build_data.py

@@ -1,215 +0,0 @@
from cimodel.lib.conf_tree import ConfigNode, X, XImportant


CONFIG_TREE_DATA = [
    ("xenial", [
        (None, [
            XImportant("2.7.9"),
            X("2.7"),
            XImportant("3.5"),  # Not run on all PRs, but should be included on [test all]
            X("nightly"),
        ]),
        ("gcc", [
            ("5.4", [  # All this subtree rebases to master and then builds
                XImportant("3.6"),
                ("3.6", [
                    ("parallel_tbb", [XImportant(True)]),
                    ("parallel_native", [XImportant(True)]),
                ]),
            ]),
            # TODO: bring back libtorch test
            ("7", [X("3.6")]),
        ]),
        ("clang", [
            ("5", [
                XImportant("3.6"),  # This is actually the ASAN build
            ]),
            # ("7", [
            #     ("3.6", [
            #         ("xla", [XImportant(True)]),
            #     ]),
            # ]),
        ]),
        ("cuda", [
            ("9", [
                # Note there are magic strings here
                # https://github.com/pytorch/pytorch/blob/master/.jenkins/pytorch/build.sh#L21
                # and
                # https://github.com/pytorch/pytorch/blob/master/.jenkins/pytorch/build.sh#L143
                # and
                # https://github.com/pytorch/pytorch/blob/master/.jenkins/pytorch/build.sh#L153
                # (from https://github.com/pytorch/pytorch/pull/17323#discussion_r259453144)
                XImportant("3.6"),
                ("3.6", [
                    ("libtorch", [XImportant(True)])
                ]),
            ]),
            ("9.2", [X("3.6")]),
            ("10", [X("3.6")]),
            ("10.1", [X("3.6")]),
        ]),
        ("android", [
            ("r19c", [
                ("3.6", [
                    ("android_abi", [XImportant("x86_32")]),
                    ("android_abi", [X("x86_64")]),
                    ("android_abi", [X("arm-v7a")]),
                    ("android_abi", [X("arm-v8a")]),
                ])
            ]),
        ]),
    ]),
]


def get_major_pyver(dotted_version):
    parts = dotted_version.split(".")
    return "py" + parts[0]


class TreeConfigNode(ConfigNode):
    def __init__(self, parent, node_name, subtree):
        super(TreeConfigNode, self).__init__(parent, self.modify_label(node_name))
        self.subtree = subtree
        self.init2(node_name)

    def modify_label(self, label):
        return label

    def init2(self, node_name):
        pass

    def get_children(self):
        return [self.child_constructor()(self, k, v) for (k, v) in self.subtree]


class TopLevelNode(TreeConfigNode):
    def __init__(self, node_name, subtree):
        super(TopLevelNode, self).__init__(None, node_name, subtree)

    # noinspection PyMethodMayBeStatic
    def child_constructor(self):
        return DistroConfigNode


class DistroConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["distro_name"] = node_name

    def child_constructor(self):
        distro = self.find_prop("distro_name")

        next_nodes = {
            "xenial": XenialCompilerConfigNode,
        }
        return next_nodes[distro]


class PyVerConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["pyver"] = node_name
        self.props["abbreviated_pyver"] = get_major_pyver(node_name)

    # noinspection PyMethodMayBeStatic
    def child_constructor(self):
        return ExperimentalFeatureConfigNode


class ExperimentalFeatureConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["experimental_feature"] = node_name

    def child_constructor(self):
        experimental_feature = self.find_prop("experimental_feature")

        next_nodes = {
            "xla": XlaConfigNode,
            "parallel_tbb": ParallelTBBConfigNode,
            "parallel_native": ParallelNativeConfigNode,
            "libtorch": LibTorchConfigNode,
            "important": ImportantConfigNode,
            "android_abi": AndroidAbiConfigNode,
        }
        return next_nodes[experimental_feature]


class XlaConfigNode(TreeConfigNode):
    def modify_label(self, label):
        return "XLA=" + str(label)

    def init2(self, node_name):
        self.props["is_xla"] = node_name

    def child_constructor(self):
        return ImportantConfigNode


class ParallelTBBConfigNode(TreeConfigNode):
    def modify_label(self, label):
        return "PARALLELTBB=" + str(label)

    def init2(self, node_name):
        self.props["parallel_backend"] = "paralleltbb"

    def child_constructor(self):
        return ImportantConfigNode


class ParallelNativeConfigNode(TreeConfigNode):
    def modify_label(self, label):
        return "PARALLELNATIVE=" + str(label)

    def init2(self, node_name):
        self.props["parallel_backend"] = "parallelnative"

    def child_constructor(self):
        return ImportantConfigNode


class LibTorchConfigNode(TreeConfigNode):
    def modify_label(self, label):
        return "BUILD_TEST_LIBTORCH=" + str(label)

    def init2(self, node_name):
        self.props["is_libtorch"] = node_name

    def child_constructor(self):
        return ImportantConfigNode


class AndroidAbiConfigNode(TreeConfigNode):

    def init2(self, node_name):
        self.props["android_abi"] = node_name

    def child_constructor(self):
        return ImportantConfigNode


class ImportantConfigNode(TreeConfigNode):
    def modify_label(self, label):
        return "IMPORTANT=" + str(label)

    def init2(self, node_name):
        self.props["is_important"] = node_name

    def get_children(self):
        return []


class XenialCompilerConfigNode(TreeConfigNode):

    def modify_label(self, label):
        return label or "<unspecified>"

    def init2(self, node_name):
        self.props["compiler_name"] = node_name

    # noinspection PyMethodMayBeStatic
    def child_constructor(self):

        return XenialCompilerVersionConfigNode if self.props["compiler_name"] else PyVerConfigNode


class XenialCompilerVersionConfigNode(TreeConfigNode):
    def init2(self, node_name):
        self.props["compiler_version"] = node_name

    # noinspection PyMethodMayBeStatic
    def child_constructor(self):
        return PyVerConfigNode
@ -1,295 +0,0 @@
from collections import OrderedDict

from cimodel.data.pytorch_build_data import TopLevelNode, CONFIG_TREE_DATA
import cimodel.data.dimensions as dimensions
import cimodel.lib.conf_tree as conf_tree
import cimodel.lib.miniutils as miniutils

from dataclasses import dataclass, field
from typing import List, Optional


DOCKER_IMAGE_PATH_BASE = "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/"

# ARE YOU EDITING THIS NUMBER? MAKE SURE YOU READ THE GUIDANCE AT THE
# TOP OF .circleci/config.yml
DOCKER_IMAGE_VERSION = 405


@dataclass
class Conf:
    distro: str
    parms: List[str]
    parms_list_ignored_for_docker_image: Optional[List[str]] = None
    pyver: Optional[str] = None
    cuda_version: Optional[str] = None
    # TODO expand this to cover all the USE_* that we want to test for
    #  tensorrt, leveldb, lmdb, redis, opencv, mkldnn, ideep, etc.
    # (from https://github.com/pytorch/pytorch/pull/17323#discussion_r259453608)
    is_xla: bool = False
    restrict_phases: Optional[List[str]] = None
    gpu_resource: Optional[str] = None
    dependent_tests: List = field(default_factory=list)
    parent_build: Optional['Conf'] = None
    is_libtorch: bool = False
    is_important: bool = False
    parallel_backend: Optional[str] = None

    # TODO: Eliminate the special casing for docker paths
    # In the short term, we *will* need to support special casing as docker images are merged for caffe2 and pytorch
    def get_parms(self, for_docker):
        leading = []
        # We just don't run non-important jobs on pull requests;
        # previously we also named them in a way to make it obvious
        # if self.is_important and not for_docker:
        #    leading.append("AAA")
        leading.append("pytorch")
        if self.is_xla and not for_docker:
            leading.append("xla")
        if self.is_libtorch and not for_docker:
            leading.append("libtorch")
        if self.parallel_backend is not None and not for_docker:
            leading.append(self.parallel_backend)

        cuda_parms = []
        if self.cuda_version:
            cuda_parms.extend(["cuda" + self.cuda_version, "cudnn7"])
        result = leading + ["linux", self.distro] + cuda_parms + self.parms
        if not for_docker and self.parms_list_ignored_for_docker_image is not None:
            result = result + self.parms_list_ignored_for_docker_image
        return result

    def gen_docker_image_path(self):

        parms_source = self.parent_build or self
        base_build_env_name = "-".join(parms_source.get_parms(True))

        return miniutils.quote(DOCKER_IMAGE_PATH_BASE + base_build_env_name + ":" + str(DOCKER_IMAGE_VERSION))

    def get_build_job_name_pieces(self, build_or_test):
        return self.get_parms(False) + [build_or_test]

    def gen_build_name(self, build_or_test):
        return ("_".join(map(str, self.get_build_job_name_pieces(build_or_test)))).replace(".", "_").replace("-", "_")

    def get_dependents(self):
        return self.dependent_tests or []

    def gen_workflow_params(self, phase):
        parameters = OrderedDict()
        build_job_name_pieces = self.get_build_job_name_pieces(phase)

        build_env_name = "-".join(map(str, build_job_name_pieces))
        parameters["build_environment"] = miniutils.quote(build_env_name)
        parameters["docker_image"] = self.gen_docker_image_path()
        if phase == "test" and self.gpu_resource:
            parameters["use_cuda_docker_runtime"] = miniutils.quote("1")
        if phase == "test":
            resource_class = "large"
            if self.gpu_resource:
                resource_class = "gpu." + self.gpu_resource
            parameters["resource_class"] = resource_class
        return parameters

    def gen_workflow_job(self, phase):
        # All jobs require the setup job
        job_def = OrderedDict()
        job_def["name"] = self.gen_build_name(phase)
        job_def["requires"] = ["setup"]

        if phase == "test":

            # TODO When merging the caffe2 and pytorch jobs, it might be convenient for a while to make a
            #  caffe2 test job dependent on a pytorch build job. This way we could quickly dedup the repeated
            #  build of pytorch in the caffe2 build job, and just run the caffe2 tests off of a completed
            #  pytorch build job (from https://github.com/pytorch/pytorch/pull/17323#discussion_r259452641)

            dependency_build = self.parent_build or self
            job_def["requires"].append(dependency_build.gen_build_name("build"))
            job_name = "pytorch_linux_test"
        else:
            job_name = "pytorch_linux_build"

        if not self.is_important:
            # If you update this, update
            # caffe2_build_definitions.py too
            job_def["filters"] = {"branches": {"only": ["master", r"/ci-all\/.*/"]}}
        job_def.update(self.gen_workflow_params(phase))

        return {job_name: job_def}


# TODO This is a hack to special case some configs just for the workflow list
class HiddenConf(object):
    def __init__(self, name, parent_build=None):
        self.name = name
        self.parent_build = parent_build

    def gen_workflow_job(self, phase):
        return {self.gen_build_name(phase): {"requires": [self.parent_build.gen_build_name("build")]}}

    def gen_build_name(self, _):
        return self.name


# TODO Convert these to graph nodes
def gen_dependent_configs(xenial_parent_config):

    extra_parms = [
        (["multigpu"], "large"),
        (["NO_AVX2"], "medium"),
        (["NO_AVX", "NO_AVX2"], "medium"),
        (["slow"], "medium"),
        (["nogpu"], None),
    ]

    configs = []
    for parms, gpu in extra_parms:

        c = Conf(
            xenial_parent_config.distro,
            ["py3"] + parms,
            pyver="3.6",
            cuda_version=xenial_parent_config.cuda_version,
            restrict_phases=["test"],
            gpu_resource=gpu,
            parent_build=xenial_parent_config,
            is_important=xenial_parent_config.is_important,
        )

        configs.append(c)

    for x in ["pytorch_python_doc_push", "pytorch_cpp_doc_push"]:
        configs.append(HiddenConf(x, parent_build=xenial_parent_config))

    return configs


def get_root():
    return TopLevelNode("PyTorch Builds", CONFIG_TREE_DATA)


def gen_tree():
    root = get_root()
    configs_list = conf_tree.dfs(root)
    return configs_list


def instantiate_configs():

    config_list = []

    root = get_root()
    found_configs = conf_tree.dfs(root)
    restrict_phases = None
    for fc in found_configs:

        distro_name = fc.find_prop("distro_name")
        compiler_name = fc.find_prop("compiler_name")
        compiler_version = fc.find_prop("compiler_version")
        is_xla = fc.find_prop("is_xla") or False
        parms_list_ignored_for_docker_image = []

        python_version = None
        if compiler_name == "cuda" or compiler_name == "android":
            python_version = fc.find_prop("pyver")
            parms_list = [fc.find_prop("abbreviated_pyver")]
        else:
            parms_list = ["py" + fc.find_prop("pyver")]

        cuda_version = None
        if compiler_name == "cuda":
            cuda_version = fc.find_prop("compiler_version")

        elif compiler_name == "android":
            android_ndk_version = fc.find_prop("compiler_version")
            # TODO: do we need clang to compile host binaries like protoc?
            parms_list.append("clang5")
            parms_list.append("android-ndk-" + android_ndk_version)
            android_abi = fc.find_prop("android_abi")
            parms_list_ignored_for_docker_image.append(android_abi)
            restrict_phases = ["build"]
            fc.props["is_important"] = True

        elif compiler_name:
            gcc_version = compiler_name + (fc.find_prop("compiler_version") or "")
            parms_list.append(gcc_version)

            # TODO: This is a nasty special case
            if compiler_name == "clang" and not is_xla:
                parms_list.append("asan")
                python_version = fc.find_prop("pyver")
                parms_list[0] = fc.find_prop("abbreviated_pyver")

        if cuda_version in ["9.2", "10", "10.1"]:
            # TODO The gcc version is orthogonal to CUDA version?
            parms_list.append("gcc7")

        is_libtorch = fc.find_prop("is_libtorch") or False
        is_important = fc.find_prop("is_important") or False
        parallel_backend = fc.find_prop("parallel_backend") or None

        gpu_resource = None
        if cuda_version and cuda_version != "10":
            gpu_resource = "medium"

        c = Conf(
            distro_name,
            parms_list,
            parms_list_ignored_for_docker_image,
            python_version,
            cuda_version,
            is_xla,
            restrict_phases,
            gpu_resource,
            is_libtorch=is_libtorch,
            is_important=is_important,
            parallel_backend=parallel_backend,
        )

        if cuda_version == "9" and python_version == "3.6" and not is_libtorch:
            c.dependent_tests = gen_dependent_configs(c)

        if (compiler_name == "gcc"
                and compiler_version == "5.4"
                and not is_libtorch
                and parallel_backend is None):
            bc_breaking_check = Conf(
                "backward-compatibility-check",
                [],
                is_xla=False,
                restrict_phases=["test"],
                is_libtorch=False,
                is_important=True,
                parent_build=c,
            )
            c.dependent_tests.append(bc_breaking_check)

        config_list.append(c)

    return config_list


def get_workflow_jobs():

    config_list = instantiate_configs()

    x = ["setup"]
    for conf_options in config_list:

        phases = conf_options.restrict_phases or dimensions.PHASES

        for phase in phases:

            # TODO why does this not have a test?
            if phase == "test" and conf_options.cuda_version == "10":
                continue

            x.append(conf_options.gen_workflow_job(phase))

        # TODO convert to recursion
        for conf in conf_options.get_dependents():
            x.append(conf.gen_workflow_job("test"))

    return x
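As a quick illustration (an editorial sketch, not part of the original file): `gen_build_name` joins the job-name pieces with underscores and normalizes dots and dashes, so a minimal `Conf` renders its job name like this:

c = Conf("xenial", ["py3.6", "gcc5.4"])
print(c.gen_build_name("build"))
# prints: pytorch_linux_xenial_py3_6_gcc5_4_build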
@ -1,107 +0,0 @@
from dataclasses import dataclass, field
from typing import Optional, Dict


def X(val):
    """
    Compact way to write a leaf node
    """
    return val, []


def XImportant(name):
    """Compact way to write an important (run on PRs) leaf node"""
    return (name, [("important", [X(True)])])


@dataclass
class Ver:
    """
    Represents a product with a version number
    """
    name: str
    version: str = ""

    def __str__(self):
        return self.name + self.version


@dataclass
class ConfigNode:
    parent: Optional['ConfigNode']
    node_name: str
    props: Dict[str, str] = field(default_factory=dict)

    def get_label(self):
        return self.node_name

    # noinspection PyMethodMayBeStatic
    def get_children(self):
        return []

    def get_parents(self):
        return (self.parent.get_parents() + [self.parent.get_label()]) if self.parent else []

    def get_depth(self):
        return len(self.get_parents())

    def get_node_key(self):
        return "%".join(self.get_parents() + [self.get_label()])

    def find_prop(self, propname, searched=None):
        """
        Checks if its own dictionary has
        the property, otherwise asks parent node.
        """

        if searched is None:
            searched = []

        searched.append(self.node_name)

        if propname in self.props:
            return self.props[propname]
        elif self.parent:
            return self.parent.find_prop(propname, searched)
        else:
            # raise Exception('Property "%s" does not exist anywhere in the tree! Searched: %s' % (propname, searched))
            return None


def dfs_recurse(
        node,
        leaf_callback=lambda x: None,
        discovery_callback=lambda x, y, z: None,
        child_callback=lambda x, y: None,
        sibling_index=0,
        sibling_count=1):

    discovery_callback(node, sibling_index, sibling_count)

    node_children = node.get_children()
    if node_children:
        for i, child in enumerate(node_children):
            child_callback(node, child)

            dfs_recurse(
                child,
                leaf_callback,
                discovery_callback,
                child_callback,
                i,
                len(node_children),
            )
    else:
        leaf_callback(node)


def dfs(toplevel_config_node):

    config_list = []

    def leaf_callback(node):
        config_list.append(node)

    dfs_recurse(toplevel_config_node, leaf_callback)

    return config_list
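As a minimal sketch of how `dfs` flattens a tree into its leaf configs (illustrative only; in the real generator the `TreeConfigNode` subclasses above play this role):

# Toy subclasses standing in for the real config-node classes.
class Leaf(ConfigNode):
    pass

class Branch(ConfigNode):
    def get_children(self):
        return [Leaf(self, "a"), Leaf(self, "b")]

root = Branch(None, "root")
print([n.get_node_key() for n in dfs(root)])
# prints: ['root%a', 'root%b']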
@ -1,10 +0,0 @@
def quote(s):
    return sandwich('"', s)


def sandwich(bread, jam):
    return bread + jam + bread


def override(word, substitutions):
    return substitutions.get(word, word)
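For reference, a couple of illustrative calls (editorial example, not part of the file):

print(quote("cuda9"))                           # "cuda9"  (wrapped in literal double quotes)
print(override("py2", {"py2": "python2.7"}))    # python2.7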
@ -1,47 +0,0 @@
from collections import OrderedDict


LIST_MARKER = "- "
INDENTATION_WIDTH = 2


def is_dict(data):
    return type(data) in [dict, OrderedDict]


def is_collection(data):
    return is_dict(data) or type(data) is list


def render(fh, data, depth, is_list_member=False):
    """
    PyYaml does not allow precise control over the quoting
    behavior, especially for merge references.
    Therefore, we use this custom YAML renderer.
    """

    indentation = " " * INDENTATION_WIDTH * depth

    if is_dict(data):

        tuples = list(data.items())
        if type(data) is not OrderedDict:
            tuples.sort()

        for i, (k, v) in enumerate(tuples):

            # If this dict is itself a list member, the first key gets prefixed with a list marker
            list_marker_prefix = LIST_MARKER if is_list_member and not i else ""

            trailing_whitespace = "\n" if is_collection(v) else " "
            fh.write(indentation + list_marker_prefix + k + ":" + trailing_whitespace)

            render(fh, v, depth + 1 + int(is_list_member))

    elif type(data) is list:
        for v in data:
            render(fh, v, depth, True)

    else:
        list_member_prefix = indentation + LIST_MARKER if is_list_member else ""
        fh.write(list_member_prefix + str(data) + "\n")
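A small illustrative run of the renderer (assuming the module is importable as `miniyaml`; this example is not part of the file):

import sys
from collections import OrderedDict
from miniyaml import render

# A dict containing a list of dicts, mirroring the shape of a workflow section.
data = OrderedDict([("jobs", [OrderedDict([("build", OrderedDict([("docker", "img")]))])])])
render(sys.stdout, data, 0)
# jobs:
#   - build:
#       docker: img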
@ -1,84 +0,0 @@
"""
This module encapsulates dependencies on pygraphviz
"""

import colorsys

import cimodel.lib.conf_tree as conf_tree


def rgb2hex(rgb_tuple):
    def to_hex(f):
        return "%02x" % int(f * 255)

    return "#" + "".join(map(to_hex, list(rgb_tuple)))


def handle_missing_graphviz(f):
    """
    If the user has not installed pygraphviz, this causes
    calls to the draw() method of the returned object to do nothing.
    """
    try:
        import pygraphviz  # noqa: F401
        return f

    except ModuleNotFoundError:

        class FakeGraph:
            def draw(self, *args, **kwargs):
                pass

        return lambda _: FakeGraph()


@handle_missing_graphviz
def generate_graph(toplevel_config_node):
    """
    Traverses the graph once first just to find the max depth
    """

    config_list = conf_tree.dfs(toplevel_config_node)

    max_depth = 0
    for config in config_list:
        max_depth = max(max_depth, config.get_depth())

    # color the nodes using the max depth

    from pygraphviz import AGraph
    dot = AGraph()

    def node_discovery_callback(node, sibling_index, sibling_count):
        depth = node.get_depth()

        sat_min, sat_max = 0.1, 0.6
        sat_range = sat_max - sat_min

        saturation_fraction = sibling_index / float(sibling_count - 1) if sibling_count > 1 else 1
        saturation = sat_min + sat_range * saturation_fraction

        # TODO Use a hash of the node label to determine the color
        hue = depth / float(max_depth + 1)

        rgb_tuple = colorsys.hsv_to_rgb(hue, saturation, 1)

        this_node_key = node.get_node_key()

        dot.add_node(
            this_node_key,
            label=node.get_label(),
            style="filled",
            # fillcolor=hex_color + ":orange",
            fillcolor=rgb2hex(rgb_tuple),
            penwidth=3,
            color=rgb2hex(colorsys.hsv_to_rgb(hue, saturation, 0.9))
        )

    def child_callback(node, child):
        this_node_key = node.get_node_key()
        child_node_key = child.get_node_key()
        dot.add_edge((this_node_key, child_node_key))

    conf_tree.dfs_recurse(toplevel_config_node, lambda x: None, node_discovery_callback, child_callback)
    return dot
6263
.circleci/config.yml
File diff suppressed because it is too large
@ -1,19 +0,0 @@
# Docker images for Jenkins

This directory contains everything needed to build the Docker images
that are used in our CI.

The Dockerfiles located in subdirectories are parameterized to
conditionally run build stages depending on build arguments passed to
`docker build`. This lets us use only a few Dockerfiles for many
images. The different configurations are identified by a freeform
string that we call a _build environment_. This string is persisted in
each image as the `BUILD_ENVIRONMENT` environment variable.

See `build.sh` for valid build environments (it's the giant switch).

## Contents

* `build.sh` -- dispatch script to launch all builds
* `common` -- scripts used to execute individual Docker build stages
* `ubuntu-cuda` -- Dockerfile for Ubuntu image with CUDA support for nvidia-docker
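For example, `./build.sh pytorch-linux-xenial-py3-clang5-asan` builds the ASAN image for that build environment; any arguments after the image name are passed through to `docker build`.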
@ -1 +0,0 @@
<manifest package="org.pytorch.deps" />
@ -1,68 +0,0 @@
buildscript {
    ext {
        minSdkVersion = 21
        targetSdkVersion = 28
        compileSdkVersion = 28
        buildToolsVersion = '28.0.3'

        coreVersion = "1.2.0"
        extJUnitVersion = "1.1.1"
        runnerVersion = "1.2.0"
        rulesVersion = "1.2.0"
        junitVersion = "4.12"
    }

    repositories {
        google()
        mavenLocal()
        mavenCentral()
        jcenter()
    }

    dependencies {
        classpath 'com.android.tools.build:gradle:3.3.2'
        classpath "com.jfrog.bintray.gradle:gradle-bintray-plugin:1.8.0"
        classpath "com.github.dcendents:android-maven-gradle-plugin:2.1"
        classpath "org.jfrog.buildinfo:build-info-extractor-gradle:4.9.8"
    }
}

repositories {
    google()
    jcenter()
}

apply plugin: 'com.android.library'

android {
    compileSdkVersion rootProject.compileSdkVersion
    buildToolsVersion rootProject.buildToolsVersion

    defaultConfig {
        minSdkVersion minSdkVersion
        targetSdkVersion targetSdkVersion
    }

    sourceSets {
        main {
            manifest.srcFile 'AndroidManifest.xml'
        }
    }
}

dependencies {
    implementation 'com.android.support:appcompat-v7:28.0.0'
    implementation 'androidx.appcompat:appcompat:1.0.0'
    implementation 'com.facebook.fbjni:fbjni-java-only:0.0.3'
    implementation 'com.google.code.findbugs:jsr305:3.0.1'
    implementation 'com.facebook.soloader:nativeloader:0.8.0'

    implementation 'junit:junit:' + rootProject.junitVersion
    implementation 'androidx.test:core:' + rootProject.coreVersion
    implementation 'androidx.test.ext:junit:' + rootProject.extJUnitVersion
    implementation 'androidx.test:rules:' + rootProject.rulesVersion
    implementation 'androidx.test:runner:' + rootProject.runnerVersion
}
@ -1,275 +0,0 @@
#!/bin/bash

set -ex

image="$1"
shift

if [ -z "${image}" ]; then
  echo "Usage: $0 IMAGE"
  exit 1
fi

# TODO: Generalize
OS="ubuntu"
DOCKERFILE="${OS}/Dockerfile"
if [[ "$image" == *-cuda* ]]; then
  DOCKERFILE="${OS}-cuda/Dockerfile"
fi

if [[ "$image" == *-trusty* ]]; then
  UBUNTU_VERSION=14.04
elif [[ "$image" == *-xenial* ]]; then
  UBUNTU_VERSION=16.04
elif [[ "$image" == *-artful* ]]; then
  UBUNTU_VERSION=17.10
elif [[ "$image" == *-bionic* ]]; then
  UBUNTU_VERSION=18.04
fi

# It's annoying to rename jobs every time you want to rewrite a
# configuration, so we hardcode everything here rather than do it
# from scratch
case "$image" in
  pytorch-linux-bionic-clang9-thrift-llvmdev)
    CLANG_VERSION=9
    THRIFT=yes
    LLVMDEV=yes
    PROTOBUF=yes
    ;;
  pytorch-linux-xenial-py2.7.9)
    TRAVIS_PYTHON_VERSION=2.7.9
    GCC_VERSION=7
    # Do not install PROTOBUF, DB, and VISION as a test
    ;;
  pytorch-linux-xenial-py2.7)
    TRAVIS_PYTHON_VERSION=2.7
    GCC_VERSION=7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-py3.5)
    TRAVIS_PYTHON_VERSION=3.5
    GCC_VERSION=7
    # Do not install PROTOBUF, DB, and VISION as a test
    ;;
  pytorch-linux-xenial-py3.6-gcc4.8)
    ANACONDA_PYTHON_VERSION=3.6
    GCC_VERSION=4.8
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-py3.6-gcc5.4)
    ANACONDA_PYTHON_VERSION=3.6
    GCC_VERSION=5
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-py3.6-gcc7.2)
    ANACONDA_PYTHON_VERSION=3.6
    GCC_VERSION=7
    # Do not install PROTOBUF, DB, and VISION as a test
    ;;
  pytorch-linux-xenial-py3.6-gcc7)
    ANACONDA_PYTHON_VERSION=3.6
    GCC_VERSION=7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-pynightly)
    TRAVIS_PYTHON_VERSION=nightly
    GCC_VERSION=7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-cuda8-cudnn7-py2)
    CUDA_VERSION=8.0
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=2.7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-cuda8-cudnn7-py3)
    CUDA_VERSION=8.0
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=3.6
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-cuda9-cudnn7-py2)
    CUDA_VERSION=9.0
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=2.7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-cuda9-cudnn7-py3)
    CUDA_VERSION=9.0
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=3.6
    PROTOBUF=yes
    DB=yes
    VISION=yes
    KATEX=yes
    ;;
  pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc7)
    CUDA_VERSION=9.2
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=3.6
    GCC_VERSION=7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-cuda10-cudnn7-py3-gcc7)
    CUDA_VERSION=10.0
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=3.6
    GCC_VERSION=7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-cuda10.1-cudnn7-py3-gcc7)
    CUDA_VERSION=10.1
    CUDNN_VERSION=7
    ANACONDA_PYTHON_VERSION=3.6
    GCC_VERSION=7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-py3-clang5-asan)
    ANACONDA_PYTHON_VERSION=3.6
    CLANG_VERSION=5.0
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
  pytorch-linux-xenial-py3-clang5-android-ndk-r19c)
    ANACONDA_PYTHON_VERSION=3.6
    CLANG_VERSION=5.0
    PROTOBUF=yes
    ANDROID=yes
    ANDROID_NDK_VERSION=r19c
    GRADLE_VERSION=4.10.3
    CMAKE_VERSION=3.7.0
    NINJA_VERSION=1.9.0
    ;;
  pytorch-linux-xenial-py3.6-clang7)
    ANACONDA_PYTHON_VERSION=3.6
    CLANG_VERSION=7
    PROTOBUF=yes
    DB=yes
    VISION=yes
    ;;
esac

# Set Jenkins UID and GID if running Jenkins
if [ -n "${JENKINS:-}" ]; then
  JENKINS_UID=$(id -u jenkins)
  JENKINS_GID=$(id -g jenkins)
fi

tmp_tag="tmp-$(cat /dev/urandom | tr -dc 'a-z' | fold -w 32 | head -n 1)"

# Build image
docker build \
  --no-cache \
  --build-arg "BUILD_ENVIRONMENT=${image}" \
  --build-arg "PROTOBUF=${PROTOBUF:-}" \
  --build-arg "THRIFT=${THRIFT:-}" \
  --build-arg "LLVMDEV=${LLVMDEV:-}" \
  --build-arg "DB=${DB:-}" \
  --build-arg "VISION=${VISION:-}" \
  --build-arg "EC2=${EC2:-}" \
  --build-arg "JENKINS=${JENKINS:-}" \
  --build-arg "JENKINS_UID=${JENKINS_UID:-}" \
  --build-arg "JENKINS_GID=${JENKINS_GID:-}" \
  --build-arg "UBUNTU_VERSION=${UBUNTU_VERSION}" \
  --build-arg "CLANG_VERSION=${CLANG_VERSION}" \
  --build-arg "ANACONDA_PYTHON_VERSION=${ANACONDA_PYTHON_VERSION}" \
  --build-arg "TRAVIS_PYTHON_VERSION=${TRAVIS_PYTHON_VERSION}" \
  --build-arg "GCC_VERSION=${GCC_VERSION}" \
  --build-arg "CUDA_VERSION=${CUDA_VERSION}" \
  --build-arg "CUDNN_VERSION=${CUDNN_VERSION}" \
  --build-arg "ANDROID=${ANDROID}" \
  --build-arg "ANDROID_NDK=${ANDROID_NDK_VERSION}" \
  --build-arg "GRADLE_VERSION=${GRADLE_VERSION}" \
  --build-arg "CMAKE_VERSION=${CMAKE_VERSION:-}" \
  --build-arg "NINJA_VERSION=${NINJA_VERSION:-}" \
  --build-arg "KATEX=${KATEX:-}" \
  -f $(dirname ${DOCKERFILE})/Dockerfile \
  -t "$tmp_tag" \
  "$@" \
  .

function drun() {
  docker run --rm "$tmp_tag" $*
}

if [[ "$OS" == "ubuntu" ]]; then
  if !(drun lsb_release -a 2>&1 | grep -qF Ubuntu); then
    echo "OS=ubuntu, but:"
    drun lsb_release -a
    exit 1
  fi
  if !(drun lsb_release -a 2>&1 | grep -qF "$UBUNTU_VERSION"); then
    echo "UBUNTU_VERSION=$UBUNTU_VERSION, but:"
    drun lsb_release -a
    exit 1
  fi
fi

if [ -n "$TRAVIS_PYTHON_VERSION" ]; then
  if [[ "$TRAVIS_PYTHON_VERSION" != nightly ]]; then
    if !(drun python --version 2>&1 | grep -qF "Python $TRAVIS_PYTHON_VERSION"); then
      echo "TRAVIS_PYTHON_VERSION=$TRAVIS_PYTHON_VERSION, but:"
      drun python --version
      exit 1
    fi
  else
    echo "Please manually check nightly is OK:"
    drun python --version
  fi
fi

if [ -n "$ANACONDA_PYTHON_VERSION" ]; then
  if !(drun python --version 2>&1 | grep -qF "Python $ANACONDA_PYTHON_VERSION"); then
    echo "ANACONDA_PYTHON_VERSION=$ANACONDA_PYTHON_VERSION, but:"
    drun python --version
    exit 1
  fi
fi

if [ -n "$GCC_VERSION" ]; then
  if !(drun gcc --version 2>&1 | grep -q " $GCC_VERSION\\W"); then
    echo "GCC_VERSION=$GCC_VERSION, but:"
    drun gcc --version
    exit 1
  fi
fi

if [ -n "$CLANG_VERSION" ]; then
  if !(drun clang --version 2>&1 | grep -qF "clang version $CLANG_VERSION"); then
    echo "CLANG_VERSION=$CLANG_VERSION, but:"
    drun clang --version
    exit 1
  fi
fi

if [ -n "$KATEX" ]; then
  if !(drun katex --version); then
    echo "KATEX=$KATEX, but:"
    drun katex --version
    exit 1
  fi
fi
@ -1,49 +0,0 @@
#!/bin/bash

set -ex

retry () {
  $* || (sleep 1 && $*) || (sleep 2 && $*)
}

# If UPSTREAM_BUILD_ID is set (see trigger job), then we can
# use it to tag this build with the same ID used to tag all other
# base image builds. Also, we can try and pull the previous
# image first, to avoid rebuilding layers that haven't changed.

# Until we find a way to reliably reuse a previous build, this last_tag is not in use.
# last_tag="$(( CIRCLE_BUILD_NUM - 1 ))"
tag="${CIRCLE_WORKFLOW_ID}"


registry="308535385114.dkr.ecr.us-east-1.amazonaws.com"
image="${registry}/pytorch/${IMAGE_NAME}"

login() {
  aws ecr get-authorization-token --region us-east-1 --output text --query 'authorizationData[].authorizationToken' |
    base64 -d |
    cut -d: -f2 |
    docker login -u AWS --password-stdin "$1"
}

# Retry on timeouts (can happen on job stampede).
retry login "${registry}"

# Logout on exit
trap "docker logout ${registry}" EXIT

# export EC2=1
# export JENKINS=1

# Try to pull the previous image (perhaps we can reuse some layers)
# if [ -n "${last_tag}" ]; then
#   docker pull "${image}:${last_tag}" || true
# fi

# Build new image
./build.sh ${IMAGE_NAME} -t "${image}:${tag}"

docker push "${image}:${tag}"

docker save -o "${IMAGE_NAME}:${tag}.tar" "${image}:${tag}"
aws s3 cp "${IMAGE_NAME}:${tag}.tar" "s3://ossci-linux-build/pytorch/base/${IMAGE_NAME}:${tag}.tar" --acl public-read
@ -1,129 +0,0 @@
#!/bin/bash

set -ex

[ -n "${ANDROID_NDK}" ]

apt-get update
apt-get install -y --no-install-recommends autotools-dev autoconf unzip
apt-get autoclean && apt-get clean
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

pushd /tmp
curl -Os https://dl.google.com/android/repository/android-ndk-${ANDROID_NDK}-linux-x86_64.zip
popd
_ndk_dir=/opt/ndk
mkdir -p "$_ndk_dir"
unzip -qo /tmp/android*.zip -d "$_ndk_dir"
_versioned_dir=$(find "$_ndk_dir/" -mindepth 1 -maxdepth 1 -type d)
mv "$_versioned_dir"/* "$_ndk_dir"/
rmdir "$_versioned_dir"
rm -rf /tmp/*

# Install OpenJDK
# https://hub.docker.com/r/picoded/ubuntu-openjdk-8-jdk/dockerfile/

sudo apt-get update && \
    apt-get install -y openjdk-8-jdk && \
    apt-get install -y ant && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /var/cache/oracle-jdk8-installer;

# Fix certificate issues, found as of
# https://bugs.launchpad.net/ubuntu/+source/ca-certificates-java/+bug/983302

sudo apt-get update && \
    apt-get install -y ca-certificates-java && \
    apt-get clean && \
    update-ca-certificates -f && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /var/cache/oracle-jdk8-installer;

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/

# Installing Android SDK
# https://github.com/circleci/circleci-images/blob/staging/android/Dockerfile.m4

_sdk_version=sdk-tools-linux-3859397.zip
_android_home=/opt/android/sdk

rm -rf $_android_home
sudo mkdir -p $_android_home
curl --silent --show-error --location --fail --retry 3 --output /tmp/$_sdk_version https://dl.google.com/android/repository/$_sdk_version
sudo unzip -q /tmp/$_sdk_version -d $_android_home
rm /tmp/$_sdk_version

sudo chmod -R 777 $_android_home

export ANDROID_HOME=$_android_home
export ADB_INSTALL_TIMEOUT=120

export PATH="${ANDROID_HOME}/emulator:${ANDROID_HOME}/tools:${ANDROID_HOME}/tools/bin:${ANDROID_HOME}/platform-tools:${PATH}"
echo "PATH:${PATH}"
alias sdkmanager="$ANDROID_HOME/tools/bin/sdkmanager"

sudo mkdir ~/.android && sudo echo '### User Sources for Android SDK Manager' > ~/.android/repositories.cfg
sudo chmod -R 777 ~/.android

yes | sdkmanager --licenses
yes | sdkmanager --update

sdkmanager \
  "tools" \
  "platform-tools" \
  "emulator"

sdkmanager \
  "build-tools;28.0.3" \
  "build-tools;29.0.2"

sdkmanager \
  "platforms;android-28" \
  "platforms;android-29"
sdkmanager --list

# Installing Gradle
echo "GRADLE_VERSION:${GRADLE_VERSION}"
_gradle_home=/opt/gradle
sudo rm -rf $_gradle_home
sudo mkdir -p $_gradle_home

wget --no-verbose --output-document=/tmp/gradle.zip \
  "https://services.gradle.org/distributions/gradle-${GRADLE_VERSION}-bin.zip"

sudo unzip -q /tmp/gradle.zip -d $_gradle_home
rm /tmp/gradle.zip

sudo chmod -R 777 $_gradle_home

export GRADLE_HOME=$_gradle_home/gradle-$GRADLE_VERSION
alias gradle="${GRADLE_HOME}/bin/gradle"

export PATH="${GRADLE_HOME}/bin/:${PATH}"
echo "PATH:${PATH}"

gradle --version

mkdir /var/lib/jenkins/gradledeps
cp build.gradle /var/lib/jenkins/gradledeps
cp AndroidManifest.xml /var/lib/jenkins/gradledeps

pushd /var/lib/jenkins

export GRADLE_LOCAL_PROPERTIES=gradledeps/local.properties
rm -f $GRADLE_LOCAL_PROPERTIES
echo "sdk.dir=/opt/android/sdk" >> $GRADLE_LOCAL_PROPERTIES
echo "ndk.dir=/opt/ndk" >> $GRADLE_LOCAL_PROPERTIES

chown -R jenkins /var/lib/jenkins/gradledeps
chgrp -R jenkins /var/lib/jenkins/gradledeps

sudo -H -u jenkins $GRADLE_HOME/bin/gradle -p /var/lib/jenkins/gradledeps -g /var/lib/jenkins/.gradle --refresh-dependencies --debug --stacktrace assemble

chown -R jenkins /var/lib/jenkins/.gradle
chgrp -R jenkins /var/lib/jenkins/.gradle

popd

rm -rf /var/lib/jenkins/.gradle/daemon
@ -1,75 +0,0 @@
#!/bin/bash

set -ex

if [[ "$UBUNTU_VERSION" == "14.04" ]]; then
  # cmake 2 is too old
  cmake3=cmake3
else
  cmake3=cmake
fi

if [[ "$UBUNTU_VERSION" == "18.04" ]]; then
  cmake3="cmake=3.10*"
else
  cmake3="${cmake3}=3.5*"
fi

# Install common dependencies
apt-get update
# TODO: Some of these may not be necessary
# TODO: libiomp also gets installed by conda, aka there's a conflict
ccache_deps="asciidoc docbook-xml docbook-xsl xsltproc"
numpy_deps="gfortran"
apt-get install -y --no-install-recommends \
  $ccache_deps \
  $numpy_deps \
  ${cmake3} \
  apt-transport-https \
  autoconf \
  automake \
  build-essential \
  ca-certificates \
  curl \
  git \
  libatlas-base-dev \
  libc6-dbg \
  libiomp-dev \
  libyaml-dev \
  libz-dev \
  libjpeg-dev \
  libasound2-dev \
  libsndfile-dev \
  python \
  python-dev \
  python-setuptools \
  python-wheel \
  software-properties-common \
  sudo \
  wget \
  vim

# Install Valgrind separately since the apt-get version is too old.
mkdir valgrind_build && cd valgrind_build
if ! wget http://valgrind.org/downloads/valgrind-3.14.0.tar.bz2
then
  wget https://sourceware.org/ftp/valgrind/valgrind-3.14.0.tar.bz2
fi
tar -xjf valgrind-3.14.0.tar.bz2
cd valgrind-3.14.0
./configure --prefix=/usr/local
make
sudo make install
cd ../../
rm -rf valgrind_build
alias valgrind="/usr/local/bin/valgrind"

# TODO: THIS IS A HACK!!!
# distributed nccl(2) tests are a bit busted, see https://github.com/pytorch/pytorch/issues/5877
if dpkg -s libnccl-dev; then
  apt-get remove -y libnccl-dev libnccl2 --allow-change-held-packages
fi

# Cleanup package manager
apt-get autoclean && apt-get clean
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
@ -1,35 +0,0 @@
#!/bin/bash

set -ex

mkdir -p /opt/cache/bin
mkdir -p /opt/cache/lib
sed -e 's|PATH="\(.*\)"|PATH="/opt/cache/bin:\1"|g' -i /etc/environment
export PATH="/opt/cache/bin:$PATH"

# Setup compiler cache
curl https://s3.amazonaws.com/ossci-linux/sccache -o /opt/cache/bin/sccache
chmod a+x /opt/cache/bin/sccache

function write_sccache_stub() {
  printf "#!/bin/sh\nexec sccache $(which $1) \$*" > "/opt/cache/bin/$1"
  chmod a+x "/opt/cache/bin/$1"
}

write_sccache_stub cc
write_sccache_stub c++
write_sccache_stub gcc
write_sccache_stub g++
write_sccache_stub clang
write_sccache_stub clang++

if [ -n "$CUDA_VERSION" ]; then
  # TODO: This is a workaround for the fact that PyTorch's FindCUDA
  # implementation cannot find nvcc if it is setup this way, because it
  # appears to search for the nvcc in PATH, and use its path to infer
  # where CUDA is installed. Instead, we install an nvcc symlink outside
  # of the PATH, and set CUDA_NVCC_EXECUTABLE so that we make use of it.

  printf "#!/bin/sh\nexec sccache $(which nvcc) \"\$@\"" > /opt/cache/lib/nvcc
  chmod a+x /opt/cache/lib/nvcc
fi
@ -1,44 +0,0 @@
#!/bin/bash

set -ex

if [ -n "$CLANG_VERSION" ]; then

  if [[ $CLANG_VERSION == 7 && $UBUNTU_VERSION == 16.04 ]]; then
    wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
    sudo apt-add-repository "deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-7 main"
  elif [[ $CLANG_VERSION == 9 && $UBUNTU_VERSION == 18.04 ]]; then
    sudo apt-get update
    # gpg-agent is not available by default on 18.04
    sudo apt-get install -y --no-install-recommends gpg-agent
    wget --no-check-certificate -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
    apt-add-repository "deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-${CLANG_VERSION} main"
  fi

  sudo apt-get update
  apt-get install -y --no-install-recommends clang-"$CLANG_VERSION"
  apt-get install -y --no-install-recommends llvm-"$CLANG_VERSION"

  # Install dev version of LLVM.
  if [ -n "$LLVMDEV" ]; then
    sudo apt-get install -y --no-install-recommends llvm-"$CLANG_VERSION"-dev
  fi

  # Use update-alternatives to make this version the default
  # TODO: Decide if overriding gcc as well is a good idea
  # update-alternatives --install /usr/bin/gcc gcc /usr/bin/clang-"$CLANG_VERSION" 50
  # update-alternatives --install /usr/bin/g++ g++ /usr/bin/clang++-"$CLANG_VERSION" 50
  update-alternatives --install /usr/bin/clang clang /usr/bin/clang-"$CLANG_VERSION" 50
  update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-"$CLANG_VERSION" 50

  # clang's packaging is a little messed up (the runtime libs aren't
  # added into the linker path), so give it a little help
  clang_lib=("/usr/lib/llvm-$CLANG_VERSION/lib/clang/"*"/lib/linux")
  echo "$clang_lib" > /etc/ld.so.conf.d/clang.conf
  ldconfig

  # Cleanup package manager
  apt-get autoclean && apt-get clean
  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

fi
@ -1,16 +0,0 @@
#!/bin/bash

set -ex

[ -n "$CMAKE_VERSION" ]

# Turn 3.6.3 into v3.6
path=$(echo "${CMAKE_VERSION}" | sed -e 's/\([0-9].[0-9]\+\).*/v\1/')
file="cmake-${CMAKE_VERSION}-Linux-x86_64.tar.gz"

# Download and install specific CMake version in /usr/local
pushd /tmp
curl -Os "https://cmake.org/files/${path}/${file}"
tar -C /usr/local --strip-components 1 --no-same-owner -zxf cmake-*.tar.gz
rm -f cmake-*.tar.gz
popd
@ -1,94 +0,0 @@
#!/bin/bash

set -ex

# Optionally install conda
if [ -n "$ANACONDA_PYTHON_VERSION" ]; then
  BASE_URL="https://repo.continuum.io/miniconda"

  MAJOR_PYTHON_VERSION=$(echo "$ANACONDA_PYTHON_VERSION" | cut -d . -f 1)

  case "$MAJOR_PYTHON_VERSION" in
    2)
      CONDA_FILE="Miniconda2-latest-Linux-x86_64.sh"
      ;;
    3)
      CONDA_FILE="Miniconda3-latest-Linux-x86_64.sh"
      ;;
    *)
      echo "Unsupported ANACONDA_PYTHON_VERSION: $ANACONDA_PYTHON_VERSION"
      exit 1
      ;;
  esac

  mkdir /opt/conda
  chown jenkins:jenkins /opt/conda

  as_jenkins() {
    # NB: unsetting the environment variables works around a conda bug
    # https://github.com/conda/conda/issues/6576
    # NB: Pass on PATH and LD_LIBRARY_PATH to sudo invocation
    # NB: This must be run from a directory that jenkins has access to,
    # works around https://github.com/conda/conda-package-handling/pull/34
    sudo -H -u jenkins env -u SUDO_UID -u SUDO_GID -u SUDO_COMMAND -u SUDO_USER env "PATH=$PATH" "LD_LIBRARY_PATH=$LD_LIBRARY_PATH" $*
  }

  pushd /tmp
  wget -q "${BASE_URL}/${CONDA_FILE}"
  chmod +x "${CONDA_FILE}"
  as_jenkins ./"${CONDA_FILE}" -b -f -p "/opt/conda"
  popd

  # NB: Don't do this, rely on the rpath to get it right
  #echo "/opt/conda/lib" > /etc/ld.so.conf.d/conda-python.conf
  #ldconfig
  sed -e 's|PATH="\(.*\)"|PATH="/opt/conda/bin:\1"|g' -i /etc/environment
  export PATH="/opt/conda/bin:$PATH"

  # Ensure we run conda in a directory that jenkins has write access to
  pushd /opt/conda

  # Track latest conda update
  as_jenkins conda update -n base conda

  # Install correct Python version
  as_jenkins conda install python="$ANACONDA_PYTHON_VERSION"

  conda_install() {
    # Ensure that the install command doesn't upgrade/downgrade Python
    # This should be called as
    #   conda_install pkg1 pkg2 ... [-c channel]
    as_jenkins conda install -q -y python="$ANACONDA_PYTHON_VERSION" $*
  }

  # Install PyTorch conda deps, as per https://github.com/pytorch/pytorch README
  # DO NOT install cmake here as it would install a version newer than 3.5, but
  # we want to pin to version 3.5.
  conda_install numpy pyyaml mkl mkl-include setuptools cffi typing future six
  if [[ "$CUDA_VERSION" == 8.0* ]]; then
    conda_install magma-cuda80 -c pytorch
  elif [[ "$CUDA_VERSION" == 9.0* ]]; then
    conda_install magma-cuda90 -c pytorch
  elif [[ "$CUDA_VERSION" == 9.1* ]]; then
    conda_install magma-cuda91 -c pytorch
  elif [[ "$CUDA_VERSION" == 9.2* ]]; then
    conda_install magma-cuda92 -c pytorch
  elif [[ "$CUDA_VERSION" == 10.0* ]]; then
    conda_install magma-cuda100 -c pytorch
  elif [[ "$CUDA_VERSION" == 10.1* ]]; then
    conda_install magma-cuda101 -c pytorch
  fi

  # TODO: This isn't working atm
  conda_install nnpack -c killeent

  # Install some other packages
  # TODO: Why is scipy pinned
  # numba & llvmlite is pinned because of https://github.com/numba/numba/issues/4368
  # scikit-learn is pinned because of
  # https://github.com/scikit-learn/scikit-learn/issues/14485 (affects gcc 5.5
  # only)
  as_jenkins pip install --progress-bar off pytest scipy==1.1.0 scikit-learn==0.20.3 scikit-image librosa>=0.6.2 psutil numba==0.43.1 llvmlite==0.28.0

  popd
fi
@ -1,61 +0,0 @@
#!/bin/bash

set -ex

# This function installs protobuf 2.6
install_protobuf_26() {
  pb_dir="/usr/temp_pb_install_dir"
  mkdir -p $pb_dir

  # On the nvidia/cuda:9-cudnn7-devel-centos7 image we need this symlink or
  # else it will fail with
  #   g++: error: ./../lib64/crti.o: No such file or directory
  ln -s /usr/lib64 "$pb_dir/lib64"

  curl -LO "https://github.com/google/protobuf/releases/download/v2.6.1/protobuf-2.6.1.tar.gz"
  tar -xvz -C "$pb_dir" --strip-components 1 -f protobuf-2.6.1.tar.gz
  pushd "$pb_dir" && ./configure && make && make check && sudo make install && sudo ldconfig
  popd
  rm -rf $pb_dir
}

install_ubuntu() {
  apt-get update
  apt-get install -y --no-install-recommends \
    libhiredis-dev \
    libleveldb-dev \
    liblmdb-dev \
    libsnappy-dev

  # Cleanup
  apt-get autoclean && apt-get clean
  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
}

install_centos() {
  # Need EPEL for many packages we depend on.
  # See http://fedoraproject.org/wiki/EPEL
  yum --enablerepo=extras install -y epel-release

  yum install -y \
    hiredis-devel \
    leveldb-devel \
    lmdb-devel \
    snappy-devel

  # Cleanup
  yum clean all
  rm -rf /var/cache/yum
  rm -rf /var/lib/yum/yumdb
  rm -rf /var/lib/yum/history
}

# Install base packages depending on the base OS
if [ -f /etc/lsb-release ]; then
  install_ubuntu
elif [ -f /etc/os-release ]; then
  install_centos
else
  echo "Unable to determine OS..."
  exit 1
fi
@ -1,19 +0,0 @@
#!/bin/bash

set -ex

if [ -n "$GCC_VERSION" ]; then

  # Need the official toolchain repo to get alternate packages
  add-apt-repository ppa:ubuntu-toolchain-r/test
  apt-get update
  apt-get install -y g++-$GCC_VERSION

  update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-"$GCC_VERSION" 50
  update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-"$GCC_VERSION" 50

  # Cleanup package manager
  apt-get autoclean && apt-get clean
  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

fi
@ -1,6 +0,0 @@
#!/bin/bash

set -ex

mkdir -p /usr/local/include
cp jni.h /usr/local/include
@ -1,20 +0,0 @@
#!/bin/bash

set -ex

if [ -n "$KATEX" ]; then

  curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
  sudo apt-get install -y nodejs

  curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
  echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list

  apt-get update
  apt-get install -y --no-install-recommends yarn
  yarn global add katex --prefix /usr/local

  apt-get autoclean && apt-get clean
  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

fi
@ -1,13 +0,0 @@
#!/bin/bash

set -ex

[ -n "$NINJA_VERSION" ]

url="https://github.com/ninja-build/ninja/releases/download/v${NINJA_VERSION}/ninja-linux.zip"

pushd /tmp
wget --no-verbose --output-document=ninja-linux.zip "$url"
unzip ninja-linux.zip -d /usr/local/bin
rm -f ninja-linux.zip
popd
@ -1,56 +0,0 @@
#!/bin/bash

set -ex

# This function installs protobuf 2.6
install_protobuf_26() {
  pb_dir="/usr/temp_pb_install_dir"
  mkdir -p $pb_dir

  # On the nvidia/cuda:9-cudnn7-devel-centos7 image we need this symlink or
  # else it will fail with
  #   g++: error: ./../lib64/crti.o: No such file or directory
  ln -s /usr/lib64 "$pb_dir/lib64"

  curl -LO "https://github.com/google/protobuf/releases/download/v2.6.1/protobuf-2.6.1.tar.gz"
  tar -xvz -C "$pb_dir" --strip-components 1 -f protobuf-2.6.1.tar.gz
  pushd "$pb_dir" && ./configure && make && make check && sudo make install && sudo ldconfig
  popd
  rm -rf $pb_dir
}

install_ubuntu() {
  # Ubuntu 14.04 ships with protobuf 2.5, but ONNX needs protobuf >= 2.6,
  # so we install that here if on 14.04.
  # Ubuntu 14.04 also has cmake 2.8.12 as the default option, so we will
  # install cmake3 here and use cmake3.
  apt-get update
  if [[ "$UBUNTU_VERSION" == 14.04 ]]; then
    apt-get install -y --no-install-recommends cmake3
    install_protobuf_26
  else
    apt-get install -y --no-install-recommends \
      libprotobuf-dev \
      protobuf-compiler
  fi

  # Cleanup
  apt-get autoclean && apt-get clean
  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
}

install_centos() {
  # CentOS 7 ships with protobuf 2.5, but ONNX needs protobuf >= 2.6,
  # so we always install it here.
  install_protobuf_26
}

# Install base packages depending on the base OS
if [ -f /etc/lsb-release ]; then
  install_ubuntu
elif [ -f /etc/os-release ]; then
  install_centos
else
  echo "Unable to determine OS..."
  exit 1
fi
@ -1,14 +0,0 @@
apt-get update
apt-get install -y sudo wget libboost-dev libboost-test-dev libboost-program-options-dev libboost-filesystem-dev libboost-thread-dev libevent-dev automake libtool flex bison pkg-config g++ libssl-dev
wget https://www-us.apache.org/dist/thrift/0.12.0/thrift-0.12.0.tar.gz
tar -xvf thrift-0.12.0.tar.gz
cd thrift-0.12.0
for file in ./compiler/cpp/Makefile*; do
  sed -i 's/\-Werror//' $file
done
./bootstrap.sh
./configure --without-php --without-java --without-python --without-nodejs --without-go --without-ruby
sudo make
sudo make install
cd ..
rm thrift-0.12.0.tar.gz
@ -1,94 +0,0 @@
#!/bin/bash

set -ex

as_jenkins() {
  # NB: Preserve PATH and LD_LIBRARY_PATH changes
  sudo -H -u jenkins env "PATH=$PATH" "LD_LIBRARY_PATH=$LD_LIBRARY_PATH" $*
}

if [ -n "$TRAVIS_PYTHON_VERSION" ]; then

  mkdir -p /opt/python
  chown jenkins:jenkins /opt/python

  # Download Python binary from Travis
  pushd tmp
  as_jenkins wget --quiet https://s3.amazonaws.com/travis-python-archives/binaries/ubuntu/14.04/x86_64/python-$TRAVIS_PYTHON_VERSION.tar.bz2
  # NB: The tarball also comes with /home/travis virtualenv that we
  # don't care about. (Maybe we should, but we've worked around the
  # "how do I install to python" issue by making this entire directory
  # user-writable "lol")
  # NB: Relative ordering of opt/python and flags matters
  as_jenkins tar xjf python-$TRAVIS_PYTHON_VERSION.tar.bz2 --strip-components=2 --directory /opt/python opt/python
  popd

  echo "/opt/python/$TRAVIS_PYTHON_VERSION/lib" > /etc/ld.so.conf.d/travis-python.conf
  ldconfig
  sed -e 's|PATH="\(.*\)"|PATH="/opt/python/'"$TRAVIS_PYTHON_VERSION"'/bin:\1"|g' -i /etc/environment
  export PATH="/opt/python/$TRAVIS_PYTHON_VERSION/bin:$PATH"

  python --version
  pip --version

  # Install pip from source.
  # The python-pip package on Ubuntu Trusty is old
  # and upon install numpy doesn't use the binary
  # distribution, and fails to compile it from source.
  pushd tmp
  as_jenkins curl -L -O https://pypi.python.org/packages/11/b6/abcb525026a4be042b486df43905d6893fb04f05aac21c32c638e939e447/pip-9.0.1.tar.gz
  as_jenkins tar zxf pip-9.0.1.tar.gz
  pushd pip-9.0.1
  as_jenkins python setup.py install
  popd
  rm -rf pip-9.0.1*
  popd

  # Install pip packages
  as_jenkins pip install --upgrade pip

  pip --version

  if [[ "$TRAVIS_PYTHON_VERSION" == nightly ]]; then
    # These two packages have broken Cythonizations uploaded
    # to PyPi, see:
    #
    #   - https://github.com/numpy/numpy/issues/10500
    #   - https://github.com/yaml/pyyaml/issues/117
    #
    # Furthermore, the released version of Cython does not
    # have these issues fixed.
    #
    # While we are waiting on fixes for these, we build
    # from Git for now. Feel free to delete this conditional
    # branch if things start working again (you may need
    # to do this if these packages regress on Git HEAD.)
    as_jenkins pip install git+https://github.com/cython/cython.git
    as_jenkins pip install git+https://github.com/numpy/numpy.git
    as_jenkins pip install git+https://github.com/yaml/pyyaml.git
  else
    as_jenkins pip install numpy pyyaml
  fi

  as_jenkins pip install \
    future \
    hypothesis \
    protobuf \
    pytest \
    pillow \
    typing

  as_jenkins pip install mkl mkl-devel

  # SciPy does not support Python 3.7 or Python 2.7.9
  if [[ "$TRAVIS_PYTHON_VERSION" != nightly ]] && [[ "$TRAVIS_PYTHON_VERSION" != "2.7.9" ]]; then
    as_jenkins pip install scipy==1.1.0 scikit-image librosa>=0.6.2
  fi

  # Install psutil for dataloader tests
  as_jenkins pip install psutil

  # Cleanup package manager
  apt-get autoclean && apt-get clean
  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
fi
@ -1,20 +0,0 @@
#!/bin/bash

set -ex

# Mirror jenkins user in container
echo "jenkins:x:1014:1014::/var/lib/jenkins:" >> /etc/passwd
echo "jenkins:x:1014:" >> /etc/group

# Create $HOME
mkdir -p /var/lib/jenkins
chown jenkins:jenkins /var/lib/jenkins
mkdir -p /var/lib/jenkins/.ccache
chown jenkins:jenkins /var/lib/jenkins/.ccache

# Allow writing to /usr/local (for make install)
chown jenkins:jenkins /usr/local

# Allow sudo
# TODO: Maybe we shouldn't
echo 'jenkins ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/jenkins
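
For reference, a quick sanity check (not part of the original script) that the mirrored user and permissions behave as intended:

    id jenkins                                   # expect uid=1014(jenkins) gid=1014(jenkins)
    sudo -u jenkins touch /usr/local/probe && rm /usr/local/probe   # /usr/local is writable
    sudo -u jenkins sudo -n true                 # passwordless sudo works
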
@ -1,57 +0,0 @@
#!/bin/bash

set -ex

# This function installs protobuf 2.6
install_protobuf_26() {
  pb_dir="/usr/temp_pb_install_dir"
  mkdir -p $pb_dir

  # On the nvidia/cuda:9-cudnn7-devel-centos7 image we need this symlink or
  # else it will fail with
  # g++: error: ./../lib64/crti.o: No such file or directory
  ln -s /usr/lib64 "$pb_dir/lib64"

  curl -LO "https://github.com/google/protobuf/releases/download/v2.6.1/protobuf-2.6.1.tar.gz"
  tar -xvz -C "$pb_dir" --strip-components 1 -f protobuf-2.6.1.tar.gz
  pushd "$pb_dir" && ./configure && make && make check && sudo make install && sudo ldconfig
  popd
  rm -rf $pb_dir
}

install_ubuntu() {
  apt-get update
  apt-get install -y --no-install-recommends \
    libopencv-dev \
    libavcodec-dev

  # Cleanup
  apt-get autoclean && apt-get clean
  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
}

install_centos() {
  # Need EPEL for many packages we depend on.
  # See http://fedoraproject.org/wiki/EPEL
  yum --enablerepo=extras install -y epel-release

  yum install -y \
    opencv-devel \
    ffmpeg-devel

  # Cleanup
  yum clean all
  rm -rf /var/cache/yum
  rm -rf /var/lib/yum/yumdb
  rm -rf /var/lib/yum/history
}

# Install base packages depending on the base OS
if [ -f /etc/lsb-release ]; then
  install_ubuntu
elif [ -f /etc/os-release ]; then
  install_centos
else
  echo "Unable to determine OS..."
  exit 1
fi

File diff suppressed because it is too large
@ -1,85 +0,0 @@
ARG UBUNTU_VERSION
ARG CUDA_VERSION
ARG CUDNN_VERSION

FROM nvidia/cuda:${CUDA_VERSION}-cudnn${CUDNN_VERSION}-devel-ubuntu${UBUNTU_VERSION}

ARG UBUNTU_VERSION
ARG CUDA_VERSION
ARG CUDNN_VERSION

ENV DEBIAN_FRONTEND noninteractive

# Install common dependencies (so that this step can be cached separately)
ARG EC2
ADD ./common/install_base.sh install_base.sh
RUN bash ./install_base.sh && rm install_base.sh

# Install user
ADD ./common/install_user.sh install_user.sh
RUN bash ./install_user.sh && rm install_user.sh

# Install katex
ARG KATEX
ADD ./common/install_katex.sh install_katex.sh
RUN bash ./install_katex.sh && rm install_katex.sh

# Install conda
ENV PATH /opt/conda/bin:$PATH
ARG ANACONDA_PYTHON_VERSION
ADD ./common/install_conda.sh install_conda.sh
RUN bash ./install_conda.sh && rm install_conda.sh

# Install gcc
ARG GCC_VERSION
ADD ./common/install_gcc.sh install_gcc.sh
RUN bash ./install_gcc.sh && rm install_gcc.sh

# Install non-standard Python versions (via Travis binaries)
ARG TRAVIS_PYTHON_VERSION
ENV PATH /opt/python/$TRAVIS_PYTHON_VERSION/bin:$PATH
ADD ./common/install_travis_python.sh install_travis_python.sh
RUN bash ./install_travis_python.sh && rm install_travis_python.sh

# (optional) Install protobuf for ONNX
ARG PROTOBUF
ADD ./common/install_protobuf.sh install_protobuf.sh
RUN if [ -n "${PROTOBUF}" ]; then bash ./install_protobuf.sh; fi
RUN rm install_protobuf.sh
ENV INSTALLED_PROTOBUF ${PROTOBUF}

# (optional) Install database packages like LMDB and LevelDB
ARG DB
ADD ./common/install_db.sh install_db.sh
RUN if [ -n "${DB}" ]; then bash ./install_db.sh; fi
RUN rm install_db.sh
ENV INSTALLED_DB ${DB}

# (optional) Install vision packages like OpenCV and ffmpeg
ARG VISION
ADD ./common/install_vision.sh install_vision.sh
RUN if [ -n "${VISION}" ]; then bash ./install_vision.sh; fi
RUN rm install_vision.sh
ENV INSTALLED_VISION ${VISION}

# Install ccache/sccache (do this last, so we get priority in PATH)
ADD ./common/install_cache.sh install_cache.sh
ENV PATH /opt/cache/bin:$PATH
RUN bash ./install_cache.sh && rm install_cache.sh
ENV CUDA_NVCC_EXECUTABLE=/opt/cache/lib/nvcc

# Add jni.h for java host build
ADD ./common/install_jni.sh install_jni.sh
ADD ./java/jni.h jni.h
RUN bash ./install_jni.sh && rm install_jni.sh

# Include BUILD_ENVIRONMENT environment variable in image
ARG BUILD_ENVIRONMENT
ENV BUILD_ENVIRONMENT ${BUILD_ENVIRONMENT}

# AWS specific CUDA build guidance
ENV TORCH_CUDA_ARCH_LIST Maxwell
ENV TORCH_NVCC_FLAGS "-Xfatbin -compress-all"

USER jenkins
CMD ["bash"]
@ -1,114 +0,0 @@
ARG UBUNTU_VERSION

FROM ubuntu:${UBUNTU_VERSION}

ARG UBUNTU_VERSION

ENV DEBIAN_FRONTEND noninteractive

# Install common dependencies (so that this step can be cached separately)
ARG EC2
ADD ./common/install_base.sh install_base.sh
RUN bash ./install_base.sh && rm install_base.sh

# Install clang
ARG LLVMDEV
ARG CLANG_VERSION
ADD ./common/install_clang.sh install_clang.sh
RUN bash ./install_clang.sh && rm install_clang.sh

# (optional) Install thrift.
ARG THRIFT
ADD ./common/install_thrift.sh install_thrift.sh
RUN if [ -n "${THRIFT}" ]; then bash ./install_thrift.sh; fi
RUN rm install_thrift.sh
ENV INSTALLED_THRIFT ${THRIFT}

# Install user
ADD ./common/install_user.sh install_user.sh
RUN bash ./install_user.sh && rm install_user.sh

# Install katex
ARG KATEX
ADD ./common/install_katex.sh install_katex.sh
RUN bash ./install_katex.sh && rm install_katex.sh

# Install conda
ENV PATH /opt/conda/bin:$PATH
ARG ANACONDA_PYTHON_VERSION
ADD ./common/install_conda.sh install_conda.sh
RUN bash ./install_conda.sh && rm install_conda.sh

# Install gcc
ARG GCC_VERSION
ADD ./common/install_gcc.sh install_gcc.sh
RUN bash ./install_gcc.sh && rm install_gcc.sh

# Install non-standard Python versions (via Travis binaries)
ARG TRAVIS_PYTHON_VERSION
ENV PATH /opt/python/$TRAVIS_PYTHON_VERSION/bin:$PATH
ADD ./common/install_travis_python.sh install_travis_python.sh
RUN bash ./install_travis_python.sh && rm install_travis_python.sh

# (optional) Install protobuf for ONNX
ARG PROTOBUF
ADD ./common/install_protobuf.sh install_protobuf.sh
RUN if [ -n "${PROTOBUF}" ]; then bash ./install_protobuf.sh; fi
RUN rm install_protobuf.sh
ENV INSTALLED_PROTOBUF ${PROTOBUF}

# (optional) Install database packages like LMDB and LevelDB
ARG DB
ADD ./common/install_db.sh install_db.sh
RUN if [ -n "${DB}" ]; then bash ./install_db.sh; fi
RUN rm install_db.sh
ENV INSTALLED_DB ${DB}

# (optional) Install vision packages like OpenCV and ffmpeg
ARG VISION
ADD ./common/install_vision.sh install_vision.sh
RUN if [ -n "${VISION}" ]; then bash ./install_vision.sh; fi
RUN rm install_vision.sh
ENV INSTALLED_VISION ${VISION}

# (optional) Install Android NDK
ARG ANDROID
ARG ANDROID_NDK
ARG GRADLE_VERSION
ADD ./common/install_android.sh install_android.sh
ADD ./android/AndroidManifest.xml AndroidManifest.xml
ADD ./android/build.gradle build.gradle
RUN if [ -n "${ANDROID}" ]; then bash ./install_android.sh; fi
RUN rm install_android.sh
RUN rm AndroidManifest.xml
RUN rm build.gradle
ENV INSTALLED_ANDROID ${ANDROID}

# (optional) Install non-default CMake version
ARG CMAKE_VERSION
ADD ./common/install_cmake.sh install_cmake.sh
RUN if [ -n "${CMAKE_VERSION}" ]; then bash ./install_cmake.sh; fi
RUN rm install_cmake.sh

# (optional) Install non-default Ninja version
ARG NINJA_VERSION
ADD ./common/install_ninja.sh install_ninja.sh
RUN if [ -n "${NINJA_VERSION}" ]; then bash ./install_ninja.sh; fi
RUN rm install_ninja.sh

# Install ccache/sccache (do this last, so we get priority in PATH)
ADD ./common/install_cache.sh install_cache.sh
ENV PATH /opt/cache/bin:$PATH
RUN bash ./install_cache.sh && rm install_cache.sh

# Add jni.h for java host build
ADD ./common/install_jni.sh install_jni.sh
ADD ./java/jni.h jni.h
RUN bash ./install_jni.sh && rm install_jni.sh

# Include BUILD_ENVIRONMENT environment variable in image
ARG BUILD_ENVIRONMENT
ENV BUILD_ENVIRONMENT ${BUILD_ENVIRONMENT}

USER jenkins
CMD ["bash"]
@ -1,39 +0,0 @@
#!/usr/bin/env python3

import os
import subprocess
import sys
import tempfile

import generate_config_yml


CHECKED_IN_FILE = "config.yml"
REGENERATION_SCRIPT = "regenerate.sh"

PARENT_DIR = os.path.basename(os.path.dirname(os.path.abspath(__file__)))
README_PATH = os.path.join(PARENT_DIR, "README.md")

ERROR_MESSAGE_TEMPLATE = """
The checked-in CircleCI "%s" file does not match what was generated by the scripts.
Please re-run the "%s" script in the "%s" directory and commit the result. See "%s" for more information.
"""


def check_consistency():

    _, temp_filename = tempfile.mkstemp("-generated-config.yml")

    with open(temp_filename, "w") as fh:
        generate_config_yml.stitch_sources(fh)

    try:
        subprocess.check_call(["cmp", temp_filename, CHECKED_IN_FILE])
    except subprocess.CalledProcessError:
        sys.exit(ERROR_MESSAGE_TEMPLATE % (CHECKED_IN_FILE, REGENERATION_SCRIPT, PARENT_DIR, README_PATH))
    finally:
        os.remove(temp_filename)


if __name__ == "__main__":
    check_consistency()
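
Roughly, this is the check that fails CI when `config.yml` drifts from the generator sources; to reproduce it locally one would run something like (a sketch, assuming a checkout of the `.circleci` directory):

    cd .circleci
    python3 ensure-consistency.py   # exits non-zero and prints the error template on mismatch
    ./regenerate.sh                 # rewrites config.yml from the Python sources
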
@ -1,121 +0,0 @@
#!/usr/bin/env python3

"""
This script is the source of truth for config.yml.
Please see README.md in this directory for details.
"""

import os
import sys
import shutil
from collections import namedtuple, OrderedDict

import cimodel.data.pytorch_build_definitions as pytorch_build_definitions
import cimodel.data.binary_build_definitions as binary_build_definitions
import cimodel.data.caffe2_build_definitions as caffe2_build_definitions
import cimodel.lib.miniutils as miniutils
import cimodel.lib.miniyaml as miniyaml


class File(object):
    """
    Verbatim copy the contents of a file into config.yml
    """
    def __init__(self, filename):
        self.filename = filename

    def write(self, output_filehandle):
        with open(os.path.join("verbatim-sources", self.filename)) as fh:
            shutil.copyfileobj(fh, output_filehandle)


class FunctionGen(namedtuple('FunctionGen', 'function depth')):
    __slots__ = ()


class Treegen(FunctionGen):
    """
    Insert the content of a YAML tree into config.yml
    """

    def write(self, output_filehandle):
        build_dict = OrderedDict()
        self.function(build_dict)
        miniyaml.render(output_filehandle, build_dict, self.depth)


class Listgen(FunctionGen):
    """
    Insert the content of a YAML list into config.yml
    """
    def write(self, output_filehandle):
        miniyaml.render(output_filehandle, self.function(), self.depth)


def horizontal_rule():
    return "".join("#" * 78)


class Header(object):

    def __init__(self, title, summary=None):
        self.title = title
        self.summary_lines = summary or []

    def write(self, output_filehandle):
        text_lines = [self.title] + self.summary_lines
        comment_lines = ["# " + x for x in text_lines]
        lines = miniutils.sandwich([horizontal_rule()], comment_lines)

        for line in filter(None, lines):
            output_filehandle.write(line + "\n")


# Order of this list matters to the generated config.yml.
YAML_SOURCES = [
    File("header-section.yml"),
    File("commands.yml"),
    File("nightly-binary-build-defaults.yml"),
    Header("Build parameters"),
    File("pytorch-build-params.yml"),
    File("caffe2-build-params.yml"),
    File("binary-build-params.yml"),
    Header("Job specs"),
    File("pytorch-job-specs.yml"),
    File("caffe2-job-specs.yml"),
    File("binary-job-specs.yml"),
    File("job-specs-setup.yml"),
    File("job-specs-custom.yml"),
    File("binary_update_htmls.yml"),
    File("binary-build-tests.yml"),
    File("docker_build_job.yml"),
    File("workflows.yml"),
    Listgen(pytorch_build_definitions.get_workflow_jobs, 3),
    File("workflows-pytorch-macos-builds.yml"),
    File("workflows-pytorch-android-gradle-build.yml"),
    File("workflows-pytorch-ios-builds.yml"),
    File("workflows-pytorch-mobile-builds.yml"),
    File("workflows-pytorch-ge-config-tests.yml"),
    Listgen(caffe2_build_definitions.get_workflow_jobs, 3),
    File("workflows-binary-builds-smoke-subset.yml"),
    Listgen(binary_build_definitions.get_binary_smoke_test_jobs, 3),
    Listgen(binary_build_definitions.get_binary_build_jobs, 3),
    File("workflows-nightly-ios-binary-builds.yml"),
    File("workflows-nightly-android-binary-builds.yml"),
    Header("Nightly tests"),
    Listgen(binary_build_definitions.get_nightly_tests, 3),
    File("workflows-nightly-uploads-header.yml"),
    Listgen(binary_build_definitions.get_nightly_uploads, 3),
    File("workflows-s3-html.yml"),
    File("workflows-docker-builder.yml")
]


def stitch_sources(output_filehandle):
    for f in YAML_SOURCES:
        f.write(output_filehandle)


if __name__ == "__main__":

    stitch_sources(sys.stdout)
@ -1,8 +0,0 @@
#!/bin/bash -xe

# Allows this script to be invoked from any directory:
cd $(dirname "$0")

NEW_FILE=$(mktemp)
./generate_config_yml.py > $NEW_FILE
cp $NEW_FILE config.yml
@ -1,4 +0,0 @@
All the scripts in this directory are callable from `~/workspace/.circleci/scripts/foo.sh`.
Don't try to call them as `.circleci/scripts/foo.sh`; that won't
(necessarily) work. See Note [Workspace for CircleCI scripts] in
job-specs-setup.yml for more details.
@ -1,46 +0,0 @@
#!/bin/bash
set -eux -o pipefail

# This step runs on multiple executors with different envfile locations
if [[ "$(uname)" == Darwin ]]; then
  # macos executor (builds and tests)
  workdir="/Users/distiller/project"
elif [[ -d "/home/circleci/project" ]]; then
  # machine executor (binary tests)
  workdir="/home/circleci/project"
else
  # docker executor (binary builds)
  workdir="/"
fi

# It is very important that this stays in sync with binary_populate_env.sh
export PYTORCH_ROOT="$workdir/pytorch"
export BUILDER_ROOT="$workdir/builder"

# Clone the PyTorch branch
git clone https://github.com/pytorch/pytorch.git "$PYTORCH_ROOT"
pushd "$PYTORCH_ROOT"
if [[ -n "${CIRCLE_PR_NUMBER:-}" ]]; then
  # "smoke" binary build on PRs
  git fetch --force origin "pull/${CIRCLE_PR_NUMBER}/head:remotes/origin/pull/${CIRCLE_PR_NUMBER}"
  git reset --hard "$CIRCLE_SHA1"
  git checkout -q -B "$CIRCLE_BRANCH"
  git reset --hard "$CIRCLE_SHA1"
elif [[ -n "${CIRCLE_SHA1:-}" ]]; then
  # Scheduled workflows & "smoke" binary build on master on PR merges
  git reset --hard "$CIRCLE_SHA1"
  git checkout -q -B master
else
  echo "Can't tell what to checkout"
  exit 1
fi
git submodule update --init --recursive --quiet
echo "Using PyTorch from "
git --no-pager log --max-count 1
popd

# Clone the Builder master repo
git clone -q https://github.com/pytorch/builder.git "$BUILDER_ROOT"
pushd "$BUILDER_ROOT"
echo "Using builder from "
git --no-pager log --max-count 1
popd
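
To illustrate the PR branch plumbing above: for a hypothetical PR number 12345, the fetch creates a synthetic remote-tracking ref, and the double `git reset --hard` pins both the detached state and `$CIRCLE_BRANCH` to the exact commit under test:

    git fetch --force origin "pull/12345/head:remotes/origin/pull/12345"  # PR head -> synthetic remote ref
    git reset --hard "$CIRCLE_SHA1"       # move onto the exact commit CircleCI reported
    git checkout -q -B "$CIRCLE_BRANCH"   # (re)create the branch name locally
    git reset --hard "$CIRCLE_SHA1"       # make that branch point at the same commit
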
@ -1,44 +0,0 @@
#!/bin/bash

set -eux -o pipefail

# This step runs on multiple executors with different envfile locations
if [[ "$(uname)" == Darwin ]]; then
  envfile="/Users/distiller/project/env"
elif [[ -d "/home/circleci/project" ]]; then
  # machine executor (binary tests)
  envfile="/home/circleci/project/env"
else
  # docker executor (binary builds)
  envfile="/env"
fi

# TODO this is super hacky and ugly. Basically, the binary_update_html job does
# not have an env file, since it does not call binary_populate_env.sh, since it
# does not have a BUILD_ENVIRONMENT. So for this one case, which we detect by a
# lack of an env file, we manually export the environment variables that we
# need to install miniconda
if [[ ! -f "$envfile" ]]; then
  MINICONDA_ROOT="/home/circleci/project/miniconda"
  workdir="/home/circleci/project"
  retry () {
    $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
  }
  export -f retry
else
  source "$envfile"
fi

conda_sh="$workdir/install_miniconda.sh"
if [[ "$(uname)" == Darwin ]]; then
  retry curl -o "$conda_sh" https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
else
  retry curl -o "$conda_sh" https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
fi
chmod +x "$conda_sh"
"$conda_sh" -b -p "$MINICONDA_ROOT"
rm -f "$conda_sh"

# We can't actually add miniconda to the PATH in the envfile, because that
# breaks 'unbuffer' in Mac jobs. This is probably because conda comes with
# a tclsh, which then gets inserted before the tclsh needed in /usr/bin
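
The `retry` helper above is a simple exponential backoff: up to five attempts with 1, 2, 4, and 8 second pauses in between. A usage sketch (hypothetical command):

    retry () {
      $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
    }
    retry curl -fsSL -o miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
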
@ -1,38 +0,0 @@
#!/bin/bash
set -ex -o pipefail

echo ""
echo "DIR: $(pwd)"
WORKSPACE=/Users/distiller/workspace
PROJ_ROOT=/Users/distiller/project
export TCLLIBPATH="/usr/local/lib"
# Install conda
curl -o ~/Downloads/conda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
chmod +x ~/Downloads/conda.sh
/bin/bash ~/Downloads/conda.sh -b -p ~/anaconda
export PATH="~/anaconda/bin:${PATH}"
source ~/anaconda/bin/activate
# Install dependencies
conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing requests --yes
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
# sync submodules
cd ${PROJ_ROOT}
git submodule sync
git submodule update --init --recursive
# run build script
chmod a+x ${PROJ_ROOT}/scripts/build_ios.sh
echo "########################################################"
cat ${PROJ_ROOT}/scripts/build_ios.sh
echo "########################################################"
echo "IOS_ARCH: ${IOS_ARCH}"
echo "IOS_PLATFORM: ${IOS_PLATFORM}"
export BUILD_PYTORCH_MOBILE=1
export IOS_ARCH=${IOS_ARCH}
export IOS_PLATFORM=${IOS_PLATFORM}
unbuffer ${PROJ_ROOT}/scripts/build_ios.sh 2>&1 | ts
# store the binary
cd ${WORKSPACE}
DEST_DIR=${WORKSPACE}/ios
mkdir -p ${DEST_DIR}
cp -R ${PROJ_ROOT}/build_ios/install ${DEST_DIR}
mv ${DEST_DIR}/install ${DEST_DIR}/${IOS_ARCH}
@ -1,29 +0,0 @@
#!/bin/bash
set -ex -o pipefail

echo ""
echo "DIR: $(pwd)"
PROJ_ROOT=/Users/distiller/project
cd ${PROJ_ROOT}/ios/TestApp
# install fastlane
sudo gem install bundler && bundle install
# install certificates
echo "${IOS_CERT_KEY}" >> cert.txt
base64 --decode cert.txt -o Certificates.p12
rm cert.txt
bundle exec fastlane install_cert
# install the provisioning profile
PROFILE=TestApp_CI.mobileprovision
PROVISIONING_PROFILES=~/Library/MobileDevice/Provisioning\ Profiles
mkdir -pv "${PROVISIONING_PROFILES}"
cd "${PROVISIONING_PROFILES}"
echo "${IOS_SIGN_KEY}" >> cert.txt
base64 --decode cert.txt -o ${PROFILE}
rm cert.txt
# run the ruby build script
if ! [ -x "$(command -v xcodebuild)" ]; then
  echo 'Error: xcodebuild is not installed.'
  exit 1
fi
PROFILE=TestApp_CI
ruby ${PROJ_ROOT}/scripts/xcode_build.rb -i ${PROJ_ROOT}/build_ios/install -x ${PROJ_ROOT}/ios/TestApp/TestApp.xcodeproj -p ${IOS_PLATFORM} -c ${PROFILE} -t ${IOS_DEV_TEAM_ID}
@ -1,44 +0,0 @@
#!/bin/bash
set -ex -o pipefail

echo ""
echo "DIR: $(pwd)"
WORKSPACE=/Users/distiller/workspace
PROJ_ROOT=/Users/distiller/project
ARTIFACTS_DIR=${WORKSPACE}/ios
ls ${ARTIFACTS_DIR}
ZIP_DIR=${WORKSPACE}/zip
mkdir -p ${ZIP_DIR}/install/lib
mkdir -p ${ZIP_DIR}/src
# copy header files
cp -R ${ARTIFACTS_DIR}/arm64/include ${ZIP_DIR}/install/
# build a FAT binary
cd ${ZIP_DIR}/install/lib
target_libs=(libc10.a libclog.a libcpuinfo.a libeigen_blas.a libpytorch_qnnpack.a libtorch.a)
for lib in ${target_libs[*]}
do
  libs=(${ARTIFACTS_DIR}/x86_64/lib/${lib} ${ARTIFACTS_DIR}/arm64/lib/${lib})
  lipo -create "${libs[@]}" -o ${ZIP_DIR}/install/lib/${lib}
done
# for nnpack, we only support arm64 build
cp ${ARTIFACTS_DIR}/arm64/lib/libnnpack.a ./
lipo -i ${ZIP_DIR}/install/lib/*.a
# copy the umbrella header and license
cp ${PROJ_ROOT}/ios/LibTorch.h ${ZIP_DIR}/src/
cp ${PROJ_ROOT}/LICENSE ${ZIP_DIR}/
# zip the library
ZIPFILE=libtorch_ios_nightly_build.zip
cd ${ZIP_DIR}
# for testing
touch version.txt
echo $(date +%s) > version.txt
zip -r ${ZIPFILE} install src version.txt LICENSE
# upload to aws
brew install awscli
set +x
export AWS_ACCESS_KEY_ID=${AWS_S3_ACCESS_KEY_FOR_PYTORCH_BINARY_UPLOAD}
export AWS_SECRET_ACCESS_KEY=${AWS_S3_ACCESS_SECRET_FOR_PYTORCH_BINARY_UPLOAD}
set +x
# echo "AWS KEY: ${AWS_ACCESS_KEY_ID}"
# echo "AWS SECRET: ${AWS_SECRET_ACCESS_KEY}"
aws s3 cp ${ZIPFILE} s3://ossci-ios-build/ --acl public-read
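
For context, `lipo -create` merges one static library per architecture into a single fat archive, and `lipo -i` (or `-info`) reports which slices it contains; a sketch with hypothetical paths:

    lipo -create x86_64/libtorch.a arm64/libtorch.a -o fat/libtorch.a
    lipo -info fat/libtorch.a   # e.g. "Architectures in the fat file: ... x86_64 arm64"
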
@ -1,30 +0,0 @@
#!/bin/bash

echo "RUNNING ON $(uname -a) WITH $(nproc) CPUS AND $(free -m)"
set -eux -o pipefail
source /env

# Defaults here so they can be changed in one place
export MAX_JOBS=12

# Parse the parameters
if [[ "$PACKAGE_TYPE" == 'conda' ]]; then
  build_script='conda/build_pytorch.sh'
elif [[ "$DESIRED_CUDA" == cpu ]]; then
  build_script='manywheel/build_cpu.sh'
else
  build_script='manywheel/build.sh'
fi

# We want to call unbuffer, which calls tclsh which finds the expect
# package. The expect was installed by yum into /usr/bin so we want to
# find /usr/bin/tclsh, but this is shadowed by /opt/conda/bin/tclsh in
# the conda docker images, so we prepend it to the path here.
if [[ "$PACKAGE_TYPE" == 'conda' ]]; then
  mkdir /just_tclsh_bin
  ln -s /usr/bin/tclsh /just_tclsh_bin/tclsh
  export PATH=/just_tclsh_bin:$PATH
fi

# Build the package
SKIP_ALL_TESTS=1 unbuffer "/builder/$build_script" | ts
@ -1,60 +0,0 @@
#!/bin/bash

source /home/circleci/project/env
cat >/home/circleci/project/ci_test_script.sh <<EOL
# =================== The following code will be executed inside Docker container ===================
set -eux -o pipefail

# Set up Python
if [[ "$PACKAGE_TYPE" == conda ]]; then
  retry conda create -qyn testenv python="$DESIRED_PYTHON"
  source activate testenv >/dev/null
elif [[ "$DESIRED_PYTHON" == 2.7mu ]]; then
  export PATH="/opt/python/cp27-cp27mu/bin:\$PATH"
elif [[ "$DESIRED_PYTHON" == 3.8m ]]; then
  export PATH="/opt/python/cp38-cp38/bin:\$PATH"
elif [[ "$PACKAGE_TYPE" != libtorch ]]; then
  python_nodot="\$(echo $DESIRED_PYTHON | tr -d m.u)"
  export PATH="/opt/python/cp\$python_nodot-cp\${python_nodot}m/bin:\$PATH"
fi

# Install the package
# These network calls should not have 'retry's because they are installing
# locally and aren't actually network calls
# TODO there is duplicated and inconsistent test-python-env setup across this
# file, builder/smoke_test.sh, and builder/run_tests.sh, and also in the
# conda build scripts themselves. These should really be consolidated
pkg="/final_pkgs/\$(ls /final_pkgs)"
if [[ "$PACKAGE_TYPE" == conda ]]; then
  conda install -y "\$pkg" --offline
  if [[ "$DESIRED_CUDA" == 'cpu' ]]; then
    conda install -y cpuonly -c pytorch
  fi
  retry conda install -yq future numpy protobuf six
  if [[ "$DESIRED_CUDA" != 'cpu' ]]; then
    # DESIRED_CUDA is in format cu90 or cu100
    if [[ "${#DESIRED_CUDA}" == 4 ]]; then
      cu_ver="${DESIRED_CUDA:2:1}.${DESIRED_CUDA:3}"
    else
      cu_ver="${DESIRED_CUDA:2:2}.${DESIRED_CUDA:4}"
    fi
    retry conda install -yq -c pytorch "cudatoolkit=\${cu_ver}"
  fi
elif [[ "$PACKAGE_TYPE" != libtorch ]]; then
  pip install "\$pkg"
  retry pip install -q future numpy protobuf six
fi
if [[ "$PACKAGE_TYPE" == libtorch ]]; then
  pkg="\$(ls /final_pkgs/*-latest.zip)"
  unzip "\$pkg" -d /tmp
  cd /tmp/libtorch
fi

# Test the package
/builder/check_binary.sh
# =================== The above code will be executed inside Docker container ===================
EOL
echo
echo
echo "The script that will run in the next step is:"
cat /home/circleci/project/ci_test_script.sh
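
Worked examples of the `cu_ver` substring parsing above: `cu90` has length 4, so `${DESIRED_CUDA:2:1}.${DESIRED_CUDA:3}` yields `9.0`, while `cu100` takes the long branch and `${DESIRED_CUDA:2:2}.${DESIRED_CUDA:4}` yields `10.0`. A standalone sketch:

    DESIRED_CUDA=cu100
    if [[ "${#DESIRED_CUDA}" == 4 ]]; then
      cu_ver="${DESIRED_CUDA:2:1}.${DESIRED_CUDA:3}"   # cu90 -> 9.0
    else
      cu_ver="${DESIRED_CUDA:2:2}.${DESIRED_CUDA:4}"   # cu100 -> 10.0
    fi
    echo "$cu_ver"   # prints 10.0
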
@ -1,40 +0,0 @@
#!/bin/bash
# Do NOT set -x
source /home/circleci/project/env
set -eu -o pipefail
set +x
declare -x "AWS_ACCESS_KEY_ID=${PYTORCH_BINARY_AWS_ACCESS_KEY_ID}"
declare -x "AWS_SECRET_ACCESS_KEY=${PYTORCH_BINARY_AWS_SECRET_ACCESS_KEY}"
cat >/home/circleci/project/login_to_anaconda.sh <<EOL
set +x
echo "Trying to login to Anaconda"
yes | anaconda login \
    --username "$PYTORCH_BINARY_PJH5_CONDA_USERNAME" \
    --password "$PYTORCH_BINARY_PJH5_CONDA_PASSWORD"
set -x
EOL
chmod +x /home/circleci/project/login_to_anaconda.sh

#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!
# DO NOT TURN -x ON BEFORE THIS LINE
#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!
set -eux -o pipefail
export PATH="$MINICONDA_ROOT/bin:$PATH"

# Upload the package to the final location
pushd /home/circleci/project/final_pkgs
if [[ "$PACKAGE_TYPE" == conda ]]; then
  retry conda install -yq anaconda-client
  retry timeout 30 /home/circleci/project/login_to_anaconda.sh
  anaconda upload "$(ls)" -u pytorch-nightly --label main --no-progress --force
elif [[ "$PACKAGE_TYPE" == libtorch ]]; then
  retry pip install -q awscli
  s3_dir="s3://pytorch/libtorch/${PIP_UPLOAD_FOLDER}${DESIRED_CUDA}/"
  for pkg in $(ls); do
    retry aws s3 cp "$pkg" "$s3_dir" --acl public-read
  done
else
  retry pip install -q awscli
  s3_dir="s3://pytorch/whl/${PIP_UPLOAD_FOLDER}${DESIRED_CUDA}/"
  retry aws s3 cp "$(ls)" "$s3_dir" --acl public-read
fi
@ -1,24 +0,0 @@
#!/bin/bash
set -eux -o pipefail

source "/Users/distiller/project/env"
mkdir -p "$PYTORCH_FINAL_PACKAGE_DIR"

# For some reason `unbuffer` breaks if we change the PATH here, so we
# write a script with the PATH change in it and unbuffer the whole
# thing
build_script="$workdir/build_script.sh"
touch "$build_script"
chmod +x "$build_script"

# Build
cat >"$build_script" <<EOL
export PATH="$workdir/miniconda/bin:$PATH"
if [[ "$PACKAGE_TYPE" == conda ]]; then
  "$workdir/builder/conda/build_pytorch.sh"
else
  export TORCH_PACKAGE_NAME="$(echo $TORCH_PACKAGE_NAME | tr '-' '_')"
  "$workdir/builder/wheel/build_wheel.sh"
fi
EOL
unbuffer "$build_script" | ts
@ -1,34 +0,0 @@
#!/bin/bash
set -eux -o pipefail

source "/Users/distiller/project/env"
export "PATH=$workdir/miniconda/bin:$PATH"
pkg="$workdir/final_pkgs/$(ls $workdir/final_pkgs)"

# Create a new test env
# TODO cut all this out into a separate test job and have an entirely different
# miniconda
if [[ "$PACKAGE_TYPE" != libtorch ]]; then
  source deactivate || true
  conda create -qyn test python="$DESIRED_PYTHON"
  source activate test >/dev/null
fi

# Install the package
if [[ "$PACKAGE_TYPE" == libtorch ]]; then
  pkg="$(ls $workdir/final_pkgs/*-latest.zip)"
  unzip "$pkg" -d /tmp
  cd /tmp/libtorch
elif [[ "$PACKAGE_TYPE" == conda ]]; then
  conda install -y "$pkg" --offline
else
  pip install "$pkg" --no-index --no-dependencies -v
fi

# Test
if [[ "$PACKAGE_TYPE" == libtorch ]]; then
  $workdir/builder/check_binary.sh
else
  pushd "$workdir/pytorch"
  $workdir/builder/run_tests.sh "$PACKAGE_TYPE" "$DESIRED_PYTHON" "$DESIRED_CUDA"
fi
@ -1,40 +0,0 @@
#!/bin/bash
# Do NOT set -x
set -eu -o pipefail
set +x
export AWS_ACCESS_KEY_ID="${PYTORCH_BINARY_AWS_ACCESS_KEY_ID}"
export AWS_SECRET_ACCESS_KEY="${PYTORCH_BINARY_AWS_SECRET_ACCESS_KEY}"
cat >/Users/distiller/project/login_to_anaconda.sh <<EOL
set +x
echo "Trying to login to Anaconda"
yes | anaconda login \
    --username "$PYTORCH_BINARY_PJH5_CONDA_USERNAME" \
    --password "$PYTORCH_BINARY_PJH5_CONDA_PASSWORD"
set -x
EOL
chmod +x /Users/distiller/project/login_to_anaconda.sh

#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!
# DO NOT TURN -x ON BEFORE THIS LINE
#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!#!
set -eux -o pipefail

source "/Users/distiller/project/env"
export "PATH=$workdir/miniconda/bin:$PATH"

pushd "$workdir/final_pkgs"
if [[ "$PACKAGE_TYPE" == conda ]]; then
  retry conda install -yq anaconda-client
  retry /Users/distiller/project/login_to_anaconda.sh
  retry anaconda upload "$(ls)" -u pytorch-nightly --label main --no-progress --force
elif [[ "$PACKAGE_TYPE" == libtorch ]]; then
  retry pip install -q awscli
  s3_dir="s3://pytorch/libtorch/${PIP_UPLOAD_FOLDER}${DESIRED_CUDA}/"
  for pkg in $(ls); do
    retry aws s3 cp "$pkg" "$s3_dir" --acl public-read
  done
else
  retry pip install -q awscli
  s3_dir="s3://pytorch/whl/${PIP_UPLOAD_FOLDER}${DESIRED_CUDA}/"
  retry aws s3 cp "$(ls)" "$s3_dir" --acl public-read
fi
@ -1,133 +0,0 @@
#!/bin/bash
set -eux -o pipefail
export TZ=UTC

# We need to write an envfile to persist these variables to following
# steps, but the location of the envfile depends on the circleci executor
if [[ "$(uname)" == Darwin ]]; then
  # macos executor (builds and tests)
  workdir="/Users/distiller/project"
elif [[ -d "/home/circleci/project" ]]; then
  # machine executor (binary tests)
  workdir="/home/circleci/project"
else
  # docker executor (binary builds)
  workdir="/"
fi
envfile="$workdir/env"
touch "$envfile"
chmod +x "$envfile"

# Parse the BUILD_ENVIRONMENT into package type, python, and cuda
configs=($BUILD_ENVIRONMENT)
export PACKAGE_TYPE="${configs[0]}"
export DESIRED_PYTHON="${configs[1]}"
export DESIRED_CUDA="${configs[2]}"
export DESIRED_DEVTOOLSET="${configs[3]:-}"
if [[ "$PACKAGE_TYPE" == 'libtorch' ]]; then
  export BUILD_PYTHONLESS=1
fi

# Pick docker image
export DOCKER_IMAGE=${DOCKER_IMAGE:-}
if [[ -z "$DOCKER_IMAGE" ]]; then
  if [[ "$PACKAGE_TYPE" == conda ]]; then
    export DOCKER_IMAGE="pytorch/conda-cuda"
  elif [[ "$DESIRED_CUDA" == cpu ]]; then
    export DOCKER_IMAGE="pytorch/manylinux-cuda100"
  else
    export DOCKER_IMAGE="pytorch/manylinux-cuda${DESIRED_CUDA:2}"
  fi
fi

# Upload to parallel folder for devtoolsets
# All nightlies used to be devtoolset3, then devtoolset7 was added as a build
# option, so the upload was redirected to nightly/devtoolset7 to avoid
# conflicts with other binaries (there shouldn't be any conflicts). Now we are
# making devtoolset7 the default.
if [[ "$DESIRED_DEVTOOLSET" == 'devtoolset7' || "$DESIRED_DEVTOOLSET" == *"cxx11-abi"* || "$(uname)" == 'Darwin' ]]; then
  export PIP_UPLOAD_FOLDER='nightly/'
else
  # On linux machines, this shouldn't actually be called anymore. This is just
  # here for extra safety.
  export PIP_UPLOAD_FOLDER='nightly/devtoolset3/'
fi

# We put this here so that OVERRIDE_PACKAGE_VERSION below can read from it
export DATE="$(date -u +%Y%m%d)"
if [[ "$(uname)" == 'Darwin' ]] || [[ "$DESIRED_CUDA" == "cu101" ]] || [[ "$PACKAGE_TYPE" == conda ]]; then
  export PYTORCH_BUILD_VERSION="1.4.0.dev$DATE"
else
  export PYTORCH_BUILD_VERSION="1.4.0.dev$DATE+$DESIRED_CUDA"
fi
export PYTORCH_BUILD_NUMBER=1


JAVA_HOME=
BUILD_JNI=OFF
if [[ "$PACKAGE_TYPE" == libtorch ]]; then
  POSSIBLE_JAVA_HOMES=()
  POSSIBLE_JAVA_HOMES+=(/usr/local)
  POSSIBLE_JAVA_HOMES+=(/usr/lib/jvm/java-8-openjdk-amd64)
  POSSIBLE_JAVA_HOMES+=(/Library/Java/JavaVirtualMachines/*.jdk/Contents/Home)
  for JH in "${POSSIBLE_JAVA_HOMES[@]}" ; do
    if [[ -e "$JH/include/jni.h" ]] ; then
      echo "Found jni.h under $JH"
      JAVA_HOME="$JH"
      BUILD_JNI=ON
      break
    fi
  done
  if [ -z "$JAVA_HOME" ]; then
    echo "Did not find jni.h"
  fi
fi

cat >>"$envfile" <<EOL
# =================== The following code will be executed inside Docker container ===================
export TZ=UTC
echo "Running on $(uname -a) at $(date)"

export PACKAGE_TYPE="$PACKAGE_TYPE"
export DESIRED_PYTHON="$DESIRED_PYTHON"
export DESIRED_CUDA="$DESIRED_CUDA"
export LIBTORCH_VARIANT="${LIBTORCH_VARIANT:-}"
export BUILD_PYTHONLESS="${BUILD_PYTHONLESS:-}"
export DESIRED_DEVTOOLSET="$DESIRED_DEVTOOLSET"

export DATE="$DATE"
export NIGHTLIES_DATE_PREAMBLE=1.4.0.dev
export PYTORCH_BUILD_VERSION="$PYTORCH_BUILD_VERSION"
export PYTORCH_BUILD_NUMBER="$PYTORCH_BUILD_NUMBER"
export OVERRIDE_PACKAGE_VERSION="$PYTORCH_BUILD_VERSION"

# TODO: We don't need this anymore IIUC
export TORCH_PACKAGE_NAME='torch'
export TORCH_CONDA_BUILD_FOLDER='pytorch-nightly'

export USE_FBGEMM=1
export JAVA_HOME=$JAVA_HOME
export BUILD_JNI=$BUILD_JNI
export PIP_UPLOAD_FOLDER="$PIP_UPLOAD_FOLDER"
export DOCKER_IMAGE="$DOCKER_IMAGE"

export workdir="$workdir"
export MAC_PACKAGE_WORK_DIR="$workdir"
export PYTORCH_ROOT="$workdir/pytorch"
export BUILDER_ROOT="$workdir/builder"
export MINICONDA_ROOT="$workdir/miniconda"
export PYTORCH_FINAL_PACKAGE_DIR="$workdir/final_pkgs"

export CIRCLE_TAG="${CIRCLE_TAG:-}"
export CIRCLE_SHA1="$CIRCLE_SHA1"
export CIRCLE_PR_NUMBER="${CIRCLE_PR_NUMBER:-}"
export CIRCLE_BRANCH="$CIRCLE_BRANCH"
# =================== The above code will be executed inside Docker container ===================
EOL

echo 'retry () {' >> "$envfile"
echo ' $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)' >> "$envfile"
echo '}' >> "$envfile"
echo 'export -f retry' >> "$envfile"

cat "$envfile"
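
To make the word-splitting above concrete: `configs=($BUILD_ENVIRONMENT)` relies on the job name being a space-separated tuple. A sketch with a hypothetical value:

    BUILD_ENVIRONMENT="manywheel 3.7m cu100 devtoolset7"   # hypothetical example
    configs=($BUILD_ENVIRONMENT)
    echo "${configs[0]}"   # manywheel   -> PACKAGE_TYPE
    echo "${configs[1]}"   # 3.7m        -> DESIRED_PYTHON
    echo "${configs[2]}"   # cu100       -> DESIRED_CUDA
    echo "${configs[3]}"   # devtoolset7 -> DESIRED_DEVTOOLSET
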
@ -1,48 +0,0 @@
#!/bin/bash

# This section is used in the binary_test and smoke_test jobs. It expects
# 'binary_populate_env' to have populated /home/circleci/project/env and it
# expects another section to populate /home/circleci/project/ci_test_script.sh
# with the code to run in the docker

# Expect all needed environment variables to be written to this file
source /home/circleci/project/env
echo "Running the following code in Docker"
cat /home/circleci/project/ci_test_script.sh
echo
echo
set -eux -o pipefail

# Expect actual code to be written to this file
chmod +x /home/circleci/project/ci_test_script.sh

# Run the docker
if [ -n "${USE_CUDA_DOCKER_RUNTIME:-}" ]; then
  export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --runtime=nvidia -t -d "${DOCKER_IMAGE}")
else
  export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d "${DOCKER_IMAGE}")
fi

# Copy the envfile and script with all the code to run into the docker.
docker cp /home/circleci/project/. "$id:/circleci_stuff"

# Copy built packages into the docker to test. This should only exist on the
# binary test jobs. The package should've been created from a binary build job,
# which persisted the package to a CircleCI workspace, which this job then
# copies into a GPU enabled docker for testing
if [[ -d "/home/circleci/project/final_pkgs" ]]; then
  docker cp /home/circleci/project/final_pkgs "$id:/final_pkgs"
fi

# Copy the needed repos into the docker. These do not exist in the smoke test
# jobs, since the smoke test jobs do not need the PyTorch source code.
if [[ -d "$PYTORCH_ROOT" ]]; then
  docker cp "$PYTORCH_ROOT" "$id:/pytorch"
fi
if [[ -d "$BUILDER_ROOT" ]]; then
  docker cp "$BUILDER_ROOT" "$id:/builder"
fi

# Execute the test script that was populated by an earlier section
export COMMAND='((echo "source /circleci_stuff/env && /circleci_stuff/ci_test_script.sh") | docker exec -i "$id" bash) 2>&1'
echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts
@ -1,85 +0,0 @@
#!/usr/bin/env bash
set -eux -o pipefail

export ANDROID_NDK_HOME=/opt/ndk
export ANDROID_HOME=/opt/android/sdk

# Must be in sync with GRADLE_VERSION in docker image for android
# https://github.com/pietern/pytorch-dockerfiles/blob/master/build.sh#L155
export GRADLE_VERSION=4.10.3
export GRADLE_HOME=/opt/gradle/gradle-$GRADLE_VERSION
export GRADLE_PATH=$GRADLE_HOME/bin/gradle

BUILD_ANDROID_INCLUDE_DIR_x86=~/workspace/build_android/install/include
BUILD_ANDROID_LIB_DIR_x86=~/workspace/build_android/install/lib

BUILD_ANDROID_INCLUDE_DIR_x86_64=~/workspace/build_android_install_x86_64/install/include
BUILD_ANDROID_LIB_DIR_x86_64=~/workspace/build_android_install_x86_64/install/lib

BUILD_ANDROID_INCLUDE_DIR_arm_v7a=~/workspace/build_android_install_arm_v7a/install/include
BUILD_ANDROID_LIB_DIR_arm_v7a=~/workspace/build_android_install_arm_v7a/install/lib

BUILD_ANDROID_INCLUDE_DIR_arm_v8a=~/workspace/build_android_install_arm_v8a/install/include
BUILD_ANDROID_LIB_DIR_arm_v8a=~/workspace/build_android_install_arm_v8a/install/lib

PYTORCH_ANDROID_SRC_MAIN_DIR=~/workspace/android/pytorch_android/src/main

JNI_INCLUDE_DIR=${PYTORCH_ANDROID_SRC_MAIN_DIR}/cpp/libtorch_include
mkdir -p $JNI_INCLUDE_DIR

JNI_LIBS_DIR=${PYTORCH_ANDROID_SRC_MAIN_DIR}/jniLibs
mkdir -p $JNI_LIBS_DIR

ln -s ${BUILD_ANDROID_INCLUDE_DIR_x86} ${JNI_INCLUDE_DIR}/x86
ln -s ${BUILD_ANDROID_LIB_DIR_x86} ${JNI_LIBS_DIR}/x86

if [[ "${BUILD_ENVIRONMENT}" != *-gradle-build-only-x86_32* ]]; then
  ln -s ${BUILD_ANDROID_INCLUDE_DIR_x86_64} ${JNI_INCLUDE_DIR}/x86_64
  ln -s ${BUILD_ANDROID_LIB_DIR_x86_64} ${JNI_LIBS_DIR}/x86_64

  ln -s ${BUILD_ANDROID_INCLUDE_DIR_arm_v7a} ${JNI_INCLUDE_DIR}/armeabi-v7a
  ln -s ${BUILD_ANDROID_LIB_DIR_arm_v7a} ${JNI_LIBS_DIR}/armeabi-v7a

  ln -s ${BUILD_ANDROID_INCLUDE_DIR_arm_v8a} ${JNI_INCLUDE_DIR}/arm64-v8a
  ln -s ${BUILD_ANDROID_LIB_DIR_arm_v8a} ${JNI_LIBS_DIR}/arm64-v8a
fi

env
echo "BUILD_ENVIRONMENT:$BUILD_ENVIRONMENT"

GRADLE_PARAMS="-p android assembleRelease --debug --stacktrace"
if [[ "${BUILD_ENVIRONMENT}" == *-gradle-build-only-x86_32* ]]; then
  GRADLE_PARAMS+=" -PABI_FILTERS=x86"
fi

if [ -n "${GRADLE_OFFLINE:-}" ]; then
  GRADLE_PARAMS+=" --offline"
fi

# touch gradle cache files to prevent expiration
while IFS= read -r -d '' file
do
  touch "$file" || true
done < <(find /var/lib/jenkins/.gradle -type f -print0)

env

export GRADLE_LOCAL_PROPERTIES=~/workspace/android/local.properties
rm -f $GRADLE_LOCAL_PROPERTIES
echo "sdk.dir=/opt/android/sdk" >> $GRADLE_LOCAL_PROPERTIES
echo "ndk.dir=/opt/ndk" >> $GRADLE_LOCAL_PROPERTIES
echo "cmake.dir=/usr/local" >> $GRADLE_LOCAL_PROPERTIES

$GRADLE_PATH $GRADLE_PARAMS

find . -type f -name "*.a" -exec ls -lh {} \;

while IFS= read -r -d '' file
do
  echo
  echo "$file"
  ls -lah "$file"
  zipinfo -l "$file"
done < <(find . -type f -name '*.aar' -print0)

find . -type f -name '*.aar' -print | xargs tar cfvz ~/workspace/android/artifacts.tgz
@ -1,127 +0,0 @@
# =================== The following code **should** be executed inside Docker container ===================

# Install dependencies
sudo apt-get -y update
sudo apt-get -y install expect-dev

# This is where the local pytorch install in the docker image is located
pt_checkout="/var/lib/jenkins/workspace"

# Since we're cat-ing this file, we need to escape all $'s
echo "cpp_doc_push_script.sh: Invoked with $*"

# Argument 1: Where to copy the built documentation for the C++ API to
# (pytorch.github.io/$install_path)
install_path="$1"
if [ -z "$install_path" ]; then
  echo "error: cpp_doc_push_script.sh: install_path (arg1) not specified"
  exit 1
fi

# Argument 2: What version of the C++ API docs we are building.
version="$2"
if [ -z "$version" ]; then
  echo "error: cpp_doc_push_script.sh: version (arg2) not specified"
  exit 1
fi

is_master_doc=false
if [ "$version" == "master" ]; then
  is_master_doc=true
fi

# Argument 3: (optional) If present, we will NOT do any pushing. Used for testing.
dry_run=false
if [ "$3" != "" ]; then
  dry_run=true
fi

echo "install_path: $install_path version: $version dry_run: $dry_run"

# ======================== Building PyTorch C++ API Docs ========================

echo "Building PyTorch C++ API docs..."

# Clone the cppdocs repo
rm -rf cppdocs
git clone https://github.com/pytorch/cppdocs

set -ex

sudo apt-get -y install doxygen

# Generate ATen files
pushd "${pt_checkout}"
pip install -r requirements.txt
time python aten/src/ATen/gen.py \
  -s aten/src/ATen \
  -d build/aten/src/ATen \
  aten/src/ATen/Declarations.cwrap \
  aten/src/THNN/generic/THNN.h \
  aten/src/THCUNN/generic/THCUNN.h \
  aten/src/ATen/nn.yaml \
  aten/src/ATen/native/native_functions.yaml

# Copy some required files
cp aten/src/ATen/common_with_cwrap.py tools/shared/cwrap_common.py
cp torch/_utils_internal.py tools/shared

# Generate PyTorch files
time python tools/setup_helpers/generate_code.py \
  --declarations-path build/aten/src/ATen/Declarations.yaml \
  --nn-path aten/src/

# Build the docs
pushd docs/cpp
pip install breathe==4.11.1 bs4 lxml six
pip install --no-cache-dir -e "git+https://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme"
pip install exhale>=0.2.1
pip install sphinx==1.8.5
# Uncomment once it is fixed
# pip install -r requirements.txt
time make VERBOSE=1 html -j

popd
popd

pushd cppdocs

# Purge everything with some exceptions
mkdir /tmp/cppdocs-sync
mv _config.yml README.md /tmp/cppdocs-sync/
rm -rf *

# Copy over all the newly generated HTML
cp -r "${pt_checkout}"/docs/cpp/build/html/* .

# Copy back _config.yml
rm -rf _config.yml
mv /tmp/cppdocs-sync/* .

# Make a new commit
git add . || true
git status
git config user.email "soumith+bot@pytorch.org"
git config user.name "pytorchbot"
# If there aren't changes, don't make a commit; push is no-op
git commit -m "Automatic sync on $(date)" || true
git status

if [ "$dry_run" = false ]; then
  echo "Pushing to https://github.com/pytorch/cppdocs"
  set +x
  /usr/bin/expect <<DONE
spawn git push -u origin master
expect "Username*"
send "pytorchbot\n"
expect "Password*"
send "$::env(GITHUB_PYTORCHBOT_TOKEN)\n"
expect eof
DONE
  set -x
else
  echo "Skipping push due to dry_run"
fi

popd
# =================== The above code **should** be executed inside Docker container ===================
@ -1,44 +0,0 @@
#!/usr/bin/env bash
# DO NOT ADD 'set -x', so as not to reveal CircleCI secret context environment variables
set -eu -o pipefail

export ANDROID_NDK_HOME=/opt/ndk
export ANDROID_HOME=/opt/android/sdk

export GRADLE_VERSION=4.10.3
export GRADLE_HOME=/opt/gradle/gradle-$GRADLE_VERSION
export GRADLE_PATH=$GRADLE_HOME/bin/gradle

echo "BUILD_ENVIRONMENT:$BUILD_ENVIRONMENT"
ls -la ~/workspace

GRADLE_PROPERTIES=~/workspace/android/gradle.properties

IS_SNAPSHOT="$(grep 'VERSION_NAME=[0-9\.]\+-SNAPSHOT' "$GRADLE_PROPERTIES")"
echo "IS_SNAPSHOT:$IS_SNAPSHOT"

if [ -z "$IS_SNAPSHOT" ]; then
  echo "Error: version is not snapshot."
elif [ -z "$SONATYPE_NEXUS_USERNAME" ]; then
  echo "Error: missing env variable SONATYPE_NEXUS_USERNAME."
elif [ -z "$SONATYPE_NEXUS_PASSWORD" ]; then
  echo "Error: missing env variable SONATYPE_NEXUS_PASSWORD."
elif [ -z "$ANDROID_SIGN_KEY" ]; then
  echo "Error: missing env variable ANDROID_SIGN_KEY."
elif [ -z "$ANDROID_SIGN_PASS" ]; then
  echo "Error: missing env variable ANDROID_SIGN_PASS."
else
  GRADLE_LOCAL_PROPERTIES=~/workspace/android/local.properties
  rm -f $GRADLE_LOCAL_PROPERTIES

  echo "sdk.dir=/opt/android/sdk" >> $GRADLE_LOCAL_PROPERTIES
  echo "ndk.dir=/opt/ndk" >> $GRADLE_LOCAL_PROPERTIES

  echo "SONATYPE_NEXUS_USERNAME=${SONATYPE_NEXUS_USERNAME}" >> $GRADLE_PROPERTIES
  echo "SONATYPE_NEXUS_PASSWORD=${SONATYPE_NEXUS_PASSWORD}" >> $GRADLE_PROPERTIES

  echo "signing.keyId=${ANDROID_SIGN_KEY}" >> $GRADLE_PROPERTIES
  echo "signing.password=${ANDROID_SIGN_PASS}" >> $GRADLE_PROPERTIES

  $GRADLE_PATH -p ~/workspace/android/ uploadArchives
fi
@ -1,118 +0,0 @@
# =================== The following code **should** be executed inside Docker container ===================

# Install dependencies
sudo apt-get -y update
sudo apt-get -y install expect-dev

# This is where the local pytorch install in the docker image is located
pt_checkout="/var/lib/jenkins/workspace"

echo "python_doc_push_script.sh: Invoked with $*"

set -ex

# Argument 1: Where to copy the built documentation to
# (pytorch.github.io/$install_path)
install_path="$1"
if [ -z "$install_path" ]; then
  echo "error: python_doc_push_script.sh: install_path (arg1) not specified"
  exit 1
fi

# Argument 2: What version of the docs we are building.
version="$2"
if [ -z "$version" ]; then
  echo "error: python_doc_push_script.sh: version (arg2) not specified"
  exit 1
fi

is_master_doc=false
if [ "$version" == "master" ]; then
  is_master_doc=true
fi

# Argument 3: The branch to push to. Usually is "site"
branch="$3"
if [ -z "$branch" ]; then
  echo "error: python_doc_push_script.sh: branch (arg3) not specified"
  exit 1
fi

# Argument 4: (optional) If present, we will NOT do any pushing. Used for testing.
dry_run=false
if [ "$4" != "" ]; then
  dry_run=true
fi

echo "install_path: $install_path version: $version dry_run: $dry_run"

git clone https://github.com/pytorch/pytorch.github.io -b $branch
pushd pytorch.github.io

export LC_ALL=C
export PATH=/opt/conda/bin:$PATH

rm -rf pytorch || true

# Install TensorBoard in python 3 so torch.utils.tensorboard classes render
pip install -q https://s3.amazonaws.com/ossci-linux/wheels/tensorboard-1.14.0a0-py3-none-any.whl

# Get all the documentation sources, put them in one place
pushd "$pt_checkout"
git clone https://github.com/pytorch/vision
pushd vision
conda install -q pillow
time python setup.py install
popd
pushd docs
rm -rf source/torchvision
cp -a ../vision/docs/source source/torchvision

# Build the docs
pip -q install -r requirements.txt || true
if [ "$is_master_doc" = true ]; then
  make html
else
  make html-stable
fi

# Move them into the docs repo
popd
popd
git rm -rf "$install_path" || true
mv "$pt_checkout/docs/build/html" "$install_path"

# Add the version handler by search and replace.
# XXX: Consider moving this to the docs Makefile or site build
if [ "$is_master_doc" = true ]; then
  find "$install_path" -name "*.html" -print0 | xargs -0 perl -pi -w -e "s@master\s+\((\d\.\d\.[A-Fa-f0-9]+\+[A-Fa-f0-9]+)\s+\)@<a href='http://pytorch.org/docs/versions.html'>\1 \▼</a>@g"
else
  find "$install_path" -name "*.html" -print0 | xargs -0 perl -pi -w -e "s@master\s+\((\d\.\d\.[A-Fa-f0-9]+\+[A-Fa-f0-9]+)\s+\)@<a href='http://pytorch.org/docs/versions.html'>$version \▼</a>@g"
fi

git add "$install_path" || true
git status
git config user.email "soumith+bot@pytorch.org"
git config user.name "pytorchbot"
# If there aren't changes, don't make a commit; push is no-op
git commit -m "auto-generating sphinx docs" || true
git status

if [ "$dry_run" = false ]; then
  echo "Pushing to pytorch.github.io:$branch"
  set +x
  /usr/bin/expect <<DONE
spawn git push origin $branch
expect "Username*"
send "pytorchbot\n"
expect "Password*"
send "$::env(GITHUB_PYTORCHBOT_TOKEN)\n"
expect eof
DONE
  set -x
else
  echo "Skipping push due to dry_run"
fi

popd
# =================== The above code **should** be executed inside Docker container ===================
@ -1,88 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
set -ex -o pipefail
|
||||
|
||||
# Set up NVIDIA docker repo
|
||||
curl -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
|
||||
echo "deb https://nvidia.github.io/libnvidia-container/ubuntu16.04/amd64 /" | sudo tee -a /etc/apt/sources.list.d/nvidia-docker.list
|
||||
echo "deb https://nvidia.github.io/nvidia-container-runtime/ubuntu16.04/amd64 /" | sudo tee -a /etc/apt/sources.list.d/nvidia-docker.list
|
||||
echo "deb https://nvidia.github.io/nvidia-docker/ubuntu16.04/amd64 /" | sudo tee -a /etc/apt/sources.list.d/nvidia-docker.list
|
||||
|
||||
# Remove unnecessary sources
|
||||
sudo rm -f /etc/apt/sources.list.d/google-chrome.list
|
||||
sudo rm -f /etc/apt/heroku.list
|
||||
sudo rm -f /etc/apt/openjdk-r-ubuntu-ppa-xenial.list
|
||||
sudo rm -f /etc/apt/partner.list
|
||||
|
||||
sudo apt-get -y update
|
||||
sudo apt-get -y remove linux-image-generic linux-headers-generic linux-generic docker-ce
|
||||
# WARNING: Docker version is hardcoded here; you must update the
|
||||
# version number below for docker-ce and nvidia-docker2 to get newer
|
||||
# versions of Docker. We hardcode these numbers because we kept
|
||||
# getting broken CI when Docker would update their docker version,
|
||||
# and nvidia-docker2 would be out of date for a day until they
|
||||
# released a newer version of their package.
|
||||
#
|
||||
# How to figure out what the correct versions of these packages are?
|
||||
# My preferred method is to start a Docker instance of the correct
|
||||
# Ubuntu version (e.g., docker run -it ubuntu:16.04) and then ask
|
||||
# apt what the packages you need are. Note that the CircleCI image
|
||||
# comes with Docker.
|
||||
sudo apt-get -y install \
  linux-headers-$(uname -r) \
  linux-image-generic \
  moreutils \
  docker-ce=5:18.09.4~3-0~ubuntu-xenial \
  nvidia-container-runtime=2.0.0+docker18.09.4-1 \
  nvidia-docker2=2.0.3+docker18.09.4-1 \
  expect-dev

sudo pkill -SIGHUP dockerd

retry () {
  $* || $* || $* || $* || $*
}

retry sudo pip -q install awscli==1.16.35

if [ -n "${USE_CUDA_DOCKER_RUNTIME:-}" ]; then
  DRIVER_FN="NVIDIA-Linux-x86_64-430.40.run"
  wget "https://s3.amazonaws.com/ossci-linux/nvidia_driver/$DRIVER_FN"
  sudo /bin/bash "$DRIVER_FN" -s --no-drm || (sudo cat /var/log/nvidia-installer.log && false)
  nvidia-smi
fi

if [[ "${BUILD_ENVIRONMENT}" == *-build ]]; then
  echo "declare -x IN_CIRCLECI=1" > /home/circleci/project/env
  echo "declare -x COMMIT_SOURCE=${CIRCLE_BRANCH:-}" >> /home/circleci/project/env
  echo "declare -x SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2" >> /home/circleci/project/env
  if [ -n "${USE_CUDA_DOCKER_RUNTIME:-}" ]; then
    echo "declare -x TORCH_CUDA_ARCH_LIST=5.2" >> /home/circleci/project/env
  fi
  export SCCACHE_MAX_JOBS=`expr $(nproc) - 1`
  export MEMORY_LIMIT_MAX_JOBS=8  # the "large" resource class on CircleCI has 32 CPU cores; if we use all of them we'll OOM
  export MAX_JOBS=$(( ${SCCACHE_MAX_JOBS} > ${MEMORY_LIMIT_MAX_JOBS} ? ${MEMORY_LIMIT_MAX_JOBS} : ${SCCACHE_MAX_JOBS} ))
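  # (That is, MAX_JOBS = min(SCCACHE_MAX_JOBS, MEMORY_LIMIT_MAX_JOBS); bash has
  # no built-in min, so the $(( a > b ? b : a )) arithmetic ternary is used.)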
echo "declare -x MAX_JOBS=${MAX_JOBS}" >> /home/circleci/project/env
|
||||
|
||||
if [[ "${BUILD_ENVIRONMENT}" == *xla* ]]; then
|
||||
# This IAM user allows write access to S3 bucket for sccache & bazels3cache
|
||||
set +x
|
||||
echo "declare -x XLA_CLANG_CACHE_S3_BUCKET_NAME=${XLA_CLANG_CACHE_S3_BUCKET_NAME:-}" >> /home/circleci/project/env
|
||||
echo "declare -x AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_AND_XLA_BAZEL_S3_BUCKET_V2:-}" >> /home/circleci/project/env
|
||||
echo "declare -x AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_AND_XLA_BAZEL_S3_BUCKET_V2:-}" >> /home/circleci/project/env
|
||||
set -x
|
||||
else
|
||||
# This IAM user allows write access to S3 bucket for sccache
|
||||
set +x
|
||||
echo "declare -x XLA_CLANG_CACHE_S3_BUCKET_NAME=${XLA_CLANG_CACHE_S3_BUCKET_NAME:-}" >> /home/circleci/project/env
|
||||
echo "declare -x AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_S3_BUCKET_V4:-}" >> /home/circleci/project/env
|
||||
echo "declare -x AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_S3_BUCKET_V4:-}" >> /home/circleci/project/env
|
||||
set -x
|
||||
fi
|
||||
fi
|
||||
|
||||
# This IAM user only allows read-write access to ECR
|
||||
set +x
|
||||
export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_ECR_READ_WRITE_V4:-}
|
||||
export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_ECR_READ_WRITE_V4:-}
|
||||
eval $(aws ecr get-login --region us-east-1 --no-include-email)
|
||||
set -x
|
||||
@ -1,50 +0,0 @@
#!/usr/bin/env bash
set -eux -o pipefail

# Set up CircleCI GPG keys for apt, if needed
curl -L https://packagecloud.io/circleci/trusty/gpgkey | sudo apt-key add -

# Stop background apt updates. Hypothetically, the kill should not
# be necessary, because stop is supposed to send a kill signal to
# the process, but we've added it for good luck. Also
# hypothetically, it's supposed to be unnecessary to wait for
# the process to exit; we have that loop for good luck too.
# If you like, try deleting them and seeing if it works.
sudo systemctl stop apt-daily.service || true
sudo systemctl kill --kill-who=all apt-daily.service || true

sudo systemctl stop unattended-upgrades.service || true
sudo systemctl kill --kill-who=all unattended-upgrades.service || true

# Wait until `apt-get update` has been killed
while systemctl is-active --quiet apt-daily.service; do
  sleep 1
done
while systemctl is-active --quiet unattended-upgrades.service; do
  sleep 1
done

# See if we actually were successful
systemctl list-units --all | cat

# For good luck, try even harder to kill apt-get
sudo pkill apt-get || true

# For even better luck, purge unattended-upgrades
sudo apt-get purge -y unattended-upgrades

cat /etc/apt/sources.list

# For the bestest luck, kill again now
sudo pkill apt || true
sudo pkill dpkg || true

# Try to detect if apt/dpkg is stuck
if ps auxfww | grep '[a]pt'; then
  echo "WARNING: There are leftover apt processes; subsequent apt update will likely fail"
fi
if ps auxfww | grep '[d]pkg'; then
  echo "WARNING: There are leftover dpkg processes; subsequent apt update will likely fail"
fi
@ -1,140 +0,0 @@
import argparse
import re
import sys

# Modify this variable if you want to change the set of default jobs
# which are run on all pull requests.
#
# WARNING: Actually, this is a lie; we're currently also controlling
# the set of jobs to run via the Workflows filters in CircleCI config.

default_set = set([
    # PyTorch CPU
    # Selected oldest Python 2 version to ensure Python 2 coverage
    'pytorch-linux-xenial-py2.7.9',
    # PyTorch CUDA
    'pytorch-linux-xenial-cuda9-cudnn7-py3',
    # PyTorch ASAN
    'pytorch-linux-xenial-py3-clang5-asan',
    # PyTorch DEBUG
    'pytorch-linux-xenial-py3.6-gcc5.4',
    # LibTorch
    'pytorch-libtorch-linux-xenial-cuda9-cudnn7-py3',

    # Caffe2 CPU
    'caffe2-py2-mkl-ubuntu16.04',
    # Caffe2 CUDA
    'caffe2-py3.5-cuda10.1-cudnn7-ubuntu16.04',
    # Caffe2 ONNX
    'caffe2-onnx-py2-gcc5-ubuntu16.04',
    'caffe2-onnx-py3.6-clang7-ubuntu16.04',
    # Caffe2 Clang
    'caffe2-py2-clang7-ubuntu16.04',
    # Caffe2 CMake
    'caffe2-cmake-cuda9.0-cudnn7-ubuntu16.04',
    # Caffe2 CentOS
    'caffe2-py3.6-devtoolset7-cuda9.0-cudnn7-centos7',

    # Binaries
    'manywheel 2.7mu cpu devtoolset7',
    'libtorch 2.7m cpu devtoolset7',
    'libtorch 2.7m cpu gcc5.4_cxx11-abi',
    'libtorch 2.7 cpu',
    'libtorch-ios-11.2.1-nightly-x86_64-build',
    'libtorch-ios-11.2.1-nightly-arm64-build',
    'libtorch-ios-11.2.1-nightly-binary-build-upload',

    # Caffe2 Android
    'caffe2-py2-android-ubuntu16.04',
    # Caffe2 OSX
    'caffe2-py2-system-macos10.13',
    # PyTorch OSX
    'pytorch-macos-10.13-py3',
    'pytorch-macos-10.13-cuda9.2-cudnn7-py3',
    # PyTorch Android
    'pytorch-linux-xenial-py3-clang5-android-ndk-r19c-x86_32-build',
    'pytorch-linux-xenial-py3-clang5-android-ndk-r19',
    # PyTorch Android gradle
    'pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-build-only-x86_32',

    # Pytorch iOS builds
    'pytorch-ios-11.2.1-x86_64_build',
    'pytorch-ios-11.2.1-arm64_build',
    # PyTorch Mobile builds
    'pytorch-linux-xenial-py3-clang5-mobile-build',

    # Pytorch backward compatibility check
    'pytorch-linux-backward-compatibility-check-test',

    # XLA
    'pytorch-xla-linux-xenial-py3.6-clang7',

    # GraphExecutor config jobs
    'pytorch-linux-xenial-py3.6-gcc5.4-ge_config_simple-test',
    'pytorch-linux-xenial-py3.6-gcc5.4-ge_config_legacy-test',

    # Other checks
    'pytorch-short-perf-test-gpu',
    'pytorch-python-doc-push',
    'pytorch-cpp-doc-push',
])

# Collection of jobs that are *temporarily* excluded from running on PRs.
# Use this if there is a long-running job breakage that we can't fix with a
# single revert.
skip_override = {
    # example entry:
    # 'pytorch-cpp-doc-push': "https://github.com/pytorch/pytorch/issues/<related issue>"
}

# Takes in the commit message to analyze via stdin.
#
# This script will query Git and attempt to determine whether we should
# run the CI job in question.
#
# NB: Try to avoid hard-coding names here, so there are fewer places to
# update when jobs are updated/renamed.
#
# Semantics in the presence of multiple tags:
# - Let D be the set of default builds
# - Let S be the set of explicitly specified builds
# - Let O be the set of temporarily skipped builds
# - Run S \/ (D - O)
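#
# Illustrative invocation (the commit message is hypothetical; the job name
# is one of the BUILD_ENVIRONMENT strings above):
#
#   echo "Fix some bug [xla ci]" | python should_run_job.py pytorch-xla-linux-xenial-py3.6-clang7
#
# Here the marker spec 'xla' is a substring of the job name, so the script
# exits 0 and the job runs.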

parser = argparse.ArgumentParser()
parser.add_argument('build_environment')
args = parser.parse_args()

commit_msg = sys.stdin.read()

# Matches anything that looks like [foo ci] or [ci foo] or [foo test]
# or [test foo]
RE_MARKER = re.compile(r'\[(?:([^ \[\]]+) )?(?:ci|test)(?: ([^ \[\]]+))?\]')

markers = RE_MARKER.finditer(commit_msg)

for m in markers:
    if m.group(1) and m.group(2):
        print("Unrecognized marker: {}".format(m.group(0)))
        continue
    spec = m.group(1) or m.group(2)
    if spec is None:
        print("Unrecognized marker: {}".format(m.group(0)))
        continue
    if spec in args.build_environment or spec == 'all':
        print("Accepting {} due to commit marker {}".format(args.build_environment, m.group(0)))
        sys.exit(0)

skip_override_set = set(skip_override.keys())
should_run_set = default_set - skip_override_set
for spec in should_run_set:
    if spec in args.build_environment:
        print("Accepting {} as part of default set".format(args.build_environment))
        sys.exit(0)

print("Rejecting {}".format(args.build_environment))
for spec, issue in skip_override.items():
    if spec in args.build_environment:
        print("This job is temporarily excluded from running on PRs. Reason: {}".format(issue))
        break
sys.exit(1)
@ -1,29 +0,0 @@
#!/usr/bin/env bash
set -exu -o pipefail

SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

# Check if we should actually run
echo "BUILD_ENVIRONMENT: ${BUILD_ENVIRONMENT:-}"
echo "CIRCLE_PULL_REQUEST: ${CIRCLE_PULL_REQUEST:-}"
if [ -z "${BUILD_ENVIRONMENT:-}" ]; then
  echo "Cannot run should_run_job.sh if BUILD_ENVIRONMENT is not defined!"
  echo "CircleCI scripts are probably misconfigured."
  exit 1
fi
if ! [ -e "$SCRIPT_DIR/COMMIT_MSG" ]; then
  echo "Cannot run should_run_job.sh if you don't have COMMIT_MSG"
  echo "written out. Are you perhaps running the wrong copy of this script?"
  echo "You should be running the copy in ~/workspace; SCRIPT_DIR=$SCRIPT_DIR"
  exit 1
fi
if [ -n "${CIRCLE_PULL_REQUEST:-}" ]; then
  if [[ $CIRCLE_BRANCH != "ci-all/"* ]] && [[ $CIRCLE_BRANCH != "nightly" ]] && [[ $CIRCLE_BRANCH != "postnightly" ]]; then
    # Don't swallow "script doesn't exist"
    [ -e "$SCRIPT_DIR/should_run_job.py" ]
    if ! python "$SCRIPT_DIR/should_run_job.py" "${BUILD_ENVIRONMENT:-}" < "$SCRIPT_DIR/COMMIT_MSG"; then
      circleci step halt
      exit
    fi
  fi
fi
@ -1,44 +0,0 @@
#!/usr/bin/env python3

import urllib.request
import re

import cimodel.data.pytorch_build_definitions as pytorch_build_definitions
import cimodel.data.caffe2_build_definitions as caffe2_build_definitions

RE_VERSION = re.compile(r'allDeployedVersions = "([0-9,]+)"')

URL_TEMPLATE = (
    "https://raw.githubusercontent.com/pytorch/ossci-job-dsl/"
    "master/src/main/groovy/ossci/{}/DockerVersion.groovy"
)


def check_version(job, expected_version):
    url = URL_TEMPLATE.format(job)
    with urllib.request.urlopen(url) as f:
        contents = f.read().decode('utf-8')
    m = RE_VERSION.search(contents)
    if not m:
        raise RuntimeError(
            "Unbelievable! I could not find the variable allDeployedVersions in "
            "{}; did the organization of ossci-job-dsl change?\n\nFull contents:\n{}"
            .format(url, contents)
        )
    valid_versions = [int(v) for v in m.group(1).split(',')]
    if expected_version not in valid_versions:
        raise RuntimeError(
            "We configured {} to use Docker version {}; but this "
            "version is not deployed in {}. Non-deployed versions will be "
            "garbage collected two weeks after they are created. DO NOT LAND "
            "THIS TO MASTER without also updating ossci-job-dsl with this version."
            "\n\nDeployed versions: {}"
            .format(job, expected_version, url, m.group(1))
        )


def validate_docker_version():
    check_version('pytorch', pytorch_build_definitions.DOCKER_IMAGE_VERSION)
    check_version('caffe2', caffe2_build_definitions.DOCKER_IMAGE_VERSION)


if __name__ == "__main__":
    validate_docker_version()
@ -1,54 +0,0 @@
binary_linux_build_params: &binary_linux_build_params
  parameters:
    build_environment:
      type: string
      default: ""
    docker_image:
      type: string
      default: ""
    libtorch_variant:
      type: string
      default: ""
    resource_class:
      type: string
      default: "2xlarge+"
  environment:
    BUILD_ENVIRONMENT: << parameters.build_environment >>
    LIBTORCH_VARIANT: << parameters.libtorch_variant >>
    ANACONDA_USER: pytorch
  resource_class: << parameters.resource_class >>
  docker:
    - image: << parameters.docker_image >>

binary_linux_test_upload_params: &binary_linux_test_upload_params
  parameters:
    build_environment:
      type: string
      default: ""
    docker_image:
      type: string
      default: ""
    libtorch_variant:
      type: string
      default: ""
    resource_class:
      type: string
      default: "medium"
    use_cuda_docker_runtime:
      type: string
      default: ""
  environment:
    BUILD_ENVIRONMENT: << parameters.build_environment >>
    DOCKER_IMAGE: << parameters.docker_image >>
    USE_CUDA_DOCKER_RUNTIME: << parameters.use_cuda_docker_runtime >>
    LIBTORCH_VARIANT: << parameters.libtorch_variant >>
  resource_class: << parameters.resource_class >>

binary_mac_params: &binary_mac_params
  parameters:
    build_environment:
      type: string
      default: ""
  environment:
    BUILD_ENVIRONMENT: << parameters.build_environment >>
@ -1,20 +0,0 @@

# TODO: there is currently no testing for libtorch
# binary_linux_libtorch_2.7m_cpu_test:
#   environment:
#     BUILD_ENVIRONMENT: "libtorch 2.7m cpu"
#   resource_class: gpu.medium
#   <<: *binary_linux_test
#
# binary_linux_libtorch_2.7m_cu90_test:
#   environment:
#     BUILD_ENVIRONMENT: "libtorch 2.7m cu90"
#   resource_class: gpu.medium
#   <<: *binary_linux_test
#
# binary_linux_libtorch_2.7m_cu100_test:
#   environment:
#     BUILD_ENVIRONMENT: "libtorch 2.7m cu100"
#   resource_class: gpu.medium
#   <<: *binary_linux_test
@ -1,267 +0,0 @@
binary_linux_build:
  <<: *binary_linux_build_params
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    - run:
        <<: *binary_checkout
    - run:
        <<: *binary_populate_env
    - run:
        name: Install unbuffer and ts
        command: |
          set -eux -o pipefail
          source /env
          OS_NAME=`awk -F= '/^NAME/{print $2}' /etc/os-release`
          if [[ "$OS_NAME" == *"CentOS Linux"* ]]; then
            retry yum -q -y install epel-release
            retry yum -q -y install expect moreutils
          elif [[ "$OS_NAME" == *"Ubuntu"* ]]; then
            retry apt-get update
            retry apt-get -y install expect moreutils
            conda install -y -c eumetsat expect
            conda install -y cmake
          fi
    - run:
        name: Update compiler to devtoolset7
        command: |
          set -eux -o pipefail
          source /env
          if [[ "$DESIRED_DEVTOOLSET" == 'devtoolset7' ]]; then
            source "/builder/update_compiler.sh"

            # Env variables are not persisted into the next step
            echo "export PATH=$PATH" >> /env
            echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH" >> /env
          else
            echo "Not updating compiler"
          fi
    - run:
        name: Build
        no_output_timeout: "1h"
        command: |
          source "/pytorch/.circleci/scripts/binary_linux_build.sh"
    - persist_to_workspace:
        root: /
        paths: final_pkgs

# This should really just be another step of the binary_linux_build job above.
# This isn't possible right now b/c the build job uses the docker executor
# (otherwise they'd be really really slow) but this one uses the machine
# executor (b/c we have to run the docker with --runtime=nvidia, and we can't
# do that on the docker executor).
binary_linux_test:
  <<: *binary_linux_test_upload_params
  machine:
    image: ubuntu-1604:201903-01
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    # TODO: We shouldn't attach the workspace multiple times
    - attach_workspace:
        at: /home/circleci/project
    - setup_linux_system_environment
    - setup_ci_environment
    - run:
        <<: *binary_checkout
    - run:
        <<: *binary_populate_env
    - run:
        name: Prepare test code
        no_output_timeout: "1h"
        command: ~/workspace/.circleci/scripts/binary_linux_test.sh
    - run:
        <<: *binary_run_in_docker

binary_linux_upload:
  <<: *binary_linux_test_upload_params
  machine:
    image: ubuntu-1604:201903-01
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    - setup_linux_system_environment
    - setup_ci_environment
    - attach_workspace:
        at: /home/circleci/project
    - run:
        <<: *binary_populate_env
    - run:
        <<: *binary_install_miniconda
    - run:
        name: Upload
        no_output_timeout: "1h"
        command: ~/workspace/.circleci/scripts/binary_linux_upload.sh

# Nightly build smoke tests defaults
# These are the second-round smoke tests. These make sure that the binaries
# are correct from a user perspective, testing that they can be fetched from
# the cloud and are runnable. Note that the pytorch repo is never cloned into
# these jobs.
##############################################################################
smoke_linux_test:
  <<: *binary_linux_test_upload_params
  machine:
    image: ubuntu-1604:201903-01
  steps:
    - attach_workspace:
        at: ~/workspace
    - attach_workspace:
        at: /home/circleci/project
    - setup_linux_system_environment
    - setup_ci_environment
    - run:
        <<: *binary_checkout
    - run:
        <<: *binary_populate_env
    - run:
        name: Test
        no_output_timeout: "1h"
        command: |
          set -ex
          cat >/home/circleci/project/ci_test_script.sh \<<EOL
          # The following code will be executed inside Docker container
          set -eux -o pipefail
          /builder/smoke_test.sh
          # The above code will be executed inside Docker container
          EOL
    - run:
        <<: *binary_run_in_docker

smoke_mac_test:
  <<: *binary_linux_test_upload_params
  macos:
    xcode: "9.0"
  steps:
    - attach_workspace:
        at: ~/workspace
    - attach_workspace: # TODO - we can `cp` from ~/workspace
        at: /Users/distiller/project
    - run:
        <<: *binary_checkout
    - run:
        <<: *binary_populate_env
    - brew_update
    - run:
        <<: *binary_install_miniconda
    - run:
        name: Build
        no_output_timeout: "1h"
        command: |
          set -ex
          source "/Users/distiller/project/env"
          export "PATH=$workdir/miniconda/bin:$PATH"
          # TODO: unbuffer and ts this, but it breaks because miniconda
          # overwrites tclsh. unbuffer and ts aren't that important, though,
          # so they're just disabled for now.
          ./builder/smoke_test.sh

binary_mac_build:
  <<: *binary_mac_params
  macos:
    xcode: "9.0"
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    - run:
        <<: *binary_checkout
    - run:
        <<: *binary_populate_env
    - brew_update
    - run:
        <<: *binary_install_miniconda

    - run:
        name: Build
        no_output_timeout: "1h"
        command: |
          # Do not set -u here; there is some problem with CircleCI
          # variable expansion with PROMPT_COMMAND
          set -ex -o pipefail
          script="/Users/distiller/project/pytorch/.circleci/scripts/binary_macos_build.sh"
          cat "$script"
          source "$script"

    - run:
        name: Test
        no_output_timeout: "1h"
        command: |
          # Do not set -u here; there is some problem with CircleCI
          # variable expansion with PROMPT_COMMAND
          set -ex -o pipefail
          script="/Users/distiller/project/pytorch/.circleci/scripts/binary_macos_test.sh"
          cat "$script"
          source "$script"

    - persist_to_workspace:
        root: /Users/distiller/project
        paths: final_pkgs

binary_mac_upload: &binary_mac_upload
  <<: *binary_mac_params
  macos:
    xcode: "9.0"
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    - run:
        <<: *binary_checkout
    - run:
        <<: *binary_populate_env
    - brew_update
    - run:
        <<: *binary_install_miniconda
    - attach_workspace: # TODO - we can `cp` from ~/workspace
        at: /Users/distiller/project
    - run:
        name: Upload
        no_output_timeout: "10m"
        command: |
          script="/Users/distiller/project/pytorch/.circleci/scripts/binary_macos_upload.sh"
          cat "$script"
          source "$script"

binary_ios_build:
  <<: *pytorch_ios_params
  macos:
    xcode: "11.2.1"
  steps:
    - attach_workspace:
        at: ~/workspace
    - should_run_job
    - checkout
    - run_brew_for_ios_build
    - run:
        name: Build
        no_output_timeout: "1h"
        command: |
          script="/Users/distiller/project/.circleci/scripts/binary_ios_build.sh"
          cat "$script"
          source "$script"
    - run:
        name: Test
        no_output_timeout: "30m"
        command: |
          script="/Users/distiller/project/.circleci/scripts/binary_ios_test.sh"
          cat "$script"
          source "$script"
    - persist_to_workspace:
        root: /Users/distiller/workspace/
        paths: ios

binary_ios_upload:
  <<: *pytorch_ios_params
  macos:
    xcode: "11.2.1"
  steps:
    - attach_workspace:
        at: ~/workspace
    - should_run_job
    - checkout
    - run_brew_for_ios_build
    - run:
        name: Upload
        no_output_timeout: "1h"
        command: |
          script="/Users/distiller/project/.circleci/scripts/binary_ios_upload.sh"
          cat "$script"
          source "$script"
@ -1,96 +0,0 @@

# update_s3_htmls job
# These jobs create html files for every cpu/cu## folder in s3. The html
# files just store the names of all the files in that folder (which are
# the binary .whl files). This is to allow pip installs of the latest
# version in a folder without having to know the latest date. Pip has a
# -f flag that takes an html file listing a set of packages, and pip will
# then install the one with the most recent version.
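#
# An illustrative install against such an index page (the URL shape here is
# an assumption; substitute the real cpu/cu## folder):
#
#   pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html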
update_s3_htmls: &update_s3_htmls
|
||||
machine:
|
||||
image: ubuntu-1604:201903-01
|
||||
steps:
|
||||
- attach_workspace:
|
||||
at: ~/workspace
|
||||
- setup_linux_system_environment
|
||||
- run:
|
||||
<<: *binary_checkout
|
||||
# N.B. we do not run binary_populate_env. The only variable we need is
|
||||
# PIP_UPLOAD_FOLDER (which is 'nightly/' for the nightlies and '' for
|
||||
# releases, and sometimes other things for special cases). Instead we
|
||||
# expect PIP_UPLOAD_FOLDER to be passed directly in the env. This is
|
||||
# because, unlike all the other binary jobs, these jobs only get run once,
|
||||
# in a separate workflow. They are not a step in other binary jobs like
|
||||
# build, test, upload.
|
||||
#
|
||||
# You could attach this to every job, or include it in the upload step if
|
||||
# you wanted. You would need to add binary_populate_env in this case to
|
||||
# make sure it has the same upload folder as the job it's attached to. This
|
||||
# function is idempotent, so it won't hurt anything; it's just a little
|
||||
# unnescessary"
|
||||
- run:
|
||||
name: Update s3 htmls
|
||||
no_output_timeout: "1h"
|
||||
command: |
|
||||
set +x
|
||||
echo "declare -x \"AWS_ACCESS_KEY_ID=${PYTORCH_BINARY_AWS_ACCESS_KEY_ID}\"" >> /home/circleci/project/env
|
||||
echo "declare -x \"AWS_SECRET_ACCESS_KEY=${PYTORCH_BINARY_AWS_SECRET_ACCESS_KEY}\"" >> /home/circleci/project/env
|
||||
source /home/circleci/project/env
|
||||
set -eux -o pipefail
|
||||
retry () {
|
||||
$* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
|
||||
}
|
||||
retry pip install awscli==1.6
|
||||
"/home/circleci/project/builder/cron/update_s3_htmls.sh"
|
||||
|
||||
# Update s3 htmls for the nightlies
|
||||
update_s3_htmls_for_nightlies:
|
||||
environment:
|
||||
PIP_UPLOAD_FOLDER: "nightly/"
|
||||
<<: *update_s3_htmls
|
||||
|
||||
# Update s3 htmls for the nightlies for devtoolset7
|
||||
update_s3_htmls_for_nightlies_devtoolset7:
|
||||
environment:
|
||||
PIP_UPLOAD_FOLDER: "nightly/devtoolset7/"
|
||||
<<: *update_s3_htmls
|
||||
|
||||
|
||||
# upload_binary_logs job
|
||||
# The builder hud at pytorch.org/builder shows the sizes of all the binaries
|
||||
# over time. It gets this info from html files stored in S3, which this job
|
||||
# populates every day.
|
||||
upload_binary_sizes: &upload_binary_sizes
|
||||
machine:
|
||||
image: ubuntu-1604:201903-01
|
||||
steps:
|
||||
- attach_workspace:
|
||||
at: ~/workspace
|
||||
- setup_linux_system_environment
|
||||
- run:
|
||||
<<: *binary_checkout
|
||||
- run:
|
||||
<<: *binary_install_miniconda
|
||||
- run:
|
||||
name: Upload binary sizes
|
||||
no_output_timeout: "1h"
|
||||
command: |
|
||||
set +x
|
||||
echo "declare -x \"AWS_ACCESS_KEY_ID=${PYTORCH_BINARY_AWS_ACCESS_KEY_ID}\"" > /home/circleci/project/env
|
||||
echo "declare -x \"AWS_SECRET_ACCESS_KEY=${PYTORCH_BINARY_AWS_SECRET_ACCESS_KEY}\"" >> /home/circleci/project/env
|
||||
export DATE="$(date -u +%Y_%m_%d)"
|
||||
retry () {
|
||||
$* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
|
||||
}
|
||||
source /home/circleci/project/env
|
||||
set -eux -o pipefail
|
||||
|
||||
# This is hardcoded to match binary_install_miniconda.sh
|
||||
export PATH="/home/circleci/project/miniconda/bin:$PATH"
|
||||
# Not any awscli will work. Most won't. This one will work
|
||||
retry conda create -qyn aws36 python=3.6
|
||||
source activate aws36
|
||||
pip install awscli==1.16.46
|
||||
|
||||
"/home/circleci/project/builder/cron/upload_binary_sizes.sh"
|
||||
|
||||
@ -1,28 +0,0 @@
|
||||
caffe2_params: &caffe2_params
|
||||
parameters:
|
||||
build_environment:
|
||||
type: string
|
||||
default: ""
|
||||
build_ios:
|
||||
type: string
|
||||
default: ""
|
||||
docker_image:
|
||||
type: string
|
||||
default: ""
|
||||
use_cuda_docker_runtime:
|
||||
type: string
|
||||
default: ""
|
||||
build_only:
|
||||
type: string
|
||||
default: ""
|
||||
resource_class:
|
||||
type: string
|
||||
default: "large"
|
||||
environment:
|
||||
BUILD_ENVIRONMENT: << parameters.build_environment >>
|
||||
BUILD_IOS: << parameters.build_ios >>
|
||||
USE_CUDA_DOCKER_RUNTIME: << parameters.use_cuda_docker_runtime >>
|
||||
DOCKER_IMAGE: << parameters.docker_image >>
|
||||
BUILD_ONLY: << parameters.build_only >>
|
||||
resource_class: << parameters.resource_class >>
|
||||
|
||||
@ -1,200 +0,0 @@
caffe2_linux_build:
  <<: *caffe2_params
  machine:
    image: ubuntu-1604:201903-01
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    - setup_linux_system_environment
    - checkout
    - setup_ci_environment
    - run:
        name: Build
        no_output_timeout: "1h"
        command: |
          set -e
          cat >/home/circleci/project/ci_build_script.sh \<<EOL
          # =================== The following code will be executed inside Docker container ===================
          set -ex
          export BUILD_ENVIRONMENT="$BUILD_ENVIRONMENT"

          # Reinitialize submodules
          git submodule sync && git submodule update -q --init --recursive

          # conda must be added to the path for Anaconda builds (this location must be
          # the same as that in install_anaconda.sh used to build the docker image)
          if [[ "${BUILD_ENVIRONMENT}" == conda* ]]; then
            export PATH=/opt/conda/bin:$PATH
            sudo chown -R jenkins:jenkins '/opt/conda'
          fi

          # Build
          ./.jenkins/caffe2/build.sh

          # Show sccache stats if it is running
          if pgrep sccache > /dev/null; then
            sccache --show-stats
          fi
          # =================== The above code will be executed inside Docker container ===================
          EOL
          chmod +x /home/circleci/project/ci_build_script.sh

          echo "DOCKER_IMAGE: "${DOCKER_IMAGE}
          time docker pull ${DOCKER_IMAGE} >/dev/null
          export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${DOCKER_IMAGE})
          docker cp /home/circleci/project/. $id:/var/lib/jenkins/workspace

          export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && ./ci_build_script.sh") | docker exec -u jenkins -i "$id" bash) 2>&1'
          echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts
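          # (A sketch of what the two lines above accomplish: the quoted
          # pipeline is written to command.sh so it survives one extra level
          # of shell quoting, and `unbuffer ... | ts` keeps the container
          # output line-buffered and timestamps every line in the CI log.)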

          # Push intermediate Docker image for next phase to use
          if [ -z "${BUILD_ONLY}" ]; then
            if [[ "$BUILD_ENVIRONMENT" == *cmake* ]]; then
              export COMMIT_DOCKER_IMAGE=${DOCKER_IMAGE}-cmake-${CIRCLE_SHA1}
            else
              export COMMIT_DOCKER_IMAGE=${DOCKER_IMAGE}-${CIRCLE_SHA1}
            fi
            docker commit "$id" ${COMMIT_DOCKER_IMAGE}
            time docker push ${COMMIT_DOCKER_IMAGE}
          fi

caffe2_linux_test:
  <<: *caffe2_params
  machine:
    image: ubuntu-1604:201903-01
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    - setup_linux_system_environment
    - setup_ci_environment
    - run:
        name: Test
        no_output_timeout: "1h"
        command: |
          set -e
          # TODO: merge this into Caffe2 test.sh
          cat >/home/circleci/project/ci_test_script.sh \<<EOL
          # =================== The following code will be executed inside Docker container ===================
          set -ex

          export BUILD_ENVIRONMENT="$BUILD_ENVIRONMENT"

          # libdc1394 (dependency of OpenCV) expects /dev/raw1394 to exist...
          sudo ln /dev/null /dev/raw1394

          # conda must be added to the path for Anaconda builds (this location must be
          # the same as that in install_anaconda.sh used to build the docker image)
          if [[ "${BUILD_ENVIRONMENT}" == conda* ]]; then
            export PATH=/opt/conda/bin:$PATH
          fi

          # Upgrade SSL module to avoid old SSL warnings
          pip -q install --user --upgrade pyOpenSSL ndg-httpsclient pyasn1

          pip -q install --user -b /tmp/pip_install_onnx "file:///var/lib/jenkins/workspace/third_party/onnx#egg=onnx"

          # Build
          ./.jenkins/caffe2/test.sh

          # Remove benign core dumps.
          # These are tests for signal handling (including SIGABRT).
          rm -f ./crash/core.fatal_signal_as.*
          rm -f ./crash/core.logging_test.*
          # =================== The above code will be executed inside Docker container ===================
          EOL
          chmod +x /home/circleci/project/ci_test_script.sh

          if [[ "$BUILD_ENVIRONMENT" == *cmake* ]]; then
            export COMMIT_DOCKER_IMAGE=${DOCKER_IMAGE}-cmake-${CIRCLE_SHA1}
          else
            export COMMIT_DOCKER_IMAGE=${DOCKER_IMAGE}-${CIRCLE_SHA1}
          fi
          echo "DOCKER_IMAGE: "${COMMIT_DOCKER_IMAGE}
          time docker pull ${COMMIT_DOCKER_IMAGE} >/dev/null
          if [ -n "${USE_CUDA_DOCKER_RUNTIME}" ]; then
            export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --runtime=nvidia -t -d -w /var/lib/jenkins ${COMMIT_DOCKER_IMAGE})
          else
            export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${COMMIT_DOCKER_IMAGE})
          fi
          docker cp /home/circleci/project/. "$id:/var/lib/jenkins/workspace"

          export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && ./ci_test_script.sh") | docker exec -u jenkins -i "$id" bash) 2>&1'
          echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

caffe2_macos_build:
  <<: *caffe2_params
  macos:
    xcode: "9.0"
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    - checkout
    - run_brew_for_macos_build
    - run:
        name: Build
        no_output_timeout: "1h"
        command: |
          set -e

          export IN_CIRCLECI=1

          brew install cmake

          # Reinitialize submodules
          git submodule sync && git submodule update -q --init --recursive

          # Reinitialize path (see man page for path_helper(8))
          eval `/usr/libexec/path_helper -s`

          export PATH=/usr/local/opt/python/libexec/bin:/usr/local/bin:$PATH

          # Install Anaconda if we need to
          if [ -n "${CAFFE2_USE_ANACONDA}" ]; then
            rm -rf ${TMPDIR}/anaconda
            curl -o ${TMPDIR}/conda.sh https://repo.continuum.io/miniconda/Miniconda${ANACONDA_VERSION}-latest-MacOSX-x86_64.sh
            chmod +x ${TMPDIR}/conda.sh
            /bin/bash ${TMPDIR}/conda.sh -b -p ${TMPDIR}/anaconda
            rm -f ${TMPDIR}/conda.sh
            export PATH="${TMPDIR}/anaconda/bin:${PATH}"
            source ${TMPDIR}/anaconda/bin/activate
          fi

          pip -q install numpy

          # Install sccache
          sudo curl https://s3.amazonaws.com/ossci-macos/sccache --output /usr/local/bin/sccache
          sudo chmod +x /usr/local/bin/sccache
          export SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2

          # This IAM user allows write access to S3 bucket for sccache
          set +x
          export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_S3_BUCKET_V4}
          export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_S3_BUCKET_V4}
          set -x

          export SCCACHE_BIN=${PWD}/sccache_bin
          mkdir -p ${SCCACHE_BIN}
          if which sccache > /dev/null; then
            printf "#!/bin/sh\nexec sccache $(which clang++) \$*" > "${SCCACHE_BIN}/clang++"
            chmod a+x "${SCCACHE_BIN}/clang++"

            printf "#!/bin/sh\nexec sccache $(which clang) \$*" > "${SCCACHE_BIN}/clang"
            chmod a+x "${SCCACHE_BIN}/clang"

            export PATH="${SCCACHE_BIN}:$PATH"
          fi

          # Build
          if [ "${BUILD_IOS:-0}" -eq 1 ]; then
            unbuffer scripts/build_ios.sh 2>&1 | ts
          elif [ -n "${CAFFE2_USE_ANACONDA}" ]; then
            # All conda build logic should be in scripts/build_anaconda.sh
            unbuffer scripts/build_anaconda.sh 2>&1 | ts
          else
            unbuffer scripts/build_local.sh 2>&1 | ts
          fi

          # Show sccache stats if it is running
          if which sccache > /dev/null; then
            sccache --show-stats
          fi
@ -1,90 +0,0 @@
commands:
  # NB: This command must be run as the first command in a job. It
  # attaches the workspace at ~/workspace; this workspace is generated
  # by the setup job. Note that ~/workspace is not the default working
  # directory (that's ~/project).
  should_run_job:
    description: "Test if the job should run or not"
    steps:
      - attach_workspace:
          name: Attaching workspace
          at: ~/workspace
      - run:
          name: Should run job
          no_output_timeout: "2m"
          command: ~/workspace/.circleci/scripts/should_run_job.sh
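
  # A job that uses these commands typically begins its steps like this
  # (an illustrative sketch; see the job specs elsewhere in this config):
  #
  #   steps:
  #     - should_run_job                 # must come first: attaches ~/workspace
  #     - setup_linux_system_environment
  #     - setup_ci_environment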

  # This system setup script is meant to run before the CI-related scripts,
  # e.g., installing the Git client, checking out code, setting up the CI env,
  # and building/testing.
  setup_linux_system_environment:
    steps:
      - run:
          name: Set Up System Environment
          no_output_timeout: "1h"
          command: ~/workspace/.circleci/scripts/setup_linux_system_environment.sh

  setup_ci_environment:
    steps:
      - run:
          name: Set Up CI Environment After attach_workspace
          no_output_timeout: "1h"
          command: ~/workspace/.circleci/scripts/setup_ci_environment.sh

  brew_update:
    description: "Update Homebrew and install base formulae"
    steps:
      - run:
          name: Update Homebrew
          no_output_timeout: "10m"
          command: |
            set -ex

            # Update repositories manually.
            # Running `brew update` produces a comparison between the
            # current checkout and the updated checkout, which takes a
            # very long time because the existing checkout is 2y old.
            for path in $(find /usr/local/Homebrew -type d -name .git); do
              cd $path/..
              git fetch --depth=1 origin
              git reset --hard origin/master
            done

            export HOMEBREW_NO_AUTO_UPDATE=1

            # Install expect and moreutils so that we can call `unbuffer` and `ts`.
            # moreutils installs a `parallel` executable by default, which conflicts
            # with the executable from GNU `parallel`, so we must unlink GNU
            # `parallel` first and relink it afterwards.
            brew unlink parallel
            brew install moreutils
            brew link parallel --overwrite
            brew install expect

  brew_install:
    description: "Install Homebrew formulae"
    parameters:
      formulae:
        type: string
        default: ""
    steps:
      - run:
          name: Install << parameters.formulae >>
          no_output_timeout: "10m"
          command: |
            set -ex
            export HOMEBREW_NO_AUTO_UPDATE=1
            brew install << parameters.formulae >>

  run_brew_for_macos_build:
    steps:
      - brew_update
      - brew_install:
          formulae: libomp

  run_brew_for_ios_build:
    steps:
      - brew_update
      - brew_install:
          formulae: libtool
@ -1,21 +0,0 @@
docker_build_job:
  parameters:
    image_name:
      type: string
      default: ""
  machine:
    image: ubuntu-1604:201903-01
  resource_class: large
  environment:
    IMAGE_NAME: << parameters.image_name >>
  steps:
    - checkout
    - run:
        name: build_docker_image_<< parameters.image_name >>
        no_output_timeout: "1h"
        command: |
          set +x
          export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_DOCKER_BUILDER_V1}
          export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_DOCKER_BUILDER_V1}
          set -x
          cd .circleci/docker && ./build_docker.sh
@ -1,21 +0,0 @@
# WARNING: DO NOT EDIT THIS FILE DIRECTLY!!!
# See the README.md in this directory.

# IMPORTANT: To update the Docker image version, please first update
# https://github.com/pytorch/ossci-job-dsl/blob/master/src/main/groovy/ossci/pytorch/DockerVersion.groovy and
# https://github.com/pytorch/ossci-job-dsl/blob/master/src/main/groovy/ossci/caffe2/DockerVersion.groovy,
# and then update DOCKER_IMAGE_VERSION at the top of the following files:
# * cimodel/data/pytorch_build_definitions.py
# * cimodel/data/caffe2_build_definitions.py
# And the inline copies of the variable in
# * verbatim-sources/job-specs-custom.yml
# (grep for DOCKER_IMAGE)

version: 2.1

docker_config_defaults: &docker_config_defaults
  user: jenkins
  aws_auth:
    # This IAM user only allows read-write access to ECR
    aws_access_key_id: ${CIRCLECI_AWS_ACCESS_KEY_FOR_ECR_READ_WRITE_V4}
    aws_secret_access_key: ${CIRCLECI_AWS_SECRET_KEY_FOR_ECR_READ_WRITE_V4}
@ -1,474 +0,0 @@
pytorch_python_doc_push:
  environment:
    BUILD_ENVIRONMENT: pytorch-python-doc-push
    # TODO: stop hardcoding this
    DOCKER_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda9-cudnn7-py3:405"
  resource_class: large
  machine:
    image: ubuntu-1604:201903-01
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    - setup_linux_system_environment
    - setup_ci_environment
    - run:
        name: Doc Build and Push
        no_output_timeout: "1h"
        command: |
          set -ex
          export COMMIT_DOCKER_IMAGE=${DOCKER_IMAGE}-${CIRCLE_SHA1}
          echo "DOCKER_IMAGE: "${COMMIT_DOCKER_IMAGE}
          time docker pull ${COMMIT_DOCKER_IMAGE} >/dev/null
          export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${COMMIT_DOCKER_IMAGE})

          # master branch docs push
          if [[ "${CIRCLE_BRANCH}" == "master" ]]; then
            export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GITHUB_PYTORCHBOT_TOKEN=${GITHUB_PYTORCHBOT_TOKEN}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && . ./.circleci/scripts/python_doc_push_script.sh docs/master master site") | docker exec -u jenkins -i "$id" bash) 2>&1'

          # stable release docs push. Due to some circleci limitations, we keep
          # an eternal PR open for merging v1.2.0 -> master for this job.
          # XXX: The following code is only run on the v1.2.0 branch, which might
          # not be exactly the same as what you see here.
          elif [[ "${CIRCLE_BRANCH}" == "v1.2.0" ]]; then
            export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GITHUB_PYTORCHBOT_TOKEN=${GITHUB_PYTORCHBOT_TOKEN}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && . ./.circleci/scripts/python_doc_push_script.sh docs/stable 1.2.0 site dry_run") | docker exec -u jenkins -i "$id" bash) 2>&1'

          # For open PRs: do a dry_run of the docs build; don't push the build
          else
            export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GITHUB_PYTORCHBOT_TOKEN=${GITHUB_PYTORCHBOT_TOKEN}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && . ./.circleci/scripts/python_doc_push_script.sh docs/master master site dry_run") | docker exec -u jenkins -i "$id" bash) 2>&1'
          fi

          echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

          # Save the docs build so we can debug any problems
          export DEBUG_COMMIT_DOCKER_IMAGE=${COMMIT_DOCKER_IMAGE}-debug
          docker commit "$id" ${DEBUG_COMMIT_DOCKER_IMAGE}
          time docker push ${DEBUG_COMMIT_DOCKER_IMAGE}

pytorch_cpp_doc_push:
  environment:
    BUILD_ENVIRONMENT: pytorch-cpp-doc-push
    DOCKER_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda9-cudnn7-py3:405"
  resource_class: large
  machine:
    image: ubuntu-1604:201903-01
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    - setup_linux_system_environment
    - setup_ci_environment
    - run:
        name: Doc Build and Push
        no_output_timeout: "1h"
        command: |
          set -ex
          export COMMIT_DOCKER_IMAGE=${DOCKER_IMAGE}-${CIRCLE_SHA1}
          echo "DOCKER_IMAGE: "${COMMIT_DOCKER_IMAGE}
          time docker pull ${COMMIT_DOCKER_IMAGE} >/dev/null
          export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${COMMIT_DOCKER_IMAGE})

          # master branch docs push
          if [[ "${CIRCLE_BRANCH}" == "master" ]]; then
            export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GITHUB_PYTORCHBOT_TOKEN=${GITHUB_PYTORCHBOT_TOKEN}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && . ./.circleci/scripts/cpp_doc_push_script.sh docs/master master") | docker exec -u jenkins -i "$id" bash) 2>&1'

          # stable release docs push. Due to some circleci limitations, we keep
          # an eternal PR open (#16502) for merging v1.0.1 -> master for this job.
          # XXX: The following code is only run on the v1.0.1 branch, which might
          # not be exactly the same as what you see here.
          elif [[ "${CIRCLE_BRANCH}" == "v1.0.1" ]]; then
            export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GITHUB_PYTORCHBOT_TOKEN=${GITHUB_PYTORCHBOT_TOKEN}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && . ./.circleci/scripts/cpp_doc_push_script.sh docs/stable 1.0.1") | docker exec -u jenkins -i "$id" bash) 2>&1'

          # For open PRs: do a dry_run of the docs build; don't push the build
          else
            export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GITHUB_PYTORCHBOT_TOKEN=${GITHUB_PYTORCHBOT_TOKEN}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && . ./.circleci/scripts/cpp_doc_push_script.sh docs/master master dry_run") | docker exec -u jenkins -i "$id" bash) 2>&1'
          fi

          echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

          # Save the docs build so we can debug any problems
          export DEBUG_COMMIT_DOCKER_IMAGE=${COMMIT_DOCKER_IMAGE}-debug
          docker commit "$id" ${DEBUG_COMMIT_DOCKER_IMAGE}
          time docker push ${DEBUG_COMMIT_DOCKER_IMAGE}

pytorch_macos_10_13_py3_build:
  environment:
    BUILD_ENVIRONMENT: pytorch-macos-10.13-py3-build
  macos:
    xcode: "9.0"
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    - checkout
    - run_brew_for_macos_build
    - run:
        name: Build
        no_output_timeout: "1h"
        command: |
          set -e
          export IN_CIRCLECI=1

          # Install sccache
          sudo curl https://s3.amazonaws.com/ossci-macos/sccache --output /usr/local/bin/sccache
          sudo chmod +x /usr/local/bin/sccache
          export SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2

          # This IAM user allows write access to S3 bucket for sccache
          set +x
          export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_S3_BUCKET_V4}
          export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_S3_BUCKET_V4}
          set -x

          chmod a+x .jenkins/pytorch/macos-build.sh
          unbuffer .jenkins/pytorch/macos-build.sh 2>&1 | ts

          # copy with -a to preserve relative structure (e.g., symlinks), and be recursive
          cp -a ~/project ~/workspace

    - persist_to_workspace:
        root: ~/workspace
        paths:
          - miniconda3
          - project

pytorch_macos_10_13_py3_test:
  environment:
    BUILD_ENVIRONMENT: pytorch-macos-10.13-py3-test
  macos:
    xcode: "9.0"
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    # This workspace also carries binaries from the build job
    - should_run_job
    - run_brew_for_macos_build
    - run:
        name: Test
        no_output_timeout: "1h"
        command: |
          set -e
          export IN_CIRCLECI=1

          # copy with -a to preserve relative structure (e.g., symlinks), and be recursive
          cp -a ~/workspace/project/. ~/project

          chmod a+x .jenkins/pytorch/macos-test.sh
          unbuffer .jenkins/pytorch/macos-test.sh 2>&1 | ts
    - store_test_results:
        path: test/test-reports

pytorch_macos_10_13_cuda9_2_cudnn7_py3_build:
  environment:
    BUILD_ENVIRONMENT: pytorch-macos-10.13-cuda9.2-cudnn7-py3-build
  macos:
    xcode: "9.0"
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    - checkout
    - run_brew_for_macos_build
    - run:
        name: Build
        no_output_timeout: "1h"
        command: |
          set -e

          export IN_CIRCLECI=1

          # Install CUDA 9.2
          sudo rm -rf ~/cuda_9.2.64_mac_installer.app || true
          curl https://s3.amazonaws.com/ossci-macos/cuda_9.2.64_mac_installer.zip -o ~/cuda_9.2.64_mac_installer.zip
          unzip ~/cuda_9.2.64_mac_installer.zip -d ~/
          sudo ~/cuda_9.2.64_mac_installer.app/Contents/MacOS/CUDAMacOSXInstaller --accept-eula --no-window
          sudo cp /usr/local/cuda/lib/libcuda.dylib /Developer/NVIDIA/CUDA-9.2/lib/libcuda.dylib
          sudo rm -rf /usr/local/cuda || true

          # Install cuDNN 7.1 for CUDA 9.2
          curl https://s3.amazonaws.com/ossci-macos/cudnn-9.2-osx-x64-v7.1.tgz -o ~/cudnn-9.2-osx-x64-v7.1.tgz
          rm -rf ~/cudnn-9.2-osx-x64-v7.1 && mkdir ~/cudnn-9.2-osx-x64-v7.1
          tar -xzvf ~/cudnn-9.2-osx-x64-v7.1.tgz -C ~/cudnn-9.2-osx-x64-v7.1
          sudo cp ~/cudnn-9.2-osx-x64-v7.1/cuda/include/cudnn.h /Developer/NVIDIA/CUDA-9.2/include/
          sudo cp ~/cudnn-9.2-osx-x64-v7.1/cuda/lib/libcudnn* /Developer/NVIDIA/CUDA-9.2/lib/
          sudo chmod a+r /Developer/NVIDIA/CUDA-9.2/include/cudnn.h /Developer/NVIDIA/CUDA-9.2/lib/libcudnn*

          # Install sccache
          sudo curl https://s3.amazonaws.com/ossci-macos/sccache --output /usr/local/bin/sccache
          sudo chmod +x /usr/local/bin/sccache
          export SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2
          # This IAM user allows write access to S3 bucket for sccache
          set +x
          export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_S3_BUCKET_V4}
          export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_S3_BUCKET_V4}
          set -x

          git submodule sync && git submodule update -q --init --recursive
          chmod a+x .jenkins/pytorch/macos-build.sh
          unbuffer .jenkins/pytorch/macos-build.sh 2>&1 | ts

pytorch_android_gradle_build:
  environment:
    BUILD_ENVIRONMENT: pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-build
    DOCKER_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c:405"
    PYTHON_VERSION: "3.6"
  resource_class: large
  machine:
    image: ubuntu-1604:201903-01
  steps:
    - should_run_job
    - setup_linux_system_environment
    - checkout
    - setup_ci_environment
    - run:
        name: pytorch android gradle build
        no_output_timeout: "1h"
        command: |
          set -eux
          docker_image_commit=${DOCKER_IMAGE}-${CIRCLE_SHA1}

          docker_image_libtorch_android_x86_32=${docker_image_commit}-android-x86_32
          docker_image_libtorch_android_x86_64=${docker_image_commit}-android-x86_64
          docker_image_libtorch_android_arm_v7a=${docker_image_commit}-android-arm-v7a
          docker_image_libtorch_android_arm_v8a=${docker_image_commit}-android-arm-v8a

          echo "docker_image_commit: "${docker_image_commit}
          echo "docker_image_libtorch_android_x86_32: "${docker_image_libtorch_android_x86_32}
          echo "docker_image_libtorch_android_x86_64: "${docker_image_libtorch_android_x86_64}
          echo "docker_image_libtorch_android_arm_v7a: "${docker_image_libtorch_android_arm_v7a}
          echo "docker_image_libtorch_android_arm_v8a: "${docker_image_libtorch_android_arm_v8a}

          # x86_32
          time docker pull ${docker_image_libtorch_android_x86_32} >/dev/null
          export id_x86_32=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${docker_image_libtorch_android_x86_32})

          export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace") | docker exec -u jenkins -i "$id_x86_32" bash) 2>&1'
          echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

          # arm-v7a
          time docker pull ${docker_image_libtorch_android_arm_v7a} >/dev/null
          export id_arm_v7a=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${docker_image_libtorch_android_arm_v7a})

          export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace") | docker exec -u jenkins -i "$id_arm_v7a" bash) 2>&1'
          echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

          mkdir ~/workspace/build_android_install_arm_v7a
          docker cp $id_arm_v7a:/var/lib/jenkins/workspace/build_android/install ~/workspace/build_android_install_arm_v7a

          # x86_64
          time docker pull ${docker_image_libtorch_android_x86_64} >/dev/null
          export id_x86_64=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${docker_image_libtorch_android_x86_64})

          export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace") | docker exec -u jenkins -i "$id_x86_64" bash) 2>&1'
          echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

          mkdir ~/workspace/build_android_install_x86_64
          docker cp $id_x86_64:/var/lib/jenkins/workspace/build_android/install ~/workspace/build_android_install_x86_64

          # arm-v8a
          time docker pull ${docker_image_libtorch_android_arm_v8a} >/dev/null
          export id_arm_v8a=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${docker_image_libtorch_android_arm_v8a})

          export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace") | docker exec -u jenkins -i "$id_arm_v8a" bash) 2>&1'
          echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

          mkdir ~/workspace/build_android_install_arm_v8a
          docker cp $id_arm_v8a:/var/lib/jenkins/workspace/build_android/install ~/workspace/build_android_install_arm_v8a

          docker cp ~/workspace/build_android_install_arm_v7a $id_x86_32:/var/lib/jenkins/workspace/build_android_install_arm_v7a
          docker cp ~/workspace/build_android_install_x86_64 $id_x86_32:/var/lib/jenkins/workspace/build_android_install_x86_64
          docker cp ~/workspace/build_android_install_arm_v8a $id_x86_32:/var/lib/jenkins/workspace/build_android_install_arm_v8a

          # run gradle buildRelease
          export COMMAND='((echo "source ./workspace/env" && echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GRADLE_OFFLINE=1" && echo "sudo chown -R jenkins workspace && cd workspace && ./.circleci/scripts/build_android_gradle.sh") | docker exec -u jenkins -i "$id_x86_32" bash) 2>&1'
          echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

          mkdir -p ~/workspace/build_android_artifacts
          docker cp $id_x86_32:/var/lib/jenkins/workspace/android/artifacts.tgz ~/workspace/build_android_artifacts/

          output_image=$docker_image_libtorch_android_x86_32-gradle
          docker commit "$id_x86_32" ${output_image}
          time docker push ${output_image}
    - store_artifacts:
        path: ~/workspace/build_android_artifacts/artifacts.tgz
        destination: artifacts.tgz

pytorch_android_publish_snapshot:
  environment:
    BUILD_ENVIRONMENT: pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-publish-snapshot
    DOCKER_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c:405"
    PYTHON_VERSION: "3.6"
  resource_class: large
  machine:
    image: ubuntu-1604:201903-01
  steps:
    - should_run_job
    - setup_linux_system_environment
    - checkout
    - setup_ci_environment
    - run:
        name: pytorch android gradle build
        no_output_timeout: "1h"
        command: |
          set -eux
          docker_image_commit=${DOCKER_IMAGE}-${CIRCLE_SHA1}

          docker_image_libtorch_android_x86_32_gradle=${docker_image_commit}-android-x86_32-gradle

          echo "docker_image_commit: "${docker_image_commit}
          echo "docker_image_libtorch_android_x86_32_gradle: "${docker_image_libtorch_android_x86_32_gradle}

          # x86_32
          time docker pull ${docker_image_libtorch_android_x86_32_gradle} >/dev/null
          export id_x86_32=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${docker_image_libtorch_android_x86_32_gradle})

          export COMMAND='((echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace" && echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export SONATYPE_NEXUS_USERNAME=${SONATYPE_NEXUS_USERNAME}" && echo "export SONATYPE_NEXUS_PASSWORD=${SONATYPE_NEXUS_PASSWORD}" && echo "export ANDROID_SIGN_KEY=${ANDROID_SIGN_KEY}" && echo "export ANDROID_SIGN_PASS=${ANDROID_SIGN_PASS}" && echo "sudo chown -R jenkins workspace && cd workspace && ./.circleci/scripts/publish_android_snapshot.sh") | docker exec -u jenkins -i "$id_x86_32" bash) 2>&1'
          echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

          output_image=${docker_image_libtorch_android_x86_32_gradle}-publish-snapshot
          docker commit "$id_x86_32" ${output_image}
          time docker push ${output_image}

pytorch_android_gradle_build-x86_32:
  environment:
    BUILD_ENVIRONMENT: pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-build-only-x86_32
    DOCKER_IMAGE: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c:405"
    PYTHON_VERSION: "3.6"
  resource_class: large
  machine:
    image: ubuntu-1604:201903-01
  steps:
    - should_run_job
    - run:
        name: filter out non-PR runs
        no_output_timeout: "5m"
        command: |
          echo "CIRCLE_PULL_REQUEST: ${CIRCLE_PULL_REQUEST:-}"
          if [ -z "${CIRCLE_PULL_REQUEST:-}" ]; then
            circleci step halt
          fi
    - setup_linux_system_environment
    - checkout
    - setup_ci_environment
    - run:
        name: pytorch android gradle build only x86_32 (for PR)
        no_output_timeout: "1h"
        command: |
          set -e
          docker_image_libtorch_android_x86_32=${DOCKER_IMAGE}-${CIRCLE_SHA1}-android-x86_32
          echo "docker_image_libtorch_android_x86_32: "${docker_image_libtorch_android_x86_32}

          # x86
          time docker pull ${docker_image_libtorch_android_x86_32} >/dev/null
          export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${docker_image_libtorch_android_x86_32})

          export COMMAND='((echo "source ./workspace/env" && echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "export GRADLE_OFFLINE=1" && echo "sudo chown -R jenkins workspace && cd workspace && ./.circleci/scripts/build_android_gradle.sh") | docker exec -u jenkins -i "$id" bash) 2>&1'
          echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

          mkdir -p ~/workspace/build_android_x86_32_artifacts
          docker cp $id:/var/lib/jenkins/workspace/android/artifacts.tgz ~/workspace/build_android_x86_32_artifacts/

          output_image=${docker_image_libtorch_android_x86_32}-gradle
          docker commit "$id" ${output_image}
          time docker push ${output_image}
    - store_artifacts:
        path: ~/workspace/build_android_x86_32_artifacts/artifacts.tgz
        destination: artifacts.tgz

pytorch_ios_build:
  <<: *pytorch_ios_params
  macos:
    xcode: "11.2.1"
  steps:
    # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
    - should_run_job
    - checkout
    - run_brew_for_ios_build
    - run:
        name: Run Fastlane
        no_output_timeout: "1h"
        command: |
          set -e
          PROJ_ROOT=/Users/distiller/project
          cd ${PROJ_ROOT}/ios/TestApp
          # install fastlane
          sudo gem install bundler && bundle install
          # install certificates
          echo ${IOS_CERT_KEY} >> cert.txt
          base64 --decode cert.txt -o Certificates.p12
          rm cert.txt
          bundle exec fastlane install_cert
          # install the provisioning profile
          PROFILE=TestApp_CI.mobileprovision
          PROVISIONING_PROFILES=~/Library/MobileDevice/Provisioning\ Profiles
          mkdir -pv "${PROVISIONING_PROFILES}"
          cd "${PROVISIONING_PROFILES}"
          echo ${IOS_SIGN_KEY} >> cert.txt
          base64 --decode cert.txt -o ${PROFILE}
          rm cert.txt
    - run:
        name: Build
        no_output_timeout: "1h"
|
||||
command: |
|
||||
set -e
|
||||
export IN_CIRCLECI=1
|
||||
WORKSPACE=/Users/distiller/workspace
|
||||
PROJ_ROOT=/Users/distiller/project
|
||||
export TCLLIBPATH="/usr/local/lib"
|
||||
# Install conda
|
||||
curl -o ~/Downloads/conda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
|
||||
chmod +x ~/Downloads/conda.sh
|
||||
/bin/bash ~/Downloads/conda.sh -b -p ~/anaconda
|
||||
export PATH="~/anaconda/bin:${PATH}"
|
||||
source ~/anaconda/bin/activate
|
||||
# Install dependencies
|
||||
conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing requests --yes
|
||||
# sync submodules
|
||||
cd ${PROJ_ROOT}
|
||||
git submodule sync
|
||||
git submodule update --init --recursive
|
||||
# export
|
||||
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
|
||||
# run build script
|
||||
chmod a+x ${PROJ_ROOT}/scripts/build_ios.sh
|
||||
echo "IOS_ARCH: ${IOS_ARCH}"
|
||||
echo "IOS_PLATFORM: ${IOS_PLATFORM}"
|
||||
export BUILD_PYTORCH_MOBILE=1
|
||||
export IOS_ARCH=${IOS_ARCH}
|
||||
export IOS_PLATFORM=${IOS_PLATFORM}
|
||||
unbuffer ${PROJ_ROOT}/scripts/build_ios.sh 2>&1 | ts
|
||||
- run:
|
||||
name: Run Build Tests
|
||||
no_output_timeout: "30m"
|
||||
command: |
|
||||
set -e
|
||||
PROJ_ROOT=/Users/distiller/project
|
||||
PROFILE=TestApp_CI
|
||||
# run the ruby build script
|
||||
if ! [ -x "$(command -v xcodebuild)" ]; then
|
||||
echo 'Error: xcodebuild is not installed.'
|
||||
exit 1
|
||||
fi
|
||||
echo ${IOS_DEV_TEAM_ID}
|
||||
ruby ${PROJ_ROOT}/scripts/xcode_build.rb -i ${PROJ_ROOT}/build_ios/install -x ${PROJ_ROOT}/ios/TestApp/TestApp.xcodeproj -p ${IOS_PLATFORM} -c ${PROFILE} -t ${IOS_DEV_TEAM_ID}
|
||||
if ! [ "$?" -eq "0" ]; then
|
||||
echo 'xcodebuild failed!'
|
||||
exit 1
|
||||
fi
|
||||
- run:
|
||||
name: Run Simulator Tests
|
||||
no_output_timeout: "2h"
|
||||
command: |
|
||||
set -e
|
||||
if [ ${IOS_PLATFORM} != "SIMULATOR" ]; then
|
||||
echo "not SIMULATOR build, skip it."
|
||||
exit 0
|
||||
fi
|
||||
WORKSPACE=/Users/distiller/workspace
|
||||
PROJ_ROOT=/Users/distiller/project
|
||||
source ~/anaconda/bin/activate
|
||||
#install the latest version of PyTorch and TorchVision
|
||||
pip install torch torchvision
|
||||
#run unit test
|
||||
cd ${PROJ_ROOT}/ios/TestApp/benchmark
|
||||
python trace_model.py
|
||||
ruby setup.rb
|
||||
cd ${PROJ_ROOT}/ios/TestApp
|
||||
instruments -s -devices
|
||||
fastlane scan
|
||||
|
||||
@@ -1,30 +0,0 @@

setup:
  docker:
    - image: circleci/python:3.7.3
  steps:
    - checkout
    - run:
        name: Save commit message
        command: git log --format='%B' -n 1 HEAD > .circleci/scripts/COMMIT_MSG
    # Note [Workspace for CircleCI scripts]
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    # In the beginning, you wrote your CI scripts in a
    # .circleci/config.yml file, and life was good. Your CI
    # configurations flourished and multiplied.
    #
    # Then one day, CircleCI cometh down from on high and sayeth, "Your
    # YAML file is too biggeth, it stresseth our servers so." And thus
    # they asketh us to smite the scripts from the yml file.
    #
    # But you can't just put the scripts in the .circleci folder,
    # because in some jobs you don't ever actually check out the
    # source repository. Where are you going to get the scripts from?
    #
    # Here's how you do it: you persist .circleci/scripts into a
    # workspace, attach the workspace in your subjobs, and run all
    # your scripts from there. A sketch of the consuming side follows.
    - persist_to_workspace:
        root: .
        paths: .circleci/scripts
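    # Hedged sketch of the consuming side (not part of this hunk): a subjob
    # would mount the persisted scripts with CircleCI's standard
    # attach_workspace step, roughly like so. The step name and the script
    # path below are illustrative assumptions, not lines from this diff.
    #
    #   steps:
    #     - attach_workspace:
    #         at: ~/workspace
    #     - run:
    #         name: Run a persisted script without a git checkout
    #         command: ~/workspace/.circleci/scripts/some_script.sh  # placeholder name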

@@ -1,50 +0,0 @@

##############################################################################
# Binary build (nightly binaries) defaults
# The binary builds use the docker executor b/c at time of writing the machine
# executor is limited to only two cores and is painfully slow (4.5+ hours per
# GPU build). But the docker executor cannot be run with --runtime=nvidia, and
# so the binary test/upload jobs must run on a machine executor. The package
# built in the build job is persisted to the workspace, which the test jobs
# expect. The test jobs just run a few quick smoke tests (very similar to the
# second-round user-facing smoke tests above) and then upload the binaries to
# their final locations. The upload part requires credentials that should only
# be available to org-members.
#
# binary_checkout MUST be run before the other commands here. This is because
# the other commands are written in .circleci/scripts/*.sh, so the pytorch
# source code must be downloaded onto the machine before they can be run. We
# cannot inline all the code into this file, since that would cause the yaml
# size to explode past 4 MB (all the code in the command section is just
# copy-pasted to everywhere in the .circleci/config.yml file where it appears).
##############################################################################

# Checks out the Pytorch and Builder repos (always both of them), and places
# them in the right place depending on what executor we're running on. We curl
# our .sh file from the interweb to avoid yaml size bloat. Note that many jobs
# do not need both the pytorch and builder repos, so this is a little wasteful
# (smoke tests and upload jobs do not need the pytorch repo).
binary_checkout: &binary_checkout
  name: Checkout pytorch/builder repo
  command: ~/workspace/.circleci/scripts/binary_checkout.sh

# Parses circleci arguments in a consistent way, essentially routing to the
# correct pythonXgccXcudaXos build we want
binary_populate_env: &binary_populate_env
  name: Set up binary env variables
  command: ~/workspace/.circleci/scripts/binary_populate_env.sh

binary_install_miniconda: &binary_install_miniconda
  name: Install miniconda
  no_output_timeout: "1h"
  command: ~/workspace/.circleci/scripts/binary_install_miniconda.sh

# This section is used in the binary_test and smoke_test jobs. It expects
# 'binary_populate_env' to have populated /home/circleci/project/env and it
# expects another section to populate /home/circleci/project/ci_test_script.sh
# with the code to run in the docker container.
binary_run_in_docker: &binary_run_in_docker
  name: Run in docker
  # This step only runs on circleci linux machine executors that themselves
  # need to start docker images
  command: ~/workspace/.circleci/scripts/binary_run_in_docker.sh
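
# A minimal sketch of how a job would consume the anchors above (no such job
# appears in this hunk; the job name and step order are assumptions). Each
# `- run: *anchor` expands to the name/command mapping defined above, which is
# the point of keeping the command bodies in .circleci/scripts/*.sh:
#
#   binary_smoke_test_example:   # hypothetical job name
#     machine:
#       image: ubuntu-1604:201903-01
#     steps:
#       - attach_workspace:
#           at: ~/workspace
#       - run: *binary_checkout
#       - run: *binary_populate_env
#       - run: *binary_run_in_docker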

@@ -1,39 +0,0 @@
pytorch_params: &pytorch_params
  parameters:
    build_environment:
      type: string
      default: ""
    docker_image:
      type: string
      default: ""
    resource_class:
      type: string
      default: "large"
    use_cuda_docker_runtime:
      type: string
      default: ""
  environment:
    BUILD_ENVIRONMENT: << parameters.build_environment >>
    DOCKER_IMAGE: << parameters.docker_image >>
    USE_CUDA_DOCKER_RUNTIME: << parameters.use_cuda_docker_runtime >>
  resource_class: << parameters.resource_class >>

pytorch_ios_params: &pytorch_ios_params
  parameters:
    build_environment:
      type: string
      default: ""
    ios_arch:
      type: string
      default: ""
    ios_platform:
      type: string
      default: ""
  environment:
    BUILD_ENVIRONMENT: << parameters.build_environment >>
    IOS_ARCH: << parameters.ios_arch >>
    IOS_PLATFORM: << parameters.ios_platform >>
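
# For orientation, a hedged sketch of how a workflow entry would fill these
# parameters (the values here are illustrative; the real invocations appear
# in the workflow lists later in this diff):
#
#   - pytorch_ios_build:
#       name: pytorch_ios_example_build   # hypothetical name
#       build_environment: "pytorch-ios-example-build"
#       ios_arch: "arm64"
#       ios_platform: "OS"
#
# A job that does `<<: *pytorch_ios_params` then receives each parameter as
# the corresponding environment variable (IOS_ARCH, IOS_PLATFORM, ...).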

@@ -1,141 +0,0 @@
jobs:
  pytorch_linux_build:
    <<: *pytorch_params
    machine:
      image: ubuntu-1604:201903-01
    steps:
      # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
      - should_run_job
      - setup_linux_system_environment
      - checkout
      - setup_ci_environment
      - run:
          name: Build
          no_output_timeout: "1h"
          command: |
            set -e
            # Pull Docker image and run build
            echo "DOCKER_IMAGE: "${DOCKER_IMAGE}
            time docker pull ${DOCKER_IMAGE} >/dev/null
            export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${DOCKER_IMAGE})

            # NB: Temporarily disable the rebase logic in v1.4.0, don't merge this change into master
            # # TODO We may want to move the rebase logic to a separate step after checkout
            # # Rebase to master only if in xenial_py3_6_gcc5_4 case
            # if [[ "${CIRCLE_BRANCH}" != "master" && "${BUILD_ENVIRONMENT}" == *"gcc5"* ]]; then
            #   echo "Merge master branch into $CIRCLE_BRANCH before build in environment $BUILD_ENVIRONMENT"
            #   set -x
            #   git config --global user.email "circleci.ossci@gmail.com"
            #   git config --global user.name "CircleCI"
            #   git config remote.origin.url https://github.com/pytorch/pytorch.git
            #   git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master
            #   git fetch --tags --progress https://github.com/pytorch/pytorch.git +refs/heads/master:refs/remotes/origin/master --depth=100 --quiet
            #   export GIT_MERGE_TARGET=`git log -n 1 --pretty=format:"%H" origin/master`
            #   echo "GIT_MERGE_TARGET: " ${GIT_MERGE_TARGET}
            #   export GIT_COMMIT=${CIRCLE_SHA1}
            #   echo "GIT_COMMIT: " ${GIT_COMMIT}
            #   git checkout -f ${GIT_COMMIT}
            #   git reset --hard ${GIT_COMMIT}
            #   git merge --allow-unrelated-histories --no-edit --no-ff ${GIT_MERGE_TARGET}
            #   set +x
            # else
            #   echo "Do NOT merge master branch into $CIRCLE_BRANCH in environment $BUILD_ENVIRONMENT"
            # fi

            git submodule sync && git submodule update -q --init --recursive

            docker cp /home/circleci/project/. $id:/var/lib/jenkins/workspace

            if [[ ${BUILD_ENVIRONMENT} == *"paralleltbb"* ]]; then
              export PARALLEL_FLAGS="export ATEN_THREADING=TBB USE_TBB=1 "
            elif [[ ${BUILD_ENVIRONMENT} == *"parallelnative"* ]]; then
              export PARALLEL_FLAGS="export ATEN_THREADING=NATIVE "
            fi
            echo "Parallel backend flags: "${PARALLEL_FLAGS}

            export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo '"$PARALLEL_FLAGS"' && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && .jenkins/pytorch/build.sh") | docker exec -u jenkins -i "$id" bash) 2>&1'

            echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

            # Push intermediate Docker image for next phase to use
            if [ -z "${BUILD_ONLY}" ]; then
              # Note [Special build images]
              # The xla build uses the same docker image as
              # pytorch-linux-trusty-py3.6-gcc5.4-build. In the push step, we have to
              # distinguish between them so the test can pick up the correct image.
              output_image=${DOCKER_IMAGE}-${CIRCLE_SHA1}
              if [[ ${BUILD_ENVIRONMENT} == *"xla"* ]]; then
                export COMMIT_DOCKER_IMAGE=$output_image-xla
              elif [[ ${BUILD_ENVIRONMENT} == *"libtorch"* ]]; then
                export COMMIT_DOCKER_IMAGE=$output_image-libtorch
              elif [[ ${BUILD_ENVIRONMENT} == *"android-ndk-r19c-x86_64"* ]]; then
                export COMMIT_DOCKER_IMAGE=$output_image-android-x86_64
              elif [[ ${BUILD_ENVIRONMENT} == *"android-ndk-r19c-arm-v7a"* ]]; then
                export COMMIT_DOCKER_IMAGE=$output_image-android-arm-v7a
              elif [[ ${BUILD_ENVIRONMENT} == *"android-ndk-r19c-arm-v8a"* ]]; then
                export COMMIT_DOCKER_IMAGE=$output_image-android-arm-v8a
              elif [[ ${BUILD_ENVIRONMENT} == *"android-ndk-r19c-x86_32"* ]]; then
                export COMMIT_DOCKER_IMAGE=$output_image-android-x86_32
              else
                export COMMIT_DOCKER_IMAGE=$output_image
              fi
              docker commit "$id" ${COMMIT_DOCKER_IMAGE}
              time docker push ${COMMIT_DOCKER_IMAGE}
            fi

  pytorch_linux_test:
    <<: *pytorch_params
    machine:
      image: ubuntu-1604:201903-01
    steps:
      # See Note [Workspace for CircleCI scripts] in job-specs-setup.yml
      - should_run_job
      - setup_linux_system_environment
      - setup_ci_environment
      - run:
          name: Test
          no_output_timeout: "90m"
          command: |
            set -e
            # See Note [Special build images]
            output_image=${DOCKER_IMAGE}-${CIRCLE_SHA1}
            if [[ ${BUILD_ENVIRONMENT} == *"xla"* ]]; then
              export COMMIT_DOCKER_IMAGE=$output_image-xla
            elif [[ ${BUILD_ENVIRONMENT} == *"libtorch"* ]]; then
              export COMMIT_DOCKER_IMAGE=$output_image-libtorch
            else
              export COMMIT_DOCKER_IMAGE=$output_image
            fi
            echo "DOCKER_IMAGE: "${COMMIT_DOCKER_IMAGE}

            if [[ ${BUILD_ENVIRONMENT} == *"paralleltbb"* ]]; then
              export PARALLEL_FLAGS="export ATEN_THREADING=TBB USE_TBB=1 "
            elif [[ ${BUILD_ENVIRONMENT} == *"parallelnative"* ]]; then
              export PARALLEL_FLAGS="export ATEN_THREADING=NATIVE "
            fi
            echo "Parallel backend flags: "${PARALLEL_FLAGS}

            time docker pull ${COMMIT_DOCKER_IMAGE} >/dev/null

            if [ -n "${USE_CUDA_DOCKER_RUNTIME}" ]; then
              export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --runtime=nvidia -t -d -w /var/lib/jenkins ${COMMIT_DOCKER_IMAGE})
            else
              export id=$(docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -t -d -w /var/lib/jenkins ${COMMIT_DOCKER_IMAGE})
            fi

            retrieve_test_reports() {
              echo "retrieving test reports"
              docker cp $id:/var/lib/jenkins/workspace/test/test-reports ./ || echo 'No test reports found!'
            }
            trap "retrieve_test_reports" ERR

            if [[ ${BUILD_ENVIRONMENT} == *"multigpu"* ]]; then
              export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "${PARALLEL_FLAGS}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && .jenkins/pytorch/multigpu-test.sh") | docker exec -u jenkins -i "$id" bash) 2>&1'
            else
              export COMMAND='((echo "export BUILD_ENVIRONMENT=${BUILD_ENVIRONMENT}" && echo "${PARALLEL_FLAGS}" && echo "source ./workspace/env" && echo "sudo chown -R jenkins workspace && cd workspace && .jenkins/pytorch/test.sh") | docker exec -u jenkins -i "$id" bash) 2>&1'
            fi
            echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts

            retrieve_test_reports
      - store_test_results:
          path: test-reports
@@ -1,4 +0,0 @@

##############################################################################
# Daily binary build trigger
##############################################################################
@@ -1,101 +0,0 @@
# Binary builds (subset, to smoke test that they'll work)
#
# NB: If you modify this file, you need to also modify
# the binary_and_smoke_tests_on_pr variable in
# pytorch-ci-hud to adjust the list of whitelisted builds
# at https://github.com/ezyang/pytorch-ci-hud/blob/master/src/BuildHistoryDisplay.js

- binary_linux_build:
    name: binary_linux_manywheel_2_7mu_cpu_devtoolset7_build
    build_environment: "manywheel 2.7mu cpu devtoolset7"
    requires:
      - setup
    docker_image: "pytorch/manylinux-cuda100"
- binary_linux_build:
    name: binary_linux_manywheel_3_7m_cu100_devtoolset7_build
    build_environment: "manywheel 3.7m cu100 devtoolset7"
    requires:
      - setup
    docker_image: "pytorch/manylinux-cuda100"
- binary_linux_build:
    name: binary_linux_conda_2_7_cpu_devtoolset7_build
    build_environment: "conda 2.7 cpu devtoolset7"
    requires:
      - setup
    docker_image: "pytorch/conda-cuda"
# This binary build is currently broken, see https://github.com/pytorch/pytorch/issues/16710
# - binary_linux_conda_3_6_cu90_devtoolset7_build
- binary_linux_build:
    name: binary_linux_libtorch_2_7m_cpu_devtoolset7_shared-with-deps_build
    build_environment: "libtorch 2.7m cpu devtoolset7"
    requires:
      - setup
    libtorch_variant: "shared-with-deps"
    docker_image: "pytorch/manylinux-cuda100"
- binary_linux_build:
    name: binary_linux_libtorch_2_7m_cpu_gcc5_4_cxx11-abi_shared-with-deps_build
    build_environment: "libtorch 2.7m cpu gcc5.4_cxx11-abi"
    requires:
      - setup
    libtorch_variant: "shared-with-deps"
    docker_image: "pytorch/pytorch-binary-docker-image-ubuntu16.04:latest"
# TODO we should test a libtorch cuda build, but they take too long
# - binary_linux_libtorch_2_7m_cu90_devtoolset7_static-without-deps_build
- binary_mac_build:
    name: binary_macos_wheel_3_6_cpu_build
    build_environment: "wheel 3.6 cpu"
    requires:
      - setup
- binary_mac_build:
    name: binary_macos_conda_2_7_cpu_build
    build_environment: "conda 2.7 cpu"
    requires:
      - setup
- binary_mac_build:
    name: binary_macos_libtorch_2_7_cpu_build
    build_environment: "libtorch 2.7 cpu"
    requires:
      - setup

- binary_linux_test:
    name: binary_linux_manywheel_2_7mu_cpu_devtoolset7_test
    build_environment: "manywheel 2.7mu cpu devtoolset7"
    requires:
      - setup
      - binary_linux_manywheel_2_7mu_cpu_devtoolset7_build
    docker_image: "pytorch/manylinux-cuda100"
- binary_linux_test:
    name: binary_linux_manywheel_3_7m_cu100_devtoolset7_test
    build_environment: "manywheel 3.7m cu100 devtoolset7"
    requires:
      - setup
      - binary_linux_manywheel_3_7m_cu100_devtoolset7_build
    docker_image: "pytorch/manylinux-cuda100"
    use_cuda_docker_runtime: "1"
    resource_class: gpu.medium
- binary_linux_test:
    name: binary_linux_conda_2_7_cpu_devtoolset7_test
    build_environment: "conda 2.7 cpu devtoolset7"
    requires:
      - setup
      - binary_linux_conda_2_7_cpu_devtoolset7_build
    docker_image: "pytorch/conda-cuda"
# This binary build is currently broken, see https://github.com/pytorch/pytorch/issues/16710
# - binary_linux_conda_3_6_cu90_devtoolset7_test:
- binary_linux_test:
    name: binary_linux_libtorch_2_7m_cpu_devtoolset7_shared-with-deps_test
    build_environment: "libtorch 2.7m cpu devtoolset7"
    requires:
      - setup
      - binary_linux_libtorch_2_7m_cpu_devtoolset7_shared-with-deps_build
    libtorch_variant: "shared-with-deps"
    docker_image: "pytorch/manylinux-cuda100"
- binary_linux_test:
    name: binary_linux_libtorch_2_7m_cpu_gcc5_4_cxx11-abi_shared-with-deps_test
    build_environment: "libtorch 2.7m cpu gcc5.4_cxx11-abi"
    requires:
      - setup
      - binary_linux_libtorch_2_7m_cpu_gcc5_4_cxx11-abi_shared-with-deps_build
    libtorch_variant: "shared-with-deps"
    docker_image: "pytorch/pytorch-binary-docker-image-ubuntu16.04:latest"
@@ -1,66 +0,0 @@
docker_build:
  triggers:
    - schedule:
        cron: "0 15 * * 0"
        filters:
          branches:
            only:
              - master
  jobs:
    - docker_build_job:
        name: "pytorch-linux-bionic-clang9-thrift-llvmdev"
        image_name: "pytorch-linux-bionic-clang9-thrift-llvmdev"
    - docker_build_job:
        name: "pytorch-linux-xenial-cuda10-cudnn7-py3-gcc7"
        image_name: "pytorch-linux-xenial-cuda10-cudnn7-py3-gcc7"
    - docker_build_job:
        name: "pytorch-linux-xenial-cuda10.1-cudnn7-py3-gcc7"
        image_name: "pytorch-linux-xenial-cuda10.1-cudnn7-py3-gcc7"
    - docker_build_job:
        name: "pytorch-linux-xenial-cuda8-cudnn7-py2"
        image_name: "pytorch-linux-xenial-cuda8-cudnn7-py2"
    - docker_build_job:
        name: "pytorch-linux-xenial-cuda8-cudnn7-py3"
        image_name: "pytorch-linux-xenial-cuda8-cudnn7-py3"
    - docker_build_job:
        name: "pytorch-linux-xenial-cuda9-cudnn7-py2"
        image_name: "pytorch-linux-xenial-cuda9-cudnn7-py2"
    - docker_build_job:
        name: "pytorch-linux-xenial-cuda9-cudnn7-py3"
        image_name: "pytorch-linux-xenial-cuda9-cudnn7-py3"
    - docker_build_job:
        name: "pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc7"
        image_name: "pytorch-linux-xenial-cuda9.2-cudnn7-py3-gcc7"
    - docker_build_job:
        name: "pytorch-linux-xenial-py2.7.9"
        image_name: "pytorch-linux-xenial-py2.7.9"
    - docker_build_job:
        name: "pytorch-linux-xenial-py2.7"
        image_name: "pytorch-linux-xenial-py2.7"
    - docker_build_job:
        name: "pytorch-linux-xenial-py3-clang5-android-ndk-r19c"
        image_name: "pytorch-linux-xenial-py3-clang5-android-ndk-r19c"
    - docker_build_job:
        name: "pytorch-linux-xenial-py3-clang5-asan"
        image_name: "pytorch-linux-xenial-py3-clang5-asan"
    - docker_build_job:
        name: "pytorch-linux-xenial-py3.5"
        image_name: "pytorch-linux-xenial-py3.5"
    - docker_build_job:
        name: "pytorch-linux-xenial-py3.6-clang7"
        image_name: "pytorch-linux-xenial-py3.6-clang7"
    - docker_build_job:
        name: "pytorch-linux-xenial-py3.6-gcc4.8"
        image_name: "pytorch-linux-xenial-py3.6-gcc4.8"
    - docker_build_job:
        name: "pytorch-linux-xenial-py3.6-gcc5.4"
        image_name: "pytorch-linux-xenial-py3.6-gcc5.4"
    - docker_build_job:
        name: "pytorch-linux-xenial-py3.6-gcc7.2"
        image_name: "pytorch-linux-xenial-py3.6-gcc7.2"
    - docker_build_job:
        name: "pytorch-linux-xenial-py3.6-gcc7"
        image_name: "pytorch-linux-xenial-py3.6-gcc7"
    - docker_build_job:
        name: "pytorch-linux-xenial-pynightly"
        image_name: "pytorch-linux-xenial-pynightly"
@@ -1,56 +0,0 @@
- pytorch_linux_build:
    name: nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_32_build
    build_environment: "pytorch-linux-xenial-py3-clang5-android-ndk-r19c-x86_32"
    requires:
      - setup
    docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c:405"
    filters:
      branches:
        only: nightly
- pytorch_linux_build:
    name: nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_64_build
    build_environment: "pytorch-linux-xenial-py3-clang5-android-ndk-r19c-x86_64"
    requires:
      - setup
    docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c:405"
    filters:
      branches:
        only: nightly
- pytorch_linux_build:
    name: nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v7a_build
    build_environment: "pytorch-linux-xenial-py3-clang5-android-ndk-r19c-arm-v7a"
    requires:
      - setup
    docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c:405"
    filters:
      branches:
        only: nightly
- pytorch_linux_build:
    name: nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v8a_build
    build_environment: "pytorch-linux-xenial-py3-clang5-android-ndk-r19c-arm-v8a"
    requires:
      - setup
    docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c:405"
    filters:
      branches:
        only: nightly

- pytorch_android_gradle_build:
    name: nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_android_gradle_build
    requires:
      - nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_32_build
      - nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_64_build
      - nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v7a_build
      - nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v8a_build
    filters:
      branches:
        only: nightly

- pytorch_android_publish_snapshot:
    name: nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_32_android_publish_snapshot
    requires:
      - nightly_pytorch_linux_xenial_py3_clang5_android_ndk_r19c_android_gradle_build
    context: org-member
    filters:
      branches:
        only: nightly
@@ -1,33 +0,0 @@
# Pytorch iOS binary builds
- binary_ios_build:
    name: pytorch_ios_11_2_1_nightly_x86_64_build
    build_environment: "libtorch-ios-11.2.1-nightly-x86_64-build"
    context: org-member
    ios_platform: "SIMULATOR"
    ios_arch: "x86_64"
    requires:
      - setup
    filters:
      branches:
        only: nightly
- binary_ios_build:
    name: pytorch_ios_11_2_1_nightly_arm64_build
    build_environment: "libtorch-ios-11.2.1-nightly-arm64-build"
    context: org-member
    ios_arch: "arm64"
    ios_platform: "OS"
    requires:
      - setup
    filters:
      branches:
        only: nightly
- binary_ios_upload:
    build_environment: "libtorch-ios-11.2.1-nightly-binary-build-upload"
    context: org-member
    requires:
      - setup
      - pytorch_ios_11_2_1_nightly_x86_64_build
      - pytorch_ios_11_2_1_nightly_arm64_build
    filters:
      branches:
        only: nightly
@@ -1,11 +0,0 @@
#- binary_linux_libtorch_2.7m_cpu_test:
#    requires:
#      - binary_linux_libtorch_2.7m_cpu_build
#- binary_linux_libtorch_2.7m_cu90_test:
#    requires:
#      - binary_linux_libtorch_2.7m_cu90_build
#- binary_linux_libtorch_2.7m_cu100_test:
#    requires:
#      - binary_linux_libtorch_2.7m_cu100_build

# Nightly uploads
@@ -1,12 +0,0 @@
- pytorch_android_gradle_build-x86_32:
    name: pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-build-x86_32
    requires:
      - pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_32_build

- pytorch_android_gradle_build:
    name: pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-build
    requires:
      - pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_32_build
      - pytorch_linux_xenial_py3_clang5_android_ndk_r19c_x86_64_build
      - pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v7a_build
      - pytorch_linux_xenial_py3_clang5_android_ndk_r19c_arm_v8a_build
@@ -1,16 +0,0 @@
- pytorch_linux_test:
    name: pytorch_linux_xenial_py3_6_gcc5_4_ge_config_legacy_test
    requires:
      - setup
      - pytorch_linux_xenial_py3_6_gcc5_4_build
    build_environment: "pytorch-linux-xenial-py3.6-gcc5.4-ge_config_legacy-test"
    docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3.6-gcc5.4:405"
    resource_class: large
- pytorch_linux_test:
    name: pytorch_linux_xenial_py3_6_gcc5_4_ge_config_simple_test
    requires:
      - setup
      - pytorch_linux_xenial_py3_6_gcc5_4_build
    build_environment: "pytorch-linux-xenial-py3.6-gcc5.4-ge_config_simple-test"
    docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3.6-gcc5.4:405"
    resource_class: large
@@ -1,17 +0,0 @@
# Pytorch iOS PR builds
- pytorch_ios_build:
    name: pytorch_ios_11_2_1_x86_64_build
    context: org-member
    build_environment: "pytorch-ios-11.2.1-x86_64_build"
    ios_arch: "x86_64"
    ios_platform: "SIMULATOR"
    requires:
      - setup
- pytorch_ios_build:
    name: pytorch_ios_11_2_1_arm64_build
    context: org-member
    build_environment: "pytorch-ios-11.2.1-arm64_build"
    ios_arch: "arm64"
    ios_platform: "OS"
    requires:
      - setup
@@ -1,13 +0,0 @@
# Warning: indentation here matters!

# Pytorch MacOS builds
- pytorch_macos_10_13_py3_build:
    requires:
      - setup
- pytorch_macos_10_13_py3_test:
    requires:
      - setup
      - pytorch_macos_10_13_py3_build
- pytorch_macos_10_13_cuda9_2_cudnn7_py3_build:
    requires:
      - setup
@@ -1,7 +0,0 @@
# PyTorch Mobile PR builds (use linux host toolchain + mobile build options)
- pytorch_linux_build:
    name: pytorch_linux_xenial_py3_clang5_mobile_build
    requires:
      - setup
    build_environment: "pytorch-linux-xenial-py3-clang5-mobile-build"
    docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-asan:405"
@@ -1,22 +0,0 @@
- update_s3_htmls_for_nightlies:
    context: org-member
    requires:
      - setup
    filters:
      branches:
        only: postnightly
- update_s3_htmls_for_nightlies_devtoolset7:
    context: org-member
    requires:
      - setup
    filters:
      branches:
        only: postnightly
- upload_binary_sizes:
    context: org-member
    requires:
      - setup
    filters:
      branches:
        only: postnightly
@@ -1,11 +0,0 @@

##############################################################################
##############################################################################
# Workflows
##############################################################################
##############################################################################

# PR jobs (PR builds)
workflows:
  build:
    jobs:

.clang-tidy
@@ -1,32 +1,30 @@
---
# NOTE there must be no spaces before the '-', so put the comma last.
Checks: '-*,
bugprone-*,
-bugprone-forward-declaration-namespace,
-bugprone-macro-parentheses,
-bugprone-lambda-function-name,
cppcoreguidelines-*,
-cppcoreguidelines-interfaces-global-init,
-cppcoreguidelines-owning-memory,
-cppcoreguidelines-pro-bounds-array-to-pointer-decay,
-cppcoreguidelines-pro-bounds-constant-array-index,
-cppcoreguidelines-pro-bounds-pointer-arithmetic,
-cppcoreguidelines-pro-type-cstyle-cast,
-cppcoreguidelines-pro-type-reinterpret-cast,
-cppcoreguidelines-pro-type-static-cast-downcast,
-cppcoreguidelines-pro-type-union-access,
-cppcoreguidelines-pro-type-vararg,
-cppcoreguidelines-special-member-functions,
hicpp-exception-baseclass,
hicpp-avoid-goto,
modernize-*,
-modernize-return-braced-init-list,
-modernize-use-auto,
-modernize-use-default-member-init,
-modernize-use-using,
performance-*,
-performance-noexcept-move-constructor,
# NOTE there must be no spaces before the '-', so put the comma first.
Checks: '
-*
,bugprone-*
,-bugprone-macro-parentheses
,-bugprone-forward-declaration-namespace
,cppcoreguidelines-*
,-cppcoreguidelines-pro-bounds-array-to-pointer-decay
,-cppcoreguidelines-pro-type-static-cast-downcast
,-cppcoreguidelines-pro-bounds-pointer-arithmetic
,-cppcoreguidelines-pro-bounds-constant-array-index
,-cppcoreguidelines-pro-type-cstyle-cast
,-cppcoreguidelines-pro-type-reinterpret-cast
,-cppcoreguidelines-pro-type-vararg
,-cppcoreguidelines-special-member-functions
,-cppcoreguidelines-interfaces-global-init
,-cppcoreguidelines-owning-memory
,hicpp-signed-bitwise
,hicpp-exception-baseclass
,hicpp-avoid-goto
,modernize-*
,-modernize-use-default-member-init
,-modernize-return-braced-init-list
,-modernize-use-auto
'
WarningsAsErrors: '*'
HeaderFilterRegex: 'torch/csrc/.*'
AnalyzeTemporaryDtors: false
CheckOptions:
@@ -1,2 +0,0 @@
--exclude=build/*
--exclude=include/*
.flake8
@@ -1,13 +0,0 @@
[flake8]
select = B,C,E,F,P,T4,W,B9
max-line-length = 120
# C408 ignored because we like the dict keyword argument syntax
# E501 is not flexible enough, we're using B950 instead
ignore =
    E203,E305,E402,E501,E721,E741,F403,F405,F821,F841,F999,W503,W504,C408,E302,W291,E303,
    # these ignores are from flake8-bugbear; please fix!
    B007,B008,
    # these ignores are from flake8-comprehensions; please fix!
    C400,C401,C402,C403,C404,C405,C407,C411,
per-file-ignores = __init__.py: F401
exclude = docs/src,venv,third_party,caffe2,scripts,docs/caffe2,torch/lib/include,torch/lib/tmp_install,build,torch/include,*.pyi,.git
.github/pytorch-probot.yml
@@ -1 +0,0 @@
tracking_issue: 24422
.github/workflows/lint.yml
@@ -1,210 +0,0 @@
name: Lint

on:
  push:
    branches:
      - master
  pull_request:

jobs:
  quick-checks:
    runs-on: ubuntu-latest
    steps:
      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: 3.x
          architecture: x64
      - name: Checkout PyTorch
        uses: actions/checkout@v1
      - name: Ensure consistent CircleCI YAML config
        run: |
          pip install -r requirements.txt
          cd .circleci && ./ensure-consistency.py
      - name: Ensure Docker version is correctly deployed
        run: .circleci/validate-docker-version.py
      - name: Shellcheck Jenkins scripts
        run: |
          sudo apt-get install -y shellcheck
          .jenkins/run-shellcheck.sh
      - name: Ensure no tabs
        run: |
          (! git grep -I -l $'\t' -- . ':(exclude)*.svg' ':(exclude)**Makefile' ':(exclude)**/contrib/**' ':(exclude)third_party' ':(exclude).gitattributes' ':(exclude).gitmodules' || (echo "The above files have tabs; please convert them to spaces"; false))
      - name: Ensure C++ source files are not executable
        run: |
          (! find . \( -path ./third_party -o -path ./.git -o -path ./torch/bin -o -path ./build \) -prune -o -type f -executable -regextype posix-egrep -not -regex '.+(\.(bash|sh|py|so)|git-pre-commit)$' -print | grep . || (echo 'The above files have executable permission; please remove their executable permission by using `chmod -x`'; false))
      - name: MyPy typecheck
        run: |
          pip install mypy mypy-extensions
          mypy @mypy-files.txt
      - name: C++ docs check
        run: |
          sudo apt-get install -y doxygen && pip install -r requirements.txt
          cd docs/cpp/source && ./check-doxygen.sh

  flake8-py3:
    runs-on: ubuntu-latest
    steps:
      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: 3.x
          architecture: x64
      - name: Fetch PyTorch
        uses: actions/checkout@v1
      - name: Checkout PR tip
        run: |
          set -eux
          if [[ "${{ github.event_name }}" == "pull_request" ]]; then
            # We are on a PR, so actions/checkout leaves us on a merge commit.
            # Check out the actual tip of the branch.
            git checkout ${{ github.event.pull_request.head.sha }}
          fi
          echo ::set-output name=commit_sha::$(git rev-parse HEAD)
        id: get_pr_tip
      - name: Run flake8
        run: |
          set -eux
          pip install flake8
          flake8 --exit-zero > ${GITHUB_WORKSPACE}/flake8-output.txt
          cat ${GITHUB_WORKSPACE}/flake8-output.txt
      - name: Add annotations
        uses: pytorch/add-annotations-github-action@master
        with:
          check_name: 'flake8-py3'
          linter_output_path: 'flake8-output.txt'
          commit_sha: ${{ steps.get_pr_tip.outputs.commit_sha }}
          regex: '^(?<filename>.*?):(?<lineNumber>\d+):(?<columnNumber>\d+): (?<errorCode>\w\d+) (?<errorDesc>.*)'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  flake8-py2:
    runs-on: ubuntu-latest
    steps:
      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: 2.x
          architecture: x64
      - name: Fetch PyTorch
        uses: actions/checkout@v1
      - name: Checkout PR tip
        run: |
          set -eux
          if [[ "${{ github.event_name }}" == "pull_request" ]]; then
            # We are on a PR, so actions/checkout leaves us on a merge commit.
            # Check out the actual tip of the branch.
            git checkout ${{ github.event.pull_request.head.sha }}
          fi
          echo ::set-output name=commit_sha::$(git rev-parse HEAD)
        id: get_pr_tip
      - name: Run flake8
        run: |
          set -eux
          pip install flake8
          rm -rf .circleci
          flake8 --exit-zero > ${GITHUB_WORKSPACE}/flake8-output.txt
          cat ${GITHUB_WORKSPACE}/flake8-output.txt
      - name: Add annotations
        uses: pytorch/add-annotations-github-action@master
        with:
          check_name: 'flake8-py2'
          linter_output_path: 'flake8-output.txt'
          commit_sha: ${{ steps.get_pr_tip.outputs.commit_sha }}
          regex: '^(?<filename>.*?):(?<lineNumber>\d+):(?<columnNumber>\d+): (?<errorCode>\w\d+) (?<errorDesc>.*)'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


  clang-tidy:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: 3.x
          architecture: x64
      - name: Checkout PyTorch
        uses: actions/checkout@v1
      - name: Checkout PR tip
        run: |
          set -eux
          if [[ "${{ github.event_name }}" == "pull_request" ]]; then
            # We are on a PR, so actions/checkout leaves us on a merge commit.
            # Check out the actual tip of the branch.
            git checkout ${{ github.event.pull_request.head.sha }}
          fi
          echo ::set-output name=commit_sha::$(git rev-parse HEAD)
        id: get_pr_tip
      - name: Install dependencies
        run: |
          set -eux
          # Install CUDA
          wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
          sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
          sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
          sudo add-apt-repository "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
          sudo apt-get update
          sudo apt-get --no-install-recommends -y install cuda
          # Install dependencies
          pip install pyyaml
          wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
          sudo apt-add-repository "deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-8 main"
          sudo apt-get update
          sudo apt-get install -y clang-tidy-8
          sudo update-alternatives --install /usr/bin/clang-tidy clang-tidy /usr/bin/clang-tidy-8 1000
      - name: Run clang-tidy
        run: |
          set -eux
          git remote add upstream https://github.com/pytorch/pytorch
          git fetch upstream "$GITHUB_BASE_REF"
          BASE_SHA=${{ github.event.pull_request.base.sha }}
          HEAD_SHA=${{ github.event.pull_request.head.sha }}
          MERGE_BASE=$(git merge-base $BASE_SHA $HEAD_SHA)

          if [[ ! -d build ]]; then
            git submodule update --init --recursive

            export USE_NCCL=0
            # We really only need compile_commands.json, so no need to build!
            time python setup.py --cmake-only build

            # Generate ATen files.
            time python aten/src/ATen/gen.py \
              -s aten/src/ATen \
              -d build/aten/src/ATen \
              aten/src/ATen/Declarations.cwrap \
              aten/src/THNN/generic/THNN.h \
              aten/src/THCUNN/generic/THCUNN.h \
              aten/src/ATen/nn.yaml \
              aten/src/ATen/native/native_functions.yaml

            # Generate PyTorch files.
            time python tools/setup_helpers/generate_code.py \
              --declarations-path build/aten/src/ATen/Declarations.yaml \
              --nn-path aten/src
          fi

          # Run Clang-Tidy
          # The negative filters below are to exclude files that include onnx_pb.h or
          # caffe2_pb.h, otherwise we'd have to build protos as part of this CI job.
          python tools/clang_tidy.py \
            --verbose \
            --paths torch/csrc/ \
            --diff "$MERGE_BASE" \
            -g"-torch/csrc/jit/export.cpp" \
            -g"-torch/csrc/jit/import.cpp" \
            -g"-torch/csrc/jit/netdef_converter.cpp" \
            "$@" > ${GITHUB_WORKSPACE}/clang-tidy-output.txt

          cat ${GITHUB_WORKSPACE}/clang-tidy-output.txt
      - name: Add annotations
        uses: suo/add-annotations-github-action@master
        with:
          check_name: 'clang-tidy'
          linter_output_path: 'clang-tidy-output.txt'
          commit_sha: ${{ steps.get_pr_tip.outputs.commit_sha }}
          regex: '^(?<filename>.*?):(?<lineNumber>\d+):(?<columnNumber>\d+): (?<errorDesc>.*?) \[(?<errorCode>.*)\]'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
.gitignore
@@ -8,9 +8,6 @@

## PyTorch

.coverage
.gradle
.hypothesis
.mypy_cache
*/*.pyc
*/*.so*
@@ -25,30 +22,23 @@
aten/build/
aten/src/ATen/Config.h
aten/src/ATen/cuda/CUDAConfig.h
caffe2/cpp_test/
build/
dist/
docs/src/**/*
docs/cpp/build
docs/cpp/source/api
log
test/.coverage
test/.hypothesis/
test/cpp/api/mnist
test/custom_operator/model.pt
test/data/gpu_tensors.pt
test/data/legacy_modules.t7
test/data/*.pt
test/backward_compatibility/new_schemas.txt
dropout_model.pt
test/generated_type_hints_smoketest.py
test/data/legacy_serialized.pt
test/data/linear.pt
test/htmlcov
test/cpp_extensions/install/
test/test-reports/
third_party/build/
tools/shared/_utils_internal.py
torch.egg-info/
torch/__init__.pyi
torch/nn/functional.pyi
torch/nn/modules/*.pyi
torch/csrc/autograd/generated/*
torch/csrc/cudnn/cuDNN.cpp
torch/csrc/generated
@@ -62,8 +52,6 @@ torch/csrc/nn/THNN_generic.cwrap
torch/csrc/nn/THNN_generic.h
torch/csrc/nn/THNN.cpp
torch/csrc/nn/THNN.cwrap
torch/bin/
torch/cmake/
torch/lib/*.a*
torch/lib/*.dll*
torch/lib/*.exe*
@@ -71,27 +59,16 @@ torch/lib/*.dylib*
torch/lib/*.h
torch/lib/*.lib
torch/lib/*.so*
torch/lib/protobuf*.pc
torch/lib/build
torch/lib/caffe2/
torch/lib/cmake
torch/lib/include
torch/lib/pkgconfig
torch/lib/protoc
torch/lib/protobuf/
torch/lib/tmp_install
torch/lib/torch_shm_manager
torch/lib/site-packages/
torch/lib/python*
torch/lib64
torch/include/
torch/share/
torch/test/
torch/version.py
# Root level file used in CI to specify certain env configs.
# E.g., see .circleci/config.yaml
env
.circleci/scripts/COMMIT_MSG

# IPython notebook checkpoints
.ipynb_checkpoints
@@ -174,9 +151,6 @@ docs/source/scripts/activation_images/
# OSX dir files
.DS_Store

# GDB history
.gdb_history

## Caffe2

# build, distribute, and bins (+ python proto bindings)
@@ -211,6 +185,7 @@ docs/dev
*.sst
*.ldb
LOCK
LOG*
CURRENT
MANIFEST-*

@@ -227,6 +202,11 @@ caffe2.egg-info
# Files generated by CLion
cmake-build-debug

# Files generated by ctags
CTAGS
tags
TAGS

# BEGIN NOT-CLEAN-FILES (setup.py handles this marker. Do not change.)
#
# Below files are not deleted by "setup.py clean".
@@ -241,12 +221,3 @@ cmake-build-debug
# Files generated when a patch is rejected
*.orig
*.rej

# Files generated by ctags
CTAGS
GTAGS
GRTAGS
GSYMS
GPATH
tags
TAGS
.gitmodules
@@ -1,120 +1,84 @@
[submodule "third_party/pybind11"]
  ignore = dirty
  path = third_party/pybind11
  url = https://github.com/pybind/pybind11.git
  path = third_party/pybind11
  url = https://github.com/pybind/pybind11.git
[submodule "third_party/cub"]
  ignore = dirty
  path = third_party/cub
  url = https://github.com/NVlabs/cub.git
  path = third_party/cub
  url = https://github.com/NVlabs/cub.git
[submodule "third_party/eigen"]
  ignore = dirty
  path = third_party/eigen
  url = https://github.com/eigenteam/eigen-git-mirror.git
  path = third_party/eigen
  url = https://github.com/eigenteam/eigen-git-mirror.git
[submodule "third_party/googletest"]
  ignore = dirty
  path = third_party/googletest
  url = https://github.com/google/googletest.git
  path = third_party/googletest
  url = https://github.com/google/googletest.git
[submodule "third_party/benchmark"]
  ignore = dirty
  path = third_party/benchmark
  url = https://github.com/google/benchmark.git
  path = third_party/benchmark
  url = https://github.com/google/benchmark.git
[submodule "third_party/protobuf"]
  ignore = dirty
  path = third_party/protobuf
  url = https://github.com/protocolbuffers/protobuf.git
  path = third_party/protobuf
  url = https://github.com/google/protobuf.git
[submodule "third_party/ios-cmake"]
  ignore = dirty
  path = third_party/ios-cmake
  url = https://github.com/Yangqing/ios-cmake.git
  path = third_party/ios-cmake
  url = https://github.com/Yangqing/ios-cmake.git
[submodule "third_party/NNPACK"]
  ignore = dirty
  path = third_party/NNPACK
  url = https://github.com/Maratyszcza/NNPACK.git
  path = third_party/NNPACK
  url = https://github.com/Maratyszcza/NNPACK.git
[submodule "third_party/gloo"]
  ignore = dirty
  path = third_party/gloo
  url = https://github.com/facebookincubator/gloo
  path = third_party/gloo
  url = https://github.com/facebookincubator/gloo
[submodule "third_party/NNPACK_deps/pthreadpool"]
  ignore = dirty
  path = third_party/pthreadpool
  url = https://github.com/Maratyszcza/pthreadpool.git
  path = third_party/pthreadpool
  url = https://github.com/Maratyszcza/pthreadpool.git
[submodule "third_party/NNPACK_deps/FXdiv"]
  ignore = dirty
  path = third_party/FXdiv
  url = https://github.com/Maratyszcza/FXdiv.git
  path = third_party/FXdiv
  url = https://github.com/Maratyszcza/FXdiv.git
[submodule "third_party/NNPACK_deps/FP16"]
  ignore = dirty
  path = third_party/FP16
  url = https://github.com/Maratyszcza/FP16.git
  path = third_party/FP16
  url = https://github.com/Maratyszcza/FP16.git
[submodule "third_party/NNPACK_deps/psimd"]
  ignore = dirty
  path = third_party/psimd
  url = https://github.com/Maratyszcza/psimd.git
  path = third_party/psimd
  url = https://github.com/Maratyszcza/psimd.git
[submodule "third_party/zstd"]
  ignore = dirty
  path = third_party/zstd
  url = https://github.com/facebook/zstd.git
  path = third_party/zstd
  url = https://github.com/facebook/zstd.git
[submodule "third-party/cpuinfo"]
  ignore = dirty
  path = third_party/cpuinfo
  url = https://github.com/pytorch/cpuinfo.git
  path = third_party/cpuinfo
  url = https://github.com/Maratyszcza/cpuinfo.git
[submodule "third_party/python-enum"]
  ignore = dirty
  path = third_party/python-enum
  url = https://github.com/PeachPy/enum34.git
  path = third_party/python-enum
  url = https://github.com/PeachPy/enum34.git
[submodule "third_party/python-peachpy"]
  ignore = dirty
  path = third_party/python-peachpy
  url = https://github.com/Maratyszcza/PeachPy.git
  path = third_party/python-peachpy
  url = https://github.com/Maratyszcza/PeachPy.git
[submodule "third_party/python-six"]
  ignore = dirty
  path = third_party/python-six
  url = https://github.com/benjaminp/six.git
  path = third_party/python-six
  url = https://github.com/benjaminp/six.git
[submodule "third_party/ComputeLibrary"]
  path = third_party/ComputeLibrary
  url = https://github.com/ARM-software/ComputeLibrary.git
[submodule "third_party/onnx"]
  ignore = dirty
  path = third_party/onnx
  url = https://github.com/onnx/onnx.git
  path = third_party/onnx
  url = https://github.com/onnx/onnx.git
[submodule "third_party/onnx-tensorrt"]
  ignore = dirty
  path = third_party/onnx-tensorrt
  url = https://github.com/onnx/onnx-tensorrt
  path = third_party/onnx-tensorrt
  url = https://github.com/onnx/onnx-tensorrt
[submodule "third_party/sleef"]
  ignore = dirty
  path = third_party/sleef
  url = https://github.com/shibatch/sleef
  path = third_party/sleef
  url = https://github.com/shibatch/sleef
[submodule "third_party/ideep"]
  ignore = dirty
  path = third_party/ideep
  url = https://github.com/intel/ideep
  path = third_party/ideep
  url = https://github.com/intel/ideep
[submodule "third_party/nccl/nccl"]
  ignore = dirty
  path = third_party/nccl/nccl
  url = https://github.com/NVIDIA/nccl
  path = third_party/nccl/nccl
  url = https://github.com/NVIDIA/nccl
[submodule "third_party/gemmlowp/gemmlowp"]
  ignore = dirty
  path = third_party/gemmlowp/gemmlowp
  url = https://github.com/google/gemmlowp.git
  path = third_party/gemmlowp/gemmlowp
  url = https://github.com/google/gemmlowp.git
[submodule "third_party/QNNPACK"]
  ignore = dirty
  path = third_party/QNNPACK
  url = https://github.com/pytorch/QNNPACK
  path = third_party/QNNPACK
  url = https://github.com/pytorch/QNNPACK
[submodule "third_party/neon2sse"]
  ignore = dirty
  path = third_party/neon2sse
  url = https://github.com/intel/ARM_NEON_2_x86_SSE.git
  path = third_party/neon2sse
  url = https://github.com/intel/ARM_NEON_2_x86_SSE.git
[submodule "third_party/fbgemm"]
  ignore = dirty
  path = third_party/fbgemm
  url = https://github.com/pytorch/fbgemm
[submodule "third_party/foxi"]
  ignore = dirty
  path = third_party/foxi
  url = https://github.com/houseroad/foxi.git
[submodule "third_party/tbb"]
  path = third_party/tbb
  url = https://github.com/01org/tbb
  branch = tbb_2018
[submodule "android/libs/fbjni"]
  ignore = dirty
  path = android/libs/fbjni
  url = https://github.com/facebookincubator/fbjni.git
  path = third_party/fbgemm
  url = https://github.com/pytorch/fbgemm
@@ -1,60 +0,0 @@
#!/bin/bash

source "$(dirname "${BASH_SOURCE[0]}")/common.sh"

# Anywhere except $ROOT_DIR should work. This is so the python import doesn't
# get confused by any 'caffe2' directory in cwd
cd "$INSTALL_PREFIX"

if [[ $BUILD_ENVIRONMENT == *-cuda* ]]; then
  num_gpus=$(nvidia-smi -L | wc -l)
elif [[ $BUILD_ENVIRONMENT == *-rocm* ]]; then
  num_gpus=$(rocminfo | grep 'Device Type.*GPU' | wc -l)
else
  num_gpus=0
fi

caffe2_pypath="$(cd /usr && $PYTHON -c 'import os; import caffe2; print(os.path.dirname(os.path.realpath(caffe2.__file__)))')"
# Resnet50
if (( $num_gpus == 0 )); then
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 128 --epoch_size 12800 --num_epochs 2 --use_cpu
fi
if (( $num_gpus >= 1 )); then
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 128 --epoch_size 12800 --num_epochs 2 --num_gpus 1
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 256 --epoch_size 25600 --num_epochs 2 --num_gpus 1 --float16_compute --dtype float16
fi
if (( $num_gpus >= 2 )); then
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 256 --epoch_size 25600 --num_epochs 2 --num_gpus 2
fi
if (( $num_gpus >= 4 )); then
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 512 --epoch_size 51200 --num_epochs 2 --num_gpus 4
fi

# ResNext
if (( $num_gpus == 0 )); then
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --resnext_num_groups 32 --resnext_width_per_group 4 --num_layers 101 --train_data null --batch_size 32 --epoch_size 3200 --num_epochs 2 --use_cpu
fi
if (( $num_gpus >= 1 )); then
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --resnext_num_groups 32 --resnext_width_per_group 4 --num_layers 101 --train_data null --batch_size 32 --epoch_size 3200 --num_epochs 2 --num_gpus 1
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --resnext_num_groups 32 --resnext_width_per_group 4 --num_layers 101 --train_data null --batch_size 64 --epoch_size 3200 --num_epochs 2 --num_gpus 1 --float16_compute --dtype float16
fi
if (( $num_gpus >= 2 )); then
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --resnext_num_groups 32 --resnext_width_per_group 4 --num_layers 101 --train_data null --batch_size 64 --epoch_size 6400 --num_epochs 2 --num_gpus 2
fi
if (( $num_gpus >= 4 )); then
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --resnext_num_groups 32 --resnext_width_per_group 4 --num_layers 101 --train_data null --batch_size 128 --epoch_size 12800 --num_epochs 2 --num_gpus 4
fi

# Shufflenet
if (( $num_gpus == 0 )); then
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 32 --epoch_size 3200 --num_epochs 2 --use_cpu --model shufflenet
fi
if (( $num_gpus >= 1 )); then
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 32 --epoch_size 3200 --num_epochs 2 --num_gpus 1 --model shufflenet
fi
if (( $num_gpus >= 2 )); then
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 64 --epoch_size 6400 --num_epochs 2 --num_gpus 2 --model shufflenet
fi
if (( $num_gpus >= 4 )); then
  "$PYTHON" "$caffe2_pypath/python/examples/imagenet_trainer.py" --train_data null --batch_size 128 --epoch_size 12800 --num_epochs 2 --num_gpus 4 --model shufflenet
fi