Examples¶
Here you can find example configurations for different programming languages and frameworks.
Android¶
Cirrus CI has a set of Docker images ready for Android development.
If these images are not the right fit for your project you can always use any custom Docker image with Cirrus CI. For these images, a .cirrus.yml configuration file can look like this:

```yaml
container:
  image: ghcr.io/cirruslabs/android-sdk:30

check_android_task:
  check_script: ./gradlew check connectedCheck
```
Or like this, if a hardware-accelerated emulator is needed for the tests:
```yaml
container:
  image: ghcr.io/cirruslabs/android-sdk:30
  cpu: 4
  memory: 12G
  kvm: true

check_android_task:
  install_emulator_script:
    sdkmanager --install "system-images;android-30;google_apis;x86"
  create_avd_script:
    echo no | avdmanager create avd --force
      -n emulator
      -k "system-images;android-30;google_apis;x86"
  start_avd_background_script:
    $ANDROID_HOME/emulator/emulator
      -avd emulator
      -no-audio
      -no-boot-anim
      -gpu swiftshader_indirect
      -no-snapshot
      -no-window
  # assemble while emulator is starting
  assemble_instrumented_tests_script:
    ./gradlew assembleDebugAndroidTest
  wait_for_avd_script:
    adb wait-for-device shell 'while [[ -z $(getprop sys.boot_completed) ]]; do sleep 3; done; input keyevent 82'
  check_script: ./gradlew check connectedCheck
```
Info
Please don't forget to set up Remote Build Cache for your Gradle project.
Android Lint¶
The Cirrus CI annotator supports providing inline reports on PRs and can parse Android Lint reports. Here is an example of an Android Lint task that you can add to your .cirrus.yml:
```yaml
task:
  name: Android Lint
  lint_script: ./gradlew lintDebug
  always:
    android-lint_artifacts:
      path: "**/reports/lint-results-debug.xml"
      type: text/xml
      format: android-lint
```
Bazel¶
The Bazel Team provides a set of official Docker images with Bazel pre-installed. Here is an example of how a .cirrus.yml can look for Bazel:
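For instance, a minimal configuration could look like this (the image tag and the //... test pattern below are illustrative):

```yaml
container:
  image: l.gcr.io/google/bazel:latest

test_task:
  test_script: bazel test //...
```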
If these images are not the right fit for your project you can always use any custom Docker image with Cirrus CI.
Remote Cache¶
Cirrus CI has a built-in HTTP Cache which is compatible with Bazel's remote cache.
Here is an example of how Cirrus CI HTTP Cache can be used with Bazel:
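A sketch of such a task (the CIRRUS_HTTP_CACHE_HOST variable is provided by Cirrus CI; the test pattern is illustrative):

```yaml
test_task:
  test_script:
    bazel test
      --remote_cache=http://$CIRRUS_HTTP_CACHE_HOST
      //...
```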
C++¶
Official GCC Docker images can be used for builds. Here is an example of a .cirrus.yml
that runs tests:
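A minimal sketch, assuming a Makefile-based project with a test target:

```yaml
container:
  image: gcc:latest

task:
  test_script: make test
```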
Crystal¶
Official Crystal Docker images can be used for builds. Here is an example
of a .cirrus.yml
that caches dependencies and runs tests:
```yaml
container:
  image: crystallang/crystal:latest

spec_task:
  shard_cache:
    fingerprint_script: cat shard.lock
    populate_script: shards install
    folder: lib
  spec_script: crystal spec
```
Elixir¶
Official Elixir Docker images can be used for builds. Here is an example of a .cirrus.yml
that runs tests:
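A sketch of such a configuration, assuming a standard Mix project (the cache and script names below are illustrative):

```yaml
container:
  image: elixir:latest

test_task:
  deps_cache:
    folder: deps
    fingerprint_script: cat mix.lock
    populate_script: mix deps.get
  compile_script: mix compile
  test_script: mix test
```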
Erlang¶
Official Erlang Docker images can be used for builds. Here is an example of a .cirrus.yml
that runs tests:
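A sketch, assuming a rebar3-based project (the cache folder and test command are assumptions for illustration):

```yaml
container:
  image: erlang:latest

test_task:
  rebar3_cache:
    folder: ~/.cache/rebar3
    fingerprint_script: cat rebar.lock
  test_script: rebar3 eunit
```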
Flutter¶
Cirrus CI provides a set of Docker images with Flutter and Dart SDK pre-installed.
Here is an example of how .cirrus.yml
can be written for Flutter:
```yaml
container:
  image: ghcr.io/cirruslabs/flutter:latest

test_task:
  pub_cache:
    folder: ~/.pub-cache
  test_script: flutter test --machine > report.json
  always:
    report_artifacts:
      path: report.json
      format: flutter
```
If these images are not the right fit for your project you can always use any custom Docker image with Cirrus CI.
Flutter Web¶
Our Docker images with Flutter and Dart SDK pre-installed have special *-web tags with Chromium pre-installed. You can use these tags to run Flutter Web tests. First, define a new chromium platform in your dart_test.yaml:
```yaml
define_platforms:
  chromium:
    name: Chromium
    extends: chrome
    settings:
      arguments: --no-sandbox
      executable:
        linux: chromium
```
Now you'll be able to run tests targeting the web via pub run test test -p chromium.
Go¶
The best way to test Go projects is by using the official Go Docker images. Here is an example of how a .cirrus.yml can look for a project using Go Modules:
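For example (assuming go.sum is committed and tests live under the module root):

```yaml
container:
  image: golang:latest

test_task:
  modules_cache:
    fingerprint_script: cat go.sum
    folder: $GOPATH/pkg/mod
  build_script: go build ./...
  test_script: go test ./...
```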
GolangCI Lint¶
We highly recommend configuring some sort of linting for your Go project. One of the options is GolangCI Lint. The Cirrus CI annotator supports providing inline reports on PRs and can parse GolangCI Lint reports. Here is an example of a GolangCI Lint task that you can add to your .cirrus.yml:
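A sketch of such a task (the image tag and the report path below are illustrative):

```yaml
task:
  name: GolangCI Lint
  container:
    image: golangci/golangci-lint:latest
  run_script: golangci-lint run -v --out-format json > lint-report.json
  always:
    golangci_artifacts:
      path: lint-report.json
      type: text/json
      format: golangci
```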
Gradle¶
We recommend using the official Gradle Docker containers since they have Gradle-specific configurations already set up. For example, standard Java containers don't have a pre-configured user and as a result don't have the HOME environment variable present, which makes Gradle complain.
Caching¶
To preserve caches between Gradle runs, add a cache instruction as shown below. The trick here is to clean up the ~/.gradle/caches folder at the very end of a build. Gradle creates some unique nondeterministic files in the ~/.gradle/caches folder on every run, which makes Cirrus CI re-upload the cache every time. This way, you get faster builds!
```yaml
container:
  image: gradle:jdk11

check_task:
  gradle_cache:
    folder: ~/.gradle/caches
  check_script: gradle check
  cleanup_before_cache_script:
    - rm -rf ~/.gradle/caches/$GRADLE_VERSION/
    - rm -rf ~/.gradle/caches/transforms-1
    - rm -rf ~/.gradle/caches/journal-1
    - rm -rf ~/.gradle/caches/jars-3/*/buildSrc.jar
    - find ~/.gradle/caches/ -name "*.lock" -type f -delete
```

Or, using an ARM container:

```yaml
arm_container:
  image: gradle:jdk11

check_task:
  gradle_cache:
    folder: ~/.gradle/caches
  check_script: gradle check
  cleanup_before_cache_script:
    - rm -rf ~/.gradle/caches/$GRADLE_VERSION/
    - rm -rf ~/.gradle/caches/transforms-1
    - rm -rf ~/.gradle/caches/journal-1
    - rm -rf ~/.gradle/caches/jars-3/*/buildSrc.jar
    - find ~/.gradle/caches/ -name "*.lock" -type f -delete
```
Build Cache¶
Here is how HTTP Cache can be used with Gradle by adding the following code to settings.gradle
:
```groovy
ext.isCiServer = System.getenv().containsKey("CIRRUS_CI")
ext.isMasterBranch = System.getenv()["CIRRUS_BRANCH"] == "master"
ext.buildCacheHost = System.getenv().getOrDefault("CIRRUS_HTTP_CACHE_HOST", "localhost:12321")

buildCache {
  local {
    enabled = !isCiServer
  }
  remote(HttpBuildCache) {
    url = "http://${buildCacheHost}/"
    enabled = isCiServer
    push = isMasterBranch
  }
}
```
If your project uses a buildSrc directory, the build cache configuration should also be applied to buildSrc/settings.gradle. To do this, put the build cache configuration above into a separate gradle/buildCacheSettings.gradle file, then apply it to both your settings.gradle and buildSrc/settings.gradle.
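The apply statements themselves are one-liners; assuming the gradle/buildCacheSettings.gradle path above, they could look like this. In settings.gradle:

```groovy
apply from: 'gradle/buildCacheSettings.gradle'
```

In buildSrc/settings.gradle (note the extra ../ since buildSrc is a nested build):

```groovy
apply from: '../gradle/buildCacheSettings.gradle'
```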
Please make sure you are running Gradle commands with the --build-cache flag or have org.gradle.caching enabled in your gradle.properties file.
Here is an example of a gradle.properties file that we use internally for all Gradle projects:

```properties
org.gradle.daemon=true
org.gradle.caching=true
org.gradle.parallel=true
org.gradle.configureondemand=true
org.gradle.jvmargs=-Dfile.encoding=UTF-8
```
JUnit¶
Here is a .cirrus.yml that parses and uploads JUnit reports at the end of the build:

```yaml
junit_test_task:
  junit_script: <replace this comment with instructions to run the test suites>
  always:
    junit_result_artifacts:
      path: "**/test-results/**.xml"
      format: junit
      type: text/xml
```
If it is running on a pull request, annotations will also be displayed in-line.
Maven¶
Official Maven Docker images can be used for building and testing Maven projects:
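For example (caching the local repository in ~/.m2; the test goal is illustrative):

```yaml
container:
  image: maven:latest

task:
  maven_cache:
    folder: ~/.m2
  test_script: mvn test
```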
MySQL¶
The Additional Containers feature makes it super simple to run the same Docker MySQL image as you might be running in production for your application. A running instance of the latest GA version of MySQL can be set up with a few lines in your .cirrus.yml:
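A sketch (the main container image and the empty-password setting below are illustrative):

```yaml
container:
  image: golang:latest
  additional_containers:
    - name: mysql
      image: mysql:latest
      port: 3306
      env:
        MYSQL_ALLOW_EMPTY_PASSWORD: true
```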
With the configuration above, MySQL will be available on localhost:3306. Use an empty password to log in as the root user.
Node¶
Official Node.js Docker images can be used for building and testing Node.js applications.
npm¶
Here is an example of a .cirrus.yml
that caches node_modules
based on contents of package-lock.json
file and runs tests:
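For example (assuming tests run via npm test and the cache is populated with npm ci):

```yaml
container:
  image: node:latest

test_task:
  node_modules_cache:
    folder: node_modules
    fingerprint_script: cat package-lock.json
    populate_script: npm ci
  test_script: npm test
```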
Yarn¶
Here is an example of a .cirrus.yml
that caches node_modules
based on the contents of a yarn.lock
file and runs tests:
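For example (assuming tests run via yarn run test):

```yaml
container:
  image: node:latest

test_task:
  node_modules_cache:
    folder: node_modules
    fingerprint_script: cat yarn.lock
    populate_script: yarn install
  test_script: yarn run test
```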
Yarn 2¶
Yarn 2 (also known as Yarn Berry) has a different package cache location (.yarn/cache). To run tests, it would look like this:
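A sketch (assuming a yarn install step and tests run via yarn run test):

```yaml
container:
  image: node:latest

test_task:
  yarn_cache:
    folder: .yarn/cache
    fingerprint_script: cat yarn.lock
  install_script: yarn install
  test_script: yarn run test
```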
ESLint Annotations¶
ESLint reports are supported by Cirrus CI Annotations. This way you can see all the linting issues without leaving the pull request you are reviewing! You'll need to generate an ESLint report file (for example, eslint.json) in one of your task's scripts. Then save it as an artifact in eslint format:
```yaml
task:
  # boilerplate
  eslint_script: ...
  always:
    eslint_report_artifact:
      path: eslint.json
      format: eslint
```
Protocol Buffers Linting¶
Here is an example of how *.proto files can be linted using the Buf CLI.
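A sketch of such a task, assuming the official bufbuild/buf image:

```yaml
task:
  name: Lint Protocol Buffers
  container:
    image: bufbuild/buf:latest
  lint_script: buf lint
```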
Python¶
Official Python Docker images can be used for builds. Here is an example of a .cirrus.yml
that caches installed packages based on contents of requirements.txt
and runs pytest
:
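For example (assuming tests run via pytest from the repository root):

```yaml
container:
  image: python:slim

test_task:
  pip_cache:
    folder: ~/.cache/pip
    fingerprint_script: echo $PYTHON_VERSION && cat requirements.txt
    populate_script: pip install -r requirements.txt
  install_script: pip install -r requirements.txt
  test_script: pytest
```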
Building PyPI Packages¶
Also using the Python Docker images, you can run tests if you are making packages for PyPI. Here is an example .cirrus.yml
for doing so:
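A sketch (the build and twine tools below are one common choice for packaging checks, not the only one):

```yaml
container:
  image: python:slim

build_package_task:
  install_script: pip install build twine
  build_script: python -m build
  check_script: twine check dist/*
```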
Linting¶
You can easily set up linting with Cirrus CI and flake8, here is an example .cirrus.yml
:
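A sketch of such a linting task (the alpine/flake8 image and the *.py glob are illustrative):

```yaml
lint_task:
  container:
    image: alpine/flake8:latest
  script: flake8 *.py
```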
Unittest Annotations¶
Python Unittest reports are supported by Cirrus CI Annotations.
This way you can see what tests are failing without leaving the pull request you are reviewing! Here is an example
of a .cirrus.yml
that produces and stores Unittest
reports:
```yaml
unittest_task:
  container:
    image: python:slim
  install_dependencies_script: |
    pip3 install unittest_xml_reporting
  run_tests_script: python3 -m xmlrunner tests
  # replace 'tests' with the module,
  # unittest.TestCase, or unittest.TestSuite
  # that the tests are in
  always:
    upload_results_artifacts:
      path: ./*.xml
      format: junit
      type: text/xml
```

Or, using an ARM container:

```yaml
unittest_task:
  arm_container:
    image: python:slim
  install_dependencies_script: |
    pip3 install unittest_xml_reporting
  run_tests_script: python3 -m xmlrunner tests
  # replace 'tests' with the module,
  # unittest.TestCase, or unittest.TestSuite
  # that the tests are in
  always:
    upload_results_artifacts:
      path: ./*.xml
      format: junit
      type: text/xml
```
Now you should get annotations for your test results.
Qodana¶
Qodana by JetBrains is a code quality monitoring tool that identifies and suggests fixes for bugs, security vulnerabilities, duplications, and imperfections. It brings all the smart features you love in the JetBrains IDEs.
Here is an example of a .cirrus.yml configuration file which will save Qodana's report as an artifact, parse it, and report it as annotations:
```yaml
task:
  name: Qodana
  container:
    image: jetbrains/qodana:latest
  env:
    CIRRUS_WORKING_DIR: /data/project
  generate_report_script:
    - /opt/idea/bin/entrypoint --save-report --report-dir=report
  always:
    results_artifacts:
      path: "report/results/result-allProblems.json"
      format: qodana
```
Release Assets¶
Cirrus CI doesn't provide built-in functionality to upload artifacts to a GitHub release, but this functionality can be added via a script. For a release, Cirrus CI will provide the CIRRUS_RELEASE environment variable along with the CIRRUS_TAG environment variable. CIRRUS_RELEASE contains the release id, which can be used to upload assets.
Cirrus CI only requires write access to the Checks API and, for security reasons, doesn't require write access to repository contents. That's why you need to create a personal access token with full access to the repo scope. Once an access token is created, please create an encrypted variable from it and save it to .cirrus.yml:
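For example (the ENCRYPTED[...] value is a placeholder for your actual encrypted token):

```yaml
env:
  GITHUB_TOKEN: ENCRYPTED[...]
```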
Now you can use a script to upload your assets:
```bash
#!/usr/bin/env bash

if [[ "$CIRRUS_RELEASE" == "" ]]; then
  echo "Not a release. No need to deploy!"
  exit 0
fi

if [[ "$GITHUB_TOKEN" == "" ]]; then
  echo "Please provide GitHub access token via GITHUB_TOKEN environment variable!"
  exit 1
fi

file_content_type="application/octet-stream"
files_to_upload=(
  # relative paths of assets to upload
)

# iterate over every element of the array, preserving paths with spaces
for fpath in "${files_to_upload[@]}"
do
  echo "Uploading $fpath..."
  name=$(basename "$fpath")
  url_to_upload="https://uploads.github.com/repos/$CIRRUS_REPO_FULL_NAME/releases/$CIRRUS_RELEASE/assets?name=$name"
  curl -X POST \
    --data-binary @"$fpath" \
    --header "Authorization: token $GITHUB_TOKEN" \
    --header "Content-Type: $file_content_type" \
    "$url_to_upload"
done
```
Ruby¶
Official Ruby Docker images can be used for builds.
Here is an example of a .cirrus.yml that caches installed gems based on the Ruby version and the contents of Gemfile.lock, and runs rspec:
```yaml
container:
  image: ruby:latest

rspec_task:
  bundle_cache:
    folder: /usr/local/bundle
    fingerprint_script:
      - echo $RUBY_VERSION
      - cat Gemfile.lock
    populate_script: bundle install
  rspec_script: bundle exec rspec --format json --out rspec.json
  always:
    rspec_report_artifacts:
      path: rspec.json
      type: text/json
      format: rspec
```

Or, using an ARM container:

```yaml
arm_container:
  image: ruby:latest

rspec_task:
  bundle_cache:
    folder: /usr/local/bundle
    fingerprint_script:
      - echo $RUBY_VERSION
      - cat Gemfile.lock
    populate_script: bundle install
  rspec_script: bundle exec rspec --format json --out rspec.json
  always:
    rspec_report_artifacts:
      path: rspec.json
      type: text/json
      format: rspec
```
Repositories without Gemfile.lock
When you are not committing Gemfile.lock (in Ruby gem repositories, for example), you can run bundle install (or bundle update) in an install_script instead of a populate_script in bundle_cache. The Cirrus Agent is clever enough to re-upload the cache entry only if the cached folder has changed during task execution.
Here is an example of a .cirrus.yml
that always runs bundle install
:
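A sketch (fingerprinting on the Ruby version only, since there is no lock file to hash):

```yaml
container:
  image: ruby:latest

rspec_task:
  bundle_cache:
    folder: /usr/local/bundle
    fingerprint_script: echo $RUBY_VERSION
  install_script: bundle install
  rspec_script: bundle exec rspec
```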
Test Parallelization
It's super easy to add intelligent test splitting by using Knapsack Pro and a matrix modification. After setting up the Knapsack Pro gem, you can add sharding like this:
```yaml
task:
  matrix:
    name: rspec (shard 1)
    name: rspec (shard 2)
    name: rspec (shard 3)
    name: rspec (shard 4)
  bundle_cache:
    folder: /usr/local/bundle
    fingerprint_script: cat Gemfile.lock
    populate_script: bundle install
  rspec_script: bundle exec rake knapsack_pro:rspec
```
This will create four shards that will theoretically run tests 4x faster by splitting all tests equally between them.
RSpec and RuboCop Annotations¶
Cirrus CI natively supports RSpec and RuboCop machine-parsable JSON reports. To get behavior-driven test annotations, generate and upload an rspec artifact from your test task:
```yaml
container:
  image: ruby:latest

task:
  name: RSpec
  bundle_cache:
    folder: /usr/local/bundle
    fingerprint_script:
      - echo $RUBY_VERSION
      - cat Gemfile.lock
    populate_script: bundle install
  script: bundle exec rspec --format json --out rspec.json
  always:
    rspec_artifacts:
      path: rspec.json
      type: text/json
      format: rspec
```

Or, using an ARM container:

```yaml
arm_container:
  image: ruby:latest

task:
  name: RSpec
  bundle_cache:
    folder: /usr/local/bundle
    fingerprint_script:
      - echo $RUBY_VERSION
      - cat Gemfile.lock
    populate_script: bundle install
  script: bundle exec rspec --format json --out rspec.json
  always:
    rspec_artifacts:
      path: rspec.json
      type: text/json
      format: rspec
```
Generate a rubocop
artifact to quickly gain context for linter/formatter annotations:
```yaml
container:
  image: ruby:latest

task:
  name: RuboCop
  bundle_cache:
    folder: /usr/local/bundle
    fingerprint_script:
      - echo $RUBY_VERSION
      - cat Gemfile.lock
    populate_script: bundle install
  script: bundle exec rubocop --format json --out rubocop.json
  always:
    rubocop_artifacts:
      path: rubocop.json
      type: text/json
      format: rubocop
```

Or, using an ARM container:

```yaml
arm_container:
  image: ruby:latest

task:
  name: RuboCop
  bundle_cache:
    folder: /usr/local/bundle
    fingerprint_script:
      - echo $RUBY_VERSION
      - cat Gemfile.lock
    populate_script: bundle install
  script: bundle exec rubocop --format json --out rubocop.json
  always:
    rubocop_artifacts:
      path: rubocop.json
      type: text/json
      format: rubocop
```
Rust¶
Official Rust Docker images can be used for builds. Here is a basic example of a .cirrus.yml that caches crates in $CARGO_HOME based on the contents of Cargo.lock:
```yaml
container:
  image: rust:latest

test_task:
  registry_cache:
    folder: $CARGO_HOME/registry
    fingerprint_script: cat Cargo.lock
  target_cache:
    folder: target
    fingerprint_script:
      - rustc --version
      - cat Cargo.lock
  build_script: cargo build
  test_script: cargo test
  before_cache_script: rm -rf $CARGO_HOME/registry/index
```

Or, using an ARM container:

```yaml
arm_container:
  image: rust:latest

test_task:
  registry_cache:
    folder: $CARGO_HOME/registry
    fingerprint_script: cat Cargo.lock
  target_cache:
    folder: target
    fingerprint_script:
      - rustc --version
      - cat Cargo.lock
  build_script: cargo build
  test_script: cargo test
  before_cache_script: rm -rf $CARGO_HOME/registry/index
```
Caching Cleanup
Please note the before_cache_script that removes the registry index from the cache before uploading it at the end of a successful task. The registry index changes very rapidly, making the cache invalid. before_cache_script deletes the index and leaves only the required crates for caching.
Rust Nightly¶
It is possible to use nightly builds of Rust via the official rustlang/rust:nightly container. Here is an example of a .cirrus.yml that runs tests against the latest stable and nightly versions of Rust:
```yaml
test_task:
  matrix:
    - container:
        image: rust:latest
    - allow_failures: true
      container:
        image: rustlang/rust:nightly
  registry_cache:
    folder: $CARGO_HOME/registry
    fingerprint_script: cat Cargo.lock
  target_cache:
    folder: target
    fingerprint_script:
      - rustc --version
      - cat Cargo.lock
  build_script: cargo build
  test_script: cargo test
  before_cache_script: rm -rf $CARGO_HOME/registry/index
```

Or, using ARM containers:

```yaml
test_task:
  matrix:
    - arm_container:
        image: rust:latest
    - allow_failures: true
      arm_container:
        image: rustlang/rust:nightly
  registry_cache:
    folder: $CARGO_HOME/registry
    fingerprint_script: cat Cargo.lock
  target_cache:
    folder: target
    fingerprint_script:
      - rustc --version
      - cat Cargo.lock
  build_script: cargo build
  test_script: cargo test
  before_cache_script: rm -rf $CARGO_HOME/registry/index
```
FreeBSD Caveats
Vanilla FreeBSD VMs don't set some environment variables required by Cargo for effective caching. Setting the HOME environment variable to some arbitrary location should fix caching:
```yaml
freebsd_instance:
  image-family: freebsd-14-0

task:
  name: cargo test (stable)
  env:
    HOME: /tmp # cargo needs it
  install_script: pkg install -y rust
  cargo_cache:
    folder: $HOME/.cargo/registry
    fingerprint_script: cat Cargo.lock
  build_script: cargo build --all
  test_script: cargo test --all --all-targets
  before_cache_script: rm -rf $HOME/.cargo/registry/index
```
XCLogParser¶
XCLogParser is a CLI tool that parses Xcode and xcodebuild logs (xcactivitylog files) and produces reports in different formats. Here is an example of a .cirrus.yml configuration file which will save XCLogParser's flat JSON report as an artifact, parse it, and report it as annotations:
```yaml
macos_instance:
  image: big-sur-xcode

task:
  name: XCLogParser
  build_script:
    - xcodebuild -scheme noapp -derivedDataPath ~/dd
  always:
    xclogparser_parse_script:
      - brew install xclogparser
      - xclogparser parse --project noapp --reporter flatJson --output xclogparser.json --derived_data ~/dd
    xclogparser_upload_artifacts:
      path: "xclogparser.json"
      type: text/json
      format: xclogparser
```