Feature branch complete, you push the button to submit your merge request, then wait for peers to review. Your changes enter the Continuous Integration (CI) pipeline, possibly building and likely validating your code with unit/integration tests and style linters. Results are sent back to the repository and displayed in-line with comments from your peers. Once the merge is approved, your Continuous Deployment (CD) plan kicks in to deliver the new feature to its appropriate environment.
A typical CI/CD workflow includes some form of automation server: either a SaaS platform such as Travis CI or CircleCI, or a self-managed solution like Jenkins. GitHub and AWS offer their own tool suites through Actions and CodeBuild, respectively, but this article focuses on GitLab Runner.
Each job defined within a `.gitlab-ci.yml` pipeline configuration is executed by a runner. A runner instance can be a virtual (cloud) server, a bare-metal machine, a Docker container, or a Kubernetes cluster, and can be dedicated to specific projects or shared across a group. Without proper planning, runners can easily become a bottleneck in your CI/CD pipeline, with idle machines wasting resources. GitLab, however, supports a forked version of Docker Machine that, paired with a runner manager, provides an autoscaled configuration.
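As a sketch of what such a setup can look like in the manager's configuration, the `docker+machine` executor is combined with idle and scaling settings. The values below are illustrative placeholders, not recommendations:

```toml
# /etc/gitlab-runner/config.toml (illustrative sketch)
concurrent = 10                 # total jobs across all runners on this manager

[[runners]]
  name     = "autoscale-manager"
  executor = "docker+machine"
  limit    = 8                  # cap on machines created by this runner
  [runners.docker]
    image = "alpine:latest"     # default job image if none is specified
  [runners.machine]
    IdleCount      = 1          # machines kept warm while no jobs are queued
    IdleTime       = 600        # seconds before an idle machine is removed
    MachineDriver  = "amazonec2"
    MachineName    = "ci-runner-%s"
    MachineOptions = ["amazonec2-instance-type=t3.medium"]
```

With `IdleCount = 0`, machines are created strictly on demand, trading pipeline start-up latency for lower cost.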
The manager will not run any jobs itself, focusing instead on spinning up a new runner instance for each job defined in `.gitlab-ci.yml`. As such, a basic cloud server (e.g., an AWS `t3.micro`) is the only dedicated resource needed. One manager can define multiple runner configurations, allowing you to customize pipelines with tags. Additionally, with efficient pipeline stages and directed acyclic graph (DAG) definitions, you can ensure that independent jobs run in parallel, regardless of stage. The manager can be configured to limit concurrent runners (jobs); prevent, or ensure, idle runners; and define autoscaling timeframes (peak hours) for different configurations.
Because autoscaling creates a new instance for each job, pipelines can no longer depend on a traditional server-side cache. Instead, the distributed caching feature must be enabled and configured within the runner manager. Cache is runner-specific by default, but can be shared between runners with the `Shared` setting:

```toml
# /etc/gitlab-runner/config.toml
[runners.cache]
  Type = "s3"
  Shared = true
  [runners.cache.s3]
  [runners.cache.gcs]
  [runners.cache.azure]
```
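With a distributed cache in place, individual jobs opt in through the `cache` keyword in `.gitlab-ci.yml`. The key and paths below are illustrative:

```yaml
# .gitlab-ci.yml (sketch)
test:
  cache:
    key: "$CI_COMMIT_REF_SLUG"   # one cache entry per branch
    paths:
      - vendor/                  # hypothetical dependency directory
  script:
    - ./run_tests.sh             # hypothetical test script
```

Keying the cache on `$CI_COMMIT_REF_SLUG` keeps branches from overwriting each other's dependencies while still reusing the cache across runner instances.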
Additionally, you will not need to use the Docker layer cache, so this should be disabled in the runner configuration:

```toml
# /etc/gitlab-runner/config.toml
[runners.docker]
  disable_cache = true
```
Special consideration needs to be given to pipelines that build a Docker image. Because the job itself is executed within a Docker container, `docker build` and `docker push` commands would not typically be available. The Docker-in-Docker (dind) image allows for this but requires privileged mode; a better alternative is to bind-mount the Docker socket into the container by updating the manager's configuration, avoiding privileged mode altogether:

```toml
# /etc/gitlab-runner/config.toml
[[runners]]
  ...
  [runners.docker]
    image = "docker:stable"
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
```
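With the socket mounted, an image-building job can invoke the Docker CLI directly. This sketch assumes GitLab's built-in container registry and its predefined `CI_REGISTRY*` variables:

```yaml
# .gitlab-ci.yml (sketch)
build image:
  image: docker:stable
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

One trade-off to note: socket-mounted jobs share the host's Docker daemon, so images and layers built by one job are visible to others running on the same machine.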
In addition to Docker Machine, GitLab supports an executor for AWS Fargate. Instead of creating virtual servers to run each container (job), the runner manager launches a Fargate task into an ECS cluster. There are some limits to this strategy, including lack of support for the `services` keyword in `.gitlab-ci.yml` and the inability to run Docker-in-Docker jobs. However, you can configure your runner manager to support multiple runner configurations; e.g., running database-integrated test jobs on-demand, and deployment jobs in ECS.
```yaml
# .gitlab-ci.yml
build job:
  tags: [dind]
  ...
test job:
  tags: [service]
  ...
deploy job:
  tags: [serverless]
  ...
```