It’s always been clear to developers that a project’s source code and how to build that source code are inextricably linked. After all, we’ve been including Makefiles (and, more recently, declarative build specifications like pom.xml for Maven and stack.yaml for Haskell Stack) with our source code since time immemorial (well, 1976).
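As a concrete illustration of how small such a declarative build specification can be, here is a minimal stack.yaml sketch; the snapshot name is just an example:

```yaml
# stack.yaml: a minimal sketch (the snapshot name is illustrative)
resolver: lts-18.28   # pins the GHC version and a consistent package set
packages:
- .                   # build the package in the repository root
```

The compiler and dependency versions travel with the code, so anyone who checks out the repository builds against the same set.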

What has been less clear is that the build process and environment are also important, especially when using Continuous Integration. First-generation CI systems such as Jenkins and Bamboo have you use a web-based UI to set up build variables, jobs, stages, and triggers. You set up an agent/slave on a machine that has the required tools, system libraries, and other prerequisites installed (or you just run builds on the same machine as the CI system itself). When a new version of your software needs a change to its build pipeline, you log into the web UI and make that change. If you need a newer version of a system library or tool for your build, you SSH into the build agent and make the change on the command line (or perhaps use a configuration-management tool such as Chef or Ansible to help).

Newer-generation CI systems such as Travis CI, AppVeyor, and GitLab's integrated CI instead have you put as much of this information as possible in a file inside the repository (e.g. named .travis.yml or .gitlab-ci.yml). With the advent of convenient cloud VMs and containerization using Docker, each build job can specify a full OS image with the required prerequisites, and the services it needs to have running. Jobs are fully isolated from each other. When a new version of your software needs a change to its build pipeline, that change is made right alongside the source code. If you need a newer version of a system library or tool for your build, you just push a new Docker image and instruct the build to use it.
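As a sketch of what this looks like in practice, here is a minimal .gitlab-ci.yml; the image names, commands, and service are illustrative rather than prescriptive:

```yaml
# .gitlab-ci.yml: a minimal sketch; images, commands, and services are illustrative
stages:
  - build
  - test

build:
  stage: build
  image: haskell:9.4        # each job pins its own Docker image
  script:
    - stack build
  artifacts:
    paths:
      - .stack-work/        # pass build products along to later stages

test:
  stage: test
  image: haskell:9.4
  services:
    - postgres:15           # a service container the job needs running
  script:
    - stack test
```

Because this file is versioned with the code, checking out an old release also checks out the pipeline and environment that built it.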

Tracking the build pipeline and environment alongside the source code rather than through a web UI has a number of advantages, though there are also potential pitfalls to be aware of.

Of course, nothing described here is entirely new. You could be judicious about having a first-generation CI system make only very simple call-outs to scripts in your repo, and those scripts could use VMs or chrooted environments themselves; in fact, this has long been considered a best practice. Jenkins has plug-ins to integrate with just about any VM or containerization environment you can think of, as well as plug-ins that support in-repo pipelines. The difference is that the newer generation of CI systems makes this way of operating the default rather than something you have to do extra work to achieve (albeit with the loss of some of the first-generation tools' flexibility).
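Whichever generation of tool you use, that best practice has the same shape: keep the CI-side configuration thin and put the real build logic in a script versioned with the code. A sketch in the newer in-repo style (the script path is hypothetical):

```yaml
# .travis.yml: the config stays thin and delegates to a versioned script
language: generic
script: ./ci/build.sh   # hypothetical script living in the repository
```

A first-generation system gets the same benefit by having its UI-configured job do nothing more than invoke that script.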

CI has always been an important part of the FP Complete development and DevOps arsenal, and these principles are at the core of our approach regardless of which CI system is in use. We have considerable experience converting existing CI pipelines to these principles on both first-generation and newer-generation systems, and we offer consulting and training.
