Containerization

Popularized by Docker, containerization makes it easy to package an application together with its full set of system dependencies. By orchestrating the deployment of those containers with Kubernetes, you can achieve replicable, manageable clusters of containers. Compared with full virtual machines, containers have several advantages:

  • It's easier to share system resources (especially RAM) with the host OS
  • It requires less overall CPU to run (since it shares a kernel with the host)
  • The startup time is less
  • It's typically easier to share files and network interfaces with the host

The downside of containerization is that it is not a complete silo like virtualization: you’re still sharing some components of the host operating system. But for many common workflows, containerization provides the functionality you need.

This article covers, at a high level, some of the workflows where you may want to consider implementing containerization. There are many more detailed blog posts in our resources section.

The most common use case for containers is deployment. In a non-container world, deployment typically involves significant configuration of server machines, often with configuration management tools like Chef or Puppet. This leaves lots of room for your servers to end up in an unexpected state, and adds latency before a machine is ready to start answering requests.

With container-based deployment, your CI pipeline defines a complete formula for creating a pristine image from scratch. Your CI pipeline can further test this image to make sure it’s working as expected. Then, instead of reconfiguring your servers with updated system libraries, static assets, and an executable, you can atomically swap out which image is running.
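As a sketch of what that formula can look like, a CI pipeline might build the image from a multi-stage Dockerfile along these lines (the base images, paths, and build command here are hypothetical placeholders for your project's own):

```dockerfile
# Build stage: compile the application in a container with the full toolchain
FROM ubuntu:22.04 AS build
RUN apt-get update && apt-get install -y build-essential
COPY . /src
WORKDIR /src
RUN make release

# Runtime stage: start from a minimal base so the deployed image is pristine,
# carrying over only the executable and static assets
FROM ubuntu:22.04
COPY --from=build /src/bin/app /usr/local/bin/app
COPY --from=build /src/static /var/www/static
CMD ["app"]
```

The CI pipeline can then run its test suite against this exact image and, on success, push it to a registry under an immutable tag so that deployment means swapping tags rather than reconfiguring servers.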

Orchestration tools like Kubernetes make it easy to configure a cluster to run multiple copies of your images for scalability and fault tolerance, and to perform red/black deployments to upgrade your entire cluster to new versions. They also make it possible to roll back to a previous version of the image if a bug is discovered.
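For illustration, a minimal Kubernetes Deployment manifest along these lines runs several replicas of an image (the name, registry, and tags are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # multiple copies for scalability and fault tolerance
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:v2   # change this tag to upgrade
```

Upgrading the cluster is then a matter of pointing the Deployment at a new tag, e.g. `kubectl set image deployment/my-app my-app=registry.example.com/my-app:v3`, and `kubectl rollout undo deployment/my-app` rolls back to the previous image if a bug is discovered.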

Let’s say you’re working on an application that depends on some system libraries, uses some code generation tools, and needs a locally running database. Getting this set up manually on Linux can be straightforward enough, but a manual setup doesn’t always account for everything you need to run a stable solution. Some pain points you may encounter include:

  • Which distribution did you get it set up on? What if another team member wants to use a different distro?
  • What if another project needs different versions of the tools or system libraries?
  • What if you need to work on Windows or macOS?

Setting up development environments can be a time-consuming prospect. When onboarding new team members, it can represent a significant delay. Often it ends with a new team member pushing code that “works for them” but breaks on another machine.

Containers can solve this challenge. Instead of installing all the appropriate tools on your operating system, the typical workflow is:

  • A DevOps team sets up a CI job to build a Docker image with all necessary tools and libraries
  • The project’s CI build uses this Docker image for building and testing the project
  • On your local machine, you perform builds inside a Docker container using the same image used on CI
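Concretely, the local build step might look something like the following, where the image name and build command are placeholders for whatever your project and CI actually use:

```shell
# Bind mount the current source tree into the container and run the
# project's build inside the same image used on CI.
docker run --rm \
  --volume "$PWD":/src \
  --workdir /src \
  registry.example.com/build-env:latest \
  make test
```

Because the container sees your working directory directly, build artifacts land on your local filesystem, and every team member (and CI) builds against the exact same tools and libraries regardless of their host distribution.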

Docker’s built-in support for bind mounting and sharing network interfaces makes this kind of workflow convenient. And build tools like Stack can automate this process even more.
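For example, Stack’s Docker integration can be enabled in `stack.yaml`; with something like the following in place, `stack build` transparently runs the build inside the named container image (the specific image and tag shown are just an example, and should match your project’s snapshot):

```yaml
docker:
  enable: true
  image: fpco/stack-build:lts-18.28
```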
