A few years back, we published a blog post about deploying a Rust application using Docker and Kubernetes. That application was a Telegram bot. We’re going to do something similar today, but with a few meaningful differences:

  1. We’re going to be deploying a web app. Don’t get too excited: this will be an incredibly simple piece of code, basically copy-pasted from the actix-web documentation.
  2. We’re going to build the deployment image on GitHub Actions.
  3. And we’re going to be building this using Windows Containers instead of Linux. (Sorry for burying the lede.)

We put this together for testing purposes when rolling out Windows support in our managed Kubernetes product here at FP Complete, Kube360®. I wanted to put this post together to demonstrate a few things:

  1. The dev experience for Rust on Windows
  2. Building Windows Container images with Docker
  3. Automating those image builds with GitHub Actions
  4. Running Windows workloads on Kubernetes via Kube360

Alright, let’s dive in! And if any of those topics sound interesting, and you’d like to learn more about FP Complete offerings, please contact us for more information.

Prereqs

Quick sidenote before we dive in. Windows Containers only run on Windows machines, and not even all Windows machines will support them. You’ll need Windows 10 Pro or a similar license, and you’ll need Docker installed on that machine. You’ll also need to ensure that Docker is set to use Windows instead of Linux containers.
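If you’re not sure which mode your Docker install is currently in, one quick check is to ask the daemon for its OS type:

> docker info --format "{{.OSType}}"
windows

If that prints linux instead, you can switch from the Docker Desktop tray icon via the “Switch to Windows containers” option.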

If you have all of that set up, you’ll be able to follow along with most of the steps below. If not, you won’t be able to build or run the Docker images on your local machine.

Also, for running the application on Kubernetes, you’ll need a Kubernetes cluster with Windows nodes. I’ll be using the FP Complete Kube360 test cluster on Azure in this blog post, though we’ve previously tested it on both AWS and on-prem clusters too.

The Rust application

The source code for this application will be, by far, the most uninteresting part of this post. As mentioned, it’s basically a copy-paste of an example straight from the actix-web documentation featuring mutable state. It turns out this was a great way to test out basic Kubernetes functionality like health checks, replicas, and autohealing.

We’re going to build this using the latest stable Rust version as of writing this post, so create a rust-toolchain file with the contents:

1.47.0

Our Cargo.toml file will be pretty vanilla, just adding in the dependency on actix-web:

[package]
name = "windows-docker-web"
version = "0.1.0"
authors = ["Michael Snoyman <[email protected]>"]
edition = "2018"

[dependencies]
actix-web = "3.1"

If you want to see the Cargo.lock file I compiled with, it’s available in the source repo.

And finally, the actual code in src/main.rs:

use actix_web::{get, web, App, HttpServer};
use std::sync::Mutex;

struct AppState {
    counter: Mutex<i32>,
}

#[get("/")]
async fn index(data: web::Data<AppState>) -> String {
    let mut counter = data.counter.lock().unwrap();
    *counter += 1;
    format!("Counter is at {}", counter)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let host = "0.0.0.0:8080";
    println!("Trying to listen on {}", host);
    let app_state = web::Data::new(AppState {
        counter: Mutex::new(0),
    });
    HttpServer::new(move || App::new().app_data(app_state.clone()).service(index))
        .bind(host)?
        .run()
        .await
}

This code creates an application state (a mutex of an i32), defines a single GET handler that increments that variable and prints the current value, and then hosts this on 0.0.0.0:8080. Not too shabby.

If you’re following along with the code, now would be a good time to cargo run and make sure you’re able to load up the site at localhost:8080.
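If everything is working, you should see our startup message, and each request should bump the counter. Something like this (Cargo’s build output elided):

> cargo run
Trying to listen on 0.0.0.0:8080

And from a second terminal:

> curl http://localhost:8080
Counter is at 1
> curl http://localhost:8080
Counter is at 2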

Dockerfile

If this is your first foray into Windows Containers, you may be surprised to hear me say “Dockerfile.” Windows Container images can be built with the same kind of Dockerfiles you’re used to from the Linux world, including more advanced features such as multistage builds, which we’re going to take advantage of here.

There are a number of different base images provided by Microsoft for Windows Containers. We’re going to be using Windows Server Core. It provides enough capabilities for installing Rust dependencies (which we’ll see shortly), without including too many unneeded extras. Nanoserver is a much more lightweight image, but it doesn’t play nicely with the Microsoft Visual C++ runtime we’re using for the -msvc Rust target.

NOTE I’ve elected to use the -msvc target here instead of -gnu for two reasons. Firstly, it’s closer to the actual use cases we need to support in Kube360, and therefore made a better test case. Also, as the default target for Rust on Windows, it seemed appropriate. It should be possible to set up a more minimal nanoserver-based image based on the -gnu target, if someone’s interested in a “fun” side project.
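As a quick sanity check of that default, you can ask rustc for its host triple; on a standard Windows installation it reports the -msvc variant:

> rustc -vV | findstr host
host: x86_64-pc-windows-msvc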

The complete Dockerfile is available on GitHub, but let’s step through it more carefully. As mentioned, we’ll be performing a multistage build. We’ll start with the build image, which will install the Rust build toolchain and compile our application. We start off by using the Windows Server Core base image and switching the shell back to the standard cmd.exe:

FROM mcr.microsoft.com/windows/servercore:1809 as build

# Restore the default Windows shell for correct batch processing.
SHELL ["cmd", "/S", "/C"]

Next we’re going to install the Visual Studio buildtools necessary for building Rust code:

# Download the Build Tools bootstrapper.
ADD https://aka.ms/vs/16/release/vs_buildtools.exe /vs_buildtools.exe

# Install Build Tools with the Microsoft.VisualStudio.Workload.AzureBuildTools workload,
# excluding workloads and components with known issues.
RUN vs_buildtools.exe --quiet --wait --norestart --nocache \
    --installPath C:\BuildTools \
    --add Microsoft.Component.MSBuild \
    --add Microsoft.VisualStudio.Component.Windows10SDK.18362 \
    --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 \
 || IF "%ERRORLEVEL%"=="3010" EXIT 0

And then we’ll modify the entrypoint to include the environment modifications necessary to use those buildtools:

# Define the entry point for the docker container.
# This entry point starts the developer command prompt and launches the PowerShell shell.
ENTRYPOINT ["C:\\BuildTools\\Common7\\Tools\\VsDevCmd.bat", "&&", "powershell.exe", "-NoLogo", "-ExecutionPolicy", "Bypass"]

Next up is installing rustup, which is fortunately pretty easy:

RUN curl -fSLo rustup-init.exe https://win.rustup.rs/x86_64
RUN start /w rustup-init.exe -y -v && echo "Error level is %ERRORLEVEL%"
RUN del rustup-init.exe

RUN setx /M PATH "C:\Users\ContainerAdministrator\.cargo\bin;%PATH%"

Then we copy over the relevant source files and kick off a build, storing the generated executable in c:\output:

COPY Cargo.toml /project/Cargo.toml
COPY Cargo.lock /project/Cargo.lock
COPY rust-toolchain /project/rust-toolchain
COPY src/ /project/src
RUN cargo install --path /project --root /output

And with that, we’re done with our build! Time to jump over to our runtime image. We don’t need the Visual Studio buildtools in this image, but we do need the Visual C++ runtime:

FROM mcr.microsoft.com/windows/servercore:1809

ADD https://download.microsoft.com/download/6/A/A/6AA4EDFF-645B-48C5-81CC-ED5963AEAD48/vc_redist.x64.exe /vc_redist.x64.exe
RUN c:\vc_redist.x64.exe /install /quiet /norestart

With that in place, we can copy over our executable from the build image and set it as the default CMD in the image:

COPY --from=build c:/output/bin/windows-docker-web.exe /

CMD ["/windows-docker-web.exe"]

And just like that, we’ve got a real life Windows Container. If you’d like to, you can test it out yourself by running:

> docker run --rm -p 8080:8080 fpco/windows-docker-web:f8a3192e63f2e699cc67716488a633f5e0893446

If you connect to port 8080, you should see our painfully simple app. Hurrah!

Building with GitHub Actions

One of the nice things about using a multistage Dockerfile for performing the build is that our CI scripts become very simple. Instead of needing to set up an environment with the correct build tools or any other configuration, our script just checks out the source, runs docker build, and pushes the resulting image.

The downside is that there is no build caching at play with this setup. There are multiple methods to mitigate this problem, such as creating helper build images that pre-bake the dependencies (sketched below). Or you can perform the builds on the CI host and only use the Dockerfile for generating the runtime image. Those are interesting tweaks to try out another time.
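To give a feel for the helper-image idea, here’s a rough sketch. The fpco/windows-rust-buildtools image name is hypothetical: imagine it’s a pre-built image containing everything from the build stage above, through the rustup installation. The per-commit build then shrinks to:

# Hypothetical pre-baked image with the VS build tools and rustup already installed
FROM fpco/windows-rust-buildtools:1.47.0 as build

COPY Cargo.toml /project/Cargo.toml
COPY Cargo.lock /project/Cargo.lock
COPY rust-toolchain /project/rust-toolchain
COPY src/ /project/src
RUN cargo install --path /project --root /output

The runtime stage stays exactly the same; only the expensive toolchain installation moves out of the per-commit build.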

Sticking with the simple multistage approach, though, we have the following in our .github/workflows/container.yml file:

name: Build a Windows container

on:
    push:
        branches: [master]

jobs:
    build:
        runs-on: windows-latest

        steps:
        - uses: actions/checkout@v1

        - name: Build and push
          shell: bash
          run: |
            echo "${{ secrets.DOCKER_HUB_TOKEN }}" | docker login --username fpcojenkins --password-stdin
            IMAGE_ID=fpco/windows-docker-web:$GITHUB_SHA
            docker build -t $IMAGE_ID .
            docker push $IMAGE_ID

I like following the convention of tagging my images with the Git SHA of the commit. Other people prefer different tagging schemes; it’s all up to you.

Manifest files

Now that we have a working Windows Container image, the next step is to deploy it to our Kube360 cluster. Generally, we use ArgoCD and Kustomize for managing app deployments within Kube360, which lets us keep a very nice GitOps workflow. For this blog post, though, I’ll show you the raw manifest files instead. This will also let us play with the k3 command line tool, which also happens to be written in Rust.

First we’ll have a Deployment manifest to manage the pods running the application itself. Since this is a simple Rust application, we can put very low resource limits on this. We’re going to disable the Istio sidecar, since it’s not compatible with Windows. We’re going to ask Kubernetes to use the Windows machines to host these pods. And we’re going to set up some basic health checks. All told, this is what our manifest file looks like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: windows-docker-web
  labels:
    app.kubernetes.io/component: webserver
spec:
  replicas: 1
  minReadySeconds: 5
  selector:
    matchLabels:
      app.kubernetes.io/component: webserver
  template:
    metadata:
      labels:
        app.kubernetes.io/component: webserver
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      runtimeClassName: windows-2019
      containers:
        - name: windows-docker-web
          image: fpco/windows-docker-web:f8a3192e63f2e699cc67716488a633f5e0893446
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
          resources:
            requests:
              memory: 128Mi
              cpu: 100m
            limits:
              memory: 128Mi
              cpu: 100m

Awesome, that’s by far the most complicated of the three manifests. Next we’ll put a fairly stock-standard Service in front of that deployment:

apiVersion: v1
kind: Service
metadata:
  name: windows-docker-web
  labels:
    app.kubernetes.io/component: webserver
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
  type: ClusterIP
  selector:
    app.kubernetes.io/component: webserver

This exposes a service on port 80, and targets the http port (port 8080) inside the deployment. Finally, we have our Ingress. Kube360 uses ExternalDNS to automatically set DNS records, and cert-manager to automatically grab TLS certificates. Our manifest looks like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-ingress-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: windows-docker-web
spec:
  rules:
  - host: windows-docker-web.az.fpcomplete.com
    http:
      paths:
      - backend:
          serviceName: windows-docker-web
          servicePort: 80
  tls:
  - hosts:
    - windows-docker-web.az.fpcomplete.com
    secretName: windows-docker-web-tls

Now that we have our application inside a Docker image, and we have our manifest files to instruct Kubernetes on how to run it, we just need to deploy these manifests and we’ll be done.

Launch

With our manifests in place, we can finally deploy them. You can use kubectl directly to do this. Since I’m deploying to Kube360, I’m going to use the k3 command line tool, which automates the process of logging in, getting temporary Kubernetes credentials, and providing those to the kubectl command via an environment variable. These steps could be run on Windows, Mac, or Linux. But since we’ve done the rest of this post on Windows, I’ll use my Windows machine for this too.

> k3 init test.az.fpcomplete.com
> k3 kubectl apply -f deployment.yaml
Web browser opened to https://test.az.fpcomplete.com/k3-confirm?nonce=c1f764d8852f4ff2a2738fb0a2078e68
Please follow the login steps there (if needed).
Then return to this terminal.
Polling the server. Please standby.
Checking ...
Thanks, got the token response. Verifying token is valid
Retrieving a kubeconfig for use with k3 kubectl
Kubeconfig retrieved. You are now ready to run kubectl commands with `k3 kubectl ...`
deployment.apps/windows-docker-web created
> k3 kubectl apply -f ingress.yaml
ingress.networking.k8s.io/windows-docker-web created
> k3 kubectl apply -f service.yaml
service/windows-docker-web created

I told k3 to use the test.az.fpcomplete.com cluster. On the first k3 kubectl call, it detected that I did not have valid credentials for the cluster, and opened up my browser to a page that allowed me to log in. One of the design goals in Kube360 is to strongly leverage existing identity providers, such as Azure AD, Google Directory, Okta, Microsoft 365, and others. This is not only more secure than copy-pasting kubeconfig files with permanent credentials around, but also more user-friendly. As you can see, the process above was pretty automated.

It’s easy enough to check that the pods are actually running and healthy:

> k3 kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
windows-docker-web-5687668cdf-8tmn2   1/1     Running   0          3m2s
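Since autohealing was one of the things this app was meant to exercise, you can also simulate a pod failure by deleting the pod, and the Deployment will replace it. Roughly like this (pod names and timings will differ, and the counter resets, since our state lives in memory):

> k3 kubectl delete pod windows-docker-web-5687668cdf-8tmn2
pod "windows-docker-web-5687668cdf-8tmn2" deleted
> k3 kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
windows-docker-web-5687668cdf-lr8xw   1/1     Running   0          9s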

Initially, while cert-manager was still getting TLS certificates, the ingresses looked like this:

> k3 kubectl get ingress
NAME                        CLASS    HOSTS                                  ADDRESS   PORTS     AGE
cm-acme-http-solver-zlq6j   <none>   windows-docker-web.az.fpcomplete.com             80        0s
windows-docker-web          <none>   windows-docker-web.az.fpcomplete.com             80, 443   3s

And after cert-manager gets the TLS certificate, it will switch over to:

> k3 kubectl get ingress
NAME                 CLASS    HOSTS                                  ADDRESS          PORTS     AGE
windows-docker-web   <none>   windows-docker-web.az.fpcomplete.com   52.151.225.139   80, 443   90s

And finally, our site is live! Hurrah, a Rust web application compiled for Windows and running on Kubernetes inside Azure.
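If you want to verify from a terminal instead of a browser, a plain curl against the hostname from our Ingress should show the counter ticking (assuming the app is still deployed):

> curl https://windows-docker-web.az.fpcomplete.com
Counter is at 1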

NOTE Depending on when you read this post, the web app may or may not still be live, so don’t be surprised if you don’t get a response if you try to connect to that host.

Conclusion

This post was a bit light on actual Rust code, but heavy on a lot of Windows scripting. As I think many Rustaceans already know, the dev experience for Rust on Windows is top notch. What may not have been obvious is how pleasant the Docker experience is on Windows. There are definitely some pain points, like the large images involved and needing to install the VC runtime. But overall, with a bit of cargo-culting, it’s not too bad. And finally, having a cluster with Windows support ready via Kube360 makes deployment a breeze.

If anyone has follow-up questions about anything here, please reach out to me on Twitter or contact our team at FP Complete. In addition to our Kube360 product offering, FP Complete provides many related services.
