One thing that keeps coming up in my experiments is the need to create custom container images for various tasks. I would also like to set up a system that doesn’t require any containerization-specific tools, like Docker or Podman, on my development workstation. This means I need to figure out a way to build containers from within a Kubernetes cluster. Most of my time is spent on OpenShift, but I also have a Rancher cluster at home that I would like to use the same process on. That unfortunately rules out the BuildConfig object that OpenShift provides, since it won’t be portable between the two.

What this leaves us with is a K8s Job running an instance of the Buildah container image. We’re going with Buildah over something like Docker-in-Docker since it doesn’t require exposing any sockets to the container, which makes things at least a little more secure. Additionally, since Buildah can take its build input from a file on the local filesystem, an HTTP/HTTPS endpoint, or a Git repository, I am going to write a Helm chart that provides all the configuration flexibility I want.
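To make that concrete, here is a rough sketch of the kind of Job this produces. The names, image tag, build context path, and privileged security context are illustrative assumptions, not the chart’s exact output:

# buildah-job-sketch.yml
---
apiVersion: batch/v1
kind: Job
metadata:
  name: buildah-build
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: buildah
          image: quay.io/buildah/stable:latest
          # Privileged keeps the sketch simple; your cluster may permit
          # running Buildah with a tighter security context.
          securityContext:
            privileged: true
          command: ["/bin/sh", "-c"]
          args:
            - >-
              buildah --storage-driver vfs bud
              --tag docker.io/example/ubuntu-ansible:latest /build &&
              buildah --storage-driver vfs push
              docker.io/example/ubuntu-ansible:latest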

A lot of the configuration in the chart was adapted from this blog post, with modifications to let the user pass in configuration through the different methods. I also added templates for both push and pull secrets to authenticate with container registries. With both Quay and DockerHub requiring authentication for a lot of resources, it made sense to build that functionality in. The final chart is in this GitHub repository, and I plan on adapting and expanding it as time goes on.
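For reference, the rendered push and pull secrets are ordinary kubernetes.io/dockerconfigjson objects, along these lines (the secret name here is a placeholder, and the chart’s exact output may differ):

# push-secret-sketch.yml
---
apiVersion: v1
kind: Secret
metadata:
  name: buildah-push-secret
type: kubernetes.io/dockerconfigjson
stringData:
  # "auth" is the base64 encoding of "username:password"
  .dockerconfigjson: |
    {"auths": {"docker.io": {"auth": "<base64 of username:password>"}}}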

To keep the container as isolated as possible, we’re going to use the --storage-driver vfs option; the vfs driver doesn’t need the /dev/fuse device mounted into the container the way the fuse-overlayfs driver does. We are also going to mount the container storage directory and the build directory as emptyDir volumes instead of attaching them to PersistentVolumes. It might be possible to update the chart to allow for a ReadWriteMany image cache volume, but setting up a caching proxy registry is probably less fuss and less dependent on your underlying storage technology.
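Inside the Job’s pod spec that ends up looking roughly like the fragment below; the mount paths are my assumptions, with /var/lib/containers being Buildah’s default root storage location:

# pod spec fragment: ephemeral volumes for the image store and build context
      containers:
        - name: buildah
          # ...rest of the container spec...
          volumeMounts:
            - name: container-storage
              mountPath: /var/lib/containers
            - name: build-context
              mountPath: /build
      volumes:
        - name: container-storage
          emptyDir: {}
        - name: build-context
          emptyDir: {}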

I did learn, as detailed here, that it is super important to provide the --storage-driver vfs option to every buildah call so that they all read from the same image store. If you leave the flag off a later call, buildah consults its default storage instead, your freshly built image appears to have disappeared, and the push to the registry fails.
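A quick way to see the symptom, assuming an image was just built with the vfs driver:

# With the flag, the freshly built image shows up:
buildah --storage-driver vfs images
# Without it, buildah checks its default storage and the image appears to be gone:
buildah images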

The simplest use case for this chart is the ConfigMap-based configuration, where the contents of the Dockerfile are stored in a ConfigMap and mounted into the container as a volume. The example below shows how you could use this method to build an Ubuntu container with Ansible installed, which is then pushed back to DockerHub.

# build-from-configmap-values.yml
---
build:
  configMap:
    enabled: true
    dockerfileData: |
      FROM docker.io/library/ubuntu:20.10
      RUN apt-get update && \
          apt-get install -y software-properties-common && \
          apt-add-repository ppa:ansible/ansible && \
          apt-get install -y ansible && \
          apt-get clean
  source:
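    # Pull credentials for the registry hosting the base image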
    authSecret:
      enabled: true
      registry: docker.io
      username: <REDACTED>
      password: <REDACTED>
  destination:
    registry: docker.io
    repository: example/ubuntu-ansible
    tag: latest
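    # Push credentials for the destination registry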
    authSecret:
      enabled: true
      username: <REDACTED>
      password: <REDACTED>
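With those values saved, kicking off the build is a single Helm install; the chart path and release name below are placeholders for wherever you’ve cloned the repository:

helm install ubuntu-ansible-build ./buildah-build -f build-from-configmap-values.yml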

I’m still testing the HTTPS and Git functionality, but I think the ability to build images entirely within Kubernetes is going to be really helpful in my continued experimentation.