Building container images in Kubernetes with kaniko

September 6, 2022
Philipp Defner

Introduction

Building container images in a CI pipeline and pushing the resulting image to an image registry is a common task. At JustWatch we built, and shared, our very own tool called artificer for that use case back in 2018.

Since then, the landscape has changed a lot and new tools have been released. One of them is kaniko, which is by now a mature project we decided to adopt. After maintaining our own pre-baked container images for a while, we switched to the general-purpose image provided by kaniko to reduce the work needed to maintain and update our own images.

The project describes itself as follows:

kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster. kaniko doesn’t depend on a Docker daemon and executes each command within a Dockerfile completely in userspace. This enables building container images in environments that can’t easily or securely run a Docker daemon, such as a standard Kubernetes cluster.

In this blog post we want to give you a short overview of how to build images for your Go services with the help of kaniko in a Kubernetes cluster.

Context and directory structure

The Go project we want to build has multiple entry points, and we want to build a separate binary for each of them. The entry points are all located in the cmd directory, and each of them has its own Dockerfile.

jw-content-api|master ⇒ tree -L 3
.
├── Makefile
├── cmd
│   ├── app1
│   │   ├── Dockerfile
│   │   └── api.go
│   └── app2
│       ├── Dockerfile
│       └── api.go
├── go.mod
├── go.sum
├── .golangci.yml
├── .gitlab-ci.yml
└── vendor
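
Locally, producing the two binaries boils down to one go build per entry point; the output paths here are just illustrative:

go build -o bin/app1 ./cmd/app1
go build -o bin/app2 ./cmd/app2

In CI, this step happens inside each entry point's Dockerfile, so kaniko produces one image per binary.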

Updating the .gitlab-ci.yml file

The first step is to set up the GitLab CI pipeline. We use templating to make CI steps inherit common configuration options (see .build-template and extends below). Note that a job's own script section replaces the one from the template, so the shared registry setup lives in before_script.

Below is a simplified example of what a .gitlab-ci.yml file could look like for Go 1.19.

image: golang:1.19

variables:
  GOCACHE: "/tmp/gocache"
  TERM: dumb
  GOFLAGS: "-mod=vendor"
  GCR_HOST: "eu.gcr.io"
  GCR_PROJECT: "justwatch-compute"
  NAME: "jw-content-api"

.build-template:
  stage: build-and-push
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  before_script:
    - mkdir -p /kaniko/.docker
    - echo "${DOCKER_CONFIG}" > /kaniko/.docker/config.json

stages:
  - build-and-push

app1:
  extends: .build-template
  script:
    - /kaniko/executor --context "${CI_PROJECT_DIR}" --context-sub-path "cmd/app1/" --dockerfile "${CI_PROJECT_DIR}/cmd/app1/Dockerfile" --destination "${GCR_HOST}/${GCR_PROJECT}/${NAME}-app1:${CI_COMMIT_SHORT_SHA}"

app2:
  extends: .build-template
  script:
    - /kaniko/executor --context "${CI_PROJECT_DIR}" --context-sub-path "cmd/app2/" --dockerfile "${CI_PROJECT_DIR}/cmd/app2/Dockerfile" --destination "${GCR_HOST}/${GCR_PROJECT}/${NAME}-app2:${CI_COMMIT_SHORT_SHA}"
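
DOCKER_CONFIG is a CI/CD variable holding a standard Docker config.json with the registry credentials kaniko needs for pushing. As a minimal sketch (the exact contents depend on your registry and authentication method), it could look like this:

{
  "auths": {
    "eu.gcr.io": {
      "auth": "<base64-encoded username:password>"
    }
  }
}

For Google Container Registry, the username is typically _json_key and the password a service account key, so the auth value is the base64 encoding of _json_key:<key>.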

Setting up the kaniko executor

The important section is the build stage calling the kaniko executor. Make sure to provide the right paths, as the --context and --context-sub-path flags can be a bit confusing.

Given the directory structure mentioned at the beginning, it would seem plausible to expect kaniko to accept --dockerfile Dockerfile, since the context sub path is already set to that directory. That doesn't seem to be the case though, so be careful and pass the full path to the Dockerfile as shown above.

The variables are either defined in the .gitlab-ci.yml itself or come from the predefined variables that GitLab CI makes available during the build run. In our case, --destination is the path to our Google Container Registry, including the name of the final image we want to push.
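
To make the flag values concrete, here is roughly what the app1 command expands to at build time; the project path and commit SHA below are placeholders:

/kaniko/executor \
  --context "/builds/justwatch/jw-content-api" \
  --context-sub-path "cmd/app1/" \
  --dockerfile "/builds/justwatch/jw-content-api/cmd/app1/Dockerfile" \
  --destination "eu.gcr.io/justwatch-compute/jw-content-api-app1:abc1234"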

Dockerfile for entry points

Each entry point will have its own Dockerfile as shown in the directory listing above. This Dockerfile is referenced by the kaniko executor.

To reduce the final image size we split the build process into multiple stages. The important part in our case is that both images are Alpine-based.

Note: We ran into an issue (see the error below) when using golang:1.19 as the build image and alpine:latest as the runtime image. Changing the build image to golang:1.19-alpine fixed it.

standard_init_linux.go:228: exec user process caused: no such file or directory
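
The usual cause of this error is a Go binary that was dynamically linked against glibc in the Debian-based golang:1.19 image and then executed on Alpine, which ships musl instead of glibc. An alternative fix, which we did not end up using, would be to keep the Debian-based build image and disable cgo so the binary is statically linked; a sketch:

FROM golang:1.19 AS builder
COPY . /go/src/jus.tw.cx/jw-content-api
WORKDIR /go/src/jus.tw.cx/jw-content-api
# CGO_ENABLED=0 produces a statically linked binary that also runs on musl-based images
RUN CGO_ENABLED=0 go build -o /go/bin/jw-content-api ./cmd/app1/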

Example Dockerfile for the app1 entry point:

# Build stage: compile the binary with the Go toolchain
FROM golang:1.19-alpine AS builder
COPY . /go/src/jus.tw.cx/jw-content-api
WORKDIR /go/src/jus.tw.cx/jw-content-api
RUN go build -o /go/bin/jw-content-api ./cmd/app1/

# Runtime stage: ship only the compiled binary on a minimal base image
FROM alpine:latest
COPY --from=builder /go/bin/jw-content-api /usr/local/bin/jw-content-api
CMD [ "/usr/local/bin/jw-content-api" ]
EXPOSE 8080
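
Once pushed, the image can be smoke-tested locally before it is referenced in a deployment; the tag is again a placeholder for the short commit SHA:

docker run --rm -p 8080:8080 eu.gcr.io/justwatch-compute/jw-content-api-app1:abc1234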

Conclusion

At this point we have a lightweight image pushed to our container registry for each entry point we defined. These images can then be referenced in our Kubernetes deployments and subsequently deployed via ChatOps. ChatOps? All our services can be deployed, rolled back, and migrated through Slack. This is tooling we developed at JustWatch and have been using for many years.

This post was written by Philipp Defner, Lead Software Engineer at JustWatch. If you would like to work on things like this (and with Go, Postgres, Elasticsearch, ScyllaDB), take a look at our current openings. We are always hiring.
