Kubernetes Migration


I got fed up with the brittle, bespoke process for updating this blog. It worked, but it didn’t feel like the right way to do things. Docker Swarm isn’t the right tool for the job anymore, and running GitLab runners on my server just to execute deployment scripts felt hacky.

Enter Kubernetes (k8s). I’d first looked at Kubernetes a few years ago when we were first migrating workloads to Docker at work. While I thought Kubernetes was the answer to our container orchestration woes, at the time it felt pretty immature and difficult to set up, which led us to use Docker Swarm instead. That lasted a short while before we made the switch to Kubernetes, using Rancher to help configure the cluster. This was around the time I switched to a less dev-y role at work, which meant my experience with the cluster was limited.

I’m now doing slightly more dev at work which means I need to get more familiar with Kubernetes. It’s come a long way in a few short years and so far I’m loving using it (not that it hasn’t caused me immense frustration at times - why are stale config items still hanging around?).

What better way to test out Kubernetes than to migrate my blog and other web services?1

Installing Kubernetes

I went the easy route and used microk8s on my new Vultr VPS2. microk8s promises low overhead and the ability to run in constrained environments, and my Vultr VPS definitely qualifies.

sudo snap install microk8s --classic
microk8s enable ingress dns storage rbac helm

I needed helm installed to use the integration with GitLab. I haven’t ended up using that integration, but it’s good to know I can.

To get the cluster’s certificate to cover my domain, I updated the file /var/snap/microk8s/current/certs/csr.conf.template and added an extra entry to the [ alt_names ] section.

[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = my.domain.name

Configuring a Namespace

Very easy.

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace

Using Cloudflare with Microk8s

I was able to get Cloudflare and Kubernetes working with the default microk8s ingress controller using the details from this blog. All I had to do was generate an Origin CA Certificate and install this certificate as a secret.

The certificate secret must be in the same namespace as your application (e.g. my-namespace), not the ingress controller’s namespace.

apiVersion: v1
data:
  tls.crt: LS..Qo=
  tls.key: LS..0K
kind: Secret
metadata:
  name: domain-cert
  namespace: my-namespace
type: kubernetes.io/tls
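The tls.crt and tls.key values are nothing special: they’re just the base64-encoded PEM files Cloudflare hands you. A sketch of producing them (the file names and contents below are placeholders, not a real certificate):

```shell
#!/bin/bash
# Stand-in PEM files; in practice these are the Origin CA certificate
# and private key downloaded from the Cloudflare dashboard
printf 'FAKE-CERT\n' > origin-cert.pem
printf 'FAKE-KEY\n' > origin-key.pem

# base64 -w0 keeps each value on a single line, ready to paste
# into the tls.crt / tls.key fields of the secret
TLS_CRT=$(base64 -w0 origin-cert.pem)
TLS_KEY=$(base64 -w0 origin-key.pem)

echo "tls.crt: $TLS_CRT"
echo "tls.key: $TLS_KEY"
```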

GitLab Login

To pull images from my private GitLab container registry I needed to set up Kubernetes with a username and password. I used the instructions from here to create a secrets file like the one below.

apiVersion: v1
kind: Secret
metadata:
  namespace: my-namespace
  name: blogdockerconfig
data:
  .dockerconfigjson: ey...9Cg==
type: kubernetes.io/dockerconfigjson

I couldn’t create the required config.json file on the server, so I manually created the JSON and fed it through the base64 command line tool to generate the content for the .dockerconfigjson key. I made a small script to generate the base64 string to add into the secret.

#!/bin/bash

TOKEN=$1
USERNAME=gitlab-ci-token

# -n stops echo appending a newline, which would corrupt the auth value
AUTH_BASE64=$(echo -n "${USERNAME}:${TOKEN}" | base64)

JSON_STRING="{\"auths\":{\"registry.gitlab.com\":{\"username\":\"${USERNAME}\",\"password\":\"${TOKEN}\",\"auth\":\"${AUTH_BASE64}\"}}}"

# -w0 disables base64 line wrapping so the output is one pasteable string
DOCKERCONFIG=$(echo -n "$JSON_STRING" | base64 -w0)

echo "$DOCKERCONFIG"
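To sanity-check the output, the blob should decode straight back to the JSON it was built from. A quick round-trip check using a throwaway token (not a real credential):

```shell
#!/bin/bash
# Round-trip check with a throwaway token
TOKEN=throwaway
AUTH_BASE64=$(echo -n "gitlab-ci-token:${TOKEN}" | base64)
JSON_STRING="{\"auths\":{\"registry.gitlab.com\":{\"username\":\"gitlab-ci-token\",\"password\":\"${TOKEN}\",\"auth\":\"${AUTH_BASE64}\"}}}"
DOCKERCONFIG=$(echo -n "$JSON_STRING" | base64 -w0)

# Decoding the secret value should give back exactly the JSON we encoded
[ "$(echo "$DOCKERCONFIG" | base64 -d)" = "$JSON_STRING" ] && echo "round-trip ok"
```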

Configure the Service

This is the combined .yml file I used to test my cluster, running a development version of my blog. It creates a Deployment with one instance of my blog, a Service mapping a port to the Deployment’s pods, and an Ingress mapping an external DNS name to the Service (at least I think I’ve got that right - it’s working anyway).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: devblog
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      blog: dev
  template:
    metadata:
      labels:
        blog: dev
    spec:
      containers:
      - name: demo
        image: "registry.gitlab.com/username/project:tag"
      imagePullSecrets:
        - name: blogdockerconfig
---
apiVersion: v1
kind: Service
metadata:
  name: blog-dev
  namespace: my-namespace
spec:
  type: NodePort
  selector:
    blog: dev
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-blog-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - domain.com
    secretName: domain-cert
  rules:
  - host: blog-dev.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog-dev
            port:
              number: 80

It Works!

With this in place I’m able to deploy a version of my blog just by kubectl applying my .yml file. I’ve tested running multiple different Ingress rules from separate .yml files for the same host and it all works. This is much easier than I thought it would be.
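For instance, a second file can send a sub-path of the same host to a different Service. A sketch using the networking.k8s.io/v1 API (the api-dev Service is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-api-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - domain.com
    secretName: domain-cert
  rules:
  - host: blog-dev.domain.com
    http:
      paths:
      # Same host as the blog Ingress, but /api goes elsewhere
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-dev
            port:
              number: 80
```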

I’ll have to investigate this more, but memory usage on my VPS is noticeably higher than before I was running Kubernetes. I knew there would be overhead, but I’ll need to see what it looks like once I add more services.
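Kubernetes’ own components account for a chunk of that, but per-container resource requests and limits at least make each workload’s share visible and bounded. A sketch for the devblog Deployment (the numbers are guesses, not measurements):

```yaml
# Fragment of the devblog Deployment spec with resources added
spec:
  template:
    spec:
      containers:
      - name: demo
        image: "registry.gitlab.com/username/project:tag"
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "250m"
```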

And I can perform a rolling update with a new tag using the command below (this is mostly for my reference so I remember the command).

kubectl -n my-namespace set image deployment/devblog demo=registry.gitlab.com/username/project:new-tag

A Satisfied Kubernetes Customer

The process of going from nothing to a full-fledged application stack handling multiple web applications, listening on multiple DNS entries and secured with proper SSL certificates, has been amazing. While I probably couldn’t write a Kubernetes .yaml file from scratch, I’m confident I can configure secrets, deployments, services and ingresses for a broad range of use cases.

Since beginning this blog post I’ve extended my cluster with several more services and features.

I’ll likely write follow up posts on a few of these things.

I don’t think I could go back to deploying services any other way. Once the cluster exists, adding workloads is incredibly easy. I’m looking forward to doing more.

Bonus

I’m using tmux on my remote server, and it always annoys me that shell prompts in tmux aren’t colourful by default. Run the following command to get a colourful shell prompt.

echo 'set -g default-terminal "xterm-256color"' >> ~/.tmux.conf

Fortuitously, just as I was starting to play with Kubernetes, I got an email from Docker about Lens, which bills itself as an IDE for Kubernetes. It works well for my needs and gives me much of the functionality Rancher gives me at work. I’d recommend giving it a try.


  1. Another blog post about me setting up the blog itself. I promise I’ll do different types of posts at some stage.

  2. I splurged and went up to the $10/month option from Vultr.