Setting Up This Blog
I decided I wanted to host my own blog. Why? Just so I could play around with setting up this sort of thing. I was going to sign up for Medium but decided I wanted to get my hands dirty (and waste a bunch of time, as it turns out).
Hosting Details
I’ve currently got a small VPS with Vultr which costs me about $5/month. It’s plenty for my needs. I’m also hosting an instance of code-server there, which gives me a full coding environment in the cloud and lets me code from my iPad (another blog post, perhaps). I may use code-server to update this blog in the future.
I’ve got Cloudflare running in front of this which gives me a couple of things:
- DNS hosting
- Caching and CDN
- Cloudflare Access, which I can’t recommend highly enough if you need to secure a site (like code-server, perhaps)
Running the Blog
On my VPS I’m using Caddy to serve my pages over HTTPS, with the Cloudflare plugin for obtaining certs. I usually use NGINX and Let’s Encrypt but thought I’d try something different this time. I would prefer not to have TLS terminated by Cloudflare in the middle, but the other benefits of Cloudflare outweigh that issue. The blog itself is built with Hugo and runs inside an NGINX Docker container; Caddy then proxies requests to the NGINX container.
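For reference, the Caddy side of that setup can be sketched in a few lines of Caddyfile. This assumes Caddy 2 with the Cloudflare DNS module; the domain, the API token variable, and the NGINX container’s port are placeholders, not my actual values:

```caddyfile
blog.example.com {
    # obtain certs via Cloudflare's DNS API rather than an HTTP challenge
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    # hand requests off to the NGINX container serving the static site
    reverse_proxy 127.0.0.1:8080
}
```

The DNS challenge is handy here because Cloudflare sits in front of the origin, so an HTTP-01 challenge against the server directly can be awkward.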
I’m currently using the hugo-coder theme.
Building the Blog
The source code for the blog is hosted on GitLab and I use their CI/CD to:
- build the static site using Hugo;
- package into a Docker image; and
- force an update on my server.
I used this page to help set up the CI pipeline.
I initially had issues getting GitLab to build my Docker images: it was failing to log in to the Docker registry, with similar issues to the ones documented here. In the end, this is what worked for me:
```yaml
image: docker:19.03.8

services:
  - docker:19.03.8-dind

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  DOCKER_IMAGE_NAME: "registry.gitlab.com/username/projectname"
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"

stages:
  - build
  - push

before_script:
  - echo -n $CI_JOB_TOKEN | docker login -u gitlab-ci-token --password-stdin $CI_REGISTRY

Build:
  stage: build
  script:
    - docker pull $DOCKER_IMAGE_NAME:latest || true
    - >
      docker build
      --pull
      --cache-from $DOCKER_IMAGE_NAME:latest
      --tag $DOCKER_IMAGE_NAME:$CI_COMMIT_SHA .
    - docker push $DOCKER_IMAGE_NAME:$CI_COMMIT_SHA

Push:
  variables:
    GIT_STRATEGY: none
  stage: push
  only:
    - master
  script:
    - docker pull $DOCKER_IMAGE_NAME:$CI_COMMIT_SHA
    - docker tag $DOCKER_IMAGE_NAME:$CI_COMMIT_SHA $DOCKER_IMAGE_NAME:latest
    - docker push $DOCKER_IMAGE_NAME:latest
```
To trigger redeploys on my server I’m currently using the webhook package, which lets me listen for events and kick off shell scripts. In the future I might try writing my own version in Rust.
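For illustration, a minimal hooks.json for the webhook package looks something like the sketch below. The hook id matches the `/hooks/stage-blog` URL used later in the pipeline, but the script path and working directory are assumptions for illustration:

```json
[
  {
    "id": "stage-blog",
    "execute-command": "/opt/blog/redeploy.sh",
    "command-working-directory": "/opt/blog"
  }
]
```

webhook then exposes each entry at `/hooks/<id>` and runs the named command whenever a request passes that hook’s trigger rules.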
Issues with Styles
There are always issues with CSS, always. For some reason the integrity metadata for my CSS imports didn’t match, so the stylesheet wouldn’t load. The error message looked like this:
```
Cannot load stylesheet https://.../style.css. Failed integrity metadata check.
Content length: 10367, Expected content length: 10367,
Expected metadata: sha256-dEhZWCZJgq17TrSu5diPv3r8GpOZ4AMCZfdpUqmyIqc=
```
So, is this a Hugo issue, a theme issue or a build issue? None of the above. I had set up Cloudflare to auto-minify CSS and JS, which changed the file’s bytes so its hash no longer matched the integrity metadata Hugo had baked into the page, causing the browser to reject the minified CSS file. Turning off the auto-minify setting fixed this. Given the small amount of CSS and JS being served, this shouldn’t cause a huge issue for me or my visitors.
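To see why minification breaks the check, note that the `integrity` attribute is just a base64-encoded SHA-256 of the file’s exact bytes. A quick demonstration with two versions of the same rule (the CSS strings here are made up for illustration):

```shell
# An SRI hash covers the file's exact bytes, so even whitespace-only
# rewrites like minification produce a completely different hash.
css='body { color: #333; }'
minified='body{color:#333}'

hash_orig=$(printf '%s' "$css" | openssl dgst -sha256 -binary | openssl base64 -A)
hash_min=$(printf '%s' "$minified" | openssl dgst -sha256 -binary | openssl base64 -A)

echo "sha256-$hash_orig"
echo "sha256-$hash_min"
```

The two hashes differ, so a browser holding the first hash will refuse the minified file even though it renders identically.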
Building a Staging Server
My basic workflow for adding new blog posts is:
- Push a draft post to the `develop` branch
- Build a `:latest-dev` tagged Docker image
- Deploy to the staging server and view
- If it’s all good, push to the `master` branch
- Build a `:latest` tagged Docker image
- Deploy to the prod server
So I needed to have my staging server always serving up the `:latest-dev` Docker image. As I said earlier, I’m using the webhook package to listen for webhooks from GitLab.
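The redeploy script that such a webhook runs can be a few lines of shell. This is a sketch, not my actual script; the image name, container name, and port are placeholders:

```shell
#!/bin/sh
# Sketch of a staging redeploy script kicked off by webhook.
# Image name, container name, and port are assumptions for illustration.
IMAGE="registry.gitlab.com/username/projectname:latest-dev"
CONTAINER="blog-staging"

redeploy() {
    # pull the freshly built dev image from the GitLab registry
    docker pull "$IMAGE"
    # remove any running container, then start one from the new image
    docker rm -f "$CONTAINER" 2>/dev/null || true
    docker run -d --name "$CONTAINER" -p 8080:80 "$IMAGE"
}

# guard so sourcing this file doesn't trigger a deploy
if [ "${1:-}" = "--deploy" ]; then
    redeploy
fi
```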
Unfortunately it wasn’t as straightforward as I had hoped. Initially I had GitLab trigger on push events to the develop branch. The webhook was being triggered, but because the Docker image is built in GitLab and then pulled down to my staging server, the image wasn’t built yet by the time my script ran on the staging server. GitLab has an option to trigger on Pipeline events instead, but you can’t restrict it to successful pipelines.
Luckily webhook allows you to put constraints on webhooks which includes reading the content of the webhook. So I needed to check the build status and the branch it was running on as shown below.
```json
...
"trigger-rule": {
  "and": [
    {
      "match": {
        "type": "value",
        "value": "super secret token here",
        "parameter": {
          "source": "header",
          "name": "X-Gitlab-Token"
        }
      }
    },
    {
      "match": {
        "type": "value",
        "value": "develop",
        "parameter": {
          "source": "payload",
          "name": "object_attributes.ref"
        }
      }
    },
    {
      "match": {
        "type": "value",
        "value": "success",
        "parameter": {
          "source": "payload",
          "name": "object_attributes.status"
        }
      }
    }
  ]
}
...
```
And then I decided to change tack and started using GitLab Environments to do my deployments, which meant my webhook checks on Pipeline events were no longer needed… Environments don’t give me much over and above plain webhooks, but I thought I would give them a go. The relevant parts of `.gitlab-ci.yml` are below.
```yaml
Deploy_Staging:
  stage: deploy
  image: curlimages/curl:7.70.0
  variables:
    GIT_STRATEGY: none
  script:
    - 'curl -H "X-Gitlab-Token: ${WEBHOOK_TOKEN}" https://blog-staging/hooks/stage-blog'
  environment:
    name: Staging
    url: https://blog-staging/
  only:
    - develop
And That’s It
I now have a full end-to-end workflow for writing a post, staging it and then deploying to production. It’s overkill but I (mostly) enjoyed getting it set up.