Space Invaders on Kubernetes

A while ago I blogged about an awesome Chaos Engineering tool built by Eugenio Marzo called KubeInvaders.

Since then Eugenio has updated the repo to make it easier to deploy KubeInvaders using Helm! So here’s how to deploy it to Azure Kubernetes Service (AKS).

Prerequisites that need to be installed to run the code here are (a rough install sketch follows this list): –

Windows Subsystem for Linux (or a bash terminal)
Azure CLI
kubectl
Helm
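If you’re starting from a fresh Ubuntu distribution in WSL, the installs look roughly like this (a sketch of one way to do it – check each tool’s docs for the current method): –

# Azure CLI - Microsoft's install script for Debian/Ubuntu
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# kubectl - can be installed via the Azure CLI
sudo az aks install-cli

# Helm - via the official install script
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash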

First thing to do is log in with the Azure CLI: –

az login

Create a resource group: –

az group create --name kubeinvaders --location EASTUS

Spin up an AKS cluster: –

az aks create --resource-group kubeinvaders --name kubeinvadersclu --node-count 2
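N.B. – If the machine doesn’t already have SSH keys, az aks create may fail looking for ~/.ssh/id_rsa.pub – adding the --generate-ssh-keys flag will create them for you.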

Get credentials to connect kubectl to the AKS cluster: –

az aks get-credentials --resource-group kubeinvaders --name kubeinvadersclu

Confirm the connection to the AKS cluster: –

kubectl get nodes

Add the helm repo for the ingress-nginx controller: –

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

Confirm helm repositories: –

helm repo list

Install the ingress-nginx controller: –

helm install ingress-nginx ingress-nginx/ingress-nginx \
--create-namespace \
--namespace ingress-basic \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz

EDIT – 2023-01 – Updated to add in the annotation

List resources in the ingress-basic namespace: –

kubectl get all -n ingress-basic

Note the external IP of the controller and set the IP address to a variable: –

IP="XX.XX.XXX.XX"
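Or grab it without the copy/paste (this assumes the chart’s default controller service name, ingress-nginx-controller): –

IP=$(kubectl get service ingress-nginx-controller -n ingress-basic -o jsonpath='{ .status.loadBalancer.ingress[0].ip }')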

Set a DNS name for the external IP address to a variable: –

DNSNAME="SOMETHING-kubeinvaders"

Get the resource-id of the external ip: –

PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv)
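To sanity-check that the query picked up the right resource: –

az network public-ip show --ids $PUBLICIPID --query ipAddress --output tsv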

Update external ip address with DNS name: –

az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME

Display the FQDN: –

az network public-ip show --ids $PUBLICIPID --query "[dnsSettings.fqdn]" --output tsv

Now we’re ready to deploy KubeInvaders!

Add the kubeinvaders helm repository: –

helm repo add kubeinvaders https://lucky-sideburn.github.io/helm-charts/

Confirm helm repositories: –

helm repo list

Create a kubeinvaders namespace: –

kubectl create namespace kubeinvaders

Deploy kubeinvaders: –

helm install kubeinvaders --set-string target_namespace="default" \
-n kubeinvaders kubeinvaders/kubeinvaders \
--set ingress.enabled=true \
--set ingress.hostName=SOMETHING-kubeinvaders.eastus.cloudapp.azure.com \
--set image.tag=v1.9

EDIT – 2023-01 – Updated to add in --set ingress.enabled=true
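Confirm the KubeInvaders pods are up and running: –

kubectl get pods -n kubeinvaders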

Now go to the FQDN set above in your browser.

If you get a 404 when browsing to the site, it’s because a line is missing from the annotations of the kubeinvaders ingress.

To fix this edit the ingress: –

kubectl edit ingress -n kubeinvaders

And add the following line: –

kubernetes.io/ingress.class: "nginx"
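The line goes under metadata.annotations, so the top of the ingress ends up looking something like this (the resource names here may differ depending on the chart version): –

metadata:
  name: kubeinvaders
  namespace: kubeinvaders
  annotations:
    kubernetes.io/ingress.class: "nginx"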

Save the updated ingress and go back to your FQDN and there is KubeInvaders!

Thanks for reading!

New Pluralsight Course – Kubernetes Package Administration with Helm

My first course Kubernetes Package Administration with Helm has been published on Pluralsight and is now available!

Check out the course overview here

This course is aimed at anyone who wants to get into working with Helm to deploy and manage applications running on Kubernetes.

It’s divided into three modules covering: –

Helm Overview

  • A guide to what Helm is and its history
  • Setting up your local environment to work with Helm
  • Installing Helm and adding the Stable Helm repository

Exploring Helm Releases

  • Deploying a Helm Chart to Kubernetes
  • Retrieving information about a Helm Release
  • Upgrading a Helm Release
  • Rolling back a Helm Release
  • Downloading and exploring a Helm Chart

Configuring Helm Repositories

  • How to create and package a Helm Chart
  • Pushing a Chart to a local/remote Helm repository

All modules are accompanied by demos taking you through each topic discussed. The code for the demos is available on Github here.

By the end of the course you’ll have the skills to confidently work with applications deployed to Kubernetes with Helm.

A kubectl plugin to decode secrets created by Helm

Last week I wrote a blog post about Decoding Helm Secrets.

The post goes through deploying a Helm Chart to Kubernetes and then running the following to decode the secrets that Helm creates in order for it to be able to rollback a release: –

kubectl get secret sh.helm.release.v1.testchart.v1 -o jsonpath="{ .data.release }" | base64 -d | base64 -d | gunzip -c | jq '.chart.templates[].data' | tr -d '"' | base64 -d

But that’s a bit long-winded, eh? I don’t really fancy typing that out every time I want to have a look at those secrets. So I’ve created a kubectl plugin that’ll do it for us!

Here’s the code: –

#!/bin/bash

# get the helm release secret (name passed in as the first argument) from the Kubernetes cluster
SECRET=$(kubectl get secret "$1" -o jsonpath='{ .data.release }')

# decode the secret (base64 encoded twice, then gzipped)
DECODED_SECRET=$(echo "$SECRET" | base64 -d | base64 -d | gunzip -c)

# parse the decoded secret, pulling out the templates and removing whitespace
DATA=$(echo "$DECODED_SECRET" | jq '.chart.templates[]' | tr -d '[:space:]')

# assign each entry in templates to an array
ARRAY=($(echo "$DATA" | tr '} {' '\n'))

# loop through each entry in the array
for i in "${ARRAY[@]}"
do
        # split name and data into separate items in another array
        ITEMS=($(echo "$i" | tr ',' '\n'))

        # parse the name field
        echo "${ITEMS[0]}" | sed -e 's/name/""/g; s/templates/""/g' | tr -d '/:"'

        # decode and parse the data field
        echo "${ITEMS[1]}" | sed -e 's/data/""/g' | tr -d '":' | base64 -d

        # add a blank line between files
        echo ''
done

It’s up on Github as a Gist. To use the plugin, pull it down with curl and drop it into a file in a directory on your PATH. Here I’m dropping it into /usr/local/bin: –

curl https://gist.githubusercontent.com/dbafromthecold/fdd1bd8b7e921075d3d37fcb8eb9a025/raw/afa873b0ef343859ed4119eeb9f41bf733e8cea2/DecodeHelmSecrets.sh > /usr/local/bin/kubectl-decodehelm

Make it executable: –

chmod +x /usr/local/bin/kubectl-decodehelm
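kubectl discovers plugins by looking for executables on the PATH whose filenames start with kubectl-, which is why the file is named kubectl-decodehelm – it can then be invoked as kubectl decodehelm.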

Now confirm that the plugin is there: –

sudo kubectl plugin list


N.B. – I’m running this with sudo as I’m in WSL, where kubectl plugin list errors out checking my Windows paths if I don’t use sudo

Let’s test it out! I’m going to deploy the mysql chart from the stable repository: –

helm install mysql stable/mysql

Once deployed, we’ll have one secret created by Helm: –

kubectl get secrets

Now let’s use the plugin to decode the information in that secret: –

kubectl decodehelm sh.helm.release.v1.mysql.v1

And there’s the decoded secret! Well, just a sample of it in that screenshot as the mysql Chart contains a few yaml files.

The format of the output is: –

  • Filename (in the above example, NOTES.txt)
  • Decoded file (so we’re seeing the text in the notes file for the mysql Chart)

Thanks for reading!

Decoding Helm Secrets

Helm is a great tool for deploying applications to Kubernetes. We can bundle up all our yaml files for deployments, services etc. and deploy them to a cluster with one easy command.

But there’s another really cool feature of Helm: the ability to easily upgrade and roll back a release (the term for an instance of a Helm chart running in a cluster).

Now, you can do this with kubectl. If I upgrade a deployment with kubectl apply I can then use kubectl rollout undo to roll back that upgrade. That’s great! And it’s one of the best features of Kubernetes.

What happens when you upgrade a deployment is that a new replicaset is created for that deployment, running the upgraded application in a new set of pods.

If we roll back with kubectl rollout undo, the pods in the newest replicaset are deleted and pods in an older replicaset are spun back up, rolling back the upgrade.

But there’s a potential problem here. What happens if that old replicaset is deleted?

If that happens, we wouldn’t be able to roll back the upgrade. Well, we wouldn’t be able to roll it back with kubectl rollout undo, but what happens if we’re using Helm?

Let’s run through a demo and have a look.

So I’m on Windows 10, running WSL 2 with the Ubuntu distribution: –

ubuntu

N.B. – The code below will work in a PowerShell session on Windows, apart from a couple of commands where I’m using Linux-specific command line tools, which is why I’m in my WSL 2 distribution. (No worries if you’re on a Mac or a native Linux distro.)

Anyway, I’m going to navigate to the Helm directory on my local machine, where I’ll create a test chart: –

cd /mnt/c/Helm

Create a chart called testchart: –

helm create testchart

Remove all unnecessary files in the templates directory: –

rm -rf ./testchart/templates/*

Create a deployment yaml file: –

kubectl create deployment nginx \
--image=nginx:1.17 \
--dry-run=client \
--output=yaml > ./testchart/templates/deployment.yaml

Which will create the following yaml and save it as deployment.yaml in the templates directory: –

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.17
        name: nginx
        resources: {}
status: {}

Now create the deployment so we can run the expose command below: –

kubectl create deployment nginx --image=nginx:1.17 

Generate the yaml for the service with the kubectl expose command: –

kubectl expose deployment nginx \
--type=LoadBalancer \
--port=80 \
--dry-run=client \
--output=yaml > ./testchart/templates/service.yaml

Which will give us the following yaml and save it as service.yaml in the templates directory: –

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
status:
  loadBalancer: {}

Delete the deployment, it’s not needed: –

kubectl delete deployment nginx

Recreate the values.yaml file with a value for the container image: –

rm ./testchart/values.yaml
echo "containerImage: nginx:1.17" > ./testchart/values.yaml

Then replace the hard coded container image in the deployment.yaml with a template directive: –

sed -i 's/nginx:1.17/{{ .Values.containerImage }}/g' ./testchart/templates/deployment.yaml

So the deployment.yaml file now looks like this: –

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: {{ .Values.containerImage }}
        name: nginx
        resources: {}
status: {}

Which means that the container image is no longer hard coded. It’ll take the value nginx:1.17 from the values.yaml file, or we can override it with the --set flag (which we’ll do in a minute).
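If you want to see the directive render without deploying anything, helm template will print the generated yaml (here I’m just pulling out the image line): –

helm template testchart ./testchart --set containerImage=nginx:1.18 | grep "image:"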

But first, deploy the chart to my local Kubernetes cluster running in Docker Desktop: –

helm install testchart ./testchart

Confirm release: –

helm list


N.B. – That app version is the default version set in the Chart.yaml file (which I haven’t updated)

Check image running in deployment: –

kubectl get deployment -o jsonpath='{ .items[*].spec.template.spec.containers[*].image }{"\n"}'

Great. That’s deployed and the container image is the one set in the values.yaml file in the Chart.

Now upgrade the release, replacing the default container image value with the set flag: –

helm upgrade testchart ./testchart --set containerImage=nginx:1.18

Confirm release has been upgraded (check the revision number): –

helm list

Also, confirm with the release history: –

helm history testchart

So we can see the initial deployment of the release and then the upgrade. App version remains the same as I haven’t changed the value in the Chart.yaml file. However, the image has been changed and we can see that with: –

kubectl get deployment -o jsonpath='{ .items[*].spec.template.spec.containers[*].image }{"\n"}'

So we’ve upgraded the image that’s running for the one pod in the deployment.

Let’s have a look at the replicasets of the deployment: –

kubectl get replicasets

So we have two replicasets for the deployment created by our Helm release: the initial one running nginx v1.17 and the newest one running nginx v1.18.

If we wanted to rollback the upgrade with kubectl, this would work (don’t run this code!): –

kubectl rollout undo deployment nginx

What would happen here is that the pod under the newest replicaset would be deleted and a pod under the old replicaset would be spun up, rolling nginx back to v1.17.

But we’re not going to do that, as we’re using Helm.

Let’s grab the oldest replicaset name: –

REPLICA_SET=$(kubectl get replicasets -o jsonpath='{.items[0].metadata.name }' --sort-by=.metadata.creationTimestamp)

And delete it: –

kubectl delete replicasets $REPLICA_SET

So we now only have the one replicaset: –

kubectl get replicasets

Now try to rollback using the kubectl rollout undo command: –

kubectl rollout undo deployment nginx

The reason that failed is that we deleted the old replicaset, so there’s no history for that deployment, which we can see with: –

kubectl rollout history deployment nginx

But Helm has the history: –

helm history testchart

So we can rollback: –

helm rollback testchart 1
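N.B. – Helm records a rollback as a new revision rather than rewinding the history, so the release will now be on revision 3, with a description along the lines of “Rollback to 1”.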

View release status: –

helm list

View release history: –

helm history testchart

View replicasets: –

kubectl get replicasets

The old replicaset is back! How? Let’s have a look at secrets within the cluster: –

kubectl get secrets

Ahhh, bet you anything the Helm release history is stored in those secrets! The initial release (v1), the upgrade (v2), and the rollback (v3).
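As an aside, Helm labels the secrets it creates, so we can filter the list down to just this release’s secrets (assuming the default secrets storage backend): –

kubectl get secrets -l owner=helm,name=testchart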

Let’s have a closer look at the first one: –

kubectl get secret sh.helm.release.v1.testchart.v1 -o json

Hmm, that release field looks interesting. What we could do is base64 decode it and then run it through decompression on http://www.txtwizard.net/compression which would give us: –

{
  "name": "testchart",
  "info": {
    "first_deployed": "2020-08-09T11:21:20.4665817+01:00",
    "last_deployed": "2020-08-09T11:21:20.4665817+01:00",
    "deleted": "",
    "description": "Install complete",
    "status": "superseded"
  },
  "chart": {
    "metadata": {
      "name": "testchart",
      "version": "0.1.0",
      "description": "A Helm chart for Kubernetes",
      "apiVersion": "v2",
      "appVersion": "1.16.0",
      "type": "application"
    },
    "lock": null,
    "templates": [
      {
        "name": "templates/deployment.yaml",
        "data": "YXBpVmVyc2lvbjogYXBwcy92MQpraW5kOiBEZXBsb3ltZW50Cm1ldGFkYXRhOgogIGNyZWF0aW9uVGltZXN0YW1wOiBudWxsCiAgbGFiZWxzOgogICAgYXBwOiBuZ2lueAogIG5hbWU6IG5naW54CnNwZWM6CiAgcmVwbGljYXM6IDEKICBzZWxlY3RvcjoKICAgIG1hdGNoTGFiZWxzOgogICAgICBhcHA6IG5naW54CiAgc3RyYXRlZ3k6IHt9CiAgdGVtcGxhdGU6CiAgICBtZXRhZGF0YToKICAgICAgY3JlYXRpb25UaW1lc3RhbXA6IG51bGwKICAgICAgbGFiZWxzOgogICAgICAgIGFwcDogbmdpbngKICAgIHNwZWM6CiAgICAgIGNvbnRhaW5lcnM6CiAgICAgIC0gaW1hZ2U6IHt7IC5WYWx1ZXMuY29udGFpbmVySW1hZ2UgfX0KICAgICAgICBuYW1lOiBuZ2lueAogICAgICAgIHJlc291cmNlczoge30Kc3RhdHVzOiB7fQo="
      },
      {
        "name": "templates/service.yaml",
        "data": "YXBpVmVyc2lvbjogdjEKa2luZDogU2VydmljZQptZXRhZGF0YToKICBjcmVhdGlvblRpbWVzdGFtcDogbnVsbAogIGxhYmVsczoKICAgIGFwcDogbmdpbngKICBuYW1lOiBuZ2lueApzcGVjOgogIHBvcnRzOgogIC0gcG9ydDogODAKICAgIHByb3RvY29sOiBUQ1AKICAgIHRhcmdldFBvcnQ6IDgwCiAgc2VsZWN0b3I6CiAgICBhcHA6IG5naW54CiAgdHlwZTogTG9hZEJhbGFuY2VyCnN0YXR1czoKICBsb2FkQmFsYW5jZXI6IHt9Cg=="
      }
    ],
    "values": {
      "containerImage": "nginx:1.17"
    },
    "schema": null,
    "files": [
      {
        "name": ".helmignore",
        "data": "IyBQYXR0ZXJucyB0byBpZ25vcmUgd2hlbiBidWlsZGluZyBwYWNrYWdlcy4KIyBUaGlzIHN1cHBvcnRzIHNoZWxsIGdsb2IgbWF0Y2hpbmcsIHJlbGF0aXZlIHBhdGggbWF0Y2hpbmcsIGFuZAojIG5lZ2F0aW9uIChwcmVmaXhlZCB3aXRoICEpLiBPbmx5IG9uZSBwYXR0ZXJuIHBlciBsaW5lLgouRFNfU3RvcmUKIyBDb21tb24gVkNTIGRpcnMKLmdpdC8KLmdpdGlnbm9yZQouYnpyLwouYnpyaWdub3JlCi5oZy8KLmhnaWdub3JlCi5zdm4vCiMgQ29tbW9uIGJhY2t1cCBmaWxlcwoqLnN3cAoqLmJhawoqLnRtcAoqLm9yaWcKKn4KIyBWYXJpb3VzIElERXMKLnByb2plY3QKLmlkZWEvCioudG1wcm9qCi52c2NvZGUvCg=="
      }
    ]
  },
  "manifest": "---\n# Source: testchart/templates/service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n  creationTimestamp: null\n  labels:\n    app: nginx\n  name: nginx\nspec:\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 80\n  selector:\n    app: nginx\n  type: LoadBalancer\nstatus:\n  loadBalancer: {}\n---\n# Source: testchart/templates/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  creationTimestamp: null\n  labels:\n    app: nginx\n  name: nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: nginx\n  strategy: {}\n  template:\n    metadata:\n      creationTimestamp: null\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx:1.17\n        name: nginx\n        resources: {}\nstatus: {}\n",
  "version": 1,
  "namespace": "default"
}

BOOM! That looks like our deployment and service manifests! We can see all the information contained in our initial Helm release (confirmed, as the container image is nginx:1.17)!

So by storing this information as secrets in the target Kubernetes cluster, Helm can rollback an upgrade even if the old replicaset has been deleted! Pretty cool!

Not very clean though, eh? And have a look at that data field…that looks suspiciously like more encoded information (well, because it is 🙂 ).

Let’s decode it! This time on the command line: –

kubectl get secret sh.helm.release.v1.testchart.v1 -o jsonpath="{ .data.release }" | base64 -d | base64 -d | gunzip -c | jq '.chart.templates[].data' | tr -d '"' | base64 -d

Ha! There’s the deployment and service yaml files!
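For reference, here’s what each stage of that pipeline is doing: –

kubectl get secret sh.helm.release.v1.testchart.v1 \
  -o jsonpath="{ .data.release }" |  # pull the release field out of the secret
  base64 -d |                        # undo the base64 encoding Kubernetes applies to all secret data
  base64 -d |                        # undo the base64 encoding Helm applies to its release payload
  gunzip -c |                        # decompress the gzipped release JSON
  jq '.chart.templates[].data' |     # pull out each template's (still base64 encoded) data field
  tr -d '"' |                        # strip the quotes jq leaves around each value
  base64 -d                          # decode the templates into plain yaml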

By using Helm we can roll back a release even if the old replicaset of the deployment has been deleted, as Helm stores the history of a release in secrets in the target Kubernetes cluster. And by using the code above, we can decode those secrets and have a look at the information they contain.

Thanks for reading!

Using Github as a repository for SQL Server Helm Charts

In a previous post I ran through how to create a custom SQL Server Helm chart.

Now that the chart has been created, we need somewhere to store it.

We could keep it locally, but what if we used our own Helm chart repository? That way we wouldn’t have to worry about the chart being deleted from our local machine.

I use Github to store all my code to guard against accidentally deleting it (I’ve done that more than once) so why not use Github to store my Helm charts?

Let’s run through setting up a Github repo to store our Helm charts.

First thing to do is create a new Github repo: –

Clone the new repo down:-

git clone https://github.com/dbafromthecold/SqlServerHelmCharts.git

Navigate to the repo:-

cd C:\git\dbafromthecold\SqlServerHelmCharts

And now package the SQL Server Helm chart into the cloned, empty repo:-

helm package C:\Helm\testsqlchart

Now create the index.yaml file in the repo (this is what we’ll link to in order to identify the repo locally):-

helm repo index .
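N.B. – helm repo index also takes a --url flag if you want the chart entries in index.yaml to carry absolute download URLs, e.g.: –

helm repo index . --url https://raw.githubusercontent.com/dbafromthecold/SqlServerHelmCharts/master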

Commit the changes and push to GitHub:-

git add .
git commit -m 'Added testsqlchart to repo'
git push

Go back to the Github repo and view the raw index.yaml file: –

Grab the HTTPS address (removing the index.yaml from the end) and drop it into the helm repo add statement:-

helm repo add dbafromthecold https://raw.githubusercontent.com/dbafromthecold/SqlServerHelmCharts/master

To confirm the repo has been added:-

helm repo list

To see if all is well:-

helm repo update

If you’re using VS Code and have the Kubernetes extension, you will be able to view the new repo under the Helm section: –

Final test is to perform a dry run:-

helm install dbafromthecold/testsqlchart --version 0.1.0 --dry-run --debug

If all looks good, then deploy!

helm install dbafromthecold/testsqlchart --version 0.1.0

To check your helm deployments: –

helm list

To check on the deployment/pod/service created by the helm chart: –

kubectl get deployments

kubectl get pods

kubectl get services

Once that external IP comes up it can be dropped into SSMS to connect: –
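If you’re not on a machine with SSMS, sqlcmd works just as well (the sa password here is a placeholder – use whatever was set in the chart’s values): –

sqlcmd -S <EXTERNAL-IP> -U sa -P <YourSaPassword> -Q "SELECT @@VERSION"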

And BOOM! Connected to SQL Server running in Kubernetes deployed via a Helm package from a custom repo! 🙂

N.B. – To delete the deployment (deployment name grabbed from helm list): –

helm delete cantankerous-marsupial

Thanks for reading!