Will it cluster? k3s on your Raspberry Pi

In this post we'll test-drive k3s, a stripped-down Kubernetes distribution from Rancher Labs. With a single binary and a one-line bootstrap process it's even easier than before to create a lightweight cluster. So grab your Raspberry Pi and get ready to deploy the smallest Kubernetes distribution ever.

You may have seen my previous work with Kubernetes and Docker on Raspberry Pi such as Build your own bare-metal ARM cluster. I'm hoping that this post will be a lot simpler to follow, with fewer workarounds and even more resources left over for your projects to consume.

Featured: 5x Raspberry Pi Compute Module (CM) holder with Gigabit Ethernet, from mininodes.com

Why k3s?

Darren Shepherd, Chief Architect at Rancher Labs, is known for building simple solutions and accessible user experiences for distributed systems. k3s is one of his latest experiments, reducing the footprint and bootstrap process of Kubernetes to a single binary.

The k3s binary available on GitHub comes in at around 40MB and bundles all the low-level components required, such as containerd, runc and even kubectl. k3s can take the place of kubeadm, which the Kubernetes community created to improve the user experience of bootstrapping clusters.

kubeadm can now create production-ready multi-master clusters, but it is not well-suited to the Raspberry Pi because it assumes hosts have plenty of CPU and memory and low-latency networking. When I ran through the k3s installation for the first time it booted up several times faster than kubeadm, but the important part was that it worked first time, every time, without any manual hacks or troubleshooting.

Note: k3s, just like Kubernetes, also works on armhf (Raspberry Pi), ARM64 (Packet/AWS/Scaleway) and x86_64 (regular PCs/VMs).

Pre-reqs

I'll list the pre-requisites and add some affiliate links to Amazon US.

Clustering parts

If you're running with more than one RPi then buying multiple cases or multiple power adapters can be a false economy.

Prepare the RPi

Let's start the tutorial.

Flash the OS to the SD card

Let's not make things complicated by messing about with bespoke operating systems. The Raspberry Pi team have done a great job with Raspbian and for a headless system Raspbian Lite is easy to use and quick to flash.

To enable SSH on a headless install, create an empty file named ssh in the boot partition of the SD card. On macOS you can usually type in: sudo touch /Volumes/boot/ssh for this step.

Power-up the device & customise it

Now power-up your device. It will be accessible on your network over ssh using the following command:

$ ssh pi@raspberrypi.local

Log in with the password raspberry and then type in sudo raspi-config.

Update the following:

  • Set the GPU memory split to 16MB
  • Set the hostname to whatever you want (and write it down)
  • Change the password for the pi user

I also highly recommend setting a static IP for each Raspberry Pi in your cluster.
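
As a rough sketch, on Raspbian you can pin the address by appending a block like this to /etc/dhcpcd.conf on each node. The interface name and addresses below are only examples, so adjust them for your own network:

cat <<EOF | sudo tee -a /etc/dhcpcd.conf
# static address for this node (example values)
interface eth0
static ip_address=192.168.0.32/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
EOF

sudo reboot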

Copy over your ssh key

Do you have an ssh key?

$ ls -l ~/.ssh/id_rsa.pub

If that says file not found, then let's generate a key-pair for use with SSH. This means you can set a complicated password, or disable password login completely and rely on your public key to log into each RPi without typing a password in.

Hit enter to everything:

$ ssh-keygen

Finally run: ssh-copy-id pi@raspberrypi.local
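
Once key-based login works, you can optionally disable password logins altogether. A minimal sketch, to be run on each RPi, would be:

# only do this after confirming that your key logs you in
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh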

Enable container features

We need to enable container features in the kernel. Edit /boot/cmdline.txt and add the following to the end of the line:

 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

Now reboot the device.
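
If you prefer to script that edit, a rough sketch (assuming the flags haven't been added already) looks like this:

# cmdline.txt must remain a single line; this appends the flags to it
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
sudo reboot

# after the reboot, confirm the flags took effect
cat /proc/cmdline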

Create the k3s cluster

Note: during installation kubectl will be aliased to the command k3s kubectl so that we can use the pre-packaged version of kubectl.

If you type in docker after the installation, you won't find the command installed. This is because k3s uses a low-level component called containerd directly.

Bootstrap the k3s server

We can install k3s using a utility script which gets the latest stable version from the releases page and then installs a systemd service to start k3s automatically.

On one of the nodes, log in and run the following:

$ curl -sfL https://get.k3s.io | sh -

Check that the systemd service started correctly:

$ sudo systemctl status k3s

Wait for k3s to start and to pull the container images it needs. This may take a few minutes.
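
You can check that the node has registered itself using the bundled kubectl. The kubeconfig is written to /etc/rancher/k3s/k3s.yaml, so run it via sudo:

$ sudo k3s kubectl get node

The node should report a Ready status within a minute or two.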

Grab the join key from this node with:

$ sudo cat /var/lib/rancher/k3s/server/node-token

K1089729d4ab5e51a44b1871768c7c04ad80bc6319d7bef5d94c7caaf9b0bd29efc::node:1fcdc14840494f3ebdcad635c7b7a9b7

Introducing k3sup (update)

You can now automate the installation and bootstrap of k3s onto any cloud, VM or Raspberry Pi with k3sup.

k3sup gives you access to kubectl in under a minute:

k3sup install --ip $SERVER --user pi
k3sup join --ip $AGENT --server-ip $SERVER --user pi

Try it out to fetch your KUBECONFIG for use from your laptop.
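
By default k3sup writes a kubeconfig file into your current working directory. Assuming those defaults, you can point kubectl at it like this:

export KUBECONFIG=$(pwd)/kubeconfig
kubectl get node -o wide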

Once you have a KUBECONFIG from your k3s cluster, or any Kubernetes cluster at all, you can use arkade install to add things like OpenFaaS, inlets-operator, metrics-server, nginx, and more. Just check which of the apps are available for ARM.

Here's an example of how easy it becomes to install OpenFaaS:

arkade install openfaas

You can install the Kubernetes dashboard too:

arkade install kubernetes-dashboard

Feel free to star arkade and k3sup on GitHub.

Join a worker

Now log into another node. When the K3S_URL and K3S_TOKEN environment variables are set, the same installation script installs k3s in agent mode and joins the node to your server.

Join any number of worker nodes to the server with the following:

$ export K3S_URL="https://192.168.0.32:6443"

$ export K3S_TOKEN="K1089729d4ab5e51a44b1871768c7c04ad80bc6319d7bef5d94c7caaf9b0bd29efc::node:1fcdc14840494f3ebdcad635c7b7a9b7"

$ curl -sfL https://get.k3s.io | sh -
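
The script sets up a k3s-agent systemd service on the worker, which you can check in the same way as on the server:

$ sudo systemctl status k3s-agent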

If you installed k3s manually using a binary, then you can join your node to the server in this way:

$ sudo k3s agent --server ${K3S_URL} --token ${K3S_TOKEN}

List your nodes

$ kubectl get node -o wide

NAME   STATUS   ROLES    AGE     VERSION         INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
cm3    Ready    <none>   9m45s   v1.13.4-k3s.1   192.168.0.30   <none>        Raspbian GNU/Linux 9 (stretch)   4.14.79-v7+      containerd://1.2.4+unknown
cm4    Ready    <none>   13m     v1.13.4-k3s.1   192.168.0.32   <none>        Raspbian GNU/Linux 9 (stretch)   4.14.79-v7+      containerd://1.2.4+unknown

We can see our nodes, and that they are running containerd rather than the full Docker engine. This is part of how Darren was able to reduce the footprint.
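
Since there is no docker command on the nodes, you can inspect running containers with the crictl tooling that k3s bundles instead, for example:

$ sudo k3s crictl ps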

Deploy a microservice

We can now log into the k3s server and deploy a microservice. We'll deploy figlet, which takes a body over HTTP on port 8080 and returns an ASCII-formatted string.

  • Create a service (with a NodePort):

Save: openfaas-figlet-svc.yaml.

cat <<EOF > openfaas-figlet-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: openfaas-figlet
  labels:
    app: openfaas-figlet
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31111
  selector:
    app: openfaas-figlet
EOF
  • Now create a deployment

The deployment will be used to schedule a Pod using a Docker image published in the OpenFaaS Function Store.

Save: openfaas-figlet-dep.yaml.

cat <<EOF > openfaas-figlet-dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openfaas-figlet
  labels:
    app: openfaas-figlet
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openfaas-figlet
  template:
    metadata:
      labels:
        app: openfaas-figlet
    spec:
      containers:
      - name: openfaas-figlet
        image: functions/figlet:latest-armhf
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
EOF
  • Now apply the configuration:
$ kubectl apply -f openfaas-figlet-dep.yaml,openfaas-figlet-svc.yaml

deployment.apps/openfaas-figlet created
service/openfaas-figlet created

Wait for the figlet microservice to come up:

$ kubectl rollout status deploy/openfaas-figlet

deployment "openfaas-figlet" successfully rolled out

Now invoke the function:

echo -n "I like $(uname -m)" | curl --data-binary @- http://127.0.0.1:31111
 ___   _ _ _                                    _____ _
|_ _| | (_) | _____    __ _ _ __ _ __ _____   _|___  | |
 | |  | | | |/ / _ \  / _` | '__| '_ ` _ \ \ / /  / /| |
 | |  | | |   <  __/ | (_| | |  | | | | | \ V /  / / | |
|___| |_|_|_|\_\___|  \__,_|_|  |_| |_| |_|\_/  /_/  |_|

Share your microservice with your friends

You can use inlets - The Cloud Native Tunnel to create a tunnel to the public Internet for your Raspberry Pi k3s cluster. All you need to do is to create a cheap VPS or EC2 node to get a public IP address that connects back to your cluster.

Why expose your cluster?

  • Deploy new versions of your code from CI/CD
  • Access dashboards and UIs
  • Receive webhooks
  • Share the IP with your friends
  • Access your APIs, services, and storage on the k3s cluster

The advantage of using inlets is that it's an open source tunnel, which comes with no limitations. You can self-host the tunnel server (exit-server) wherever you like in the world and even add a custom DNS entry and TLS for free.

A PRO edition is also offered, which integrates directly with your Kubernetes IngressController and can fetch TLS certificates from Let's Encrypt.

Tunnel to a node with inletsctl

You can run a tunnel on one of your nodes, which will connect to your VPS and use its allocated IP as a "Virtual IP". Anyone who hits that IP will be able to access your Raspberry Pi.

inletsctl automates the process of creating a VM and starting the "inlets server" which faces the Internet, but you can also set up your own server if you like.

On our RPi, we run the "inlets client", and once both are running, anyone can access our service.

Run this on one of the RPi nodes or the server/master:

curl -sSLf https://inletsctl.inlets.dev | sudo sh

Now create an exit server on DigitalOcean, or pick another provider (see the project README).

export ACCESS_TOKEN="obtain-from-digitalocean-dashboard"
inletsctl create --provider digitalocean \
  --region lon1 \
  --access-token $ACCESS_TOKEN

After the tunnel server has been created, you will receive a command to connect to the tunnel from your RPi.

Pros:

  • Easy to set up
  • You can set up your exit server manually and install Caddy or Nginx on it
  • You can run the server command with systemd to restart it

This is the output from the command:

Inlets OSS exit-node summary:
  IP: 209.97.132.44
  Auth-token: 10fd0e6e2cdc199a1ebbdd9e78825f8b17392631

Command:
  export UPSTREAM=http://127.0.0.1:8000
  inlets client --remote "ws://209.97.132.44:8080" \
	--token "10fd0e6e2cdc199a1ebbdd9e78825f8b17392631" \
	--upstream $UPSTREAM

To Delete:
	inletsctl delete --provider digitalocean --id "183747947"

Now download the OSS inlets client:

sudo inletsctl download

Where you see UPSTREAM, change it to point at the NodePort from earlier, i.e. 31111:

export UPSTREAM=http://127.0.0.1:31111
inlets client --remote "ws://209.97.132.44:8080" \
  --token "10fd0e6e2cdc199a1ebbdd9e78825f8b17392631" \
  --upstream $UPSTREAM
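
Tip: to keep the tunnel up across reboots you can wrap that client command in a systemd unit. This is only a rough sketch, assuming the inlets binary lives at /usr/local/bin/inlets; substitute your own remote address, token and upstream:

cat <<EOF | sudo tee /etc/systemd/system/inlets.service
[Unit]
Description=inlets client
After=network.target

[Service]
ExecStart=/usr/local/bin/inlets client --remote "ws://209.97.132.44:8080" --token "10fd0e6e2cdc199a1ebbdd9e78825f8b17392631" --upstream "http://127.0.0.1:31111"
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable --now inlets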

Now use the IP you were given, i.e. 209.97.132.44, to connect to your service on port 80. You can run this step from your laptop.

$ curl -SLs --data $(whoami) http://209.97.132.44
       _           
  __ _| | _____  __
 / _` | |/ _ \ \/ /
| (_| | |  __/>  < 
 \__,_|_|\___/_/\_\

You can check the logs to see whether your friends tried it:

$ kubectl logs deploy/openfaas-figlet -f

You can even scale up the microservice:

$ kubectl scale deploy/openfaas-figlet --replicas=4

Then find out which nodes the Pods were created on:

$ kubectl get pods -l app=openfaas-figlet -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP          NODE
openfaas-figlet-8486c9f585-4ks2f   1/1     Running   0          26s   10.42.0.6   cm4 
openfaas-figlet-8486c9f585-d7kpk   1/1     Running   0          26s   10.42.1.3   cm3 
openfaas-figlet-8486c9f585-l7x89   1/1     Running   0          10m   10.42.1.2   cm3 
openfaas-figlet-8486c9f585-nhqj6   1/1     Running   0          25s   10.42.1.4   cm3 

Tunnel your IngressController

You can use the inlets-operator project to integrate inlets OSS/PRO with your Kubernetes cluster. The operator automates everything for you, from creating and deleting cloud hosts, to making any LoadBalancer service available with its own IP (see the sketch after the list below).

Pros:

  • Automatic encryption when using inlets PRO
  • The operator runs as a Deployment, so it is resilient and will restart when needed
  • Tunnelling the IngressController means you can get TLS certs
  • The IngressController is the native way applications are deployed to production in Kubernetes
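
As a rough sketch of what this looks like once the operator is installed, you could switch the figlet service from earlier over to a LoadBalancer and watch the operator assign it a public IP:

kubectl patch svc openfaas-figlet -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get svc openfaas-figlet -w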

What else can you do with k3s?

We're only scratching the surface here. You can see Darren demo k3s and OpenFaaS in a CNCF webinar.

Try Serverless with OpenFaaS

One way of looking at your cluster is like a big computer. The cluster is your computer. With that in mind we can deploy OpenFaaS - Serverless Functions Made Simple.

It allows you to define a function or endpoint on Kubernetes in a very short period of time with minimal investment and a low learning curve.

You can deploy OpenFaaS to Kubernetes or k3s on ARM using my tutorial or the documentation.

You can install OpenFaaS using arkade, a Go CLI that makes installing Kubernetes applications easy.

Run this on your laptop, not on the RPi:

curl -sSL https://get-arkade.dev | sudo sh

arkade install openfaas

Watch for the instructions at the end for how to access OpenFaaS. If you want to see this screen again, simply type in arkade info openfaas.

The OpenFaaS UI and REST API will be available on port 31112 on each Raspberry Pi in your cluster.
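
From there the usual OpenFaaS flow applies: fetch the generated admin password and log in with faas-cli. A rough sketch, assuming faas-cli is installed on your laptop and $SERVER holds the IP of one of your nodes:

PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin --gateway http://$SERVER:31112
faas-cli list --gateway http://$SERVER:31112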

You can customise the configuration with arkade install openfaas --help

A complete OpenFaaS online training workshop is available here, but note that the instructions assume you are using Kubernetes on the cloud, or on a PC.

You should also note that if you create your own functions, the Docker images need to be built on a Raspberry Pi and not on your PC.

See k3s & OpenFaaS auto-scaling on Raspberry Pi 4

In this live walkthrough with my brand new Raspberry Pi 4s, I show you how to install Kubernetes (k3s) to create a cluster and then how to deploy OpenFaaS and see it auto-scale based upon metrics.

Tear down (optional)

Either run the uninstall script that the installer created (on agent nodes the equivalent is k3s-agent-uninstall.sh):

$ sudo /usr/local/bin/k3s-uninstall.sh

Or stop k3s on each node and remove the data directory.

$ sudo systemctl stop k3s
$ sudo systemctl disable k3s
$ sudo rm -rf /var/lib/rancher

For more information on k3s, see the page on GitHub.

If you created any exit-servers with inletsctl, then you can delete them from your DigitalOcean dashboard, or with the inletsctl delete command shown earlier.

Wrapping-up

It is very early for k3s on ARM, but at this stage it's certainly more usable than the alternatives. If you're considering building a cluster for tinkering and for learning more about Kubernetes then you can't go wrong with trying k3s.

Next time you set up k3s, try out my new tool k3sup. It uses your SSH key to bring up k3s on any cloud, VM, or Raspberry Pi and then downloads a KUBECONFIG file so that you can use kubectl from your own computer.

Did you like the blog post? Follow me on Twitter @alexellisuk for more.

Share your cluster

It would be great to add your build to my Readers' Clusters.

Here's how to take part:

  • Write a short blog post on your experiences, learnings or feedback.
  • Or Tweet a photo

Then send a Pull Request to the README.md file of my k8s-on-raspbian repo.

Learn how to build your own lab

If you'd like to build your own private cloud, or just learn more about networking, you can pick up my eBook or video workshop package with 30 USD off until 9th April.

Check out my netbooting workshop

Continue learning more about Kubernetes:

k3s is a compliant Kubernetes distribution, which means if you learn k3s, you're learning Kubernetes. And as I tweeted last week: it's never too late to start learning Kubernetes, and nobody ever got fired for that.
