Kubernetes on bare-metal in 10 minutes

Kubernetes is an open-source container orchestration framework built on Google's experience of running containers in production. It enables you to run containerized applications in a production-ready cluster. Kubernetes has many moving parts and there are countless ways to configure its pieces - from the various system components and network transport drivers to CLI utilities, not to mention applications and workloads.

In this blog post we'll install Kubernetes 1.8 on a bare-metal machine running Ubuntu 16.04 in about 10 minutes. At the end you'll be able to start learning how to interact with Kubernetes via its CLI, kubectl.

Kubernetes overview:

Above: Kubernetes Components by Julia Evans

Pre-reqs

I suggest using Packet to run through this tutorial - they offer bare-metal hosts. You can also follow along on a VM or your own PC if you're running Ubuntu 16.04 as your OS.

Head over to Packet.net and create a new project. For this example we can take advantage of the Type 0 host which gives you 4x Atom cores and 8GB of RAM for $0.05/hour.

When you provision the host make sure you pick Ubuntu 16.04 as the OS. Unlike Docker Swarm - Kubernetes is best paired with older versions of Docker. Fortunately the Ubuntu apt repository contains Docker 1.12.6.

  • Install Docker
$ apt-get update \
  && apt-get install -qy docker.io

Don't upgrade the Docker version on this host. You can still build images in your CI pipeline or on your laptop with newer versions.
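
If you want to double-check what was installed, you can query the package and the running daemon (a quick sanity check - your exact patch version may differ):

$ apt-cache policy docker.io
$ docker version --format '{{.Server.Version}}'
1.12.6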

Installation

  • Install Kubernetes apt repo
$ apt-get update && apt-get install -y apt-transport-https \
  && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK

$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

Now update your packages list with apt-get update.

  • Install kubelet, kubeadm and kubernetes-cni

The kubelet is responsible for running containers on your hosts, kubeadm is a convenience utility that configures the various components which make up a working cluster, and kubernetes-cni provides the networking components.

CNI stands for Container Networking Interface, a spec that defines how network drivers should interact with Kubernetes.

$ apt-get update \
  && apt-get install -y kubelet kubeadm kubernetes-cni
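
Optionally, hold these packages at their installed versions so that a routine apt-get upgrade can't move your cluster components underneath you (apt-mark ships with Ubuntu):

$ apt-mark hold kubelet kubeadm kubernetes-cni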
  • Initialize your cluster with kubeadm

From the docs:

kubeadm aims to create a secure cluster out of the box via mechanisms such as RBAC.

Docker Swarm provides an overlay networking driver by default - but with kubeadm this decision is left to us. The Kubernetes team is still working on updating its instructions, so I'll show you how to use the driver most similar to Docker's overlay driver: flannel by CoreOS.

Update: if you want a quick script that applies all of the changes up to this point in one shot, run the following:

$ curl -sL https://gist.githubusercontent.com/alexellis/7315e75635623667c32199368aa11e95/raw/b025dfb91b43ea9309ce6ed67e24790ba65d7b67/kube.sh | sh

Prepare the host - notes for Kubernetes 1.7/1.8

If you are using Kubernetes 1.7+ then the following applies:

  • Swap must be disabled

You can check whether you have swap enabled by typing cat /proc/swaps. If you have a swap file or partition enabled, turn it off with swapoff. You can make this permanent by commenting out the swap entry in /etc/fstab.
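
For example - a minimal sketch, assuming a swap entry exists in /etc/fstab (the sed pattern comments out any line mentioning swap, so review the file first):

$ swapoff -a
$ sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab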

Flannel

Flannel provides a software-defined network (SDN) and, in its default configuration, uses the Linux kernel's VXLAN module to create an overlay network between hosts.
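
Before going further you can confirm the kernel support flannel relies on is present - a quick check, assuming flannel's default VXLAN backend:

$ modprobe vxlan
$ lsmod | grep vxlan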

Another popular SDN offering is Weave Net by WeaveWorks. Find out more here.

Packet provides two networks for its machines - the first is a datacenter link which goes between your hosts in a specific region and project and the second faces the public Internet. There is no default firewall - if you want to lock things down you'll have to configure iptables or ufw rules manually.
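
As an illustration, a minimal ufw policy might allow SSH from anywhere and the Kubernetes API port only from the datacenter network (the 10.80.0.0/16 range is an assumption - substitute your project's private subnet):

$ ufw allow 22/tcp
$ ufw allow from 10.80.0.0/16 to any port 6443 proto tcp
$ ufw enable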

You can find your private/datacenter IP address through ifconfig:

root@kubeadm:~# ifconfig bond0:0  
bond0:0   Link encap:Ethernet  HWaddr 0c:c4:7a:e5:48:d4  
          inet addr:10.80.75.9  Bcast:255.255.255.255  Mask:255.255.255.254
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1

We'll now use the internal IP address to advertise the Kubernetes API - rather than the Internet-facing address.

You must replace the --apiserver-advertise-address value below with the IP address of your own host.

$ kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.80.75.9 --kubernetes-version stable-1.8
  • --apiserver-advertise-address determines which IP address Kubernetes should advertise its API server on.
  • --pod-network-cidr is needed for the flannel driver and specifies an address space for containers.

  • --skip-preflight-checks allows kubeadm to skip checking the host kernel for required features. If you run into issues where a host has its kernel metadata removed you may need to run with this flag.

  • --kubernetes-version stable-1.8 pins the version of the cluster to 1.8 - if you want to use Kubernetes 1.7, for example, then just alter the version. Removing this flag will use whatever counts as "latest".

Here's the output we got:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubehost1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.100.195.129]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 55.504048 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubehost1 as master by adding a label and a taint
[markmaster] Master kubehost1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: f2292a.77a85956eb6acbd6
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!  
To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:  
  http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node  
as root:

 kubeadm join --token f2292a.77a85956eb6acbd6 10.100.195.129:6443 --discovery-token-ca-cert-hash sha256:0c4890b8d174078072545ef17f295a9badc5e2041dc68c419880cca93d084098
  • Configure an unprivileged user-account

Packet's Ubuntu installation ships without an unprivileged user-account, so let's add one.

# useradd packet -G sudo -m -s /bin/bash
# passwd packet
  • Configure environment variables as the new user

You can now configure your environment with the instructions at the end of the init message above.

Switch into the new user account with: sudo su packet.

$ cd $HOME
$ sudo whoami

$ sudo cp /etc/kubernetes/admin.conf $HOME/
$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
$ export KUBECONFIG=$HOME/admin.conf

$ echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc
  • Apply your pod network (flannel)

We will now apply configuration to the cluster using kubectl and two entries from the flannel docs:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

serviceaccount "flannel" created  
configmap "kube-flannel-cfg" created  
daemonset "kube-flannel-ds" created

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
clusterrole "flannel" created  
clusterrolebinding "flannel" created  

Update: the links above were changed recently by CoreOS - so I've changed them to the latest versions.

We've now configured networking for pods.

  • Allow a single-host cluster

Kubernetes is about multi-host clustering - so by default containers cannot be scheduled on master nodes. Since we only have one node, we'll remove the master taint so that it can run containers for us.

$ kubectl taint nodes --all node-role.kubernetes.io/master-
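
To confirm the taint was removed you can inspect the node (kubehost1 is the node name from the kubeadm output above):

$ kubectl describe node kubehost1 | grep -i taint
Taints:             <none>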

An alternative at this point would be to provision a second machine and use the join token from the output of kubeadm.

  • Check it's working

Many of the Kubernetes components run as containers on your cluster in a namespace called kube-system, which is hidden from the default view. You can see whether they are working like this:

$ kubectl get all --namespace=kube-system

NAME                                 READY     STATUS    RESTARTS   AGE  
po/etcd-kubeadm                      1/1       Running   0          12m  
po/kube-apiserver-kubeadm            1/1       Running   0          12m  
po/kube-controller-manager-kubeadm   1/1       Running   0          13m  
po/kube-dns-692378583-kqvdd          3/3       Running   0          13m  
po/kube-flannel-ds-w9xvp             2/2       Running   0          1m  
po/kube-proxy-4vgwp                  1/1       Running   0          13m  
po/kube-scheduler-kubeadm            1/1       Running   0          13m

NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE  
svc/kube-dns   10.96.0.10   <none>        53/UDP,53/TCP   14m

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE  
deploy/kube-dns   1         1         1            1           14m

NAME                    DESIRED   CURRENT   READY     AGE  
rs/kube-dns-692378583   1         1         1         13m  

As you can see, all of the pods are in the Running state, which indicates a healthy cluster. If these components are still being downloaded from the Internet they may appear as not started yet.

Run a container

You can now run a container on your cluster. Kubernetes organises containers into Pods, which share a common IP address, are always scheduled on the same node (host) and can share storage volumes.
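
To make that concrete, here's a minimal sketch of a Pod manifest applied via a heredoc - the name and image are illustrative only; the tutorial itself uses kubectl run below:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: hello
    image: alpine:3.5
    command: ["sleep", "3600"]
EOF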

First check you have no pods (containers) running with:

$ kubectl get pods

Now use kubectl run to deploy a container. We'll deploy a Node.js and Express.js microservice that generates GUIDs over HTTP.

This code was originally written for a Docker Swarm tutorial and you can find the source-code there - Scale a real microservice with Docker 1.12 Swarm Mode

$ kubectl run guids --image=alexellis2/guid-service:latest --port 9000
deployment "guids" created  

You'll now be able to see the name assigned to the new Pod and watch it start up:

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE  
guids-2617315942-lzwdh   0/1       Pending   0          11s  
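
The Pod moves from Pending to Running once its image has been pulled. Rather than polling, you can watch for the change (press Ctrl+C to stop):

$ kubectl get pods -w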

Use the Name to check on the pod:

$ kubectl describe pod guids-2617315942-lzwdh
...
Pulling            pulling image "alexellis2/guid-service:latest"  
...

Once running you can get the IP address and use curl to generate GUIDs:

$ kubectl describe pod guids-2617315942-lzwdh | grep IP:
IP:        10.244.0.3

$ curl http://10.244.0.3:9000/guid ; echo
{"guid":"4659819e-cf00-4b45-99d1a9f81bdcf6ae","container":"guids-2617315942-lzwdh"}

$ curl http://10.244.0.3:9000/guid ; echo
{"guid":"1604b4cb-88d2-49e2-bd38-73b589da0469","container":"guids-2617315942-lzwdh"}

If you want to see the logs for your Pod type in:

$ kubectl logs guids-2617315942-lzwdh
listening on port 9000  

A very useful feature for debugging containers is the ability to attach to the console via a shell to execute ad-hoc commands in the container:

$ kubectl exec -t -i guids-2617315942-lzwdh sh
/ # head -n3 /etc/os-release
NAME="Alpine Linux"  
ID=alpine  
VERSION_ID=3.5.2  
/ # exit
  • View the Dashboard UI

The Kubernetes dashboard can be deployed as another Pod, which we can then view on our local machine. Since we did not expose Kubernetes over the Internet we'll use an SSH tunnel to view the site.

$ kubectl create -f https://git.io/kube-dashboard
$ kubectl proxy
Starting to serve on 127.0.0.1:8001  

Now, from your own machine, open a tunnel to the Packet host (replace the address with your host's public IP) and navigate to http://localhost:8001/ui/ in a web-browser.

$ ssh -L 8001:127.0.0.1:8001 -N root@<your-host-ip>

For more information on the Dashboard check it out on Github.

Wrapping up

You've now created a Kubernetes cluster and run your first micro-service. From here you can start to learn all the components that make up a cluster and explore tutorials using the kubectl CLI.

  • Learn by example

I found Kubernetes by Example by Michael Hausenblas to be a detailed and accessible guide.

  • Add more nodes

Now that you've provisioned your single-node cluster with Packet - you can go ahead and add more Type 0 nodes with the join token you got from kubeadm.

  • Contrast to Docker Swarm

Docker Swarm is the native orchestration built into Docker's CE and EE products - you can set up a cluster in a single command. You can learn more about Swarm through my Docker Swarm tutorial series.

Acknowledgements:

Thanks to @mhausenblas, @_errm and @kubernetesonarm for feedback on the post and for sharing their tips on setting up a Kubernetes cluster.
