To say that service-mesh is a controversial area of cloud computing would be an understatement, but things are changing, and deploying something like Istio no longer requires a MacBook with 32GB of RAM.
In this post I'll show you how to get a full Istio demo up and running, with a public IP routed directly to your laptop. We'll do it all in the short window between finishing work and eating your evening meal (tea-time).
The resources you'll need
- A computer or laptop running MacOS. Linux is also fair game, and Windows may work, but I'm not testing that workflow here
- k3sup - an app installer that takes a helm chart and bundles it behind a nice Golang CLI. It can also install k3s, but that's not relevant today
- Docker for Mac / Docker Daemon - installed in the normal way, you probably have this already
- KinD - the "darling" of the Kubernetes community is Kubernetes IN Docker, a small one-shot cluster that can run inside a Docker container
Create the Kubernetes cluster with KinD
To show how lightweight we can make Istio now, we're going to use KinD, which runs inside a container with Docker for Mac or the Docker daemon. MacOS cannot actually run containers or Kubernetes itself, so projects like Docker for Mac create a small Linux VM and hide it away. You can see how many CPUs and how much RAM is allocated through the preferences window.
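If you'd rather check from the terminal, here's a quick sketch using docker info with a Go template (these fields are standard, but double-check them against your own Docker version):

docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'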
Get a KinD binary release. The source code is available, but unless you're a developer, you don't need to compile it.

curl -Lo ./kind "https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64"
chmod +x ./kind
sudo mv ./kind /usr/local/bin/
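To confirm the binary is on your PATH and working, print its version:

kind version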
Now create a cluster:
kind create cluster

Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.17.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋
We can check that our single node is ready now:
kubectl get node -o wide

NAME                 STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION     CONTAINER-RUNTIME
kind-control-plane   Ready    master   35s   v1.17.0   172.17.0.2    <none>        Ubuntu 19.10   5.3.0-26-generic   containerd://1.3.2
Looks good, now to get some software
You can install Istio using the documentation site at Istio.io, but we're going to use k3sup instead since it gives us a one-line install and also bundles a version of Istio configuration for constrained development environments like a KinD cluster.
k3sup will also use the official helm chart for Istio and download helm3 to its own private location so that you don't even need to worry about that.
curl -sSLf https://get.k3sup.dev/ | sudo sh
Now see what apps you can install:
k3sup app install --help

Install a Kubernetes app

Usage:
  k3sup app install [flags]
  k3sup app install [command]

Examples:
  k3sup app install [APP]
  k3sup app install openfaas --help
  k3sup app install inlets-operator --token-file $HOME/do
  k3sup app install --help

Available Commands:
  cert-manager           Install cert-manager
  chart                  Install the specified helm chart
  cron-connector         Install cron-connector for OpenFaaS
  crossplane             Install Crossplane
  inlets-operator        Install inlets-operator
  istio                  Install istio
  kafka-connector        Install kafka-connector for OpenFaaS
  kubernetes-dashboard   Install kubernetes-dashboard
  linkerd                Install linkerd
  metrics-server         Install metrics-server
  minio                  Install minio
  mongodb                Install mongodb
  nginx-ingress          Install nginx-ingress
  openfaas               Install openfaas
  openfaas-ingress       Install openfaas ingress with TLS
  postgresql             Install postgresql
  tiller                 Install tiller

Flags:
  -h, --help                help for install
      --kubeconfig string   Local path for your kubeconfig file (default "kubeconfig")

Use "k3sup app install [command] --help" for more information about a command.
Now install the Istio app, and if you're an expert, you can provide overrides to turn off additional features like Prometheus.
k3sup app install istio --help

Install istio

Usage:
  k3sup app install istio [flags]

Examples:
  k3sup app install istio --loadbalancer

Flags:
  -h, --help               help for istio
      --init               Run the Istio init to add CRDs etc (default true)
      --namespace string   Namespace for the app (default "istio-system")
      --set stringArray    Use custom flags or override existing flags (example --set=prometheus.enabled=false)
      --update-repo        Update the helm repo (default true)
k3sup app install istio --helm3 --init=true
The --init=true flag first initialises your cluster with the various Istio CRDs required, before going ahead and installing the main Istio bundle.

k3sup app install istio \
  --helm3 \
  --set=prometheus.enabled=false \
  --init=true

You'll see k3sup create a folder at $HOME/.k3sup/bin/helm3, download helm, update the available repos, and finally install Istio.
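If you're curious, you can peek at that private location; this sketch assumes the default path mentioned above:

ls -l $HOME/.k3sup/bin/helm3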
Now the control-plane is up and running:
kubectl get deploy -n istio-system

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
istio-citadel            1/1     1            1           66s
istio-ingressgateway     1/1     1            1           66s
istio-pilot              1/1     1            1           66s
istio-sidecar-injector   1/1     1            1           66s
istio-telemetry          1/1     1            1           66s
prometheus               1/1     1            1           66s
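If any of the deployments are still coming up, you can block until they're ready. Here's a sketch for the ingress gateway; the same works for the other deployments:

kubectl rollout status -n istio-system \
  deploy/istio-ingressgateway --timeout=120s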
Try the BookInfo Application
The BookInfo Application is still the canonical example that the Istio team provide as a demo. If you have your own configurations or examples, feel free to try them out too.
Enable side-car injection and then deploy the BookInfo manifests:
kubectl label namespace default istio-injection=enabled

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.4/samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.4/samples/bookinfo/networking/bookinfo-gateway.yaml
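Since the default namespace is now labelled for injection, each BookInfo pod should show two containers: the app plus the Envoy sidecar. Watch for 2/2 in the READY column:

kubectl get pods -w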
Access the BookInfo Application
If you run the following command, then you can access the BookInfo Application from your local computer on localhost:
kubectl port-forward -n istio-system \
  deploy/istio-ingressgateway 31380:80
Simply navigate to: http://localhost:31380/productpage
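If you prefer the terminal, a quick smoke test with curl should print the page title; run it in a second terminal while the port-forward is still active:

curl -s http://localhost:31380/productpage | grep -o "<title>.*</title>"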
Get a public IP for the BookInfo Application
Let's now get a public IP for the BookInfo Application, so that we can start accepting incoming requests from the Internet. Having a public IP means that we can simulate our cluster being on a public cloud provider, just like an AWS EKS cluster which would usually cost us hundreds of dollars per month to run.
You may have heard of tools like Ngrok, which can create an HTTP tunnel. These tools are not OSS and are subject to quite strict connection limits, which makes them a poor fit for this type of tutorial.
See how we have no IP for our Istio Gateway?
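This is the command that produces the output below; the gateway's Service lives in the istio-system namespace:

kubectl get svc -n istio-system istio-ingressgateway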
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)
istio-ingressgateway   LoadBalancer   10.96.252.61   <pending>     15020:30733/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31136/TCP,15030:31999/TCP,15031:32135/TCP,15032:32245/TCP,15443:32001/TCP
Instead we'll use the inlets-operator, a Kubernetes Operator which detects Services of type LoadBalancer and then creates a public IP for them via a tunnel to a public cloud host.
There's a k3sup app for the inlets-operator, so let's install it. First, set up a cloud access token for your favourite provider: GCE, EC2, DigitalOcean, Scaleway, Packet.com and others are available. I'm going to use DigitalOcean, which should cost around 5 USD / mo to keep the tunnel up 24/7.
Why use k3sup when there's a helm chart available? It's easier and all the important flags are documented.
k3sup app install inlets-operator --help

Install inlets-operator to get public IPs for your cluster

Usage:
  k3sup app install inlets-operator [flags]

Examples:
  k3sup app install inlets-operator --namespace default

Flags:
      --helm3                     Use helm3 instead of the default helm2
  -h, --help                      help for inlets-operator
  -l, --license string            The license key if using inlets-pro
  -n, --namespace string          The namespace used for installation (default "default")
      --organization-id string    The organization id (Used by Scaleway)
      --pro-client-image string   Docker image for inlets-pro's client
      --project-id string         Project ID to be used (Used by GCE and packet)
  -p, --provider string           The default provider to use (default "digitalocean")
  -r, --region string             The default region to provision the exit node (Used by Digital Ocean, Packet and Scaleway) (default "lon1")
  -t, --token-file string         Text file containing token or a service account JSON file
      --update-repo               Update the helm repo (default true)
  -z, --zone string               The zone to provision the exit node (Used by GCE) (default "us-central1-a")
So here we go:
k3sup app install inlets-operator \
  --token-file ~/Downloads/do-access-token \
  --provider digitalocean \
  --region lon1 \
  --helm3
Here's the output, which we can see again at any time with
k3sup app info inlets-operator:
=======================================================================
= inlets-operator has been installed.                                 =
=======================================================================

# The default configuration is for DigitalOcean and your secret is
# stored as "inlets-access-key" in the "default" namespace.

# To get your first Public IP run the following:

kubectl run nginx-1 --image=nginx --port=80 --restart=Always
kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer

# Find your IP in the "EXTERNAL-IP" field, watch for "<pending>" to
# change to an IP

kubectl get svc -w

# When you're done, remove the tunnel by deleting the service
kubectl delete svc/nginx-1

# Find out more at:
# https://github.com/inlets/inlets-operator

Thanks for using k3sup!
Now you can start watching for the public IP to appear:
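This is the same watch command as above, scoped to the istio-system namespace where the gateway lives:

kubectl get svc -n istio-system -w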
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)
istio-ingressgateway   LoadBalancer   10.96.252.61   126.96.36.199   15020:30733/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31136/TCP,15030:31999/TCP,15031:32135/TCP,15032:32245/TCP,15443:32001/TCP
Open the URL in a browser using the /productpage URL and the EXTERNAL-IP:
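For example, with the EXTERNAL-IP from the output above (yours will differ):

http://126.96.36.199/productpage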
What about TLS and TCP?
The tunnel we ran above used inlets-operator and the free OSS inlets version which can tunnel HTTP/s. We may want Istio to serve a TLS certificate from LetsEncrypt and to obtain that with cert-manager.
To have the inlets-operator work with inlets-pro, simply reinstall it, running the command with the --license flag:
export LICENSE="" # Get a free 14-day trial from https://github.com/inlets/inlets-pro-pkg

k3sup app install inlets-operator \
  --token-file ~/Downloads/do-access-token \
  --provider digitalocean \
  --region lon1 \
  --helm3 \
  --license $LICENSE
Now inlets-pro will forward pure TCP traffic for all of the exposed Istio ports directly to your Istio gateway, including port 443 for TLS.
If you want to add cert-manager, you can use the k3sup app which again wraps up the helm chart into a single command:
k3sup app install cert-manager
You can create a public DNS A record for the IP address given to you by the inlets-operator and then use that to get a TLS certificate.
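As a starting point, here's a minimal sketch of a Let's Encrypt ClusterIssuer for cert-manager. It assumes cert-manager v0.12 and its cert-manager.io/v1alpha2 API (current at the time of writing), the email address is a placeholder, and whether the Istio ingress can answer the HTTP01 challenge depends on your setup; check the cert-manager and Istio docs for wiring the resulting certificate secret into your gateway:

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com   # placeholder - use your own address
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: istio   # assumption: Istio's ingress serves the HTTP01 challenge
EOF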
If you'd like bonus points or are a seasoned Istio user, try out the tutorial using inlets-pro and report back: Kubernetes Ingress with Cert-Manager.
Note that we are installing Istio 1.3 in this tutorial; the current version is 1.4, which recommends using the SDS feature to configure HTTPS.
That's it: tea is probably ready now and you need to shut the laptop down for the evening. Run the following to have the inlets-operator delete your public IP:
kubectl delete ns istio-system
Now remove your KinD cluster:
kind delete cluster
Here's one I created earlier:
"Just had a quick call with @craigbox and walked through going from no K8s to @IstioMesh with a public IP, using KinD, @inletsdev and the bookstore demo.

Istio and the inlets-operator were installed with one-liners - "k3sup app install"

Check it out https://t.co/DWnY3oDVhN"

— Alex Ellis (@alexellisuk) January 28, 2020
You may also like
k3sup and inlets:
- Star or Fork k3sup on GitHub
- TLS the easy way for OpenFaaS with nginx-ingress, inlets-operator and cert-manager
- Get a LoadBalancer for your private Kubernetes cluster
Service-mesh labs with OpenFaaS: