HTTPS for your local endpoints with inlets and Caddy

Updated: Aug 2020 - new diagrams, inletsctl for automated provisioning, and introducing inlets PRO

What is the problem?

Over the holidays I was reflecting on a network connectivity problem that I faced whilst employed by a large enterprise company. It turned out to be a common problem, and one that my new team was facing at yet another large enterprise.

How do you get incoming HTTP traffic to a service running behind a restrictive firewall?

The team needed to get incoming HTTP traffic, in the form of webhooks, to test the work we were doing with OpenFaaS and OpenFaaS Cloud whilst developing code on their laptops, which had no routable IP addresses.

My solution for this problem is called inlets. inlets is written in Go and recently trended for about a week on Hacker News, gaining over 2.2k GitHub stars along with a dozen PRs from developers in the Far East. inlets now has just over 5k stars. In this post we'll learn how to secure inlets with HTTPS for an encrypted connection.

See also: The Need for A Cloud Native Tunnel

What's the solution?

There are already several good solutions for this problem which create a tunnel from the outside world to services in our local environments - whether that be a Raspberry Pi, a home-lab or a laptop.

You can read more on GitHub about why I felt a new solution was required.

Let me introduce you to inlets.

Conceptual diagram

The goal of the project is to "Expose your local endpoints to the Internet".

Bill of materials:

  • an exit-server or node - this is a machine outside our firewall with full access to the Internet and a public IP address. Our users will connect to the exit-server and be routed to local endpoints inside our firewall over a websocket tunnel
  • a client - the client acts as a reverse proxy or bridge - when it receives a request over the tunnel, it proxies that to a local service such as an Express.js server and then sends the result back
  • a permanent tunnel using a websocket - most corporate firewalls will allow an outbound TCP connection to be established over your existing HTTP/S proxy using a CONNECT message

Each HTTP request to the exit-server is serialized and published on the websocket as a control message, and the request then blocks. The client receives the message, decides whether it knows how to proxy that site, then fetches the resource from the local endpoint and sends it back down the websocket as a serialized response.

Finally the user's HTTP request unblocks and the response is written back to the caller.

Securing the tunnel

By default, for development, inlets is configured to use a non-encrypted tunnel, which is vulnerable to man-in-the-middle (MITM) attacks. I have a roadmap item to bake TLS into inlets using the new libraries made available by Caddy, but for the time being we can get encryption by running the Caddy binary on our exit-server.

Enabling HTTPS means that our users connect to an encrypted endpoint and our inlets clients can also connect to our server over an encrypted tunnel.

Tutorial

For our exit-server we will use a DigitalOcean droplet, but if you are really on a budget you can use a cheaper VPS like Scaleway.com. The main requirement is that we have a VM or VPS with a public IP address.

This is the conceptual diagram of what we're going to create:

inlets-tls

The exit server will run a reverse proxy whose responsibility will be to obtain, renew, and serve TLS certificates.

When complete, the inlets server's websocket will be encrypted, as will any sites we expose, as long as they have an entry in the reverse proxy's configuration file. For this setup we'll be using Caddy, but other projects also work well.

Setup an exit-server (automated)

First sign up to DigitalOcean using my code to get free credits: 100 USD credit for 60 days.

Download inletsctl, which provisions exit servers using cloud APIs:

# Or run without sudo, and copy the binary over yourself
curl -SLs https://inletsctl.inlets.dev/ | sudo sh
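
You can confirm the binary is on your PATH before moving on:

# Prints the available sub-commands, confirming the install worked
inletsctl --help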

Create a DigitalOcean API key from your dashboard and save it as $HOME/do-api-key.txt
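
If other users share your machine, you may want to restrict read access to the key file. A minimal example, assuming you paste the key in with the editor of your choice:

nano $HOME/do-api-key.txt
chmod 600 $HOME/do-api-key.txt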

Run the following; you can customise the region:

inletsctl create --provider digitalocean \
 --region lon1 \
 --access-token-file $HOME/do-api-key.txt

Now wait until the setup is complete; it will print the instructions you need to get started.

You will also be emailed a first-time root password for your new VM.

Setup an exit-server (manual)

  • First sign up to DigitalOcean using my code to get free credits: 100 USD credit for 60 days.

  • Create a Droplet in a region near you using Ubuntu 18.04.x

  • The cheapest Droplet is suitable for our purposes - 5 USD with 1GB RAM / 1vCPU and 1TB transfer

  • Select additional options. Here you can pick User data to automate the setup of the inlets server

  • Enter your user-data using the text from our userdata.sh script.

  • Add any SSH keys you want to use for logging into the exit-server

  • Name your host, e.g. inlets-exit-server-1

  • Deploy the droplet

  • Get your public IP address

Configure DNS

  • Now create an A record for your domain name pointing at the new public IP address of your exit-server. I used Namecheap.com to provision a cheap domain name for about 2 USD. You may even have picked up a .dev domain during the recent launch rush - this is a good opportunity to put it to use. You can verify the record with the check shown below.
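
Before a TLS certificate can be issued, the new A record needs to have propagated. A quick check, assuming your record is exit.domain.com (replace with your own):

# The answer should be your exit-server's public IP address
dig +short A exit.domain.com

# Or use nslookup if dig is not installed
nslookup exit.domain.com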

Enable Caddy 1.x for HTTPS

  • Log in to your VM using ssh and update the port from 80 to 8080 with:
sudo sed -i s/80/8080/g /etc/systemd/system/inlets.service
sudo systemctl daemon-reload 
sudo systemctl restart inlets

We are doing this so that Caddy can run on both port 80 and 443.
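
You can confirm that inlets has moved to port 8080 before installing Caddy. A quick sanity check, assuming you are still logged into the exit-server over ssh - any HTTP response on the new port shows the process is up:

curl -i http://127.0.0.1:8080/

# Check that the service restarted with the edited unit file
sudo systemctl status inlets --no-pager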

  • Now grab your token for authenticating your client:
sudo cat /etc/default/inlets | cut -d"=" -f2

cd4bf5db2601ec9075425102d2b12a9ee5413d4a
  • Download a Caddy 1.x binary from the Releases page - on a VPS you want a binary with a name like caddy_v1.0.4_linux_amd64.tar.gz. You can use curl or wget to download the file.

  • Uncompress the tar.gz file: tar -xvf caddy_v1.0.4_linux_amd64.tar.gz

  • Here are the instructions for Linux:

Use a 1.x release rather than 2.0, where the configuration format changed.

curl -sLSf https://github.com/caddyserver/caddy/releases/download/v1.0.4/caddy_v1.0.4_linux_amd64.tar.gz > caddy_v1.0.4_linux_amd64.tar.gz

sudo tar -xvf caddy_v1.0.4_linux_amd64.tar.gz --strip-components=0 -C /usr/local/bin

sudo cp /usr/local/bin/init/linux-systemd/caddy.service /etc/systemd/system/
sudo systemctl enable caddy

sudo rm -rf /usr/local/bin/init

sudo mkdir -p /etc/caddy
  • Create /etc/caddy/Caddyfile replacing exit.domain.com with your own DNS record:
exit.domain.com

proxy / 127.0.0.1:8080 {
  transparent
}

proxy /tunnel 127.0.0.1:8080 {
  transparent
  websocket
}
  • Start the Caddy process with caddy. The first time you run this command, Caddy will ask for your email address. Enter your email and then wait for the TLS certificate to be issued by LetsEncrypt.
  • Now exit Caddy and run sudo systemctl start caddy to use the systemd unit file we installed

The systemd unit file means that Caddy will restart upon reboot and if its process crashes.
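
Once the unit is running, you can check that a certificate was issued and that HTTPS is being served. A quick check, assuming exit.domain.com is your own record:

sudo systemctl status caddy --no-pager

# Inspect the certificate presented on port 443; look for a Let's Encrypt issuer
curl -vI https://exit.domain.com 2>&1 | grep -iE "subject|issuer"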

Start a local HTTP server

Over on your laptop you can now start your local endpoint either directly or via a Docker container.

The simplest HTTP server is probably the one built into Python. It will serve files from whatever directory you run it in.

  • Create a temporary directory with a test file, then start the server:
mkdir ~/filestore/
cd ~/filestore/
echo "Welcome to my filestore" > welcome.txt

# Check which version of Python is installed
python --version

# If the version returned above is 3.x
python3 -m http.server
# On Windows try "python" instead of "python3"

# If the version returned above is 2.x
python -m SimpleHTTPServer

By default it listens on port 8000 and prints: Serving HTTP on 0.0.0.0 port 8000 ..., so port 8000 will be our --upstream value.

You can test the local URL at: http://127.0.0.1:8000
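
You can also check the server from another terminal before exposing it:

# Should print the contents of welcome.txt
curl http://127.0.0.1:8000/welcome.txt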

Connect your client

Install inlets on your laptop or local computer such as your Raspberry Pi:

# Download and copy to /usr/local/bin/
sudo inletsctl download

# Or download, and you can move the binary manually
inletsctl download
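
To check that the client binary installed correctly, you should be able to print its version information with the version sub-command:

inlets version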

Now connect the client to your tunnel server:

inlets client \
 --remote wss://exit.domain.com \
 --upstream=exit.domain.com=http://127.0.0.1:8000 \
 --token=cd4bf5db2601ec9075425102d2b12a9ee5413d4a
  • Note the use of --token from earlier. This authenticates our client to our exit-server to prevent unauthorized access.
  • wss:// shows we are using an encrypted tunnel to prevent tampering and MITM attacks

Now you and your friends can visit https://exit.domain.com and access the Simple Python HTTP server running on your laptop.
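
To prove the tunnel works end-to-end, fetch the test file over the public, encrypted endpoint (replace exit.domain.com with your own domain):

# The request travels: caller -> Caddy (TLS) -> inlets server -> websocket -> inlets client -> Python server
curl https://exit.domain.com/welcome.txt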

If you have multiple domain names and multiple services on your laptop simply change the --upstream flag to reflect that.

For OpenFaaS on Docker Swarm that may be:

inlets client \
 --remote wss://exit.domain.com \
 --upstream=gateway.domain.com=http://127.0.0.1:8080,prometheus.domain.com=http://127.0.0.1:9090

You'll see output similar to:

2019/03/15 11:56:31 Upstream: lon1-exit.domain.com => http://127.0.0.1:8000
2019/03/15 11:56:31 connecting to wss://lon1-exit.domain.com/tunnel
2019/03/15 11:56:32 Connected to websocket: 192.168.0.71:49631

Just remember to add a DNS A record for each sub-domain you want to be accessible through the exit-server.

If you're running your client on a Linux computer or Raspberry Pi, you can create a systemd unit file so that the tunnel restarts and comes up upon reboot.
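
Here is a minimal sketch of such a unit file, assuming inlets was installed to /usr/local/bin/inlets and re-using the flags from the example above; adjust the domain, upstream and token for your own setup:

# Run on the client machine (Linux or Raspberry Pi)
sudo tee /etc/systemd/system/inlets-client.service > /dev/null <<'EOF'
[Unit]
Description=inlets client tunnel
After=network.target

[Service]
ExecStart=/usr/local/bin/inlets client --remote wss://exit.domain.com \
  --upstream=exit.domain.com=http://127.0.0.1:8000 \
  --token=cd4bf5db2601ec9075425102d2b12a9ee5413d4a
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now inlets-client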

Did you know?

inlets also works on a Raspberry Pi, so you can run the client on a Raspberry Pi 24/7 as a way to get incoming traffic to your services on your Raspberry Pi cluster, or as a cheap gateway pointing at other computers in your network.

Link: Buy your Raspberry Pi cluster here and follow my latest tutorial: k3s: Will it cluster?

Link: Build a 10USD Internet Gateway with a Raspberry Pi Zero

Wrapping up

Show your support: Star or fork the project here: inlets/inlets and follow @inletsdev on Twitter.

You now have your own tunnel, running for around 5 USD/month, which can punch through almost any firewall. The software itself is free: the code is open-source under the MIT license and built by the community.

inlets-pro - TCP tunnels for work

Since launching inlets, OpenFaaS Ltd has developed inlets-pro which is a professional product for companies to use for L4 TCP tunnelling over websockets.

  • automatic TLS and end-to-end encryption
  • pure TCP L4 pass-through for databases, SSH, HTTPS, Kubernetes, custom protocols and more
  • multiple-port support
  • get and serve TLS certificates from your private network using LetsEncrypt, Caddy, Nginx, Traefik, and other standard tooling
  • buy professional services and support from OpenFaaS Ltd

Apply for a free, 14-day trial now: inlets-pro

Buy a personal license on the OpenFaaS Ltd store.

Get on the Insiders Track

New: to get all my latest news, updates, and early access to new projects like Inlets: subscribe to my Insiders Track via GitHub Sponsors

If you need help with inlets, you can try one of the many tutorials available, or join OpenFaaS Slack and the #inlets channel. The community offers free support, but if you need more than that, reach out over email to sales@openfaas.com to find out how OpenFaaS Ltd can help.

You can view the roadmap on GitHub to see what's coming next. Contributions are also welcome, and for comments, questions or suggestions you can follow me on Twitter @alexellisuk