# Tailscale on Kubernetes
Kubernetes is a popular system for deploying, scaling, and managing containerized applications. There are many ways to run Tailscale inside a Kubernetes cluster, e.g., as a sidecar, as a proxy, or as a subnet router. This doc shows several common ways.
## Tailscale’s managed Docker image
Tailscale publishes a Docker image that it manages and builds from source. It’s available on both Docker Hub and GitHub Packages. To pull it, run:
```sh
docker pull tailscale/tailscale:latest
```
or
```sh
docker pull ghcr.io/tailscale/tailscale:latest
```
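If you want a quick smoke test of the image outside Kubernetes, a minimal sketch follows. It assumes a recent image where userspace networking is the default, and uses the `TS_AUTHKEY` environment variable the container supports; the container name is illustrative:

```sh
# Run the image in userspace networking mode with an auth key (sketch).
docker run -d --name=tailscaled \
  -e TS_AUTHKEY=tskey-0123456789abcdef \
  tailscale/tailscale:latest
# Watch the logs to confirm it joined the tailnet (or to grab a login URL).
docker logs -f tailscaled
```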
The current version of the Makefile required for the examples in this doc is in the tailscale repo.
## Prerequisites
You can follow the examples in this doc by cloning the tailscale repo from GitHub. For example:

```sh
gh repo clone tailscale/tailscale
cd tailscale/docs/k8s
```
## Setup
- (Optional) Use an auth key to automate logging your container in to your tailnet. Create an auth key on the Keys page of the admin console. We recommend an ephemeral key for this purpose, since it automatically cleans up devices after they shut down, and a reusable key, so containers can reconnect to your tailnet after being stopped and restarted.

  If you don’t provide a key, you can still authenticate by visiting the login URL printed in the logs when using the container image below.

  ```yaml
  apiVersion: v1
  kind: Secret
  metadata:
    name: tailscale-auth
  stringData:
    TS_AUTHKEY: tskey-0123456789abcdef
  ```
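  To create the Secret, save the manifest and apply it with `kubectl apply` (the file name here is illustrative):

  ```sh
  kubectl apply -f tailscale-auth-secret.yaml
  ```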
- Build and push the container image:

  ```sh
  export IMAGE_TAG=tailscale-k8s:latest
  make push
  ```
- Configure role-based access control (RBAC). Tailscale (v1.16 or later) supports storing state inside a Kubernetes Secret; the Tailscale pod needs permission to read and write the `tailscale-auth` Secret:

  ```sh
  export SA_NAME=tailscale
  export TS_KUBE_SECRET=tailscale-auth
  make rbac
  ```
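  To sanity-check the resulting permissions, you can impersonate the service account with `kubectl auth can-i` (the `default` namespace is an assumption here):

  ```sh
  # Assumes the service account was created in the "default" namespace.
  kubectl auth can-i get secret/tailscale-auth \
    --as=system:serviceaccount:default:tailscale
  ```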
## Sample Sidecar
Running as a sidecar allows you to directly expose a Kubernetes pod over Tailscale. This is particularly useful if you do not wish to expose a service on the public internet. This method allows bi-directional connectivity between the pod and other devices on the tailnet. You can use ACLs to control traffic flow.
- Create and log in to the sample nginx pod with a Tailscale sidecar:

  ```sh
  make sidecar
  # If not using an auth key, authenticate by grabbing the Login URL here:
  kubectl logs nginx ts-sidecar
  ```
- Check that you can connect to nginx over Tailscale:

  ```sh
  curl http://nginx
  ```

  Or, if you have MagicDNS disabled:

  ```sh
  curl "http://$(tailscale ip -4 nginx)"
  ```
## Userspace Sidecar
You can also run the sidecar in userspace networking mode. The main benefit is that it reduces the permissions Tailscale needs to run. The downside is that outbound connections from the pod to the tailnet must go through either the SOCKS5 proxy or the HTTP proxy.
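For example, outbound access through the SOCKS5 proxy could look like the sketch below. It assumes the sidecar was started with `TS_SOCKS5_SERVER=localhost:1055` (the port is illustrative) and that `some-tailnet-host` is a device on your tailnet:

```sh
# From inside the pod: send outbound traffic through the sidecar's SOCKS5 proxy.
ALL_PROXY=socks5://localhost:1055 curl http://some-tailnet-host
```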
- Create and log in to the sample nginx pod with a userspace Tailscale sidecar:

  ```sh
  make userspace-sidecar
  # If not using an auth key, authenticate by grabbing the Login URL here:
  kubectl logs nginx ts-sidecar
  ```
- Check that you can connect to nginx over Tailscale:

  ```sh
  curl http://nginx
  ```

  Or, if you have MagicDNS disabled:

  ```sh
  curl "http://$(tailscale ip -4 nginx)"
  ```
## Sample Proxy
Running a Tailscale proxy allows you to provide inbound connectivity to a Kubernetes Service.
- Provide the `ClusterIP` of the service you want to reach, either by creating a new deployment:

  ```sh
  kubectl create deployment nginx --image nginx
  kubectl expose deployment nginx --port 80
  export TS_DEST_IP="$(kubectl get svc nginx -o=jsonpath='{.spec.clusterIP}')"
  ```

  or by using an existing service:

  ```sh
  export TS_DEST_IP="$(kubectl get svc <SVC_NAME> -o=jsonpath='{.spec.clusterIP}')"
  ```
- Deploy the proxy pod:

  ```sh
  make proxy
  # If not using an auth key, authenticate by grabbing the Login URL here:
  kubectl logs proxy
  ```
- Check that you can connect to nginx over Tailscale:

  ```sh
  curl http://proxy
  ```

  Or, if you have MagicDNS disabled:

  ```sh
  curl "http://$(tailscale ip -4 proxy)"
  ```
## Subnet Router
Running a Tailscale subnet router allows you to access the entire Kubernetes cluster network over Tailscale (assuming your NetworkPolicies allow it).
- Identify the Pod and Service CIDRs that cover your Kubernetes cluster. These vary depending on which CNI and cloud provider you use. Add them to the `TS_ROUTES` variable as comma-separated values:

  ```sh
  SERVICE_CIDR=10.20.0.0/16
  POD_CIDR=10.42.0.0/15
  export TS_ROUTES=$SERVICE_CIDR,$POD_CIDR
  ```
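  If you're unsure of the CIDRs, one heuristic that works on some clusters (an assumption, not universal; it relies on kubeadm-style control planes exposing these flags) is to grep the cluster dump:

  ```sh
  # Heuristic only: depends on the controller flags appearing in the dump.
  kubectl cluster-info dump | grep -m 1 -E 'cluster-cidr|service-cluster-ip-range'
  ```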
- Deploy the subnet-router pod:

  ```sh
  make subnet-router
  # If not using an auth key, authenticate by grabbing the Login URL here:
  kubectl logs subnet-router
  ```
- In the Machines page of the admin console, ensure that the routes for the subnet-router are enabled.
- Make sure that any client you want to connect from has `--accept-routes` enabled (for example, `tailscale up --accept-routes`).
- Check that you can connect to a `ClusterIP` or a Pod IP over Tailscale:

  ```sh
  # Get the Service IP
  INTERNAL_IP="$(kubectl get svc <SVC_NAME> -o=jsonpath='{.spec.clusterIP}')"
  # or, the Pod IP
  # INTERNAL_IP="$(kubectl get po <POD_NAME> -o=jsonpath='{.status.podIP}')"
  INTERNAL_PORT=8080
  curl http://$INTERNAL_IP:$INTERNAL_PORT
  ```
## Optional: Add DNS information
By default, we do not set DNS for containers. To enable MagicDNS for a Kubernetes container, export `TS_ACCEPT_DNS=true` in its environment.
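For example, with the sidecar above this might look like the following sketch (it assumes the Makefile templates `TS_ACCEPT_DNS` into the manifests the same way it does the other `TS_*` variables in this doc):

```sh
export TS_ACCEPT_DNS=true
make sidecar
```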