Kubernetes operator
The Tailscale Kubernetes Operator lets you:
- Access the Kubernetes control plane using an API server proxy
- Expose a tailnet service to your Kubernetes cluster (cluster egress)
- Expose a cluster workload to your tailnet (cluster ingress)
- Expose a cluster workload to another cluster (cross-cluster connectivity)
- Expose a cloud service to your tailnet
- Deploy exit nodes and subnet routers
- Deploy app connectors
- Deploy tsrecorder
- Expose multi-cluster applications to internal users
- Manage multi-cluster deployments with ArgoCD
Setting up the Kubernetes operator
Prerequisites
The Tailscale Kubernetes Operator must be configured with OAuth client credentials. The operator uses these credentials to manage devices via the Tailscale API and to create auth keys for itself and the devices it manages.
- In your tailnet policy file, create the tags tag:k8s-operator and tag:k8s, and make tag:k8s-operator an owner of tag:k8s. If you want your Services to be exposed with tags other than the default tag:k8s, create those as well and make tag:k8s-operator an owner (see the example after this list).

  "tagOwners": {
    "tag:k8s-operator": [],
    "tag:k8s": ["tag:k8s-operator"],
  }

  You can use the visual policy editor to manage your tailnet policy file. Refer to the visual editor reference for guidance on using the visual editor.
- Create an OAuth client in the Trust credentials page of the admin console. Create the client with Devices Core and Auth Keys write scopes, and the tag tag:k8s-operator.
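For example, assuming you want Services exposed with a custom tag named tag:prod-services (a hypothetical tag name used for illustration), the tagOwners block could look like this:

"tagOwners": {
  "tag:k8s-operator": [],
  "tag:k8s": ["tag:k8s-operator"],
  "tag:prod-services": ["tag:k8s-operator"],
}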
Installation
A default operator installation creates a tailscale namespace, an operator Deployment in the tailscale namespace, RBAC for the operator, a "tailscale" IngressClass, and ProxyClass, Connector, ProxyGroup, DNSConfig, and Recorder Custom Resource Definitions.
There are two ways to install the Tailscale Kubernetes Operator: using Helm or applying static manifests with kubectl.
Helm
Tailscale Kubernetes Operator's Helm charts are available from two chart repositories.
The https://pkgs.tailscale.com/helmcharts repository contains well-tested charts for stable Tailscale versions.
Helm charts and container images for a new stable Tailscale version are released a few days after the official release. This is done to avoid releasing image versions with potential bugs in the core Linux client or core libraries.
The https://pkgs.tailscale.com/unstable/helmcharts repository contains charts with the very latest changes, published in between official releases.
The charts in both repositories are different versions of the same chart and you can upgrade from one to the other.
To install the latest Tailscale Kubernetes operator from https://pkgs.tailscale.com/helmcharts in the tailscale namespace:
- Add https://pkgs.tailscale.com/helmcharts to your local Helm repositories:

  helm repo add tailscale https://pkgs.tailscale.com/helmcharts

- Update your local Helm cache:

  helm repo update

- Install the operator, passing the OAuth client credentials that you created earlier:

  helm upgrade \
    --install \
    tailscale-operator \
    tailscale/tailscale-operator \
    --namespace=tailscale \
    --create-namespace \
    --set-string oauth.clientId="<OAuth client ID>" \
    --set-string oauth.clientSecret="<OAuth client secret>" \
    --wait
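After the Helm install completes, you can sanity-check the result with standard kubectl commands. This is a minimal check assuming the default tailscale namespace:

# The operator Pod should reach the Running state.
kubectl get pods --namespace tailscale

# The chart also installs the Tailscale Custom Resource Definitions.
kubectl get crds | grep tailscale.com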
Static manifests with kubectl
- Download the Tailscale Kubernetes Operator manifest file from the tailscale/tailscale repository.

- Edit your version of the manifest file:

  - Find # SET CLIENT ID HERE and replace it with your OAuth client ID.
  - Find # SET CLIENT SECRET HERE and replace it with your OAuth client secret. The OAuth client secret is case-sensitive.

  For both the client ID and secret, quote the value to avoid any potential YAML misinterpretation of unquoted strings. For example, use:

    client_id: "k123456CNTRL"
    client_secret: "tskey-client-k123456CNTRL-abcdef"

  instead of:

    client_id: k123456CNTRL
    client_secret: tskey-client-k123456CNTRL-abcdef
- Apply the edited file to your Kubernetes cluster:

  kubectl apply -f manifest.yaml
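To confirm the rollout, you can wait for the operator Deployment to become available. The Deployment name operator below is an assumption based on the default manifest; adjust it if yours differs:

kubectl rollout status deployment/operator --namespace tailscale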
Validation
Verify that the Tailscale operator has joined your tailnet. Open the Machines page of the admin console and look for a node named tailscale-operator (or your customized hostname) tagged with the tag:k8s-operator tag. It may take some time for the operator to join your tailnet as the container image downloads and the Pod starts.
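If the node does not appear after a few minutes, the operator logs usually explain why (for example, rejected OAuth credentials). Assuming the default namespace and Deployment name used above:

kubectl logs deployment/operator --namespace tailscale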
(Optional) Pre-creating a ProxyGroup
When a user configures an ingress or egress proxy, the default mode for the operator is to create a tailnet device deployed as a StatefulSet with a single Pod.
This model has a few caveats:
- A single Pod means that there will be some downtime during proxy upgrades, cluster upgrades, and similar events.
- A Pod per proxy may not be feasible for large installations (high resource consumption).
Tailscale Kubernetes Operator 1.76 and later provides the ability to pre-create a multi-replica ProxyGroup. Ingress and egress can then be exposed redundantly by using the ProxyGroup.
To create a ProxyGroup with three replicas for Tailscale egress Services:
- Apply the following manifest:

  apiVersion: tailscale.com/v1alpha1
  kind: ProxyGroup
  metadata:
    name: ts-proxies
  spec:
    type: egress
    replicas: 3

- (Optional) Wait for the ProxyGroup to become ready:

  kubectl wait proxygroup ts-proxies --for=condition=ProxyGroupReady=true
For the above ProxyGroup, the operator creates a StatefulSet with three replicas, each of which is a tailnet device.
Egress Services can now refer to the newly created ProxyGroup; see Configure an egress Service using ProxyGroup.
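As a sketch of what such a Service can look like, the following manifest points an egress Service at the ts-proxies ProxyGroup created above. The name db-egress and the tailnet FQDN db.tailnet-xyz.ts.net are placeholders; the exact annotation set is described in the linked egress documentation:

apiVersion: v1
kind: Service
metadata:
  name: db-egress
  annotations:
    tailscale.com/proxy-group: ts-proxies              # the pre-created ProxyGroup
    tailscale.com/tailnet-fqdn: db.tailnet-xyz.ts.net  # placeholder tailnet FQDN
spec:
  type: ExternalName
  externalName: placeholder # any value; the operator overwrites it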
You can find all available ProxyGroup configuration options on GitHub.
Supported versions
Operator and proxies
Tailscale recommends that you use the same version for the operator and the proxies, because the majority of our tests run against matching versions.
The operator supports proxies running a Tailscale version up to four minor versions earlier than the operator's version. For example, an operator running 1.80 supports proxies running 1.76 through 1.80. The operator does not support proxies running a Tailscale version later than the operator's version.
Kubernetes versions
The earliest supported version of Kubernetes is v1.23.0.
CNI compatibility
The operator creates proxies that configure custom routing and forwarding rules in each proxy Pod's network namespace only.
Because the proxying is implemented in the proxy Pod's namespace, the routing and firewall configuration on the Node (for example, using iptables, eBPF, or any other mechanism) doesn't affect the proxies.
This means that the proxies work with most CNI configurations out of the box.
EKS Fargate
On EKS Fargate, the only currently supported operator features are Tailscale Ingress and the Tailscale API server proxy.
Tailscale ingress Services, Tailscale egress Services, and Connector configurations currently contain privileged containers and containers with CAP_NET_ADMIN, which EKS Fargate does not support.
Cilium in kube-proxy replacement mode
You must enable bypassing the socket load balancer in Pods' namespaces if you run Cilium in kube-proxy replacement mode and want to do one or more of the following:

- Expose a Kubernetes Service to your tailnet as a Tailscale LoadBalancer Service.
- Expose a Kubernetes Service to your tailnet using the tailscale.com/expose annotation.
- Expose a Service CIDR range via a Connector.
This is needed because, when Cilium runs in kube-proxy replacement mode with socket load balancing enabled in Pods' namespaces, connections from Pods to ClusterIPs go over a TCP socket (instead of going out via Pods' veth devices) and thus bypass the Tailscale firewall rules that are attached to netfilter hooks.
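If you installed Cilium with Helm, the bypass can typically be enabled with the socketLB.hostNamespaceOnly value (available in recent Cilium releases; consult the Cilium documentation for your version):

helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set socketLB.hostNamespaceOnly=true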
If you encounter bandwidth issues while using Cilium, use the --devices flag to explicitly specify which network interfaces Cilium should monitor for the maximum transmission unit (MTU). This prevents Cilium from defaulting to the MTU of the tailscale0 interface and instead ensures it uses the MTU of a physical interface on the host.
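For example, if eth0 is the physical interface on your Nodes (an assumption; substitute your own interface name), you can set the corresponding devices Helm value so that Cilium derives the MTU from it instead of from tailscale0:

helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set devices=eth0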
Customization
Learn how to customize the operator and resources it manages.
Troubleshooting
Learn how to troubleshoot the operator and resources it manages.
Limitations
- There are no dashboards or metrics. We are interested in hearing which metrics you would find useful; do reach out.
- The container images, charts, and manifests are not signed. We are working on this.
- The static manifests are currently only available from tailscale/tailscale codebase. We are working to improve this flow.
- Using the operator on OpenShift is currently not supported.
Glossary
Proxy
In the context of this document, a proxy is the Tailscale node deployed for each user-configured component that the operator manages (such as a Tailscale Ingress or a Connector).
The proxy is deployed as a StatefulSet in the operator's namespace (defaults to tailscale).
The StatefulSet's name is prefixed with a portion of the configured component's name.
If you need to reliably refer to the proxy's StatefulSet, you can use label selectors.
For example, to find the StatefulSet for a Tailscale Ingress resource named ts-ingress in the prod namespace, you can run:
$ kubectl get statefulset \
--namespace tailscale \
--selector="tailscale.com/managed=true,tailscale.com/parent-resource-type=ingress,tailscale.com/parent-resource=ts-ingress,tailscale.com/parent-resource-ns=prod"
The tailscale.com/parent-resource-type label is set to svc for a Service and to connector for a Connector.
The tailscale.com labels are also propagated to the Pod.
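Because the labels propagate, the same selector pattern also finds the proxy Pods. For example, to list the Pods for a hypothetical Connector named my-connector:

$ kubectl get pods \
  --namespace tailscale \
  --selector="tailscale.com/managed=true,tailscale.com/parent-resource-type=connector,tailscale.com/parent-resource=my-connector"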
