Kubernetes operator
The Tailscale Kubernetes operator lets you:
- Access the Kubernetes control plane using an API server proxy
- Expose a tailnet service to your Kubernetes cluster (cluster egress)
- Expose a cluster workload to your tailnet (cluster ingress)
- Expose a cluster workload to another cluster (cross-cluster connectivity)
- Expose a cloud service to your tailnet
- Deploy exit nodes and subnet routers
- Deploy `tsrecorder`
Setting up the Kubernetes operator
Prerequisites
The Tailscale Kubernetes operator must be configured with OAuth client credentials. The operator uses these credentials to manage devices via the Tailscale API and to create auth keys for itself and the devices it manages.
- In your tailnet policy file, create the tags `tag:k8s-operator` and `tag:k8s`, and make `tag:k8s-operator` an owner of `tag:k8s`. If you want your `Services` to be exposed with tags other than the default `tag:k8s`, create those as well and make `tag:k8s-operator` an owner.

  ```
  "tagOwners": {
    "tag:k8s-operator": [],
    "tag:k8s": ["tag:k8s-operator"],
  }
  ```
- Create an OAuth client in the OAuth clients page of the admin console. Create the client with **Devices Core** and **Auth Keys** write scopes, and the tag `tag:k8s-operator`.
Installation
A default operator installation creates a `tailscale` namespace, an operator `Deployment` in the `tailscale` namespace, RBAC for the operator, and `ProxyClass` and `Connector` Custom Resource Definitions.
Helm
The Tailscale Kubernetes operator's Helm charts are available from two chart repositories.

The https://pkgs.tailscale.com/helmcharts repository contains well-tested charts for stable Tailscale versions. Helm charts and container images for a new stable Tailscale version are released a few days after the official release. This delay helps avoid releasing image versions with potential bugs in the core Linux client or core libraries.

The https://pkgs.tailscale.com/unstable/helmcharts repository contains charts with the very latest changes, published in between official releases.

The charts in both repositories are different versions of the same chart, and you can upgrade from one to the other.
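The two repositories can be tracked side by side; the following is a sketch of moving an existing release to the unstable charts, where `tailscale-unstable` is an arbitrary local alias chosen for this example:

```shell
# Track the unstable chart repository under a separate local alias.
helm repo add tailscale-unstable https://pkgs.tailscale.com/unstable/helmcharts
helm repo update

# Upgrade an existing release to the chart from the unstable repository.
helm upgrade tailscale-operator tailscale-unstable/tailscale-operator --namespace=tailscale
```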
To install the latest Tailscale Kubernetes operator from https://pkgs.tailscale.com/helmcharts in the `tailscale` namespace:
- Add https://pkgs.tailscale.com/helmcharts to your local Helm repositories:

  ```
  helm repo add tailscale https://pkgs.tailscale.com/helmcharts
  ```

- Update your local Helm cache:

  ```
  helm repo update
  ```

- Install the operator, passing the OAuth client credentials that you created earlier:

  ```
  helm upgrade \
    --install \
    tailscale-operator \
    tailscale/tailscale-operator \
    --namespace=tailscale \
    --create-namespace \
    --set-string oauth.clientId="<OAuth client ID>" \
    --set-string oauth.clientSecret="<OAuth client secret>" \
    --wait
  ```
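If you prefer to keep credentials out of your shell history, the same values can be supplied through a Helm values file; this is a standard Helm pattern, sketched here with a hypothetical local `values.yaml` matching the `oauth.clientId` and `oauth.clientSecret` keys used in the command above:

```yaml
# values.yaml (hypothetical local file holding the OAuth credentials)
oauth:
  clientId: "<OAuth client ID>"
  clientSecret: "<OAuth client secret>"
```

Then pass it with `helm upgrade --install tailscale-operator tailscale/tailscale-operator --namespace=tailscale --create-namespace --values values.yaml --wait`.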
Static manifests with kubectl
- Download the Tailscale Kubernetes operator manifest file from the tailscale/tailscale repo.

- Edit your version of the manifest file:

  - Find `# SET CLIENT ID HERE` and replace it with your OAuth client ID.
  - Find `# SET CLIENT SECRET HERE` and replace it with your OAuth client secret. The OAuth client secret is case-sensitive.

  For both the client ID and secret, quote the value to avoid any potential YAML misinterpretation of unquoted strings. For example, use:

  ```
  client_id: "k123456CNTRL"
  client_secret: "tskey-client-k123456CNTRL-abcdef"
  ```

  instead of:

  ```
  client_id: k123456CNTRL
  client_secret: tskey-client-k123456CNTRL-abcdef
  ```

- Apply the edited file to your Kubernetes cluster:

  ```
  kubectl apply -f manifest.yaml
  ```
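The edit step can also be scripted; the following is a sketch using `sed`, assuming the manifest is saved as `manifest.yaml` and the placeholder comments appear exactly where the values belong (the credentials shown are the example values from above, not real ones):

```shell
# Substitute the placeholder comments with quoted credential values,
# then apply the result to the cluster.
sed -i 's|# SET CLIENT ID HERE|"k123456CNTRL"|' manifest.yaml
sed -i 's|# SET CLIENT SECRET HERE|"tskey-client-k123456CNTRL-abcdef"|' manifest.yaml
kubectl apply -f manifest.yaml
```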
(Optional) Pre-creating a ProxyGroup
Currently, when a user configures an ingress or an egress proxy, the default mode for the operator is to create a tailnet device deployed as a `StatefulSet` with a single `Pod`.

This model has a few caveats:

- A single `Pod` means that there will be some downtime during proxy upgrades, cluster upgrades, and similar events.
- A `Pod` per proxy may not be feasible for large installations (high resource consumption).
Tailscale Kubernetes operator 1.76 and later provides the ability to pre-create a multi-replica `ProxyGroup`. Ingress and egress can then be exposed redundantly via the `ProxyGroup`.

`ProxyGroup` can currently only be used for Tailscale egress. We are working on ingress support.

To create a `ProxyGroup` with three replicas for Tailscale egress `Services`:
- Apply the following manifest:

  ```
  apiVersion: tailscale.com/v1alpha1
  kind: ProxyGroup
  metadata:
    name: ts-proxies
  spec:
    type: egress
    replicas: 3
  ```

- (Optional) Wait for the `ProxyGroup` to become ready:

  ```
  $ kubectl wait proxygroup ts-proxies --for=condition=ProxyGroupReady=true
  ```
For the above `ProxyGroup`, the operator creates a `StatefulSet` with three replicas, each of which is a tailnet device. Egress `Services` can now refer to the newly created `ProxyGroup`; see Configure an egress `Service` using `ProxyGroup`.
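As a sketch of what such a reference can look like, the following egress `Service` points at the `ProxyGroup` created above; the `tailscale.com/proxy-group` and `tailscale.com/tailnet-fqdn` annotations are assumptions to be checked against the egress `Service` documentation, and the tailnet FQDN is a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: egress-to-tailnet
  annotations:
    tailscale.com/proxy-group: ts-proxies                 # the ProxyGroup created above
    tailscale.com/tailnet-fqdn: my-device.example.ts.net  # placeholder tailnet FQDN
spec:
  type: ExternalName
  externalName: placeholder  # placeholder value, managed by the operator
```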
You can find all available `ProxyGroup` configuration options on GitHub.
Validation
Verify that the Tailscale operator has joined your tailnet. Open the Machines page of the admin console and look for a node named tailscale-operator, tagged with the `tag:k8s-operator` tag. It may take a minute or two for the operator to join your tailnet, due to the time required to download and start the container image in Kubernetes.
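If the node does not appear, the operator's own logs are the first place to look; the following is a sketch, assuming a default installation where the operator `Deployment` is named `operator` in the `tailscale` namespace:

```shell
# Check that the operator Pod is running and inspect its logs.
kubectl get pods --namespace tailscale
kubectl logs --namespace tailscale deployment/operator
```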
Supported versions
Operator and proxies
Tailscale recommends that you use the same version for the operator and the proxies, because the majority of our tests run against the same versions.
The operator supports proxies running a Tailscale version up to four minor versions earlier than the operator's version. The operator does not support proxies running a Tailscale version later than the operator's version.
Kubernetes versions
The earliest supported version of Kubernetes is v1.23.0.
CNI compatibility
The operator creates proxies that configure custom routing and forwarding rules only in each proxy `Pod`'s network namespace. Because the proxying is implemented in the proxy `Pod`'s namespace, the routing and firewall configuration on the `Node` (for example, using iptables, eBPF, or any other mechanism) doesn't affect the proxies. This means that the proxies work with most CNI configurations out of the box.
EKS Fargate
On EKS Fargate, the only operator features currently supported are Tailscale `Ingress` and the Tailscale API server proxy. Tailscale ingress `Services`, Tailscale egress `Services`, and `Connector` configurations currently contain privileged containers and containers with `CAP_NET_ADMIN`, which are not supported on EKS Fargate.
Cilium in kube-proxy replacement mode
You must enable bypassing the socket load balancer in `Pods`' namespaces if you run Cilium in kube-proxy replacement mode and want to do one or more of the following:

- Expose a Kubernetes `Service` to your tailnet as a Tailscale `LoadBalancer` `Service`.
- Expose a Kubernetes `Service` to your tailnet using the `tailscale.com/expose` annotation.
- Expose a `Service` CIDR range via `Connector`.

This is needed because when Cilium runs in kube-proxy replacement mode with socket load balancing in `Pods`' namespaces enabled, connections from `Pods` to `ClusterIPs` go over a TCP socket (instead of going out via the `Pods`' veth devices) and thus bypass the Tailscale firewall rules that are attached to netfilter hooks.
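In Cilium, this bypass is controlled by the `socketLB.hostNamespaceOnly` Helm value (equivalently, the `bpf-lb-sock-hostns-only` agent flag); the following is a sketch of enabling it on an existing Cilium installation, assuming the release is named `cilium` in the `kube-system` namespace:

```shell
# Restrict Cilium's socket load balancer to the host namespace so that
# Pod connections leave via the Pod's veth device and hit netfilter hooks.
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set socketLB.hostNamespaceOnly=true
```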
Customization
Learn how to customize the operator and resources it manages.
Troubleshooting
Learn how to troubleshoot the operator and resources it manages.
Limitations
- There are no dashboards or metrics. We are interested to hear what metrics you would find useful — do reach out.
- The container images, charts or manifests are not signed. We are working on this.
- The static manifests are currently only available from tailscale/tailscale codebase. We are working to improve this flow.
- Using the operator on OpenShift is currently not supported.
Glossary
Proxy

In the context of this document, a proxy is the Tailscale node deployed for each user-configured component that the operator manages (such as a Tailscale `Ingress` or a `Connector`). The proxy is deployed as a `StatefulSet` in the operator's namespace (defaults to `tailscale`). The `StatefulSet`'s name is prefixed by a portion of the configured component's name. If you need to reliably refer to the proxy's `StatefulSet`, you can use label selectors.
For example, to find the `StatefulSet` for a Tailscale `Ingress` resource named `ts-ingress` in the `prod` namespace, you can run:

```
$ kubectl get statefulset \
  --namespace tailscale \
  --selector="tailscale.com/managed=true,tailscale.com/parent-resource-type=ingress,tailscale.com/parent-resource=ts-ingress,tailscale.com/parent-resource-ns=prod"
```
The `tailscale.com/parent-resource-type` label is set to `svc` for a `Service` and to `connector` for a `Connector`.
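Following the same pattern as the `Ingress` example above, the following is a sketch of selecting the proxy `StatefulSet` for an exposed `Service`; the name `my-svc` and namespace `default` are placeholders:

```shell
# Select the proxy StatefulSet whose parent is a Service named my-svc.
kubectl get statefulset \
  --namespace tailscale \
  --selector="tailscale.com/managed=true,tailscale.com/parent-resource-type=svc,tailscale.com/parent-resource=my-svc,tailscale.com/parent-resource-ns=default"
```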
The `tailscale.com` labels are also propagated to the `Pod`.