Expose a tailnet service to your Kubernetes cluster (cluster egress)
You can make services that are external to your cluster, but reachable from your tailnet, available to your Kubernetes cluster workloads by making the associated tailnet node accessible from the cluster.
You can configure the operator to set up an in-cluster egress proxy for a tailnet device by creating a Kubernetes `Service` that specifies the tailnet device either by its Tailscale IP address or by its MagicDNS name. In both cases, your cluster workloads refer to the tailnet service by the Kubernetes `Service` name.
Prerequisites
- The Tailscale Kubernetes Operator is installed in your cluster.
Expose a tailnet node to your cluster using its Tailscale MagicDNS name
- Ensure that MagicDNS is enabled for your tailnet.
- Create a Kubernetes `Service` of type `ExternalName` annotated with the MagicDNS name of the tailnet device that you wish to make available:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      tailscale.com/tailnet-fqdn: <Tailscale MagicDNS name>
    name: rds-staging # service name
  spec:
    externalName: placeholder # any value - will be overwritten by operator
    type: ExternalName
  ```

  Note that the value of the `tailscale.com/tailnet-fqdn` annotation must be the full MagicDNS name of the tailnet service (not just the hostname). The final dot is optional.
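Once the operator has reconciled the `Service`, in-cluster workloads can reach the tailnet device through the `Service`'s cluster DNS name. As a quick connectivity check, you could run a throwaway `Pod`; the namespace (`default`) and target port (`5432`) here are illustrative assumptions about your setup:

```shell
# Test TCP connectivity to the tailnet device via the egress Service's
# cluster DNS name (adjust the namespace and port for your environment).
$ kubectl run -it --rm egress-test --image=busybox --restart=Never -- \
    nc -vz rds-staging.default.svc.cluster.local 5432
```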
Expose a tailnet node to your cluster using its Tailscale IP address
- Create a Kubernetes `Service` of type `ExternalName` annotated with the Tailscale IP address of the tailnet node you want to make available:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      tailscale.com/tailnet-ip: <Tailscale IP address>
    name: rds-staging # service name
  spec:
    externalName: placeholder # any value - will be overwritten by operator
    type: ExternalName
  ```
The value of the `tailscale.com/tailnet-ip` annotation can be either a tailnet IPv4 or IPv6 address, or an IPv4 or IPv6 address exposed by a Tailscale subnet router.
IP address ranges are not supported. 4via6 addresses are also not supported.
See Access an IP address behind a subnet router below for more information on accessing an IP address behind a subnet router.
Expose a tailnet HTTPS service to your cluster workloads
Cluster workloads that need to access tailnet services that use Tailscale HTTPS likely need to refer to those services by their MagicDNS names for the TLS handshake to succeed.
Sometimes, cluster workloads must access a service running in the same cluster that is exposed over a Tailscale Ingress through its MagicDNS name. Both of these use cases require that the workloads can resolve the MagicDNS names of the services they need to access.
With Kubernetes operator v1.66 and later, you can configure the cluster to resolve MagicDNS names of the tailnet services exposed using cluster egress proxies and the MagicDNS names of cluster workloads exposed to tailnet using Tailscale Ingress.
You can configure the operator to deploy a nameserver for the `ts.net` DNS names of the tailnet services exposed using cluster egress proxies and for Tailscale Ingresses in the cluster, then add the nameserver as a stub nameserver to your cluster DNS plugin.
- Create a `DNSConfig` custom resource:

  ```yaml
  apiVersion: tailscale.com/v1alpha1
  kind: DNSConfig
  metadata:
    name: ts-dns
  spec:
    nameserver:
      image:
        repo: tailscale/k8s-nameserver
        tag: unstable
  ```

  The `DNSConfig` custom resource tells the Tailscale Kubernetes Operator to deploy a nameserver in the operator's namespace and dynamically populate it with the following DNS records:

  - `A` records mapping cluster egress proxy `Pod` IP addresses to the MagicDNS names of the exposed tailnet services
  - `A` records mapping Tailscale Ingress proxy `Pod` IP addresses to the MagicDNS names of the `Ingress`es
- Find the IP address of the nameserver:

  ```
  $ kubectl get dnsconfig ts-dns
  NAME     NAMESERVERIP
  ts-dns   10.100.124.196
  ```
- (If your cluster uses CoreDNS) update the `Corefile`. Update the `Corefile` in the `coredns` `ConfigMap` in the `kube-system` namespace with a stanza for a `ts.net` stub nameserver:

  ```
  Corefile: |
    .:53 {
      ...
    }
    ts.net {
      errors
      cache 30
      forward . 10.100.124.196
    }
  ```
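One way to make this change is to edit the `ConfigMap` in place; in many distributions the default `Corefile` enables CoreDNS's `reload` plugin, in which case the change is picked up without restarting CoreDNS (check your `Corefile` for a `reload` directive before relying on this):

```shell
# Edit the coredns ConfigMap to add the ts.net stub stanza.
$ kubectl -n kube-system edit configmap coredns
```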
- (If your cluster uses `kube-dns`) update the `kube-dns` config. Update the `kube-dns` `ConfigMap` in the `kube-system` namespace to add a stub nameserver for `ts.net` DNS names:

  ```
  data:
    stubDomains: |
      {
        "ts.net": [
          "10.100.124.196"
        ]
      }
  ```
- Access HTTPS tailnet services from the Kubernetes cluster. Create an egress `Service`:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      tailscale.com/tailnet-fqdn: "<full MagicDNS name of the tailnet node>"
    name: ts-egress
  spec:
    externalName: unused
    type: ExternalName
  ```

  The operator automatically populates the nameserver's configuration with an `A` record mapping the MagicDNS name of the exposed tailnet service to the proxy `Pod`'s IP address. This record allows your cluster workloads to access the tailnet service using its MagicDNS name.
- Access a Tailscale `Ingress` using its MagicDNS name from the cluster. Create a Tailscale `Ingress` resource with a `tailscale.com/experimental-forward-cluster-traffic-via-ingress` annotation:

  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: nginx
    annotations:
      tailscale.com/experimental-forward-cluster-traffic-via-ingress: "true"
  spec:
    defaultBackend:
      service:
        name: nginx
        port:
          name: https
    ingressClassName: tailscale
  ```

  The operator automatically populates the nameserver's configuration with an `A` record mapping the MagicDNS name of the Tailscale `Ingress` to the proxy `Pod`'s IP address. For `Ingress` resources annotated with `tailscale.com/experimental-forward-cluster-traffic-via-ingress`, the operator also ensures that the proxy created for that `Ingress` listens on its `Pod` IP address, so it can be accessed by cluster workloads.
We are actively seeking feedback on in-cluster MagicDNS name resolution, especially with regard to further automating the workflow. Reach out if you have feedback or suggestions.
In Tailscale version 1.90 or later, this feature supports clusters with IPv6 Pod CIDRs as well as IPv4 CIDRs. Earlier versions only support IPv4 CIDRs.
Configure an egress Service using ProxyGroup
By default, the operator creates a `StatefulSet` with a single proxy `Pod` for each egress `Service`.
If you want to expose an egress `Service` redundantly on multiple `Pod`s, or coalesce a number of egress `Service`s onto a smaller number of proxy `Pod`s, you can instead use a pre-created `ProxyGroup`.
To expose an egress `Service` on a `ProxyGroup`:
- Follow the instructions to pre-create an egress `ProxyGroup`.
- Create an `ExternalName` `Service` that references the `ProxyGroup` and the tailnet target:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      tailscale.com/tailnet-fqdn: "<full MagicDNS name of the tailnet node>"
      tailscale.com/proxy-group: "<ProxyGroup name>"
    name: ts-egress
    namespace: default
  spec:
    externalName: placeholder # any value - will be overwritten by the operator
    type: ExternalName
    ports:
      - port: 8080
        protocol: TCP
        name: web # any value
      - port: 3002
        protocol: TCP
        name: debug # any value
  ```

  You can specify the tailnet target either by MagicDNS name or by tailnet IP address.
  The `ExternalName` `Service` must explicitly specify, in its `spec.ports` section, all ports that you want to access on the tailnet service.
  Cluster traffic received on the ports (and protocols) specified on the `ExternalName` `Service` is proxied to the same ports on the tailnet target.
  Note that setting `Service` `spec.ports` fields other than `port`, `protocol`, and `name` has no effect.
- (Optional) Wait for the `Service` to become ready:

  ```
  kubectl wait svc ts-egress --for=condition=TailscaleEgressSvcReady=true
  ```
- Cluster workloads can now access the tailnet target by the `ExternalName` `Service`'s DNS name, for example:

  ```
  $ curl ts-egress.default.svc.cluster.local:8080
  ...
  ```
Cluster traffic for the `ts-egress` `Service` is round-robin load balanced across the `ProxyGroup` replicas.
Any number of egress `Service`s (limited only by resource consumption) can reference a single `ProxyGroup`.
You can also create multiple egress `ProxyGroup`s.
Access an IP address behind a subnet router
It is currently not possible to access 4via6 IP addresses using the egress proxy.
If you have a service with a static IP address that is behind a subnet router, you can make it accessible to cluster workloads using egress proxies.
- Create a `ProxyClass` to ensure that the egress proxy accepts advertised subnet routes:

  ```yaml
  apiVersion: tailscale.com/v1alpha1
  kind: ProxyClass
  metadata:
    name: accept-routes
  spec:
    tailscale:
      acceptRoutes: true
  ```
- Create an `ExternalName` `Service` that references the `ProxyClass` and the target IP behind the subnet router:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      tailscale.com/tailnet-ip: "<IP-behind-the-subnet-router>"
      tailscale.com/proxy-class: accept-routes
    name: ts-egress
  spec:
    externalName: unused
    type: ExternalName
  ```
Cluster workloads should now be able to access the target behind the subnet router using the `ExternalName` `Service`'s cluster DNS name. For example:

```
$ curl ts-egress.default.svc.cluster.local:8080
...
```
Validation
Wait for the Tailscale Kubernetes Operator to update the `spec.externalName` of the Kubernetes `Service` that you created. The external name should get set to the Kubernetes DNS name of another Kubernetes `Service` that fronts the egress proxy in the `tailscale` namespace. The proxy is responsible for routing traffic to the exposed Tailscale node over the tailnet.
After the external name is updated, workloads in your cluster should be able to access the exposed tailnet service by referring to the Kubernetes DNS name of the `Service` that you created.
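One way to observe this is to read the `Service`'s external name back after creating it; the `Service` name here matches the earlier example:

```shell
# Initially prints "placeholder"; once the operator has reconciled the
# Service, it prints the DNS name of the fronting Service in the
# "tailscale" namespace.
$ kubectl get svc rds-staging -o jsonpath='{.spec.externalName}'
```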
Customization
Learn how to customize the operator and resources it manages.
Troubleshooting
Learn how to troubleshoot the operator and resources it manages.
Limitations
- Egress to external services supports using an IPv4 or IPv6 address for a single route in the `tailscale.com/tailnet-ip` annotation, but not IP ranges.
- Egress to external services currently only supports clusters where privileged pods are permitted (for example, GKE Autopilot is not supported).
