
Expose a tailnet service to your Kubernetes cluster (cluster egress)

You can make services that run outside your cluster but are available in your tailnet accessible to your Kubernetes cluster workloads by making the associated tailnet node reachable from inside the cluster.

You can configure the operator to set up an in-cluster egress proxy for a tailnet device by creating a Kubernetes Service that identifies the device either by its Tailscale IP address or by its MagicDNS name. In both cases, your cluster workloads refer to the tailnet service by the Kubernetes Service name.

Prerequisites

Expose a tailnet node to your cluster using its Tailscale IP address

  1. Create a Kubernetes Service of type ExternalName annotated with the Tailscale IP address of the tailnet node you want to make available:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        tailscale.com/tailnet-ip: <Tailscale IP address>
      name: rds-staging   # service name
    spec:
      externalName: placeholder   # any value - will be overwritten by operator
      type: ExternalName
    

The value of the tailscale.com/tailnet-ip annotation can be a tailnet IPv4 or IPv6 address, either of a tailnet device or of a host covered by a route advertised by a tailnet subnet router. IP address ranges are not supported.

Expose a tailnet node to your cluster using its Tailscale MagicDNS name

  1. Ensure that MagicDNS is enabled for your tailnet.

  2. Create a Kubernetes Service of type ExternalName annotated with the MagicDNS name of the tailnet device that you wish to make available:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        tailscale.com/tailnet-fqdn: <Tailscale MagicDNS name>
      name: rds-staging   # service name
    spec:
      externalName: placeholder   # any value - will be overwritten by operator
      type: ExternalName
    

Note that the value of the tailscale.com/tailnet-fqdn annotation must be the full MagicDNS name of the tailnet service, not just its hostname. A trailing dot is optional.
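
For example, if the tailnet machine's hostname is rds-staging and your tailnet name is tailxxxx.ts.net (both values are illustrative), the annotation would be:

    tailscale.com/tailnet-fqdn: rds-staging.tailxxxx.ts.net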

Expose a tailnet HTTPS service to your cluster workloads

Cluster workloads that need to access tailnet services exposed over Tailscale HTTPS typically must refer to those services by their MagicDNS names for the TLS handshake to succeed.

Similarly, cluster workloads sometimes need to access a service running in the same cluster that is exposed over a Tailscale Ingress, again using its MagicDNS name. Both use cases require that the workloads can resolve the MagicDNS names of the services they access.

With Kubernetes operator v1.66 and later, you can configure the cluster to resolve the MagicDNS names of tailnet services exposed through cluster egress proxies, as well as the MagicDNS names of cluster workloads exposed to the tailnet through Tailscale Ingress.

To do this, configure the operator to deploy a nameserver that serves these ts.net DNS names, then add the nameserver as a stub nameserver to your cluster DNS plugin.

  1. Create a DNSConfig custom resource:

    apiVersion: tailscale.com/v1alpha1
    kind: DNSConfig
    metadata:
      name: ts-dns
    spec:
      nameserver:
        image:
          repo: tailscale/k8s-nameserver
          tag: unstable
    

    The DNSConfig custom resource tells the Tailscale Kubernetes operator to deploy a nameserver in the operator's namespace and dynamically populate the nameserver with the following DNS records:

    • A records mapping the MagicDNS names of the exposed tailnet services to the cluster egress proxy Pods' IP addresses

    • A records mapping the MagicDNS names of the Ingresses to the Tailscale Ingress proxy Pods' IP addresses

  2. Find the IP address of the nameserver:

    $ kubectl get dnsconfig ts-dns
    NAME     NAMESERVERIP
    ts-dns   10.100.124.196
    
  3. (If your cluster uses CoreDNS) update the Corefile.

    Update the Corefile in the coredns ConfigMap in the kube-system namespace with a stanza that forwards ts.net queries to the nameserver:

    Corefile: |
          .:53 {
            ...
          }
          ts.net {
            errors
            cache 30
            forward . 10.100.124.196
          }
    
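    You can open the ConfigMap for editing with kubectl and add the ts.net block alongside the existing .:53 server block:

    $ kubectl edit configmap coredns --namespace kube-system
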
  4. (If your cluster uses kube-dns) update the kube-dns config.

    Update the kube-dns ConfigMap in the kube-system namespace to add a stub domain for ts.net DNS names:

    data:
        stubDomains: |
          {
            "ts.net": [
              "10.100.124.196"
            ]
          }
    
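    As a sketch, you could apply the same change with kubectl patch; the nameserver IP here is the example value from step 2, and note that this overwrites any existing stubDomains value:

    $ kubectl patch configmap kube-dns --namespace kube-system --type merge \
        -p '{"data":{"stubDomains":"{\"ts.net\": [\"10.100.124.196\"]}"}}'
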
  5. Access HTTPS tailnet services from the Kubernetes cluster.

    Create an egress Service:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        tailscale.com/tailnet-fqdn: "<full MagicDNS name of the tailnet node>"
      name: ts-egress
    spec:
      externalName: placeholder   # any value - will be overwritten by operator
      type: ExternalName
    

    The operator automatically populates the nameserver's configuration with an A record mapping the MagicDNS name of the exposed tailnet service to the proxy Pod's IP address. This record allows your cluster workloads to access the tailnet service using its MagicDNS name.
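
    To verify resolution, you can run a throwaway Pod and look up the MagicDNS name (a quick check; the Pod name and image are arbitrary):

    $ kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- \
        nslookup <full MagicDNS name of the tailnet node>

    The lookup should return the egress proxy Pod's IP address, and cluster workloads should then be able to make HTTPS requests to the service using that name.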

  6. Access a Tailscale Ingress using its MagicDNS name from the cluster:

    Create a Tailscale Ingress resource with a tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nginx
      annotations:
        tailscale.com/experimental-forward-cluster-traffic-via-ingress: "true"
    spec:
      defaultBackend:
        service:
          name: nginx
          port:
            name: https
      ingressClassName: tailscale
    

    The operator automatically populates the nameserver's configuration with an A record mapping the MagicDNS name of the Tailscale Ingress to the proxy Pod's IP address. For Ingress resources annotated with tailscale.com/experimental-forward-cluster-traffic-via-ingress, the operator also ensures that the proxy created for that Ingress listens on its Pod IP address, so it can be accessed by cluster workloads.
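
    As a quick check (illustrative; the Pod name and image are arbitrary), you can request the Ingress's MagicDNS name from inside the cluster:

    $ kubectl run curl-check --rm -it --restart=Never --image=curlimages/curl --command -- \
        curl -sI https://<full MagicDNS name of the Ingress>

    A successful TLS handshake and HTTP response indicate that the name resolved to the Ingress proxy Pod and that the proxy accepted the in-cluster connection.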

We are actively seeking feedback on in-cluster MagicDNS name resolution, especially with regard to further automating the workflow. Please reach out if you have feedback or suggestions.

This feature currently does not work in clusters with an IPv6 Pod CIDR.

Configure an egress Service using ProxyGroup

By default, the operator creates a StatefulSet with a single proxy Pod for each egress Service.

If you want to expose an egress Service redundantly on multiple Pods, or coalesce several egress Services onto a smaller number of proxy Pods, you can instead use a pre-created ProxyGroup.
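
A minimal egress ProxyGroup might look like the following sketch; the name and replica count are illustrative, and the ProxyGroup instructions referenced below cover the full set of options:

    apiVersion: tailscale.com/v1alpha1
    kind: ProxyGroup
    metadata:
      name: ts-proxies
    spec:
      type: egress
      replicas: 3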

This functionality is available in Tailscale 1.76 and later.

To expose an egress Service on a ProxyGroup:

  1. Follow the instructions to pre-create an egress ProxyGroup.

  2. Create an ExternalName Service that references the ProxyGroup and the tailnet target:

        apiVersion: v1
        kind: Service
        metadata:
          annotations:
            tailscale.com/tailnet-fqdn: "<full MagicDNS name of the tailnet node>"
            tailscale.com/proxy-group: "<ProxyGroup name>"
          name: ts-egress
          namespace: default
        spec:
          externalName: placeholder # any value - will be overwritten by the operator
          type: ExternalName
          ports:
          - port: 8080
            protocol: TCP
            name: web # any value
          - port: 3002
            protocol: TCP
            name: debug # any value
    

    You can specify the tailnet target either by a MagicDNS name or tailnet IP address.

The ExternalName Service must explicitly list, in its spec.ports section, every port that you want to access on the tailnet service. Cluster traffic received on the ports (and protocols) specified on the ExternalName Service is proxied to the same ports on the tailnet target.

Note that setting service.spec.ports fields other than port, protocol and name will have no effect.

  3. (Optional) Wait for the Service to become ready:

    $ kubectl wait svc ts-egress --for=condition=TailscaleEgressSvcReady=true
    
  4. Cluster workloads can now access the tailnet target by the ExternalName Service's DNS name, for example:

    $ curl ts-egress.default.svc.cluster.local:8080
    ...
    

Cluster traffic for the ts-egress Service is load balanced round-robin across the ProxyGroup replicas. Any number of egress Services (limited only by resource consumption) can reference a single ProxyGroup. You can also create multiple egress ProxyGroups.

Validation

Wait for the Tailscale Kubernetes operator to update spec.externalName of the Kubernetes Service that you created. The external name should get set to the Kubernetes DNS name of another Kubernetes Service that fronts the egress proxy in the tailscale namespace. The proxy is responsible for routing traffic to the exposed tailnet node over the tailnet.

After the external name is updated, workloads in your cluster can access the exposed tailnet service by referring to it via the Kubernetes DNS name of the Service that you created.
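
For example, you can check the current value of spec.externalName with a standard kubectl query (assuming the Service from the first section, named rds-staging, in the default namespace):

    $ kubectl get svc rds-staging -o jsonpath='{.spec.externalName}'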

Customization

Learn how to customize the operator and resources it manages.

Troubleshooting

Learn how to troubleshoot the operator and resources it manages.

Limitations

  • Egress to external services supports a single IPv4 or IPv6 address (including an address within an advertised subnet route) in the tailscale.com/tailnet-ip annotation; IP address ranges are not supported.
  • Egress to external services currently only supports clusters where privileged pods are permitted (that is, GKE Autopilot is not supported).

Last updated Dec 20, 2024