
Kubernetes operator

The Tailscale Kubernetes operator lets you expose Kubernetes cluster workloads to your tailnet (cluster ingress), expose tailnet services to your cluster workloads (cluster egress), and access the Kubernetes control plane (kube-apiserver) over Tailscale.

The Tailscale Kubernetes operator is available for all plans.

The Kubernetes operator is currently in beta. To try it, follow the steps below to enable it for your network using Tailscale v1.50 or later.

Setting up the Kubernetes operator

Prerequisites

The Tailscale Kubernetes operator must be configured with OAuth client credentials. The operator uses these credentials to manage devices via the Tailscale API and to create auth keys for itself and the devices it manages.

  1. In your tailnet policy file, create the ACL tags tag:k8s-operator and tag:k8s, and make tag:k8s-operator an owner of tag:k8s. If you want your Services to be exposed with tags other than the default tag:k8s, create those as well and make tag:k8s-operator an owner.

    "tagOwners": {
       "tag:k8s-operator": [],
       "tag:k8s": ["tag:k8s-operator"],
    }
    
  2. Create an OAuth client on the OAuth clients page of the admin console. Create the client with the Devices write scope and the tag tag:k8s-operator.

Installation

A default operator installation creates a tailscale namespace, an operator Deployment in the tailscale namespace, RBAC for the operator, and ProxyClass and Connector Custom Resource Definitions.

Helm

Tailscale Kubernetes Operator's Helm charts are available from two chart repositories.

The https://pkgs.tailscale.com/helmcharts repository contains well-tested charts for stable Tailscale versions.

Helm charts and container images for a new stable Tailscale version are released a few days after the official release. This is done to avoid releasing image versions with potential bugs in the core Linux client or core libraries.

The https://pkgs.tailscale.com/unstable/helmcharts repository contains charts with the very latest changes, published in between official releases.

The charts in both repositories are different versions of the same chart and you can upgrade from one to the other.

To install the latest Tailscale Kubernetes operator from https://pkgs.tailscale.com/helmcharts in the tailscale namespace:

  1. Add https://pkgs.tailscale.com/helmcharts to your local Helm repositories:

    helm repo add tailscale https://pkgs.tailscale.com/helmcharts
    
  2. Update your local Helm cache:

    helm repo update
    
  3. Install the operator, passing the OAuth client credentials that you created earlier:

    helm upgrade \
      --install \
      tailscale-operator \
      tailscale/tailscale-operator \
      --namespace=tailscale \
      --create-namespace \
      --set-string oauth.clientId=<OAuth client ID> \
      --set-string oauth.clientSecret=<OAuth client secret> \
      --wait
    

Static manifests with kubectl

  1. Download the Tailscale Kubernetes operator manifest file from the tailscale/tailscale repo.

  2. Edit your version of the manifest file:

    1. Find # SET CLIENT ID HERE and replace it with your OAuth client ID.
    2. Find # SET CLIENT SECRET HERE and replace it with your OAuth client secret. The OAuth client secret is case-sensitive.

    For both the client ID and secret, quote the value to avoid any potential YAML misinterpretation of unquoted strings. For example, use:

    client_id: "k123456CNTRL"
    client_secret: "tskey-client-k123456CNTRL-abcdef"
    

    instead of:

    client_id: k123456CNTRL
    client_secret: tskey-client-k123456CNTRL-abcdef
    
  3. Apply the edited file to your Kubernetes cluster:

    kubectl apply -f manifest.yaml
    

Validation

Verify that the Tailscale operator has joined your tailnet. Open the Machines page of the admin console and look for a node named tailscale-operator, tagged with the tag:k8s-operator tag. It may take a minute or two for the operator to join your tailnet, due to the time required to download and start the container image in Kubernetes.

Exposing a Kubernetes cluster workload to your tailnet (cluster ingress)

You can use the Tailscale Kubernetes operator to expose a Kubernetes cluster workload to your tailnet in three ways:

  • Create a LoadBalancer type Service with the tailscale loadBalancerClass that fronts your workload
  • Annotate an existing Service that fronts your workload
  • Create an Ingress resource fronting a Service or Services for the workloads you wish to expose

Exposing a cluster workload via a tailscale LoadBalancer Service

Create a new Kubernetes Service of type LoadBalancer:

  1. Set spec.type to LoadBalancer.
  2. Set spec.loadBalancerClass to tailscale.
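Putting the two settings together, a minimal Service of this shape might look like the following sketch (the name my-app, the selector, and port 80 are placeholders for your workload):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                  # placeholder name
spec:
  type: LoadBalancer            # step 1
  loadBalancerClass: tailscale  # step 2
  selector:
    app: my-app                 # placeholder selector for your workload
  ports:
    - port: 80
      targetPort: 80
```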

Once provisioning is complete, the Service status will show the fully-qualified domain name of the Service in your tailnet. You can view the Service status by running kubectl get service <service name>.

You should also see a new node with that name appear in the Machines page of the admin console.

Exposing a cluster workload by annotating an existing Service

If the Service you want to expose already exists, you can expose it to Tailscale using object annotations.

Edit the Service and under metadata.annotations, add the annotation tailscale.com/expose with the value "true". Note that "true" is quoted because annotation values are strings, and an unquoted true will be incorrectly interpreted as a boolean.
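For example, the edited Service's metadata might look like this (the Service name is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                     # your existing Service
  annotations:
    tailscale.com/expose: "true"   # quoted: annotation values are strings
```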

In this mode, Kubernetes doesn’t tell you the Tailscale machine name. You can look up the node in the Machines page of the admin console to learn its machine name. By default, the machine name of an exposed Service is <k8s-namespace>-<k8s-servicename>, but it can be changed.

Exposing a Service using Ingress

You can use the Tailscale Kubernetes operator to expose an Ingress resource in your Kubernetes cluster to your tailnet. When configured using an Ingress resource, you also get the ability to identify callers using HTTP headers injected by the Ingress proxy.

Ingress resources only support TLS, and are only exposed over HTTPS. You must enable HTTPS on your tailnet.

Edit the Ingress resource you want to expose to use the Ingress class tailscale:

  1. Set spec.ingressClassName to tailscale.
  2. Set tls.hosts to the desired host name of the Tailscale node. Only the first label is used. See custom machine names for more details.

For example, to expose an Ingress resource nginx to your tailnet:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  defaultBackend:
    service:
      name: nginx
      port:
        number: 80
  ingressClassName: tailscale
  tls:
    - hosts:
        - nginx

The backend is HTTP by default. To use HTTPS on the backend, either set the port name to https or the port number to 443:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  defaultBackend:
    service:
      name: nginx
      port:
        name: https
  ingressClassName: tailscale
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
    - name: https
      port: 443
      targetPort: 443
  type: ClusterIP

A single Ingress resource can be used to front multiple backend Services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  ingressClassName: tailscale
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ui-svc
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 80

Currently the only supported Ingress path type is Prefix. Requests for paths with other path types will be routed according to Prefix rules.

Exposing a Service to the public internet using Ingress and Tailscale Funnel

You can also use the Tailscale Kubernetes operator to expose an Ingress resource in your Kubernetes cluster to the public internet using Tailscale Funnel. To do so:

  1. Add a tailscale.com/funnel: "true" annotation:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: funnel
      annotations:
        tailscale.com/funnel: "true"
    spec:
      defaultBackend:
        service:
          name: funnel
          port:
            number: 80
      ingressClassName: tailscale
      tls:
        - hosts:
            - funnel
    
  2. Update the ACLs for your tailnet to allow Kubernetes Operator proxy services to use Tailscale Funnel.

Add a node attribute to allow nodes created by the Operator to use Funnel:

"nodeAttrs": [
  {
    "target": ["tag:k8s"], // tag that Tailscale Operator uses to tag proxies; defaults to 'tag:k8s'
    "attr":   ["funnel"],
  },
  ...
]

Note that even if your policy has the funnel attribute assigned to autogroup:member (which is the default), you still need to add it to the tag used by proxies, since autogroup:member does not include tagged nodes.

Removing a Service

Any of the following actions remove a Kubernetes Service you exposed from your tailnet:

  • Delete the Service entirely
  • If you are using the tailscale.com/expose annotation, remove the annotation
  • If you are using an Ingress resource, delete it or change or unset spec.ingressClassName

Deleting a Service's Tailscale node in the admin console does not clean up the Kubernetes state associated with that Service.

Accessing the Kubernetes control plane using an API server proxy

You can use the Tailscale Kubernetes operator to expose and access the Kubernetes control plane (kube-apiserver) over Tailscale.

The Tailscale API server proxy can run in one of two modes:

  • In auth mode, requests from the tailnet that are proxied to the Kubernetes API server are additionally impersonated using the sender's tailnet identity. Kubernetes RBAC can then be used to configure granular API server permissions for individual tailnet identities or groups.

  • In noauth mode, requests from the tailnet are proxied to the Kubernetes API server but are not authenticated. This mechanism can be used in combination with another authentication and authorization mechanism, such as an authenticating proxy provided by an external IdP or a cloud provider.

Prerequisites

The API server proxy runs as part of the same process as the Tailscale Kubernetes operator and is reached via the same tailnet node. It is exposed on port 443. Ensure that your ACLs grant all devices and users who need to access the API server via the proxy access to the Tailscale Kubernetes operator. For example, to allow all tailnet devices tagged with tag:k8s-readers to access the proxy, create an ACL rule like this:

{
	"action": "accept",
	"src": ["tag:k8s-readers"],
	"dst": ["tag:k8s-operator:443"]
}

Being able to access the proxy over the tailnet does not grant tailnet users any default permissions to access Kubernetes API server resources. Tailnet users can only access API server resources that they have been explicitly authorized to access via Kubernetes RBAC.

To use a Tailscale Kubernetes API server proxy, you need to enable HTTPS for your tailnet.

Configuring the API server proxy in auth mode

Installation

Helm

If you are installing the Tailscale Kubernetes operator with Helm, you can install the proxy in auth mode by passing the --set-string apiServerProxyConfig.mode="true" flag to the install command:

helm upgrade \
  --install \
  tailscale-operator \
  tailscale/tailscale-operator \
  --namespace=tailscale \
  --create-namespace \
  --set-string oauth.clientId=<OAuth client ID> \
  --set-string oauth.clientSecret=<OAuth client secret> \
  --set-string apiServerProxyConfig.mode="true" \
  --wait

Static manifests with kubectl

If you are installing Tailscale Kubernetes operator using static manifests:

  1. Set the APISERVER_PROXY env var in the Tailscale Kubernetes operator deployment manifest to "true":

    name: APISERVER_PROXY
    value: "true"
    
  2. Download and apply RBAC for the API server proxy from the tailscale/tailscale repo.

Configuring authentication and authorization

API server proxy in auth mode impersonates requests from tailnet to the Kubernetes API server. You can then use Kubernetes RBAC to control what API server resources tailnet identities are allowed to access.

The impersonation is applied as follows:

  • If the user who sends a request to the Kubernetes API server via the proxy is in a tailnet user group for which API server proxy ACL grants have been configured for that proxy instance, the request is impersonated as coming from the Kubernetes group specified in the grant. Additionally, it is impersonated as coming from a Kubernetes user whose name matches the tailnet user's name.
  • If ACL grants are not used and the node from which the request is sent is tagged, the request is impersonated as coming from a Kubernetes group whose name matches the tag.
  • If ACL grants are not used and the node from which the request is sent is not tagged, the request is impersonated as coming from a Kubernetes user whose name matches the sender's tailnet username.

Impersonating Kubernetes groups with ACL grants

You can use ACL grants to configure what Kubernetes API server resources Tailscale user groups are allowed to access.

For example, to give tailnet user group group:prod cluster admin access and give tailnet user group group:k8s-readers read permissions for most Kubernetes resources:

  1. Update your ACL grants:

      {
        "grants": [{
          "src": ["group:prod"],
          "dst": ["tag:k8s-operator"],
          "app": {
            "tailscale.com/cap/kubernetes": [{
              "impersonate": {
                "groups": ["system:masters"],
              },
            }],
          },
        }, {
          "src": ["group:k8s-readers"],
          "dst": ["tag:k8s-operator"],
          "app": {
            "tailscale.com/cap/kubernetes": [{
              "impersonate": {
                "groups": ["tailnet-readers"],
              },
            }],
          },
        }]
      }
    
    • grants.src is the Tailscale user group to which the grant applies
    • grants.dst must be the tag of the Tailscale Kubernetes operator
    • system:masters is a Kubernetes group that has default RBAC bindings in all clusters. Kubernetes creates a default ClusterRole cluster-admin that allows all actions against all Kubernetes API server resources and a ClusterRoleBinding cluster-admin that binds the cluster-admin ClusterRole to system:masters group.
    • tailnet-readers is a Kubernetes group that you will bind the default Kubernetes view ClusterRole to in a following step. (Note that Kubernetes group names do not refer to existing identities in Kubernetes; they do not need to be pre-created before you use them in (Cluster)RoleBindings.)
  2. Bind tailnet-readers to the view ClusterRole:

    kubectl create clusterrolebinding tailnet-readers-view --group=tailnet-readers --clusterrole=view
    
    
Impersonating Kubernetes groups with tagged tailnet nodes

If the request is sent from a tagged device, it is impersonated as coming from a Kubernetes group whose name matches the tag. For example, a request from a tailnet node tagged with tag:k8s-readers is authenticated by the API server as coming from a Kubernetes group named tag:k8s-readers.

You can create Kubernetes (Cluster)Roles and (Cluster)RoleBindings to configure what permissions the group should have or bind an existing (Cluster)Role to the group.

For example, to grant nodes tagged with tag:k8s-readers read-only access to most Kubernetes resources, you can bind the Kubernetes group tag:k8s-readers to the default Kubernetes view ClusterRole:

kubectl create clusterrolebinding tailnet-readers --group="tag:k8s-readers" --clusterrole=view

Impersonating Kubernetes users

If the request is not sent from a tagged device, it is impersonated as coming from a Kubernetes user whose name matches the sender's tailnet username.

You can then create Kubernetes (Cluster)Roles and (Cluster)RoleBindings to configure what permissions the user should have or bind an existing (Cluster)Role to the user.

For example, to allow tailnet user alice@tailscale.com read-only access to most Kubernetes resources, you can bind Kubernetes user alice@tailscale.com to the default Kubernetes view ClusterRole like so:

kubectl create clusterrolebinding alice-view --user="alice@tailscale.com" --clusterrole=view

Configuring kubeconfig

You can run the following CLI command to configure your kubeconfig for authentication with kubectl via the Tailscale Kubernetes API server proxy: tailscale configure kubeconfig <operator-hostname>. By default, the hostname for the operator node is tailscale-operator.

Configuring API server proxy in noauth mode

The noauth mode of the API server proxy is useful if you want to use Tailscale to provide access to the Kubernetes API server over tailnet, but want to keep using your existing authentication and authorization mechanism.

Installation

Helm

If you are installing the Tailscale Kubernetes operator with Helm, you can install the proxy in noauth mode by passing the --set-string apiServerProxyConfig.mode="noauth" flag to the install command:

helm upgrade \
  --install \
  tailscale-operator \
  tailscale/tailscale-operator \
  --namespace=tailscale \
  --create-namespace \
  --set-string oauth.clientId=<OAuth client ID> \
  --set-string oauth.clientSecret=<OAuth client secret> \
  --set-string apiServerProxyConfig.mode="noauth" \
  --wait

Static manifests with kubectl

If you are installing Tailscale Kubernetes operator using static manifests:

  1. Set the APISERVER_PROXY env var in the Tailscale Kubernetes operator deployment manifest to "noauth":

    name: APISERVER_PROXY
    value: "noauth"
    

Authentication and authorization

When run in noauth mode, the API server proxy exposes the Kubernetes API server to the tailnet but does not provide authentication. Use the proxy endpoint, <hostname of the Tailscale operator>:443, in place of the Kubernetes API server address, and set up authentication and authorization on top of it using any other mechanism, such as an authenticating proxy provided by your managed Kubernetes provider or IdP.
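As a sketch, a kubeconfig cluster entry pointing at the noauth proxy could look like the following (the operator hostname and tailnet name are placeholders; your external authentication mechanism still needs to be configured separately):

```yaml
clusters:
  - name: k8s-via-tailscale
    cluster:
      # <operator-hostname> defaults to tailscale-operator
      server: https://<operator-hostname>.<tailnet-name>.ts.net:443
```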

Exposing a tailnet service to your Kubernetes cluster (cluster egress)

You can make services that are external to your cluster but available on your tailnet accessible to your Kubernetes cluster workloads by making the associated tailnet node reachable from the cluster.

You can configure the operator to set up an in-cluster egress proxy for a tailnet node by creating a Kubernetes Service that specifies a tailnet node either by its Tailscale IP address or its MagicDNS name. In both cases your cluster workloads will refer to the tailnet service by the Kubernetes Service name.

Expose a tailnet node to your cluster using its Tailscale IP address

  1. Create a Kubernetes Service of type ExternalName annotated with the Tailscale IP address of the tailnet node you want to make available:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        tailscale.com/tailnet-ip: <Tailscale IP address>
      name: rds-staging   # service name
    spec:
      externalName: placeholder   # any value - will be overwritten by operator
      type: ExternalName
    

The value of the tailscale.com/tailnet-ip annotation can be either a tailnet IPv4 or IPv6 address, of either a Tailscale node or a route in a Tailscale subnet. IP ranges are not supported.

Expose a tailnet node to your cluster using its Tailscale MagicDNS name

  1. Ensure that MagicDNS is enabled for your tailnet.

  2. Create a Kubernetes Service of type ExternalName annotated with the MagicDNS name of the tailnet node that you wish to make available:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        tailscale.com/tailnet-fqdn: <Tailscale MagicDNS name>
      name: rds-staging   # service name
    spec:
      externalName: placeholder   # any value - will be overwritten by operator
      type: ExternalName
    

Note that the value of the tailscale.com/tailnet-fqdn annotation must be the full MagicDNS name of the tailnet service (not just hostname). The final dot is optional.

Expose a tailnet HTTPS service to your cluster workloads

Cluster workloads that need access to tailnet services that use Tailscale HTTPS likely need to refer to those services by their MagicDNS names for the TLS handshake to succeed. In some cases, cluster workloads must access a service running in the same cluster that is exposed over Tailscale Ingress via its MagicDNS name. Both of these use cases require that the workloads be able to resolve the MagicDNS names of the services they need to access.

With Kubernetes operator v1.66 and later, you can configure the cluster to resolve MagicDNS names of tailnet services exposed using cluster egress proxies, as well as MagicDNS names of cluster workloads exposed to the tailnet using Tailscale Ingress. You can configure the operator to deploy a nameserver for the ts.net DNS names of these services, then add that nameserver as a stub nameserver to your cluster DNS plugin.

  1. Create a DNSConfig custom resource:

    apiVersion: tailscale.com/v1alpha1
    kind: DNSConfig
    metadata:
      name: ts-dns
    spec:
      nameserver:
        image:
          repo: tailscale/k8s-nameserver
          tag: unstable
    

    The DNSConfig custom resource tells the Tailscale Kubernetes operator to deploy a nameserver in the operator's namespace and dynamically populate the nameserver with the following DNS records:

    • A records mapping cluster egress proxy Pods' IP addresses to the MagicDNS names of the exposed tailnet services

    • A records mapping Tailscale Ingress proxy Pods' IP addresses to the MagicDNS names of the Ingresses

  2. Find the IP address of the nameserver:

    $ kubectl get dnsconfig ts-dns
    NAME     NAMESERVERIP
    ts-dns   10.100.124.196
    
  3. (If your cluster uses CoreDNS) update the Corefile.

    Update Corefile in coredns ConfigMap in kube-system namespace with a stanza for ts.net sub nameserver:

    Corefile: |
          .:53 {
            ...
          }
          ts.net {
            errors
            cache 30
            forward . 10.100.124.196
          }
    
  4. (If your cluster uses kube-dns) update the kube-dns config.

    Update kube-dns ConfigMap in kube-system namespace to add a stub nameserver for ts.net DNS names:

    data:
      stubDomains: |
        {
          "ts.net": [
            "10.100.124.196"
          ]
        }
    
  5. Access HTTPS tailnet services from the Kubernetes cluster.

    Create an egress Service:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        tailscale.com/tailnet-fqdn: "<full MagicDNS name of the tailnet node>"
      name: ts-egress
    spec:
      externalName: unused
      type: ExternalName
    

    The operator automatically populates the nameserver's configuration with an A record mapping the MagicDNS name of the exposed tailnet service to the proxy Pod's IP address. This record allows your cluster workloads to access the tailnet service using its MagicDNS name.

  6. Access a Tailscale Ingress using its MagicDNS name from the cluster:

    Create a Tailscale Ingress resource with a tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nginx
      annotations:
        tailscale.com/experimental-forward-cluster-traffic-via-ingress: "true"
    spec:
      defaultBackend:
        service:
          name: nginx
          port:
            name: https
      ingressClassName: tailscale
    

    The operator automatically populates the nameserver's configuration with an A record mapping the MagicDNS name of the Tailscale Ingress to the proxy Pod's IP address. For Ingress resources annotated with tailscale.com/experimental-forward-cluster-traffic-via-ingress, the operator also ensures that the proxy created for that Ingress listens on its Pod IP address, so it can be accessed by cluster workloads.

We are actively seeking feedback for MagicDNS name resolution in cluster, especially with regards to further automating the workflow — please do reach out if you have feedback or suggestions.

This feature currently does not work in clusters with IPv6 Pod CIDR.

Validation

Wait for the Tailscale Kubernetes operator to update spec.externalName of the Kubernetes Service that you created. The Service's external name should be set to the Kubernetes DNS name of another Kubernetes Service that fronts the egress proxy in the tailscale namespace. The proxy is responsible for routing traffic to the exposed Tailscale node over the tailnet.

Once the Service external name gets updated, workloads in your cluster should be able to access the exposed tailnet service by referring to it via the Kubernetes DNS name of the Service that you created.

Exposing a Service in one cluster to another cluster (cross-cluster connectivity)

You can use the Tailscale Kubernetes operator to expose a Service in one cluster to another cluster. This is done by exposing the Service in destination cluster A to the tailnet (cluster ingress), then connecting from a source Service in cluster B to the tailnet (cluster egress) in order to access the Service running in cluster A.

This must be configured for each ingress and egress pair of Services. To set this up for access via ingress to a Service in cluster A and routing via egress from a Service in cluster B:

  1. Set up Ingress in cluster A for the Service you wish to access.
  2. In cluster B, expose the external Service (running in cluster A) using its Tailscale IP address, via an annotation on the egress Service.
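For example, the egress Service in cluster B might look like this sketch (the Service name is a placeholder, and the annotation value is the Tailscale IP address of the ingress proxy device created for cluster A):

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    tailscale.com/tailnet-ip: <Tailscale IP of the cluster A ingress device>
  name: cluster-a-svc         # placeholder name used by cluster B workloads
spec:
  externalName: placeholder   # any value - will be overwritten by operator
  type: ExternalName
```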

Expose a cloud service to your tailnet

You can use the Tailscale Kubernetes operator to expose any cloud service that is on the cluster network, such as an RDS database, to your tailnet. If you have a cloud service that is not publicly accessible but is accessible to a Kubernetes cluster on that cloud, you can make it available to your tailnet using an operator deployed in the cluster.

Expose a cloud service using a Kubernetes ExternalName Service

If the cloud service that you wish to expose has a DNS name that can be resolved from within the cluster, you can expose it using an ExternalName Service.

For example, to expose an RDS database and connect to it from a tailnet client:

  1. Deploy Tailscale Kubernetes operator to a Kubernetes cluster that is on the same network as the RDS instance.

    Follow the installation instructions to deploy the operator.

  2. Create an ExternalName Service with tailscale.com/expose: "true" annotation and spec.externalName set to the DNS name of the RDS instance:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-rds
      annotations:
        tailscale.com/expose: "true"
    spec:
      type: ExternalName
      externalName: my-rds.eu-central-1.rds.amazonaws.com
    
  3. Retrieve the Tailscale MagicDNS name of the cluster proxy that the operator creates for the Service, using the view-secret kubectl plugin:

    $ rds_magic_dns_name=$(kubectl view-secret \
      $(kubectl get secret -n tailscale   \
      --selector tailscale.com/parent-resource=my-rds,tailscale.com/parent-resource-ns=default,tailscale.com/parent-resource-type=svc \
      -ojsonpath='{.items[0].metadata.name}') \
      -n tailscale \
      device_fqdn)
    
  4. You can now connect to the RDS instance from a tailnet client using the MagicDNS name of the proxy as the database hostname.

    For example, for a Postgres database:

    $ psql -h ${rds_magic_dns_name} -U postgres
    

The cluster proxies created for ExternalName Services forward TCP traffic, so you should be able to use them with different backend protocols, such as PostgreSQL.

The Tailscale Kubernetes operator periodically (currently every 10 minutes) attempts to resolve the IP addresses of the backend cloud service and reconfigures the proxy rules, if needed.

For proxies deployed with the firewall in nftables mode, traffic is only proxied to the first IP address that the DNS name resolves to.

ExternalName Services support the same tailscale.com labels and annotations as other Services.

We are actively seeking feedback about this feature — reach out if you would like it to support additional workflows.

Expose a cloud service or services using Connector

If the cloud service that you intend to expose does not have a DNS name that can be resolved from within a cluster, or you want to expose a whole CIDR range, you can do so using Connector:

apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: my-rds-instances
spec:
  subnetRouter:
    advertiseRoutes:
      - "<rds-cidr-range>"

The above Connector instance configures the operator to deploy an in-cluster subnet router that exposes the configured CIDR range to your tailnet.

Shared cluster egress and cluster ingress proxy configuration

Configuration options in this section apply to both cluster egress and cluster ingress (configured via a Service or Ingress) proxies.

The API server proxy currently runs as part of the same process as the Kubernetes operator. You can use the available operator configuration options to configure the API server proxy parameters.

Customizing ACL tags

Currently cluster ingress and cluster egress proxies join your tailnet as separate Tailscale devices tagged by one or more ACL tags.

The Tailscale operator must be a tag owner of all the proxy tags: if you want to tag a proxy device with tag:foo,tag:bar, the tagOwners section of the tailnet policy file must list tag:k8s-operator as one of the owners of both tag:foo and tag:bar.

Currently, ACL tags cannot be modified once a proxy has been created.

Default tags

By default, a proxy device joins your tailnet tagged with the ACL tag tag:k8s. You can modify the default tag or tags when installing the operator.

If you install the operator with Helm you can use .proxyConfig.defaultTags in the Helm values file.

If you install the operator with static manifests you can set the PROXY_TAGS env var in the deployment manifest.

Multiple tags must be passed as a comma-separated string, for example, tag:foo,tag:bar.
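For example, a Helm values file setting two default tags might look like this (the tag names are placeholders; both tags must be owned by tag:k8s-operator in your tailnet policy file):

```yaml
proxyConfig:
  defaultTags: "tag:foo,tag:bar"
```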

Tags for individual proxies

To override the default tags for an individual proxy device, set the tailscale.com/tags annotation, on the Service or Ingress resource that tells the operator to create the proxy, to a comma-separated list of the desired tags.

For example, setting tailscale.com/tags = "tag:foo,tag:bar" will result in the proxy device having the tags tag:foo and tag:bar.
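In Service form, that might look like the following sketch (the Service name and tags are placeholders; the expose annotation is one way of telling the operator to create a proxy for the Service):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    tailscale.com/expose: "true"
    tailscale.com/tags: "tag:foo,tag:bar"   # overrides the default tag:k8s
```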

Using custom machine names

Cluster ingress and egress proxies support overriding the hostname they announce when registering with Tailscale. For Services, a custom hostname can be set via the tailscale.com/hostname annotation. For Ingresses, a custom hostname can be set via the .spec.tls.hosts field (only the first value is used).

Note that this only sets a custom OS hostname reported by the node. The actual machine name will differ if there is already a device on the network with the same name.

Machine names are subject to the constraints of DNS: they can be up to 63 characters long, must start and end with a letter, and consist of only letters, numbers, and -.
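As a quick sanity check, the constraints above can be expressed as a regular expression. This is a hypothetical helper for local use, not part of the Tailscale CLI:

```shell
# Returns success when the name satisfies the stated DNS constraints:
# up to 63 chars, starts and ends with a letter, only letters/digits/'-'.
valid_machine_name() {
  printf '%s\n' "$1" | grep -Eq '^[a-zA-Z]([a-zA-Z0-9-]{0,61}[a-zA-Z])?$'
}

valid_machine_name "nginx-prod" && echo "nginx-prod: ok"
valid_machine_name "9native" || echo "9native: rejected (must start with a letter)"
```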

Cluster resource customization using ProxyClass Custom Resource

Tailscale operator v1.60 and later provides the ability to customize the configuration of cluster resources created by the operator, using the ProxyClass Custom Resource Definition.

You can specify cluster resource configuration for custom labels and resource requests using a ProxyClass Custom Resource.

You can then:

  • Apply configuration from a particular ProxyClass to cluster resources created for a tailscale Ingress or Service using a tailscale.com/proxy-class=<proxy-class-name> label on the Ingress or Service.

  • Apply configuration from a particular ProxyClass to cluster resources created for a Connector using connector.spec.proxyClass field.

The following example demonstrates how to use a ProxyClass that specifies custom labels and a node selector that should be applied to Pods for a tailscale Ingress, a cluster egress proxy, and a Connector:

  1. Create a ProxyClass resource:

    apiVersion: tailscale.com/v1alpha1
    kind: ProxyClass
    metadata:
      name: prod
    spec:
      statefulSet:
        pod:
          labels:
            team: eng
            environment: prod
          nodeSelector:
            beta.kubernetes.io/os: "linux"
    
  2. Create a tailscale Ingress with tailscale.com/proxy-class=prod label:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
      labels:
        tailscale.com/proxy-class: "prod"
    spec:
      rules:
      ...
      ingressClassName: tailscale
    
  3. Create a cluster egress Service with a tailscale.com/proxy-class=prod label:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        tailscale.com/tailnet-ip: <tailnet-ip>
      labels:
        tailscale.com/proxy-class: "prod"
      name: my-tailnet-service
    spec:
    
  4. Create a Connector that refers to the 'prod' ProxyClass:

    apiVersion: tailscale.com/v1alpha1
    kind: Connector
    metadata:
      name: prod
    spec:
      proxyClass: prod
      ...
    

You can find all available ProxyClass configuration options on GitHub →

Deploying exit nodes and subnet routers on Kubernetes using Connector Custom Resource

Tailscale Kubernetes operator installation includes a Connector Custom Resource Definition.

Connector can be used to configure the operator to deploy a Tailscale node that acts as a Tailscale subnet router, exit-node, or both.

For example, you can deploy a Connector that acts as a subnet router and exposes to your tailnet the cluster's Service CIDRs, or cloud service CIDRs that are reachable from the cluster but not publicly accessible.

To create a Connector that exposes 10.40.0.0/14 CIDR to your tailnet:

  1. (Optional) Set the tag of the Connector node to be auto-approved. By default, the node is tagged with tag:k8s. Custom tags can be set via .connector.spec.tags in step 2. If you set a custom tag, you must also ensure that the operator is an owner of that tag.

  2. Create a Connector Custom Resource:

    apiVersion: tailscale.com/v1alpha1
    kind: Connector
    metadata:
      name: ts-pod-cidrs
    spec:
      hostname: ts-pod-cidrs
      subnetRouter:
        advertiseRoutes:
          - "10.40.0.0/14"
    
  3. Wait for the Connector resources to get created:

    $ kubectl get connector ts-pod-cidrs
    NAME           SUBNETROUTES   ISEXITNODE   STATUS
    ts-pod-cidrs   10.40.0.0/14   false        ConnectorCreated
    
    
  4. (Optional) If you did not configure the route to be auto-approved in step 1, open the Machines page of the admin console and manually approve the newly created ts-pod-cidrs node to advertise the 10.40.0.0/14 route.

  5. (Optional and for Linux clients only) Ensure that clients that need to access resources in the subnet have accepted the advertised route.
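The auto-approval in step 1 is configured in the tailnet policy file. As a sketch, an autoApprovers block that lets nodes tagged with the default tag:k8s advertise this route without manual approval:

```jsonc
"autoApprovers": {
  "routes": {
    "10.40.0.0/14": ["tag:k8s"],
  },
},
```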

You can find all available Connector configuration options on GitHub →

IPv6 support

Ingress

To proxy traffic to IPv6 backends, you might need to disable IPv4 tailnet addresses for the proxy tailnet nodes.

You need to disable IPv4 tailnet addresses for:

You can disable tailnet IPv4 addresses for a specific tag using a disable-ipv4 node attribute in ACLs.

The following node attributes configuration example disables IPv4 addresses for all nodes tagged with tag:k8s:

"nodeAttrs": [
		{
			"target": ["tag:k8s"],
			"attr": [
				"disable-ipv4",
			],
		},
]

Tailnet IPv6 connectivity does not depend on host support for IPv6, so you can disable IPv4 addresses for nodes running on hosts that do not support IPv6.

Similarly, tailnet clients can connect to proxies with only tailnet IPv6 addresses even if they aren't running on hosts with IPv6 support.

Egress

Tailscale operator egress proxies do not support IPv6. Both the tailnet service exposed to cluster workloads and the proxy tailnet node must have a tailnet IPv4 address.

Let us know if the lack of IPv6 support for egress is causing issues for your workflow.

Supported versions

Operator and proxies

Tailscale recommends that you use the same version for the operator and the proxies, because the majority of our tests run against matching versions.

The operator supports proxies running a Tailscale version up to four minor versions earlier than the operator's version. The operator does not support proxies running a Tailscale version later than the operator's version.

Kubernetes versions

The earliest supported version of Kubernetes is v1.23.0.

CNI compatibility

The operator creates proxies that configure custom routing and forwarding rules in each proxy Pod's network namespace only. Because the proxying is implemented in the proxy Pod's namespace, the routing and firewall configuration on the Node (for example, using iptables, eBPF, or any other mechanism) doesn't affect the proxies. This means that the proxies work with most CNI configurations out of the box.

Cilium in kube-proxy replacement mode

You must enable bypassing socket load balancer in Pods' namespaces if you run Cilium in kube-proxy replacement mode and want to do one or more of the following:

This is needed because, when Cilium runs in kube-proxy replacement mode with socket load balancing enabled in Pods' namespaces, connections from Pods to ClusterIPs go over a TCP socket (instead of going out via the Pods' veth devices) and thus bypass the Tailscale firewall rules attached to netfilter hooks.
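With Cilium installed via Helm, the socket load balancer can be restricted to the host namespace using the socketLB.hostNamespaceOnly value (a sketch; check the Cilium documentation for your version):

```yaml
# Cilium Helm values (sketch): keep socket-level load balancing in the
# host namespace only, so Pod-namespace connections to ClusterIPs go
# through the Pod's veth device and hit Tailscale's netfilter rules.
socketLB:
  hostNamespaceOnly: true
```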

Troubleshooting

Using logs

If you are experiencing issues with your installation, it might be useful to take a look at the operator logs.

For ingress and egress proxies and the Connector, the operator creates a single-replica StatefulSet in the tailscale namespace that is responsible for proxying traffic to and from the tailnet. If the StatefulSet has been created successfully, also take a look at the logs of its Pod.

Operator logs

You can increase the operator's log level to get debug logs.

To set the log level to debug for an operator deployed using Helm, run:

$ helm upgrade --install \
  operator tailscale/tailscale-operator \
  --set operatorConfig.logging=debug

If you deployed the operator using static manifests, you can set the OPERATOR_LOGGING environment variable in the operator's Deployment to debug.
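As a sketch, the relevant fragment of the operator's Deployment manifest might look like the following (the container name is a placeholder; use the name from your manifest):

```yaml
# Fragment of the operator Deployment (sketch): set the log level to debug.
spec:
  template:
    spec:
      containers:
        - name: operator          # placeholder container name
          env:
            - name: OPERATOR_LOGGING
              value: "debug"
```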

To view the logs run:

$ kubectl logs deployment/operator --namespace tailscale

Proxy logs

To get logs for the proxy created for an Ingress resource run:

$ pod_name=$(kubectl get pod --selector=tailscale.com/parent-resource-type=ingress,tailscale.com/parent-resource=<ingress-name>,tailscale.com/parent-resource-ns=<ingress-namespace> \
  --namespace tailscale -ojsonpath='{.items[0].metadata.name}')
$ kubectl logs ${pod_name} --namespace tailscale

To get logs for a proxy created for an ingress or egress Service run:

$ pod_name=$(kubectl get pod --selector=tailscale.com/parent-resource-type=svc,tailscale.com/parent-resource=<service-name>,tailscale.com/parent-resource-ns=<service-namespace> \
  --namespace tailscale -ojsonpath='{.items[0].metadata.name}')
$ kubectl logs ${pod_name} --namespace tailscale

To get logs for a proxy created for a Connector run:

$ pod_name=$(kubectl get pod --selector=tailscale.com/parent-resource-type=connector,tailscale.com/parent-resource=<connector-name> \
  --namespace tailscale -ojsonpath='{.items[0].metadata.name}')
$ kubectl logs ${pod_name} --namespace tailscale

Troubleshooting TLS connection errors

If you are connecting to a workload exposed to the tailnet over Ingress, or to the kube API server over the operator's API server proxy, you can sometimes run into TLS connection errors.

Check the following, in sequence:

  1. HTTPS is not enabled for the tailnet.

    To use tailscale Ingress or API server proxy you must ensure that HTTPS is enabled for your tailnet.

  2. LetsEncrypt certificate has not yet been provisioned.

    If HTTPS is enabled, the errors are most likely related to LetsEncrypt certificate provisioning flow.

    For each Tailscale Ingress resource, the operator deploys a Tailscale node that runs a TLS server. This server is provisioned with a LetsEncrypt certificate for the MagicDNS name of the node. For the API server proxy, the operator also runs an in-process TLS server that proxies tailnet traffic to the Kubernetes API server. This server gets provisioned with a LetsEncrypt certificate for the MagicDNS name of the operator.

    In both cases, the certificates are provisioned lazily, the first time a client connects to the server. Provisioning takes some time, so you might see some TLS timeout errors.

    You can take a look at the logs to follow the certificate provisioning process. For an Ingress, review the proxy Pod's logs; for the API server proxy, review the operator's logs.

    There is nothing you can currently do to prevent the first client connection sometimes erroring. Do reach out if this is causing issues for your workflow.

  3. You have hit LetsEncrypt rate limits.

    If the connection does not succeed even after the first attempt, verify that you have not hit LetsEncrypt rate limits. If a limit has been hit, you will see the error returned by LetsEncrypt in the logs.

    We are currently working on making it less likely for users to hit LetsEncrypt rate limits. See related discussion in tailscale/tailscale#11119.

Troubleshooting cluster egress/cluster ingress proxies

The proxy pod is deployed in the tailscale namespace, and will have a name of the form ts-<annotated-service-name>-<random-string>.

If there are issues reaching the external service, verify the proxy pod is properly deployed:

  • Review the logs of the proxy pod
  • Review the logs of the operator. You can do this by running kubectl logs deploy/operator --namespace tailscale. The log level can be configured using the OPERATOR_LOGGING environment variable in the operator's manifest file.
  • Verify that the cluster workload is able to send traffic to the proxy pod in the tailscale namespace

Limitations

  • There are no dashboards or metrics. We are interested to hear what metrics you would find useful — do reach out.
  • The container images, charts or manifests are not signed. We are working on this.
  • The static manifests are currently only available from tailscale/tailscale codebase. We are working to improve this flow.

Cluster ingress

  • Tags are only considered during initial provisioning. That is, editing tailscale.com/tags on an already exposed Service doesn’t update the tags until you clean up and re-expose the Service.
  • The requested machine name is only considered during initial provisioning. That is, editing tailscale.com/hostname on an already exposed Service doesn't update the machine name until you clean up and re-expose the Service.
  • Cluster-ingress using Kubernetes Ingress resource requires TLS certificates. Currently the certificates are provisioned on the first connect. This means that the first connection might be slow or even time out.

API server proxy

  • The API server proxy runs inside of the cluster. If your cluster is non-functional or is unable to schedule pods, you may lose access to the API server proxy.
  • API server proxy requires TLS certificates. Currently the certificates are provisioned on the first API call via the proxy. This means that the first call might be slow or even time out.

Cluster egress

  • Egress to external services supports using an IPv4 or IPv6 address for a single route in the tailscale.com/tailnet-ip annotation, but not IP ranges.
  • Egress to external services currently only supports clusters where privileged pods are permitted (that is, GKE Autopilot is not supported).

Glossary

Proxy

In the context of this document, a proxy is the Tailscale node deployed for each user-configured component that the operator manages (such as a Tailscale Ingress or a Connector). The proxy is deployed as a StatefulSet in the operator's namespace (which defaults to tailscale). The StatefulSet's name is prefixed by a portion of the configured component's name.

If you need to reliably refer to the proxy's StatefulSet, you can use label selectors. For example, to find the StatefulSet for a Tailscale Ingress resource named ts-ingress in the prod namespace, you can run:

$ kubectl get statefulset \
  --namespace tailscale \
  --selector="tailscale.com/managed=true,tailscale.com/parent-resource-type=ingress,tailscale.com/parent-resource=ts-ingress,tailscale.com/parent-resource-ns=prod"

The tailscale.com/parent-resource-type label is set to svc for a Service and to connector for a Connector. The tailscale.com labels are also propagated to the Pod.