Access the Kubernetes control plane using an API server proxy
You can use the Tailscale Kubernetes operator to expose and access the Kubernetes control plane (kube-apiserver) over Tailscale.
The Tailscale API server proxy can run in one of two modes:
- Auth mode: requests from the tailnet proxied to the Kubernetes API server are additionally impersonated using the sender's tailnet identity. Kubernetes RBAC can then be used to configure granular API server permissions for individual tailnet identities or groups.
- Noauth mode: requests from the tailnet are proxied to the Kubernetes API server but not authenticated. This mode can be combined with another authentication and authorization mechanism, such as an authenticating proxy provided by an external IdP or a cloud provider.
Prerequisites
- Enable HTTPS for your tailnet.
- The API server proxy runs as part of the same process as the Tailscale Kubernetes operator and is reached through the same tailnet device. It is exposed on port 443. Ensure that your ACLs allow all devices and users who want to access the API server through the proxy to reach the Tailscale Kubernetes operator. For example, to allow all tailnet devices tagged with `tag:k8s-readers` access to the proxy, create an ACL rule like this:

  ```json
  "acls": [
    {
      "action": "accept",
      "src": ["tag:k8s-readers"],
      "dst": ["tag:k8s-operator:443"]
    }
  ]
  ```
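An ACL rule can likewise admit individual users or user groups rather than tagged devices. A sketch, assuming a hypothetical `group:k8s-admins` group is defined in your tailnet policy file:

```json
"acls": [
  {
    "action": "accept",
    "src": ["group:k8s-admins"],
    "dst": ["tag:k8s-operator:443"]
  }
]
```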
Access to the proxy over the tailnet does not grant tailnet users any default permissions to access Kubernetes API server resources. Tailnet users will only be able to access API server resources that they have been explicitly authorized to access by Kubernetes RBAC.
Configuring the API server proxy in auth mode
Installation
Helm
If you are installing the Tailscale Kubernetes operator with Helm, you can enable the proxy in auth mode by passing the `--set-string apiServerProxyConfig.mode="true"` flag to the install command:
helm upgrade \
--install \
tailscale-operator \
tailscale/tailscale-operator \
--namespace=tailscale \
--create-namespace \
--set-string oauth.clientId=<OAuthClientId> \
--set-string oauth.clientSecret=<OAuthClientSecret> \
--set-string apiServerProxyConfig.mode="true" \
--wait
Static manifests with kubectl
If you are installing Tailscale Kubernetes operator using static manifests:
- Set the environment variable `APISERVER_PROXY` to `"true"` in the Tailscale Kubernetes operator deployment manifest:

  ```yaml
  - name: APISERVER_PROXY
    value: "true"
  ```
- Download and apply RBAC for the API server proxy from the tailscale/tailscale repo.
Configuring authentication and authorization
In auth mode, the API server proxy impersonates requests from the tailnet to the Kubernetes API server. You can then use Kubernetes RBAC to control which API server resources tailnet identities can access.
The impersonation is applied as follows:
- If the user who sends a request to the Kubernetes API server through the proxy belongs to a tailnet user group for which API server proxy grants have been configured for that proxy instance, the request is impersonated as the Kubernetes group specified in the grant. It is also impersonated as a Kubernetes user whose name matches the tailnet user's name.
- If grants are not used and the node from which the request is sent is tagged, the request is impersonated as coming from a Kubernetes group whose name matches the tag.
- If grants are not used and the node from which the request is sent is not tagged, the request is impersonated as a Kubernetes user whose name matches the sender's tailnet username.
Impersonating Kubernetes groups with grants
You can use grants to configure which Kubernetes API server resources Tailscale user groups can access.
For example, to give the tailnet user group `group:prod` cluster admin access and give the tailnet user group `group:k8s-readers` read permission for most Kubernetes resources:
- Update your grants:

  ```json
  {
    "grants": [
      {
        "src": ["group:prod"],
        "dst": ["tag:k8s-operator"],
        "app": {
          "tailscale.com/cap/kubernetes": [{
            "impersonate": {
              "groups": ["system:masters"]
            }
          }]
        }
      },
      {
        "src": ["group:k8s-readers"],
        "dst": ["tag:k8s-operator"],
        "app": {
          "tailscale.com/cap/kubernetes": [{
            "impersonate": {
              "groups": ["tailnet-readers"]
            }
          }]
        }
      }
    ]
  }
  ```
  - `grants.src` is the Tailscale user group the grant applies to.
  - `grants.dst` must be the tag of the Tailscale Kubernetes operator.
  - `system:masters` is a Kubernetes group with default RBAC bindings in all clusters. Kubernetes creates a default ClusterRole `cluster-admin` that allows all actions against all Kubernetes API server resources, and a ClusterRoleBinding `cluster-admin` that binds the `cluster-admin` ClusterRole to the `system:masters` group.
  - `tailnet-readers` is a Kubernetes group that you will bind the default Kubernetes `view` ClusterRole to in a following step. (Note that Kubernetes group names do not refer to existing identities in Kubernetes; they do not need to be pre-created before you use them in (Cluster)RoleBindings.)
- Bind `tailnet-readers` to the `view` ClusterRole:

  ```shell
  kubectl create clusterrolebinding tailnet-readers-view --group=tailnet-readers --clusterrole=view
  ```
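If cluster-wide read access is broader than you need, you can instead bind the `tailnet-readers` group with a namespaced RoleBinding. A minimal sketch, assuming a hypothetical `dev` namespace (the namespace name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tailnet-readers-view
  namespace: dev
subjects:
  - kind: Group
    name: tailnet-readers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```

With this binding in place, members of the tailnet group mapped to `tailnet-readers` can read resources in the `dev` namespace only.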
Impersonating Kubernetes groups with tagged tailnet nodes
If the request is sent from a tagged device, it is impersonated as a Kubernetes group whose name matches the tag. For example, a request from a tailnet device tagged with `tag:k8s-readers` will be authenticated by the API server as coming from a Kubernetes group named `tag:k8s-readers`.
You can create Kubernetes (Cluster)Roles and (Cluster)RoleBindings to configure the permissions the group should have, or bind an existing (Cluster)Role to the group.
For example, to grant devices tagged with `tag:k8s-readers` read-only access to most Kubernetes resources, bind the Kubernetes group `tag:k8s-readers` to the default Kubernetes `view` ClusterRole:

```shell
kubectl create clusterrolebinding tailnet-readers --group="tag:k8s-readers" --clusterrole=view
```
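Instead of the built-in `view` ClusterRole, you can define a narrower custom role for the tag group. A sketch that limits `tag:k8s-readers` to reading pods and their logs (the role name and resource list are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tailnet-pod-readers
subjects:
  - kind: Group
    name: "tag:k8s-readers"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```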
Impersonating Kubernetes users
If the request is not sent from a tagged device, it is impersonated as a Kubernetes user whose name matches the sender's tailnet username.
You can then create Kubernetes (Cluster)Roles and (Cluster)RoleBindings to configure the permissions the user should have, or bind an existing (Cluster)Role to the user.
For example, to allow the tailnet user `alice@tailscale.com` read-only access to most Kubernetes resources, bind the Kubernetes user `alice@tailscale.com` to the default Kubernetes `view` ClusterRole:

```shell
kubectl create clusterrolebinding alice-view --user="alice@tailscale.com" --clusterrole=view
```
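To scope a user's access to a single namespace instead, use a RoleBinding. A sketch giving `alice@tailscale.com` the built-in `edit` role in a hypothetical `dev` namespace (both names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-edit
  namespace: dev
subjects:
  - kind: User
    name: alice@tailscale.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```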
Configuring kubeconfig
You can run the following CLI command to configure your kubeconfig for authentication with kubectl using the Tailscale Kubernetes API server proxy:

```shell
tailscale configure kubeconfig <operator-hostname>
```

By default, the hostname for the operator node is `tailscale-operator`.
Configuring the API server proxy in noauth mode
The noauth mode of the API server proxy is useful if you want to use Tailscale to provide access to the Kubernetes API server over the tailnet, but want to keep using your existing authentication and authorization mechanism.
Installation
Helm
If you are installing the Tailscale Kubernetes operator with Helm, you can enable the proxy in noauth mode by passing the `--set-string apiServerProxyConfig.mode=noauth` flag to the install command:
helm upgrade \
--install \
tailscale-operator \
tailscale/tailscale-operator \
--namespace=tailscale \
--create-namespace \
--set-string oauth.clientId=<OAuth client ID> \
--set-string oauth.clientSecret=<OAuth client secret> \
--set-string apiServerProxyConfig.mode="noauth" \
--wait
Static manifests with kubectl
If you are installing Tailscale Kubernetes operator using static manifests:
- Set the environment variable `APISERVER_PROXY` to `"noauth"` in the Tailscale Kubernetes operator deployment manifest:

  ```yaml
  - name: APISERVER_PROXY
    value: "noauth"
  ```
Authentication and authorization
When run in noauth mode, the API server proxy exposes the Kubernetes API server to the tailnet but does not authenticate requests. You can use the proxy endpoint (`<TailscaleOperatorHostname>:443`) instead of the Kubernetes API server address and layer authentication and authorization on top of it using any other mechanism, such as an authenticating proxy provided by your managed Kubernetes provider or IdP.
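For example, a kubeconfig cluster entry could point at the proxy endpoint rather than the original API server address. A sketch, where the cluster name and the `tailscale-operator.example.ts.net` hostname are illustrative and credentials are supplied by whatever external mechanism you use:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      server: https://tailscale-operator.example.ts.net
```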
Customization
Learn how to customize the operator and resources it manages.
Troubleshooting
Learn how to troubleshoot the operator and resources it manages.
Limitations
- The API server proxy runs inside the cluster. If your cluster is non-functional or unable to schedule pods, you might lose access to the API server proxy and potentially your cluster.
- The API server proxy requires TLS certificates. Currently, the certificates are provisioned automatically by the proxy on the first API call, meaning the first call might be slow or even time out.