
Log streaming

Configuration audit log streaming is available for the Personal, Personal Plus, and Enterprise plans.
Network flow log streaming is available for the Enterprise plan.

Log streaming lets you collect and send configuration audit logs or network flow logs about your Tailscale network (known as a tailnet) into various systems for collection and analysis. Tailscale supports two kinds of log streaming integrations.

SIEM integrations

You can stream logs into a security information and event management (SIEM) system to help detect and respond to security threats, set up alerting and monitoring rules, and more.

We support log streaming integrations for the following SIEM systems:

Amazon S3 and S3-compatible services

You can stream your logs to Amazon S3 and to S3-compatible services from various cloud storage providers.

We support log streaming integrations for sending logs to the following S3 bucket types:

Prerequisites

  • You need an endpoint and credentials for either your SIEM integration or S3 cloud storage provider. Consult your vendor's documentation for how to get an endpoint and API credentials.
  • You need to be an Owner, Admin, Network admin, or IT admin to add, edit, and delete a streaming destination.

Configuration log streaming

Configuration audit log streaming is available for the Personal, Personal Plus, and Enterprise plans.

Add configuration log streaming

Configuration audit log streaming for Amazon S3 is currently in beta.
  1. Open the Configuration logs page of the admin console.
  2. Select Start streaming.
  3. In the Start streaming configuration logs dialog, enter the following information:
    1. Select a destination: Select the AWS | S3 radio button.

    2. Sub-destinations: Select the AWS S3 radio button.

    3. Region: Enter the AWS region where your S3 bucket is located. For a list of available AWS regions, refer to AWS service endpoints.

    4. Bucket: Enter the S3 bucket name where you want to upload logs.

    5. Compression: Select none, zstd, or gzip. The default compression method is zstd.

    6. Upload period: Enter how often to upload new objects, specified in minutes. Consider latency and bandwidth when choosing a value. The default period is 1 minute, and the allowed range is 1 minute to 24 hours.

    7. (Optional) Object key prefix: Enter an S3 object key prefix to prepend to the object file names. For example, a prefix of audit-logs/ produces S3 object keys similar to audit-logs/2024/04/22/12:34:56.json. This lets you upload both audit and network logs to the same S3 bucket while keeping the data separable.

    8. Role ARN: Enter the Amazon Resource Name (ARN) of the IAM role that grants your tailnet access to write to your S3 bucket. For more details, refer to Access to AWS accounts owned by third parties.

    9. Required IAM trust policy: This box contains pre-populated Principal and Condition strings that you must add to your AWS role trust policy. These values are unique to your integration and must be used exactly as given. Copy the contents of the box, open your AWS IAM console, and add them to your existing trust policy, or add them to a new trust policy. The following example trust policy shows where to add these details and the formatting to use.

      {
         "Version": "2012-10-17",
         "Statement": [
            {
               "Effect": "Allow",
               "Principal": {
                  "AWS": "891612552178"
               },
               "Condition": {
                  "StringEquals": {
                     "sts:ExternalId": "69f73fe7-cfcc-414e-8d45-35eb5d70fe7d"
                  }
               },
               "Action": "sts:AssumeRole"
            }
         ]
      }
      
    10. Select Start streaming.

  4. Check your S3 monitoring tools to verify you are successfully streaming configuration audit logs from your tailnet to your S3 bucket.

Depending on network conditions, there may be a delay before you can see the log streaming appear in your third-party tools.
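
The trust policy above controls who may assume the role; the role itself also needs permission to write objects into your bucket. As a rough sketch (the bucket name your-log-bucket is a placeholder, and you may want to scope the resource to your object key prefix), the role's permissions policy might look like:

```json
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::your-log-bucket/*"
      }
   ]
}
```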

Edit a configuration log streaming destination

You can change the information for passing your logs to your preferred streaming destination.

  1. Open the Configuration logs page of the admin console.
  2. For the system that you want to update, select the Action dropdown, then select Edit.
  3. Update the values as needed.
  4. Select Save changes.

If you are editing a log streaming destination for an Amazon S3 bucket, you can update the Role ARN field (the Amazon Resource Name) as long as the new role belongs to the same AWS account. If the role does not belong to that AWS account, you must delete the log streaming destination in the admin console and create a new one.

Delete a configuration log streaming destination

  1. Open the Configuration logs page of the admin console.
  2. For the integration that you want to delete, select the Action dropdown, then select Delete.
  3. In the confirmation dialog, select Delete.

Network log streaming

Network flow log streaming is available for the Enterprise plan.

Add a network log streaming destination

Network log streaming for Amazon S3 is currently in beta.
  1. Make sure you have network flow logs enabled for your tailnet, then open the Network flow logs page of the admin console.
  2. Select Start streaming.
  3. In the Start streaming network logs dialog, enter the following information:
    1. Select a destination: Select the AWS | S3 radio button.

    2. Sub-destinations: Select the AWS S3 radio button.

    3. Region: Enter the AWS region where your S3 bucket is located. For a list of available AWS regions, refer to AWS service endpoints.

    4. Bucket: Enter the S3 bucket name where you want to upload logs.

    5. Compression: From the drop-down menu, select none, zstd, or gzip. The default compression is zstd.

    6. Upload period: Enter how often to upload new objects, specified in minutes. Consider latency and bandwidth when choosing a value. The default period is 1 minute, and the allowed range is 1 minute to 24 hours.

    7. (Optional) Object key prefix: Enter an S3 object key prefix to prepend to the object file names. For example, a prefix of audit-logs/ produces S3 object keys similar to audit-logs/2024/04/22/12:34:56.json. This lets you upload both audit and network logs to the same S3 bucket while keeping the data separable.

    8. Role ARN: Enter the Amazon Resource Name (ARN) of the IAM role that grants your tailnet access to write to your S3 bucket. For more details, refer to Access to AWS accounts owned by third parties.

    9. Required IAM trust policy: This box contains pre-populated Principal and Condition strings that you must add to your AWS role trust policy. These values are unique to your integration and must be used exactly as given. Copy the contents of the box, open your AWS IAM console, and add them to your existing trust policy, or add them to a new trust policy. The following example trust policy shows where to add these details and the formatting to use.

      {
         "Version": "2012-10-17",
         "Statement": [
            {
               "Effect": "Allow",
               "Principal": {
                  "AWS": "891612552178"
               },
               "Condition": {
                  "StringEquals": {
                     "sts:ExternalId": "69f73fe7-cfcc-414e-8d45-35eb5d70fe7d"
                  }
               },
               "Action": "sts:AssumeRole"
            }
         ]
      }
      
    10. Select Start streaming.

  4. Check your S3 monitoring tools to verify you are successfully streaming network flow logs from your tailnet to your S3 bucket.

Depending on network conditions, there may be a delay before you can see the log streaming appear in your third-party tools.

Edit a network log streaming destination

You can change the information for passing your logs to your preferred streaming destination.

  1. Open the Network flow logs page of the admin console.
  2. For the system that you want to update, select the Action dropdown, then select Edit.
  3. Update the values as needed.
  4. Select Save changes.

If you are editing a log streaming destination for an Amazon S3 bucket, you can update the Role ARN field (the Amazon Resource Name) as long as the new role belongs to the same AWS account. If the role does not belong to that AWS account, you must delete the log streaming destination in the admin console and create a new one.

Delete a network log streaming destination

  1. Open the Network flow logs page of the admin console.
  2. For the integration that you want to delete, select the Action dropdown, then select Delete.
  3. In the confirmation dialog, select Delete.

Private endpoints

Log streaming can publish logs to a host that is directly reachable over the public internet, in which case the endpoint must use HTTPS for security. Alternatively, log streaming can publish logs to a private host that is not directly reachable over the public internet by utilizing Tailscale for connectivity. Plain HTTP may be used since the underlying transport is secured by Tailscale using WireGuard.

Use of log streaming to a private host is detected automatically based on the host specified in the endpoint URL.
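
For example (both hostnames below are placeholders, and the /services/collector path is typical of Splunk HEC-style endpoints), a public host must use HTTPS, while a private tailnet host may use plain HTTP:

```
https://siem.example.com/services/collector   (public host: HTTPS required)
http://splunk:8088/services/collector         (private tailnet host: HTTP allowed)
```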

A screenshot of the URL used for private endpoints

The host must reference a node within your tailnet and can be any of the following:

  • The name of a Tailscale node (for example, splunk).
  • The fully-qualified domain name of a Tailscale node (for example, splunk.yak-bebop.ts.net).
  • The IPv6 address of a Tailscale node (for example, fd7a:115c:a1e0:ab12:0123:4567:89ab:cdef).

Only IPv6 addresses are supported for log streaming. IPv4 addresses are not supported because Tailscale assigns node IPv4 addresses from a CGNAT range, so an IPv4 address can be reused and may not uniquely identify a node. IPv6 addresses are never reused, and hostnames always route to the correct node.
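
Tailscale node IPv6 addresses come from the fd7a:115c:a1e0::/48 unique local address range. As a quick sanity check before configuring a private endpoint by address, you can verify that an address falls in that range. This is a sketch using only the Python standard library; confirm the range against your node's actual address:

```python
import ipaddress

# Tailscale assigns node IPv6 addresses from this unique local address range.
TAILSCALE_V6_RANGE = ipaddress.ip_network("fd7a:115c:a1e0::/48")

def is_tailscale_v6(addr: str) -> bool:
    """Return True if addr is an IPv6 address inside Tailscale's range."""
    try:
        ip = ipaddress.ip_address(addr)
    except ValueError:
        return False
    return ip.version == 6 and ip in TAILSCALE_V6_RANGE

print(is_tailscale_v6("fd7a:115c:a1e0:ab12:0123:4567:89ab:cdef"))  # True
print(is_tailscale_v6("100.64.0.1"))  # False: IPv4 (CGNAT) is not accepted
```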

Log streaming to a private endpoint operates by sharing your node into a Tailscale-managed tailnet, where a Tailscale-managed node will publish logs directly to your node. This requires both sharing your node out to Tailscale's logstream tailnet, and modifying your tailnet policy file to support incoming traffic to your node from the logstream@tailscale user.

When adding or updating an endpoint that points to a private host, the control plane may need to share your node and/or update the tailnet policy file on your behalf. If additional configuration changes are needed, a follow-up dialog box will ask you for permission to perform the necessary actions. Audit log events will be generated for these operations and the actions will be attributed to you.

A screenshot of the confirmation dialog to share the node or update the tailnet policy file

After adding or updating the endpoint, the node will be listed on the Machines page of the admin console as having been shared out to the logstream@tailscale user. Also, the tailnet policy file will be modified with a rule similar to the following:

{
  // Private log streaming enables audit and network logs to be directly
  // uploaded to a node in your tailnet without exposing it to the public internet.
  // This access rule provides access for a Tailscale-managed node to upload logs
  // directly to the specified node.
  // See https://tailscale.com/kb/1255/log-streaming/#private-endpoints
  "action": "accept",
  "src":    ["logstream@tailscale"],
  "dst":    ["[nodeAddressV6]:port"],
}

where:

  • nodeAddressV6 is the IPv6 address of the Tailscale node.
  • port is the service port for the log streaming system.

The IPv6 address identifies your node as the destination that the log stream publisher can reach over IPv6.
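
Filled in with example values (the IPv6 address and port 8088 below are placeholders for your node's address and your collector's listening port), the rule might read:

```
{
  "action": "accept",
  "src":    ["logstream@tailscale"],
  "dst":    ["[fd7a:115c:a1e0:ab12:0123:4567:89ab:cdef]:8088"],
}
```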

Since log streaming to a private host may require the ability to share nodes and the ability to update the tailnet policy file, only the Admin and Network admin roles have sufficient permissions to unilaterally make use of private endpoints. The IT admin has the ability to share nodes, but lacks the ability to update the tailnet policy file. An IT admin can still make use of private endpoints, but requires either an Admin or Network admin to manually update the tailnet policy file before logs can start streaming.

If your tailnet is configured to use GitOps for management of Tailscale ACLs, you will receive an error when Tailscale attempts to update your tailnet policy file to support incoming traffic from the logstream@tailscale user. To avoid this error, first use GitOps to add an access rule that allows incoming traffic from the logstream@tailscale user to the node that you use for the private endpoint, and then add your private endpoint as the log streaming URL.

Additional SIEM systems

We strive to support many common SIEM systems used by our customers, but we cannot support all the commercial and open-source SIEM and logging tools available. Some SIEM systems have Splunk HTTP Event Collector (Splunk HEC) compatible endpoints such as DataSet by SentinelOne. If your SIEM supports Splunk HEC, configure configuration audit log streaming and network flow log streaming per the instructions above to stream logs directly to your SIEM as if it were Splunk.
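
If you want to confirm that a Splunk HEC-compatible endpoint accepts events before pointing Tailscale at it, a minimal probe can help. The sketch below uses only the Python standard library; the URL, token, and event fields are placeholders, and /services/collector/event is the standard Splunk HEC path, so check your vendor's documentation for the exact endpoint:

```python
import json
import urllib.request

def build_hec_request(url: str, token: str, event: dict) -> urllib.request.Request:
    """Build a Splunk HEC-style POST request carrying a single JSON event."""
    body = json.dumps({"event": event}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Splunk {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Build a probe against a placeholder private endpoint; to actually send it,
# pass the request to urllib.request.urlopen() from a node on your tailnet.
req = build_hec_request(
    "http://splunk:8088/services/collector/event",
    "YOUR-HEC-TOKEN",
    {"message": "tailscale log streaming test"},
)
print(req.full_url, req.get_method())
```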

If we do not support your SIEM system, you can use Vector, an open-source high-performance observability data pipeline, to ingest log data from Tailscale via Vector's Splunk HEC support and deliver it to Vector's many supported SIEM systems, called "sinks" in Vector's terminology. Vector supports a number of sinks such as object storage systems, messaging queuing systems, Grafana Loki, New Relic, and more.

Vector deployment

To use Vector with log streaming from Tailscale:

  1. Follow Vector's deployment guide to deploy a machine running Vector to your infrastructure.
  2. Configure Vector's Splunk HTTP Event Collector (HEC) source to allow Tailscale to send log data to Vector.
  3. Configure the Vector sink for your SIEM as the destination for the log streaming data.
  4. Configure configuration audit log streaming and network flow log streaming per the instructions above to stream logs to your Vector instance, ideally using private endpoints.

Vector example configuration

The Vector configuration below receives data via the splunk_hec source and outputs data to the file sink:

# /etc/vector/vector.yaml

sources:
  splunk_hec:
    type: "splunk_hec"
    address: "100.x.y.z:8088" # Your Vector Tailscale device's IP or hostname
    valid_tokens:
      - "YOUR TOKEN"

sinks:
  file_sink:
    type: "file"
    inputs:
      - "splunk_hec"
    path: "/vector-data-dir/tailscale-%Y-%m-%d.log"
    encoding:
      codec: "json"
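
To spot-check what the file sink wrote, you can read the newline-delimited JSON back. This is a sketch; the path in the comment assumes the sink configuration above, with the date filled in:

```python
import json

def read_events(path: str) -> list[dict]:
    """Read newline-delimited JSON events written by Vector's file sink."""
    events = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                events.append(json.loads(line))
    return events

# Example (path assumed from the config above):
#   events = read_events("/vector-data-dir/tailscale-2024-11-21.log")
```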

Last updated Nov 21, 2024