Aperture by Tailscale configuration
During the alpha testing period, Aperture by Tailscale is available at no additional cost across all Tailscale plans. Request access at aperture.tailscale.com. Like the Tailscale Personal plan, Aperture by Tailscale includes a usage limit of three free users. Contact us for pricing if you need more than three users.
Aperture by Tailscale uses a configuration file to define LLM providers, access control policies, database settings, and optional integrations. The configuration file controls which models are available, how requests authenticate with upstream providers, and who can access what. Admins can edit the configuration from the Settings page of the Aperture web interface.
Minimal configuration
A minimal configuration requires at least one provider with a base URL and at least one model. The following example shows a minimal configuration:
{
  "providers": {
    "anthropic": {
      "baseurl": "https://api.anthropic.com",
      "apikey": "YOUR_ANTHROPIC_API_KEY",
      "models": [
        "claude-sonnet-4-5",
        "claude-opus-4-5",
      ],
      "authorization": "x-api-key",
      "compatibility": {
        "anthropic_messages": true,
      }
    }
  }
}
If you omit apikey, Aperture logs a warning at startup but continues to run. Most providers require an API key for authentication, so add one unless your provider handles authentication differently.
Default configuration
New Aperture instances use a default configuration that includes OpenAI and Anthropic providers with common models. The default grants all users access to all models. The following shows the default configuration:
{
  // The temp_grants section is similar to the grants section in the tailnet policy file.
  "temp_grants": [
    // Grant admin access (permission to see the settings and all other users in
    // the dashboard).
    {
      "src": [
        // Explicitly identify certain users by their Tailscale login.
        "example-user@example.com",
        // Grant admin access to everyone by default.
        // Remove this after you've configured explicit admin
        // access for yourself.
        // BE CAREFUL! If you remove this without granting explicit
        // admin access to yourself, you'll lose your ability
        // to edit this file.
        "*",
      ],
      "grants": [
        {"role": "admin"},
      ],
    },
    // Every user who can access Aperture gets at least user-level access.
    // Remove this and we'll deny access entirely by default.
    // Admin access in a separate grant takes precedence over this section.
    {
      "src": ["*"],
      "grants": [
        {"role": "user"},
      ],
    },
    // Default: allow all users to access all models from all providers.
    // Without this grant, users can't access any models (deny by default).
    {
      "src": ["*"],
      "grants": [
        {
          "providers": [
            {"provider": "*", "model": "*"},
          ]
        },
      ],
    },
    // This example hook sends traffic to Oso if it matches certain parameters.
    // You also need to configure Oso in the "hooks" section for this to work.
    {
      "src": [
        // No users by default. Try "*" to capture everyone's traffic.
      ],
      "grants": [
        {
          "hook": {
            "match": {
              "providers": ["*"],
              "models": ["*"],
              // Capture only tool calls.
              "events": ["tool_call_entire_request"],
            },
            "hook": "oso",
            "fields": ["user_message", "tools", "request_body", "response_body"],
          },
        },
      ],
    },
  ],
  // Configure your LLM backends here.
  // Fill in your API keys below to share these providers with your team.
  // There's no limit to the number of providers you can configure.
  "providers": {
    "openai": {
      "baseurl": "https://api.openai.com",
      "name": "OpenAI",
      "apikey": "YOUR_OPENAI_API_KEY",
      "models": [
        "gpt-5",
        "gpt-5-mini",
        "gpt-5-nano",
        "gpt-4.1",
        "gpt-4.1-nano",
        "gpt-5.1-codex",
        "gpt-5.1-codex-max",
      ],
      "compatibility": {
        "openai_chat": true,
        "openai_responses": true,
        "anthropic_messages": false,
      },
    },
    "anthropic": {
      "baseurl": "https://api.anthropic.com",
      "name": "Anthropic",
      "apikey": "YOUR_ANTHROPIC_API_KEY",
      "models": [
        "claude-sonnet-4-5",
        "claude-sonnet-4-5-20250929",
        "claude-haiku-4-5",
        "claude-haiku-4-5-20251001",
        "claude-opus-4-5",
        "claude-opus-4-5-20251101",
      ],
      "compatibility": {
        "openai_chat": false,
        "openai_responses": false,
        "anthropic_messages": true,
      },
    },
  },
  // Hooks are configured API endpoints that Aperture calls under certain
  // conditions. The conditions themselves are configured in the
  // "temp_grants" section.
  "hooks": {
    "oso": {
      "url": "https://api.osohq.com/api/agents/v1/model-request",
      "apikey": "YOUR_OSO_API_KEY",
    },
  },
}
Configuration reference
The configuration file contains several top-level sections that control different aspects of Aperture's behavior. The following table describes the available top-level sections:
| Section | Required | Description |
|---|---|---|
| providers | Yes | Map of LLM provider configurations. |
| temp_grants | No | Access control policies for users and models. |
| hooks | No | Webhook endpoint configurations. |
providers
The providers section defines the LLM providers to which Aperture routes requests. Each provider is identified by a unique string key. The following example shows the basic structure:
{
  "providers": {
    "openai": { ... },
    "anthropic": { ... },
    "private": { ... }
  }
}
Each provider configuration accepts the following fields:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| baseurl | string | Yes | N/A | Base URL for the provider's API. |
| models | array | Yes | N/A | List of model IDs available from this provider. |
| apikey | string | No | "" | API key for authentication. |
| authorization | string | No | "bearer" | Authorization header type. |
| tailnet | boolean | No | false | Route requests over the tailnet. |
| name | string | No | "" | Display name for the UI. |
| description | string | No | "" | Description shown in the UI. |
| compatibility | object | No | Varies by provider | API compatibility flags. |
Authorization types
Different providers require different authorization header formats. The authorization field specifies which format to use. The following table describes the available authorization types:
| Value | Header format | Used by |
|---|---|---|
| bearer | Authorization: Bearer <key> | OpenAI and most providers |
| x-api-key | x-api-key: <key> | Anthropic |
| x-goog-api-key | x-goog-api-key: <key> | Google Gemini |
Provider compatibility
The compatibility object specifies which API formats the provider supports. This determines which endpoints Aperture exposes for the provider's models. The following table describes the compatibility fields:
| Field | Type | Default | Description |
|---|---|---|---|
| openai_chat | boolean | true | Supports /v1/chat/completions |
| openai_responses | boolean | false | Supports /v1/responses |
| anthropic_messages | boolean | false | Supports /v1/messages |
| gemini_generate_content | boolean | false | Supports Gemini API format |
| bedrock_model_invoke | boolean | false | Supports Amazon Bedrock format |
| google_generate_content | boolean | false | Supports Vertex AI Gemini format |
| google_raw_predict | boolean | false | Supports Vertex AI raw predict for Anthropic models |
Provider examples
The following examples show how to configure common providers.
OpenAI
Configure OpenAI with the chat and responses APIs:
{
  "providers": {
    "openai": {
      "baseurl": "https://api.openai.com/",
      "apikey": "YOUR_OPENAI_KEY",
      "models": ["gpt-5", "gpt-5-mini", "gpt-4.1"],
      "name": "OpenAI",
      "description": "OpenAI models",
      "compatibility": {
        "openai_chat": true,
        "openai_responses": true
      }
    }
  }
}
Amazon Bedrock
Configure Amazon Bedrock with the Bedrock model invocation API:
{
  "providers": {
    "bedrock": {
      "baseurl": "https://bedrock-runtime.us-east-1.amazonaws.com",
      "apikey": "bedrock-api-key-xxx",
      "authorization": "bearer",
      "models": [
        "us.anthropic.claude-haiku-4-5-20251001-v1:0",
        "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
        "us.anthropic.claude-opus-4-5-20251101-v1:0",
        "us.anthropic.claude-opus-4-6-v1"
      ],
      "compatibility": {
        "bedrock_model_invoke": true
      }
    }
  }
}
Anthropic
Configure Anthropic with the messages API and x-api-key authorization:
{
  "providers": {
    "anthropic": {
      "baseurl": "https://api.anthropic.com",
      "apikey": "YOUR_ANTHROPIC_KEY",
      "authorization": "x-api-key",
      "models": ["claude-sonnet-4-5", "claude-haiku-4-5", "claude-opus-4-5"],
      "compatibility": {
        "openai_chat": false,
        "anthropic_messages": true
      }
    }
  }
}
Google Gemini
Configure Google Gemini with the Gemini API and x-goog-api-key authorization:
{
  "providers": {
    "gemini": {
      "baseurl": "https://generativelanguage.googleapis.com",
      "apikey": "YOUR_GEMINI_KEY",
      "authorization": "x-goog-api-key",
      "models": ["gemini-2.5-flash", "gemini-2.5-pro"],
      "name": "Google Gemini",
      "compatibility": {
        "openai_chat": false,
        "gemini_generate_content": true
      }
    }
  }
}
Google Vertex AI
Configure Google Vertex AI with support for both Gemini models and Anthropic models with raw predict:
{
  "providers": {
    "vertex": {
      "baseurl": "https://aiplatform.googleapis.com",
      "authorization": "bearer",
      "apikey": "keyfile::ba3..3kb.data...67",
      "models": [
        "gemini-2.0-flash-exp",
        "gemini-2.5-flash",
        "gemini-2.5-flash-image",
        "gemini-2.5-pro",
        "claude-opus-4-5@20251101",
        "claude-haiku-4-5@20251001",
        "claude-sonnet-4-5@20250929",
        "claude-opus-4-6"
      ],
      "compatibility": {
        // Gemini model support
        "google_generate_content": true,
        // Anthropic via Vertex model support
        "google_raw_predict": true
      }
    }
  }
}
OpenRouter
Configure OpenRouter as a multi-provider aggregator:
{
  "providers": {
    "openrouter": {
      "baseurl": "https://openrouter.ai/api/",
      "apikey": "YOUR_OPENROUTER_KEY",
      "models": [
        "qwen/qwen3-235b-a22b-2507",
        "google/gemini-2.5-pro-preview",
        "x-ai/grok-code-fast-1"
      ]
    }
  }
}
Self-hosted LLM on tailnet
Configure a self-hosted LLM server that Aperture reaches over the tailnet:
{
  "providers": {
    "private": {
      "baseurl": "YOUR_PRIVATE_LLM_URL",
      "tailnet": true,
      "models": ["qwen3-coder-30b", "llama-3.1-70b"]
    }
  }
}
temp_grants
The temp_grants section defines access control policies that determine which users can access which models. If you don't configure temp_grants, Aperture uses the default policies, which allow all users to access all models from all providers. If you remove the default policy, Aperture denies all models for all users.
The following example shows the default temp_grants policy that permits all users to access all models:
{
  "temp_grants": [
    {
      "src": ["*"],
      "grants": [
        {"providers": [{"provider": "*", "model": "*"}]}
      ]
    }
  ]
}
Each entry in temp_grants is a policy that matches users and assigns permissions. The following table describes the policy fields:
| Field | Type | Description |
|---|---|---|
| src | array | List of user identifiers to match. |
| grants | array | List of grant objects specifying permissions. |
The src field specifies which users the policy applies to. The following table describes the available source patterns:
| Value | Matches |
|---|---|
"*" | All users |
"user@domain.tld" | Specific user by login name |
Grant types
The grants array contains objects that specify permissions. Aperture supports three grant types: role grants, provider grants, and hook grants. Role and provider grants are described below; hook grants are described in the hooks section.
Role grant: A role grant assigns a role to matching users.
{"role": "user"}
{"role": "admin"}
Provider grant: A provider grant controls access to specific providers and models.
{"providers": [{"provider": "*", "model": "*"}]}
{"providers": [{"provider": "openai", "model": "gpt-5"}]}
{"providers": [{"provider": "anthropic", "model": "*"}]}
Grant wildcards
Grants support wildcard patterns to match multiple providers or models. The following table describes provider grant wildcards:
| Field | Wildcard | Meaning |
|---|---|---|
provider | "*" | Match any provider. |
model | "*" | Match any model. |
Grant examples
The following examples show common access control patterns.
Allow all users to access all models
This configuration grants all users access to all models from all providers:
{
  "temp_grants": [
    {
      "src": ["*"],
      "grants": [
        {"role": "user"},
        {"providers": [{"provider": "*", "model": "*"}]}
      ]
    }
  ]
}
Restrict specific users to specific models
This configuration grants different access levels to different users:
{
  "temp_grants": [
    {
      "src": ["developer@company.com"],
      "grants": [
        {"role": "user"},
        {"providers": [
          {"provider": "openai", "model": "gpt-5"},
          {"provider": "anthropic", "model": "claude-sonnet-4-5"}
        ]}
      ]
    },
    {
      "src": ["admin@company.com"],
      "grants": [
        {"role": "admin"},
        {"providers": [{"provider": "*", "model": "*"}]}
      ]
    }
  ]
}
hooks
The hooks section defines webhook endpoints that Aperture calls when certain conditions are met. Each hook is identified by a unique string key that hook grants reference. The following example shows the hooks configuration:
{
  "hooks": {
    "oso": {
      "url": "https://api.osohq.com/api/agents/v1/model-request",
      "apikey": "YOUR_OSO_API_KEY",
      "timeout": "10s"
    },
    "my-webhook": {
      "url": "https://example.com/webhook",
      "apikey": "YOUR_API_KEY"
    }
  }
}
Each hook configuration accepts the following fields:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | N/A | HTTP or HTTPS endpoint to POST hook data to. |
| apikey | string | No | "" | API key sent in the Authorization: Bearer header. |
| timeout | string | No | "5s" | Maximum duration to wait for the hook to respond. |
The timeout field accepts Go duration strings such as 5s, 30s, or 1m. Set to 0 to disable the timeout.
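For illustration, the following fragment (with hypothetical hook names and endpoint URLs) raises the timeout for one hook and disables it for another:

```json
{
  "hooks": {
    "slow-webhook": {
      "url": "https://example.com/slow",
      "apikey": "YOUR_API_KEY",
      // Wait up to one minute for this endpoint to respond.
      "timeout": "1m"
    },
    "unbounded-webhook": {
      "url": "https://example.com/unbounded",
      // "0" disables the timeout entirely.
      "timeout": "0"
    }
  }
}
```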
Hooks are triggered by grants in the temp_grants section. A hook defined here does nothing until a grant references it.
Hook grants in temp_grants
To trigger a hook, add a hook grant to a policy in the temp_grants section. Hook grants specify which requests trigger the hook and what data to include.
{
  "temp_grants": [
    {
      "src": ["*"],
      "grants": [
        {
          "hook": {
            "match": {
              "providers": ["*"],
              "models": ["*"],
              "events": ["tool_call_entire_request"]
            },
            "hook": "oso",
            "fields": ["user_message", "tools", "request_body", "response_body"]
          }
        }
      ]
    }
  ]
}
The hook grant object contains the following fields:
| Field | Type | Description |
|---|---|---|
| match | object | Conditions that determine when the hook triggers |
| hook | string | Key referencing a hook defined in the top-level hooks section |
| fields | array | List of data fields to include in the hook payload |
Hook match conditions
The match object specifies when a hook triggers. All non-empty fields must match for the hook to fire (AND logic). Within each field, any element matching is sufficient (OR logic).
| Field | Type | Description |
|---|---|---|
| providers | array | Provider IDs to match. Use * to match any provider. |
| models | array | Model IDs to match. Use * to match any model. |
| events | array | Event types that trigger the hook. |
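For example, the following hypothetical match object fires only for requests to the openai provider (every field must match), but for either of two models (any element within a field is sufficient):

```json
{
  "match": {
    // AND across fields: provider, model, and event must all match.
    "providers": ["openai"],
    // OR within a field: a request for either model triggers the hook.
    "models": ["gpt-5", "gpt-5-mini"],
    "events": ["tool_call_entire_request"]
  }
}
```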
Available events
| Event | Description |
|---|---|
| tool_call_entire_request | Fires once after the response completes if any message in the response contained tool calls |
Hook payload fields
The fields array specifies which data to include in the POST payload sent to the hook endpoint. The following table describes the available fields:
| Field | Description |
|---|---|
| tools | Array of tool calls extracted from the response |
| request_body | The original request body sent to the LLM |
| user_message | The user's message from the request |
| response_body | The reconstructed response body JSON |
| raw_responses | Array of raw SSE messages (for streaming) or single response object |
Every hook call automatically includes a metadata object with request context:
{
  "metadata": {
    "login_name": "user@example.com",
    "user_agent": "curl/8.0",
    "url": "/v1/chat/completions",
    "model": "gpt-5",
    "provider": "openai",
    "tailnet_name": "example.com",
    "stable_node_id": "n12345"
  }
}
Hook grant example
The following example sends tool call data to an external service for all requests from a specific user:
{
  "temp_grants": [
    {
      "src": ["developer@company.com"],
      "grants": [
        {
          "hook": {
            "match": {
              "providers": ["anthropic", "openai"],
              "models": ["*"],
              "events": ["tool_call_entire_request"]
            },
            "hook": "my-webhook",
            "fields": ["tools", "user_message"]
          }
        }
      ]
    }
  ]
}
Validation
Aperture validates the configuration at load time; an invalid configuration causes Aperture to exit with an error. The following table describes the validation rules and their error messages:
| Condition | Error message |
|---|---|
| No providers defined | no providers configured |
| Provider missing baseurl | provider {id} has no base URL configured |
| Provider missing models | provider {id} has no models configured |
| Invalid authorization type | provider {id} has invalid authorization type: {type} |
| Unresolved environment variable | unsubstituted macros: [var_name] |
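As an illustration, a provider entry that omits baseurl, like the hypothetical one below, prevents Aperture from starting with the error provider private has no base URL configured:

```json
{
  "providers": {
    "private": {
      // Missing "baseurl": fails validation at load time.
      "tailnet": true,
      "models": ["qwen3-coder-30b"]
    }
  }
}
```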
Complete example
The following example shows a complete configuration with all sections:
{
  // Access control: who can use which models
  "temp_grants": [
    // Allow all users basic access
    {
      "src": ["*"],
      "grants": [
        {"role": "user"},
        {"providers": [{"provider": "*", "model": "*"}]}
      ]
    },
    // Admin access for specific user
    {
      "src": ["admin@company.com"],
      "grants": [
        {"role": "admin"},
        {"providers": [{"provider": "*", "model": "*"}]},
        {"mcp": [{"server": "*", "tool": "*"}]}
      ]
    },
  ],
  // Database settings
  "database": {
    "save_raws": false,
    "keep_encrypted_blobs": false
  },
  // LLM session log export configuration
  "exporters": {
    "s3": {
      "bucket_name": "aperture-exports",
      "region": "us-west-2",
      "prefix": "prod",
      "access_key_id": "YOUR_AWS_KEY",
      "access_secret": "YOUR_AWS_SECRET",
      "every": 3600,
      "limit": 1000
    }
  },
  // LLM providers
  "providers": {
    "openai": {
      "baseurl": "https://api.openai.com/",
      "apikey": "YOUR_OPENAI_KEY",
      "models": ["gpt-5", "gpt-5-mini", "gpt-4.1"],
      "name": "OpenAI",
      "description": "OpenAI models for coding and chat",
      "compatibility": {
        "openai_chat": true,
        "openai_responses": true
      }
    },
    "anthropic": {
      "baseurl": "https://api.anthropic.com",
      "apikey": "YOUR_PROXY_ANTHROPIC_KEY",
      "authorization": "x-api-key",
      "models": ["claude-sonnet-4-5", "claude-haiku-4-5", "claude-opus-4-5"],
      "name": "Anthropic",
      "compatibility": {
        "openai_chat": false,
        "anthropic_messages": true
      }
    },
    "gemini": {
      "baseurl": "https://generativelanguage.googleapis.com",
      "apikey": "YOUR_PROXY_GEMINI_KEY",
      "authorization": "x-goog-api-key",
      "models": ["gemini-2.5-flash", "gemini-2.5-pro"],
      "name": "Google Gemini",
      "compatibility": {
        "openai_chat": false,
        "gemini_generate_content": true
      }
    },
    "private": {
      "baseurl": "YOUR_PRIVATE_LLM_URL",
      "tailnet": true,
      "models": ["qwen3-coder-30b"]
    }
  },
  // Hooks for external integrations
  "hooks": {
    "oso": {
      "url": "https://api.osohq.com/api/agents/v1/model-request",
      "apikey": "YOUR_OSO_API_KEY",
    },
  },
}
