Set up OpenAI

Last validated:

Aperture by Tailscale is currently in alpha.

Configure an OpenAI provider in Aperture so your team can access GPT models through your tailnet. OpenAI is the default provider type in Aperture: when a provider entry has no compatibility block, Aperture enables openai_chat automatically, while openai_responses stays off unless you set it explicitly.

Aperture routes requests based on the model name, not the LLM client. Any LLM client configured to use Aperture can access any provider your admin has set up. Refer to the provider compatibility reference for the full list of supported providers and API formats.
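Conceptually, this routing works like a model-name lookup across the providers in your configuration. The sketch below illustrates the idea only; it is not Aperture's actual implementation:

```python
def route_model(model: str, providers: dict) -> str:
    """Return the name of the first provider that serves `model`.

    Illustrative only: Aperture's real routing logic is internal.
    """
    for name, config in providers.items():
        if model in config.get("models", []):
            return name
    raise ValueError(f"No configured provider serves {model!r}")


# A providers mapping shaped like the configuration in this guide.
providers = {
    "openai": {"models": ["gpt-5", "gpt-5-mini", "gpt-4.1"]},
}
print(route_model("gpt-5-mini", providers))  # → openai
```

Because routing keys on the model name, the client never needs to know which provider serves a given model.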

Prerequisites

Before you begin, you need:

  - A running Aperture instance in your tailnet.
  - An OpenAI API key.

Configure the provider

Add OpenAI as a provider in your Aperture configuration:

{
  "providers": {
    "openai": {
      "baseurl": "https://api.openai.com/",
      "apikey": "<your-openai-key>",
      "models": ["gpt-5", "gpt-5-mini", "gpt-4.1"],
      "name": "OpenAI",
      "compatibility": {
        "openai_chat": true,
        "openai_responses": true
      }
    }
  }
}

The openai_responses flag enables the Responses API, which tools like OpenAI Codex use. When you include a compatibility block, set both flags explicitly. Without a compatibility block, Aperture enables only openai_chat by default. Refer to the provider compatibility reference for the full list of flags.
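For example, a provider entry that omits the compatibility block (a minimal sketch following the format above) gets only openai_chat, so tools that require the Responses API would fail against it:

```json
{
  "providers": {
    "openai": {
      "baseurl": "https://api.openai.com/",
      "apikey": "<your-openai-key>",
      "models": ["gpt-5"]
    }
  }
}
```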

After configuring the provider:

  1. Grant model access to the users or groups that need these models.
  2. Set up LLM clients to connect coding tools through Aperture.
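Many OpenAI-compatible clients, including the official OpenAI SDKs, read the OPENAI_BASE_URL environment variable. Assuming your tool supports it, one common way to point the tool at Aperture is:

```shell
# Route an OpenAI-compatible client through Aperture instead of api.openai.com.
# <aperture-address> is a placeholder for your Aperture instance's address.
export OPENAI_BASE_URL="http://<aperture-address>/v1"
```

Check your tool's own documentation for its supported configuration mechanism; not every client honors this variable.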

Verify the provider

  1. Open the Aperture dashboard and confirm the provider appears with the expected models.

  2. Send a test request through a connected coding tool (such as Claude Code or Cursor), or use curl:

    curl http://<aperture-address>/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "<model-name>", "messages": [{"role": "user", "content": "hello"}]}'
    

    Replace <aperture-address> with your Aperture instance address and <model-name> with one of the models you configured for this provider.
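The same test request can be built from Python with only the standard library. This sketch constructs the request; the send is commented out so you can replace the placeholders first:

```python
import json
import urllib.request

APERTURE = "http://<aperture-address>"  # replace with your instance address
payload = {
    "model": "<model-name>",  # one of the models configured for the provider
    "messages": [{"role": "user", "content": "hello"}],
}
request = urllib.request.Request(
    f"{APERTURE}/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment to send the request to your Aperture instance:
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp))
```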

  3. Check the Aperture dashboard session list for a new entry. The session shows the model name, token counts, and timestamp.

If the request fails, refer to the Aperture troubleshooting guide.