Set up Vercel AI Gateway

Aperture by Tailscale is currently in alpha.

Configure a Vercel AI Gateway provider in Aperture so your team can access models from multiple LLM providers through a single gateway endpoint. Vercel AI Gateway aggregates providers like OpenAI and Anthropic behind one API, supporting both the Chat Completions and Responses APIs.

Aperture routes requests based on the model name, not the LLM client. Any LLM client configured to use Aperture can access any provider your admin has set up. Refer to the provider compatibility reference for the full list of supported providers and API formats.
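
As a rough illustration of model-based routing (a hypothetical sketch, not Aperture's implementation; every name here is made up for the example):

```python
# Hypothetical sketch of model-name routing; not Aperture's actual code.
# The provider table mirrors the configuration format used in this guide.
providers = {
    "vercel": {
        "baseurl": "https://ai-gateway.vercel.sh",
        "models": ["anthropic/claude-sonnet-4-5", "openai/gpt-5-nano"],
    },
}

def route(model: str) -> str:
    """Return the first configured provider that lists `model`."""
    for name, cfg in providers.items():
        if model in cfg["models"]:
            return name
    raise LookupError(f"no provider configured for model {model!r}")

print(route("anthropic/claude-sonnet-4-5"))  # prints: vercel
```

Because the lookup keys off the model string alone, the same request succeeds no matter which LLM client sent it.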

Prerequisites

Before you begin, you need:

- A running Aperture instance that you can administer.
- A Vercel AI Gateway API key to use as the apikey value in the provider configuration.

Configure the provider

Add Vercel AI Gateway as a provider in your Aperture configuration:

{
  "providers": {
    "vercel": {
      "baseurl": "https://ai-gateway.vercel.sh",
      "apikey": "<your-vercel-token>",
      "models": [
        "anthropic/claude-sonnet-4-5",
        "openai/gpt-5-nano"
      ],
      "cost_basis": "vercel",
      "compatibility": {
        "openai_chat": true,
        "openai_responses": true
      }
    }
  }
}

You must set cost_basis to "vercel" because Aperture cannot auto-infer pricing for gateway providers. Without an explicit cost_basis, Aperture does not produce cost estimates for requests to this provider.

Model names use a provider/model prefix format. The openai_chat flag enables the Chat Completions API, and openai_responses enables the Responses API, which tools like OpenAI Codex use. Refer to the provider compatibility reference for the full list of flags.
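
The two API formats differ in endpoint path and payload shape. A sketch of both request bodies, assuming the standard OpenAI wire formats (the only Aperture-specific detail here is the model prefix):

```python
import json

model = "openai/gpt-5-nano"  # provider/model prefix format

# Chat Completions API (POST /v1/chat/completions), enabled by openai_chat.
chat_payload = {
    "model": model,
    "messages": [{"role": "user", "content": "hello"}],
}

# Responses API (POST /v1/responses), enabled by openai_responses;
# this is the format tools like OpenAI Codex speak.
responses_payload = {
    "model": model,
    "input": "hello",
}

print(json.dumps(chat_payload, indent=2))
print(json.dumps(responses_payload, indent=2))
```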

After configuring the provider:

  1. Grant model access to the users or groups that need these models.
  2. Set up LLM clients to connect coding tools through Aperture.
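
Step 2 varies per client, but tools built on the OpenAI SDKs typically read two environment variables at startup. A minimal sketch (whether a given tool honors these variables, and whether a placeholder API key suffices, are assumptions to verify per client):

```python
import os

# Point OpenAI-SDK-based tools at Aperture instead of api.openai.com.
# Replace <aperture-address> with your Aperture instance address.
os.environ["OPENAI_BASE_URL"] = "http://<aperture-address>/v1"
# Aperture sits between the client and the upstream provider, so a
# placeholder value is often enough here; check your client's docs.
os.environ["OPENAI_API_KEY"] = "unused"
```

The shell equivalent is `export OPENAI_BASE_URL=...` in the environment you launch the tool from.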

Verify the provider

  1. Open the Aperture dashboard and confirm the provider appears with the expected models.

  2. Send a test request through a connected coding tool (such as Claude Code or Cursor), or use curl:

    curl http://<aperture-address>/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "<model-name>", "messages": [{"role": "user", "content": "hello"}]}'
    

    Replace <aperture-address> with your Aperture instance address and <model-name> with one of the models you configured for this provider.

  3. Check the Aperture dashboard session list for a new entry. The session shows the model name, token counts, and timestamp.
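
To read those same numbers from the raw curl response in step 2, pull the usage object out of the JSON. This assumes the standard Chat Completions response shape; the sample values below are fabricated for illustration:

```python
import json

# Illustrative response in the standard Chat Completions shape;
# the values are made up, not real Aperture output.
raw = """{
  "model": "openai/gpt-5-nano",
  "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
  "usage": {"prompt_tokens": 8, "completion_tokens": 2, "total_tokens": 10}
}"""

resp = json.loads(raw)
usage = resp["usage"]
print(resp["model"], usage["prompt_tokens"], usage["completion_tokens"])
```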

If the request fails, refer to the Aperture troubleshooting guide.