Use OpenAI-compatible tools with Aperture
Configure OpenAI-compatible LLM clients to send requests through Aperture by Tailscale. This guide covers any LLM client that supports a custom API base URL, including Gemini CLI, Roo Code, Cline, and custom applications.
For client-specific instructions, refer to the guides for Claude Code, Codex, and OpenCode.
Prerequisites
Before you begin, you need:
- An Aperture instance with at least one configured provider, accessible from your device. Refer to get started with Aperture if you have not set this up.
- The Aperture host URL (default: http://ai) accessible from your device. Use http://, not https://.
To avoid unexpected TLS issues, use http:// for the Aperture URL when configuring LLM clients. All connections remain encrypted using WireGuard, even when HTTPS is not used.
Aperture routes requests based on the model name, not the LLM client. Any LLM client configured to use Aperture can access any provider your admin has set up. Refer to the provider compatibility reference for the full list of supported providers and API formats.
Configure the client
In your LLM client's settings, set the API base URL to your Aperture instance and provide a placeholder API key:
- API Base URL: http://ai/v1
- API Key: Leave empty or set to any value (Aperture ignores client-provided keys and injects credentials automatically)
The exact setting names vary by client. Look for fields labeled "API Base URL," "Base URL," "API Endpoint," or similar.
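Many OpenAI-compatible clients also read these settings from environment variables. As a sketch, the variable names below follow the OpenAI SDK convention and are assumptions, so check your client's documentation for the exact names it honors:

```python
import os

# Point OpenAI-compatible clients at Aperture instead of api.openai.com.
# Variable names assume the OpenAI SDK convention; your client may differ.
os.environ["OPENAI_BASE_URL"] = "http://ai/v1"   # default Aperture host
os.environ["OPENAI_API_KEY"] = "placeholder"     # Aperture ignores client-provided keys
```

Set these in the shell profile or service environment where the client runs so every invocation picks them up.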
Custom applications
You can point any HTTP client at the Aperture URL as long as it uses a supported provider API format. For example, to send a request using the OpenAI chat completions API:
curl -s http://ai/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Hello"}]
}'
Aperture routes the request to the appropriate provider based on the model name.
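The same request can be sent from a custom application using only the Python standard library. This is a minimal sketch: build_request and chat are illustrative helper names, and http://ai is assumed to be your Aperture host.

```python
import json
import urllib.request

APERTURE_URL = "http://ai/v1"  # default Aperture host; adjust if yours differs


def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completions request aimed at Aperture."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{APERTURE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        # No Authorization header needed: Aperture injects provider credentials.
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def chat(model: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because Aperture routes on the model name, the same chat helper can reach any provider your admin has configured simply by changing the model argument.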
Verify the connection
- Send a test request using your configured client.
- Open the Aperture dashboard at http://ai/ui/ and confirm the request appears on the Logs page.
If the request does not appear, refer to the Aperture troubleshooting guide.
Next steps
- Grant model access to users: Control which models each user or group can access through Aperture.
- View your usage dashboards: Monitor token consumption, costs, and session activity across your organization.
- Set per-user spending limits: Configure quota buckets to control costs for individual users.