
Securely connect Claude Code to ClickHouse via MCP

The fastest path to connecting Claude Code to ClickHouse is a two-line config change. The harder problem, the one most guides skip, is doing it safely when your database isn't the public playground endpoint, when more than one developer needs access, and when you care about who ran which query against production.

This article covers both: the working setup, and the governance layer you'll need before rolling it out beyond your laptop.

How the ClickHouse MCP integration works

The ClickHouse MCP server exposes a small set of tools over the Model Context Protocol. It can execute a SELECT query, list databases, list tables, and describe a table schema. Claude Code calls these tools when you ask it analytical questions, and the server translates them into ClickHouse HTTP API requests.
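At the HTTP layer, a SELECT tool call reduces to an ordinary ClickHouse query request. As a rough sketch (the host and credentials are placeholders, and the real server uses a ClickHouse client library rather than raw curl):

```shell
# Approximate HTTP equivalent of the MCP server's SELECT tool.
# your-host and your-password are placeholders for your own deployment.
curl -s "https://your-host.clickhouse.cloud:8443/" \
  --user "default:your-password" \
  --data-binary "SELECT version()"
```

Keeping this mapping in mind helps when debugging: if this request fails from your machine, the MCP server will fail the same way.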

There are three deployment patterns in practice:

  • Local stdio server (uv/npx): best for solo development and public or Cloud endpoints. Credentials live in local environment variables, and the client must be able to reach ClickHouse directly.
  • ClickHouse Cloud remote MCP: best for ClickHouse Cloud customers. Authentication runs through ClickHouse Cloud OAuth, and it works over the public internet.
  • Private gateway (self-managed): best for private VPCs, staging, and production. Credentials are centralized, and access typically runs through a tailnet or VPN.

The first two patterns are well-documented. The third is where most teams actually end up needing to be.

Minimum working setup: local stdio server

For Claude Code against ClickHouse Cloud or the public playground, the official mcp-clickhouse package via uv is the right starting point.

Prerequisites: uv installed, Claude Code authenticated.

Add the server to your Claude Code MCP config:

claude mcp add clickhouse \
  --env CLICKHOUSE_HOST=your-host.clickhouse.cloud \
  --env CLICKHOUSE_PORT=8443 \
  --env CLICKHOUSE_USER=default \
  --env CLICKHOUSE_PASSWORD=your-password \
  --env CLICKHOUSE_SECURE=true \
  --env CLICKHOUSE_VERIFY=true \
  -- uv run --with mcp-clickhouse --python 3.10 mcp-clickhouse

Or, if you prefer, add the entry to your MCP config file directly (.mcp.json in the project root for a project-scoped server):

{
  "mcpServers": {
    "clickhouse": {
      "command": "uv",
      "args": ["run", "--with", "mcp-clickhouse", "--python", "3.10", "mcp-clickhouse"],
      "env": {
        "CLICKHOUSE_HOST": "your-host.clickhouse.cloud",
        "CLICKHOUSE_PORT": "8443",
        "CLICKHOUSE_USER": "default",
        "CLICKHOUSE_PASSWORD": "your-password",
        "CLICKHOUSE_SECURE": "true",
        "CLICKHOUSE_VERIFY": "true"
      }
    }
  }
}
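One caution if this config file lives in a repository: avoid committing the password literally. Recent Claude Code versions expand ${VAR} references in MCP config env values, so you can keep the secret in your shell environment instead of the file. A sketch of the env block only, assuming that expansion is available in your version:

```json
"env": {
  "CLICKHOUSE_HOST": "your-host.clickhouse.cloud",
  "CLICKHOUSE_USER": "default",
  "CLICKHOUSE_PASSWORD": "${CLICKHOUSE_PASSWORD}"
}
```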

Restart Claude Code. Run /mcp to confirm the server is listed and connected. Then test with a prompt like: "List the databases available and show me the schema for any tables in the default database."

Common failure: spawn uv ENOENT. This means Claude Code can't find uv in its PATH. Fix it by using the full binary path: run which uv and replace "command": "uv" with the absolute path (e.g., /Users/yourname/.cargo/bin/uv).
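Applied to the JSON config above, the fix touches only the command field (the exact path depends on how uv was installed on your machine):

```json
"clickhouse": {
  "command": "/Users/yourname/.cargo/bin/uv",
  "args": ["run", "--with", "mcp-clickhouse", "--python", "3.10", "mcp-clickhouse"]
}
```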

ClickHouse Cloud remote MCP: OAuth flow

If you're on ClickHouse Cloud, the built-in remote MCP endpoint removes the local server entirely. Add it, then complete the browser-based OAuth login when /mcp prompts you to authenticate:

claude mcp add --transport http clickhouse-cloud https://mcp.clickhouse.cloud/mcp
claude
/mcp

Securing AI access for enterprises

Here's where most guides stop, and where most production deployments actually start.

If your ClickHouse instance lives inside a VPC, a private subnet, or a staging environment that isn't exposed to the public internet, neither the local stdio server nor the Cloud remote MCP will reach it without additional network work. The local server runs on the developer's laptop, which means the laptop needs a direct network path to ClickHouse. In practice, that means:

  • Putting ClickHouse behind a public endpoint with IP allowlisting (operationally painful, security risk)
  • Running a VPN and hoping the developer's machine is connected when they need access
  • Distributing long-lived database credentials to every developer who wants MCP access

None of these are good answers at scale.

The cleaner architectural answer is a private AI gateway that sits inside your network, authenticates users by identity rather than shared credentials, and proxies MCP requests to ClickHouse on their behalf. Aperture by Tailscale is built for exactly this pattern: it runs inside a tailnet, identifies users by their Tailscale identity, injects provider credentials centrally, and captures every tool call and session for audit.

For teams already running Tailscale for private network access, this means Claude Code on a developer's laptop can reach a private ClickHouse instance through the tailnet, without the database being exposed publicly and without database passwords being distributed to individual machines.
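With a gateway in place, the developer-side configuration shrinks to a single HTTP MCP endpoint reachable over the tailnet; no database credentials live on the laptop. The hostname below is hypothetical and depends on how the gateway is deployed:

```shell
# Hypothetical gateway endpoint on the tailnet; credentials are injected
# server-side, so nothing sensitive is configured on the client.
claude mcp add --transport http clickhouse-gw https://aperture.your-tailnet.ts.net/mcp
```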
