Securely connect Claude Code to Postgres via MCP
Connecting Claude Code to a Postgres database through MCP takes about two minutes and one CLI command. The setup tutorials stop there, which is fine if you're one developer with a local database, but leaves some gaps when you're rolling this out to a team that touches staging or production data.
This guide covers the quick setup, compares the Postgres MCP servers worth considering, and then gets into the part most tutorials skip: how to handle credentials, access control, and audit trails when more than one person is involved.
Add a Postgres MCP server to Claude Code in two minutes
The fastest path from "Claude Code installed" to "querying your Postgres schema with natural language" is a single CLI command:
```
claude mcp add postgres -- npx -y @modelcontextprotocol/server-postgres \
  "postgresql://USER:PASSWORD@HOST:PORT/DBNAME"
```

That registers a local stdio MCP server. Restart Claude Code, type /mcp, and confirm the server shows as active. You can now ask Claude to list tables, explain foreign key relationships, or generate queries against your actual schema.
For project-level configuration that travels with your repo, drop a .mcp.json in the project root:
```
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://username:password@localhost:5432/mydb"
      ]
    }
  }
}
```

Claude Code picks this up automatically when you open the project. The same pattern works for Cursor and Codex with minor config-file differences.
Choose the right Postgres MCP server
The official @modelcontextprotocol/server-postgres gives you read-only schema introspection and SELECT queries, fine for exploring a local dev database. If you need more:
- pgEdge Postgres MCP adds detailed schema introspection (primary keys, foreign keys, indexes, constraints), `pg_stat_statements` performance metrics, multi-database support, and HTTP/TLS transport. Works with any Postgres 14+, including Amazon RDS.
- Postgres MCP Pro (crystaldba) focuses on index tuning, EXPLAIN plan analysis, database health checks, and configurable read/write access. Supports both stdio and SSE transports.
- MintMCP takes a hosted-gateway approach: configure your database once, get a managed MCP server with OAuth, audit trails, and rate limiting.
The right choice depends on whether you're one developer poking at a local database or a team that needs governance.
Lock down production database access
Every tutorial shows you pasting a connection string into a JSON file. That works, but it also means your production password is sitting in plaintext on disk, possibly checked into version control.
Create a dedicated Postgres role with only the permissions Claude actually needs. For most MCP use cases, that's SELECT on the schemas you care about, nothing else:
```
CREATE USER claude_readonly WITH PASSWORD 'a-real-password';
GRANT CONNECT ON DATABASE prod_db TO claude_readonly;
GRANT USAGE ON SCHEMA public TO claude_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO claude_readonly;
```

Use .pgpass or environment variables instead of embedding credentials in .mcp.json. The pgEdge server reads standard PG* environment variables, so you can keep the password out of your config entirely.
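A minimal sketch of the .pgpass route, assuming the read-only role above and a hypothetical host db.internal.example.com. Clients built on libpq (psql, for instance) read this file automatically when no password is supplied; whether a given MCP server honors it depends on its database driver, so check before relying on it:

```
# Store the credential in ~/.pgpass instead of .mcp.json.
# Format: hostname:port:database:username:password
echo 'db.internal.example.com:5432:prod_db:claude_readonly:a-real-password' >> ~/.pgpass
chmod 600 ~/.pgpass   # libpq ignores the file unless permissions are 0600
```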
Separate configs per environment. Configure dev-db, staging-db, and prod-db as distinct MCP server entries so you control which credentials, and which permission levels, each one carries.
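One way to express that separation, sketched with hypothetical hostnames; the connection strings deliberately omit passwords, which come from .pgpass or PG* environment variables as described above (whether your chosen server reads them is an assumption to verify), and a prod-db entry would follow the same pattern with the most restricted role:

```
{
  "mcpServers": {
    "dev-db": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://dev_user@localhost:5432/dev_db"
      ]
    },
    "staging-db": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://claude_readonly@staging-db.internal:5432/staging_db"
      ]
    }
  }
}
```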
Read-only enforcement at the Postgres role level is the only guarantee that matters. The MCP server can promise read-only mode, but a misconfigured server or a future update could change that. The database role is the backstop.
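If you want to verify that backstop directly, a quick query against information_schema shows exactly what the role has been granted, independent of anything the MCP server claims about itself:

```
-- List every table-level privilege held by the MCP role.
-- Anything beyond SELECT here means the database, not the server, is misconfigured.
SELECT table_schema, table_name, privilege_type
FROM information_schema.role_table_grants
WHERE grantee = 'claude_readonly'
ORDER BY table_schema, table_name;
```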
The problem that appears at team scale
Everything above works for a single developer. The trouble starts when you have twenty or fifty engineers, each with their own .mcp.json, each needing credentials for staging or production Postgres, each using Claude Code or Cursor with slightly different local configurations.
- Credential sprawl. Database passwords or MCP gateway API keys live on every developer laptop. Revoking one person's access means rotating a shared password for everyone.
- No identity attribution. If ten developers connect through the same Postgres role, `pg_stat_activity` shows ten identical `claude_readonly` sessions. Who ran the expensive query? You don't know (see the query sketch after this list).
- No centralized audit trail. Each local MCP server logs independently, or doesn't. There's no single view of which developer queried which database, when, and how many tokens the conversation consumed.
- Config drift. One developer points at production, another at staging, a third at a stale local copy. Nobody is sure which environment Claude is actually reasoning about.
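To see the attribution gap concretely, here is roughly what you would run, and what it fails to tell you, when all traffic funnels through one shared role:

```
-- Every session looks the same: same role, often the same application_name.
-- Nothing here maps a connection back to the developer who opened it.
SELECT pid, usename, application_name, client_addr, state, query_start
FROM pg_stat_activity
WHERE usename = 'claude_readonly';
```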
Hosted MCP gateways address some of this with OAuth and audit logging. But they introduce another external dependency and another set of credentials to manage, which reintroduces the very problem you were trying to solve.
How a network-level AI gateway changes this
If your organization already uses Tailscale, there's an approach that sidesteps credential distribution entirely.
Aperture by Tailscale is an AI gateway that runs inside your tailnet. Instead of distributing API keys or database passwords to every developer's local config, Aperture identifies users by their existing Tailscale identity. Every request carries cryptographic proof of who sent it, from which device, without any client-side credentials.
Claude Code connects to Aperture by pointing ANTHROPIC_BASE_URL at your Aperture hostname (e.g., http://ai); a minimal shell example follows the list below. With that in place, you get:
- No API key on the client: Aperture injects provider credentials automatically.
- Per-user attribution: every request is logged with the user's login name, device ID, session context, and token counts.
- Tailnet-native access control: granting or revoking access to LLM providers uses your existing tailnet ACLs, with no database passwords or MCP configs to touch.
- Visible MCP tool calls: Aperture's Tool Use page shows what your Postgres MCP server is actually doing across sessions.
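The client-side change is the one mentioned above. A minimal sketch, assuming your Aperture instance is reachable at http://ai inside the tailnet:

```
# Point Claude Code at the gateway instead of api.anthropic.com.
# No ANTHROPIC_API_KEY needed on this machine: Aperture injects provider
# credentials server-side and attributes each request to your Tailscale identity.
export ANTHROPIC_BASE_URL="http://ai"
claude
```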
This doesn't replace the Postgres MCP server. You still pick the one that fits, whether that's official, pgEdge, Postgres MCP Pro, or another option. What changes is the layer above: instead of scattering secrets across laptops and hoping every developer's local setup is correct, you route AI traffic through an identity-aware gateway that handles credentials, attribution, and audit centrally.