Deploy
Deploy Atlas to Railway, Vercel, or Docker in one click.
One-Click Deploy
The fastest way to get Atlas running in production.
Each button deploys from a starter repo — a standalone project with just Atlas, no monorepo. You get a working instance with demo data in under 5 minutes.
Create Your Own Project
The recommended way to deploy Atlas. Scaffold a project, connect your database, and deploy to any platform.
```bash
bun create @useatlas my-app
cd my-app
```

The interactive setup asks for your platform (Vercel, Railway, Docker), database, LLM provider, and API key. It generates a standalone project with the right config for your target.
Generate your semantic layer
```bash
# Profile your database and generate YAML files
bun run atlas -- init

# With LLM enrichment for richer descriptions
bun run atlas -- init --enrich

# Or start with demo data
bun run atlas -- init --demo
```

Deploy to Railway
- Push to GitHub:

```bash
git init && git add -A && git commit -m "Initial commit"
gh repo create my-app --public --source=. --push
```

- Create a new Railway project at railway.app
- Add a Postgres plugin — Railway injects `DATABASE_URL` automatically (Atlas's internal database)
- Click + New > GitHub Repo and select your repo. Railway detects `railway.json` and builds from the Dockerfile
- Set environment variables in the Railway dashboard:

```bash
ATLAS_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
ATLAS_DATASOURCE_URL=postgresql://user:pass@your-analytics-host:5432/mydb
```

- Deploy — Railway builds and starts the container automatically
- Verify at `https://<your-app>.up.railway.app/api/health`
DATABASE_URL is auto-set by Railway's Postgres plugin — it's Atlas's internal database for auth and audit. ATLAS_DATASOURCE_URL is the analytics database you want to query.
Deploy to Vercel
- Push to GitHub (same as above)
- Import your repo in the Vercel Dashboard
- Set environment variables:
```bash
# Option A: Vercel AI Gateway (recommended — single key, built-in observability)
ATLAS_PROVIDER=gateway
AI_GATEWAY_API_KEY=...        # Create at https://vercel.com/~/ai/api-keys

# Option B: Direct provider
ATLAS_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Required for both options
ATLAS_DATASOURCE_URL=postgresql://user:pass@host:5432/dbname
DATABASE_URL=postgresql://user:pass@host:5432/atlas
BETTER_AUTH_SECRET=...        # openssl rand -base64 32
```

- Deploy — Vercel auto-detects Next.js and provisions Neon Postgres if configured
- Verify at `https://<your-app>.vercel.app/api/health`
Single-database shortcut: Set ATLAS_DEMO_DATA=true to skip ATLAS_DATASOURCE_URL entirely. Atlas will use DATABASE_URL_UNPOOLED (preferred) or DATABASE_URL as the analytics datasource, letting you run both Atlas internals and analytics queries against a single Neon Postgres instance.
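The fallback order of the single-database shortcut can be sketched as a small resolver. This is an illustrative sketch, not Atlas's actual code — the function name `resolveDatasourceUrl` and the env-object shape are assumptions; only the variable names and precedence come from this page:

```typescript
// Hypothetical sketch of the single-database shortcut described above.
// When ATLAS_DEMO_DATA=true, the analytics datasource falls back to the
// internal database URL: DATABASE_URL_UNPOOLED first, then DATABASE_URL.
type Env = Record<string, string | undefined>;

function resolveDatasourceUrl(env: Env): string | undefined {
  if (env.ATLAS_DEMO_DATA === "true") {
    return env.DATABASE_URL_UNPOOLED ?? env.DATABASE_URL;
  }
  return env.ATLAS_DATASOURCE_URL;
}
```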
Deploy with Docker
From the examples/docker/ directory:
```bash
docker compose up
```

Or build and run manually from the project root:

```bash
docker build -f Dockerfile -t atlas .
docker run -p 3001:3001 \
  -e ATLAS_PROVIDER=anthropic \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  -e ATLAS_DATASOURCE_URL=postgresql://user:pass@host:5432/dbname \
  atlas
```

Verify at http://localhost:3001/api/health.
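For orientation, a compose file for this stack might look roughly like the sketch below. The service names, image tag, and password values here are assumptions for illustration — not the contents of `examples/docker/docker-compose.yml`; the environment variable names are the ones documented on this page:

```yaml
# Hypothetical sketch of a two-service stack: Hono API + internal Postgres.
services:
  api:
    build: .                      # builds from the Dockerfile in the project root
    ports:
      - "3001:3001"
    environment:
      ATLAS_PROVIDER: anthropic
      ANTHROPIC_API_KEY: sk-ant-...
      ATLAS_DATASOURCE_URL: postgresql://user:pass@host:5432/dbname  # analytics DB
      DATABASE_URL: postgresql://atlas:atlas@db:5432/atlas           # Atlas internals
    depends_on:
      - db
  db:
    image: postgres:16            # assumed version; Atlas's internal database
    environment:
      POSTGRES_USER: atlas
      POSTGRES_PASSWORD: atlas
      POSTGRES_DB: atlas
```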
Platform Details
Railway
What you get with one-click: A Hono API container + managed Postgres + sidecar sandbox. Demo data is pre-seeded — you only need to provide an Anthropic API key.
- Railway auto-sets `DATABASE_URL` via the Postgres plugin
- `railway.json` configures Dockerfile builds, health checks, and restart policy
- The Docker `HEALTHCHECK` polls `/api/health` every 30 seconds
Troubleshooting:
- Health check fails after deploy — Railway Postgres can take 10–30s to provision. The app retries connections automatically. Wait for the next health check cycle.
- Demo data not appearing — Check deploy logs for `seed-demo:` messages. Seeding is idempotent and retries up to 5 times.
Vercel
What you get with one-click: A full-stack Next.js app with the Hono API embedded via a catch-all route. Neon Postgres is provisioned automatically. The explore tool uses Vercel Sandbox (Firecracker microVM) for hardware-level isolation.
- `ATLAS_PROVIDER=gateway` routes through Vercel's AI Gateway with usage tracking in the Vercel dashboard
- The catch-all route sets `maxDuration = 300` (5 minutes) — this requires the Pro plan. The Hobby plan limits `maxDuration` to 60 seconds, which may cause timeouts on complex multi-step queries. Set `maxDuration = 60` if you're on Hobby. See Vercel plan limits
- `@vercel/sandbox` is auto-detected when `ATLAS_RUNTIME=vercel` is set or the `VERCEL` env var is present (set automatically on Vercel deployments)
Docker
What you get: A Docker Compose stack with the Hono API + Postgres. Optional nsjail isolation for the explore tool.
- Images are based on `oven/bun` (see the Dockerfile for the pinned version)
- The semantic layer (`semantic/`) is baked into the image at build time — rebuild if you update YAMLs
- The example Dockerfile (`examples/docker/Dockerfile`) defaults to `INSTALL_NSJAIL=true` — nsjail is included in the image by default. The production Dockerfile (`deploy/api/Dockerfile`) defaults to `INSTALL_NSJAIL=false`, since the production deployment uses a sidecar sandbox instead. Override with `--build-arg INSTALL_NSJAIL=false` (or `=true`) as needed
For development workflows where you're iterating on the semantic layer, mount it as a volume instead of baking it into the image:
```bash
docker run -v ./semantic:/app/semantic -p 3001:3001 ... atlas
```

This avoids rebuilding the image on every YAML change.
Environment Variables
Every deployment needs these:
| Variable | Example | Purpose |
|---|---|---|
| `ATLAS_PROVIDER` | `anthropic` | LLM provider (`anthropic`, `openai`, `bedrock`, `ollama`, `gateway`) |
| Provider API key | `ANTHROPIC_API_KEY=sk-ant-...` | Authentication for the LLM |
| `ATLAS_DATASOURCE_URL` | `postgresql://...` or `mysql://...` | Analytics database to query |
| `DATABASE_URL` | `postgresql://atlas:atlas@host:5432/atlas` | Atlas internal Postgres for auth and audit (auto-set on Railway and Vercel) |
Optional (safe defaults):
| Variable | Default | Description |
|---|---|---|
| `ATLAS_MODEL` | Provider default | Override the LLM model |
| `ATLAS_ROW_LIMIT` | `1000` | Max rows returned per query |
| `ATLAS_QUERY_TIMEOUT` | `30000` | Query timeout in ms |
| `PORT` | `3001` | Set automatically by most platforms |
See Environment Variables for the full list.
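Pulling the required and optional variables above together, a typical `.env` for a direct-provider deployment might look like this (all values are placeholders):

```bash
# LLM provider
ATLAS_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Databases
ATLAS_DATASOURCE_URL=postgresql://user:pass@host:5432/dbname   # analytics DB to query
DATABASE_URL=postgresql://atlas:atlas@host:5432/atlas          # Atlas internals (auth, audit)

# Optional overrides (defaults shown)
ATLAS_ROW_LIMIT=1000
ATLAS_QUERY_TIMEOUT=30000
```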
Authentication
Auth is opt-in. Set one variable to enable:
| Variable | Auth mode | Description |
|---|---|---|
| `ATLAS_API_KEY` | Simple key | Single shared key via `Authorization: Bearer <key>` |
| `BETTER_AUTH_SECRET` | Managed | Email/password login with sessions. Min 32 chars. Requires `DATABASE_URL` |
| `ATLAS_AUTH_JWKS_URL` | BYOT | Stateless JWT verification against your identity provider |
See Authentication for detailed configuration.
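In simple-key mode, clients send the shared key as a bearer token. The helper below is a hypothetical illustration of the header shape described in the table — the function name is not from Atlas:

```typescript
// Builds the Authorization header for ATLAS_API_KEY (simple key) mode.
function bearerHeaders(apiKey: string): Record<string, string> {
  return { Authorization: `Bearer ${apiKey}` };
}

// A request to a protected endpoint would then look like:
// fetch(`${baseUrl}/api/...`, { headers: bearerHeaders(process.env.ATLAS_API_KEY!) })
```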
Security & Isolation
Atlas auto-detects the best available sandbox for the explore tool, using a six-tier priority:
| Priority | Platform | Sandbox | Isolation |
|---|---|---|---|
| 0 | Any (via Plugin SDK) | Sandbox plugin | Plugin-defined |
| 1 | Vercel | Firecracker microVM | Hardware-level (strongest) |
| 2 | Self-hosted Docker | nsjail explicit (ATLAS_SANDBOX=nsjail) | Kernel-level |
| 3 | Railway | Sidecar service (ATLAS_SANDBOX_URL) | Container-level |
| 4 | Self-hosted | nsjail auto-detect (binary on PATH) | Kernel-level |
| 5 | Local dev | just-bash + OverlayFS | Path-traversal protection |
Deploying for your own team? Any tier is fine — you're protecting against prompt injection edge cases, not hostile tenants. Multi-tenant? Use Vercel or nsjail for real process isolation.
See Sandbox Architecture for the full threat model.
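The priority table reads as a first-match-wins detection chain. The sketch below is illustrative, not Atlas's implementation — the env var names (`ATLAS_RUNTIME`, `VERCEL`, `ATLAS_SANDBOX`, `ATLAS_SANDBOX_URL`) are from this page, while the function and the `hasPlugin`/`nsjailOnPath` flags stand in for checks the real system would perform:

```typescript
type Env = Record<string, string | undefined>;

// First-match-wins selection mirroring the six-tier table above.
function pickSandbox(env: Env, hasPlugin: boolean, nsjailOnPath: boolean): string {
  if (hasPlugin) return "plugin";                // tier 0: Plugin SDK sandbox
  if (env.ATLAS_RUNTIME === "vercel" || env.VERCEL) {
    return "vercel-microvm";                     // tier 1: Firecracker microVM
  }
  if (env.ATLAS_SANDBOX === "nsjail") return "nsjail";   // tier 2: explicit nsjail
  if (env.ATLAS_SANDBOX_URL) return "sidecar";           // tier 3: sidecar service
  if (nsjailOnPath) return "nsjail";                     // tier 4: auto-detected nsjail
  return "just-bash";                                    // tier 5: OverlayFS fallback
}
```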
Health Check
All deployments expose a health endpoint:
```
GET /api/health
```

Returns `{"status":"ok"}`, `{"status":"degraded"}`, or `{"status":"error"}` with sub-checks for datasource, provider, semantic layer, internal DB, explore (sandbox backend), auth, and slack. Returns HTTP 200 for ok/degraded, HTTP 503 for error. Always public (no auth required).
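The status-to-HTTP mapping is simple enough to state in code. This is a sketch of the documented behavior, not the actual handler:

```typescript
type HealthStatus = "ok" | "degraded" | "error";

// ok and degraded both return 200, so load balancers keep routing traffic
// while sub-checks flag partial problems; only error returns 503.
function healthHttpStatus(status: HealthStatus): number {
  return status === "error" ? 503 : 200;
}
```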