Bring Your Own Frontend
Use Atlas with any frontend framework that can make HTTP requests.
Atlas is a headless API. The built-in @atlas/web package is a Next.js reference client, but any frontend that can make HTTP requests and consume a streaming response can replace it.
Architecture
```
┌─────────────────────┐
│ Your Frontend       │    HTTP (same-origin or cross-origin)
│ (Nuxt, Svelte,      │  ──────────────────────────────────────►  Atlas Hono API
│ React/Vite,         │    POST /api/chat (streaming)             ├── /api/health
│ TanStack, etc.)     │    POST /api/v1/query (JSON)              ├── /api/v1/query
│                     │    GET /api/v1/conversations              └── /api/v1/conversations
└─────────────────────┘
```

The API server (@atlas/api) is a standalone Hono app that:
- Streams chat responses using the Vercel AI SDK Data Stream Protocol
- Accepts `Authorization: Bearer <key>` headers for API key auth
- Returns CORS headers (configurable via `ATLAS_CORS_ORIGIN`)
- Exposes tool call parts (explore, executeSQL) as structured data in the stream
Framework guides
| Framework | Guide | AI SDK adapter |
|---|---|---|
| Nuxt (Vue) | Nuxt | @ai-sdk/vue |
| SvelteKit | SvelteKit | @ai-sdk/svelte |
| React (Vite) | React/Vite | @ai-sdk/react |
| TanStack Start | TanStack Start | plain fetch / TanStack Query |
Common setup
1. API URL
Your frontend needs to reach the Atlas API. Two approaches:
Same-origin proxy (recommended) -- configure your dev server or reverse proxy to forward /api/* to the Atlas API. No CORS issues, no extra env vars.
Cross-origin -- point directly at the API server and set ATLAS_CORS_ORIGIN on the API side:
```bash
# Atlas API .env
ATLAS_CORS_ORIGIN=http://localhost:5173  # your frontend's origin
```

Managed auth (cookies): when using cookie-based managed auth cross-origin, you must set `ATLAS_CORS_ORIGIN` to an explicit origin (not `*`, which is incompatible with credentialed requests) and set `credentials: "include"` on all fetch requests from your frontend.
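For the same-origin proxy approach, a dev-server proxy rule is usually all you need. A minimal sketch for a Vite-based frontend, assuming the Atlas API listens on `http://localhost:3000` (adjust the target to your setup):

```ts
// vite.config.ts -- forward all /api/* requests to the Atlas API so the
// browser only ever talks to one origin and no CORS setup is needed.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      "/api": {
        target: "http://localhost:3000", // assumption: where the Atlas API runs
        changeOrigin: true,              // rewrite the Host header for the target
      },
    },
  },
});
```

Nuxt, SvelteKit, and TanStack Start have equivalent dev-proxy or server-route mechanisms; the framework guides cover the specifics.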
2. Auth headers
Atlas supports multiple auth modes. Your frontend only needs to handle the one you configured:
| Auth mode | Header | Notes |
|---|---|---|
| `none` | (nothing) | No auth required |
| `simple-key` | `Authorization: Bearer <key>` | Static API key from `ATLAS_API_KEY`. Note: the health endpoint returns `simple-key` as the mode name (not `api-key`) |
| `managed` | Cookie-based (Better Auth) | Set `credentials: "include"` on fetch |
| `byot` | `Authorization: Bearer <jwt>` | JWT from your identity provider |
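The table above maps directly onto the fetch options your client needs. A minimal sketch (the helper name and shape are illustrative, not part of Atlas):

```ts
// Build fetch options for whichever Atlas auth mode you configured.
type AuthMode = "none" | "simple-key" | "managed" | "byot";

function buildFetchOptions(mode: AuthMode, token?: string): RequestInit {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (mode === "simple-key" || mode === "byot") {
    // Bearer token: static API key (simple-key) or IdP-issued JWT (byot)
    headers.Authorization = `Bearer ${token}`;
  }
  return {
    headers,
    // Managed auth is cookie-based, so the browser must send credentials;
    // the other modes work with the default same-origin policy.
    credentials: mode === "managed" ? "include" : "same-origin",
  };
}
```

You would then spread these options into every `fetch` call (or pass them to your AI SDK adapter's transport configuration).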
3. Streaming chat
The POST /api/chat endpoint accepts a Vercel AI SDK-compatible request body and returns a Data Stream response. The AI SDK framework adapters (@ai-sdk/react, @ai-sdk/vue, @ai-sdk/svelte) provide a useChat hook/composable that handles the protocol automatically.
If you prefer not to use an adapter (e.g., TanStack Start), you can consume the stream directly with fetch and parse the Data Stream Protocol manually, or use the JSON endpoint (POST /api/v1/query) for synchronous responses.
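If you do consume the stream directly, the first step is reading the chunked response body. A sketch of that loop (it only shows reading bytes; parsing the Data Stream Protocol frames out of the text is up to you, or use the `useChat` adapters, which handle it):

```ts
// Read a streamed response body (e.g. from POST /api/chat) to completion,
// decoding the bytes as UTF-8 text chunk by chunk.
async function readStream(body: ReadableStream<Uint8Array>): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;                                  // stream finished
    text += decoder.decode(value, { stream: true });  // append decoded chunk
  }
  return text;
}

// Usage (sketch):
//   const res = await fetch("/api/chat", { method: "POST", /* body, headers */ });
//   const full = await readStream(res.body!);
```

In a real UI you would render each chunk as it arrives inside the loop rather than accumulating the whole string first.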
4. Tool call rendering
Atlas streams tool calls as structured parts. The key tool names are:
- `explore` -- filesystem exploration of the semantic layer. Args: `{ command: string }`. Result: string output.
- `executeSQL` -- SQL query execution. Args: `{ sql: string, explanation: string, connectionId?: string }`. Result: see the executeSQL result shape below.
Each framework guide shows how to detect and render these tool parts.
5. Conversation management
Atlas supports persistent conversations via:
- `POST /api/chat` with `{ conversationId }` in the body to continue a conversation
- The response header `x-conversation-id` contains the conversation ID (new or existing)
- `GET /api/v1/conversations` to list conversations
- `GET /api/v1/conversations/:id` to load a conversation with its messages
- `DELETE /api/v1/conversations/:id` to delete a conversation
Conversation support requires DATABASE_URL to be configured on the API.
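Putting the first two bullets together, a chat turn can send the conversation ID and read it back from the response header. A hedged sketch (the exact message shape expected by the AI SDK endpoint may differ by version; treat this body as an illustration, not the canonical format):

```ts
// Send a chat turn; pass `conversationId` as undefined on the first turn
// and the server will create one and echo it in x-conversation-id.
async function continueConversation(
  apiUrl: string,
  conversationId: string | undefined,
  text: string
): Promise<{ id: string | null; res: Response }> {
  const res = await fetch(`${apiUrl}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      conversationId, // omitted on the first turn
      messages: [{ role: "user", content: text }], // simplified message shape
    }),
  });
  // New or existing conversation ID, echoed back by the server
  return { id: res.headers.get("x-conversation-id"), res };
}
```

Persist the returned `id` (e.g. in component state or the URL) and pass it back on subsequent turns to keep the conversation going.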
Data Stream Protocol
The POST /api/chat endpoint returns a Vercel AI SDK Data Stream. The stream contains text chunks and structured tool call parts. The AI SDK adapters (useChat in @ai-sdk/react, @ai-sdk/vue, @ai-sdk/svelte) parse the stream automatically and expose tool calls as message parts.
Tool call parts (AI SDK v6)
In AI SDK v6, tool invocations appear as message parts with per-tool type names rather than a single "tool-invocation" type. Each registered tool gets a type of "tool-{toolName}" (e.g., "tool-explore", "tool-executeSQL"). Unregistered/dynamic tools use type: "dynamic-tool" with a toolName field.
```ts
// Static tool part (for tools registered in the tool set)
{
  type: "tool-explore",   // "tool-{toolName}" — e.g., "tool-explore", "tool-executeSQL"
  toolCallId: string,     // unique ID for this invocation
  state: string,          // "input-streaming" | "input-available" | "output-available" | "output-error" | "output-denied"
  input: { ... },         // tool arguments (shape depends on toolName)
  output: unknown         // tool result (only present when state is "output-available")
}

// Dynamic tool part (for tools not in the static tool set)
{
  type: "dynamic-tool",
  toolName: string,       // "explore" | "executeSQL"
  toolCallId: string,
  state: string,          // same states as above
  input: { ... },
  output: unknown
}
```

Use the `isToolUIPart(part)` helper from `"ai"` to detect both static and dynamic tool parts, and `getToolName(part)` to extract the tool name regardless of which variant it is.
The state field progresses through the lifecycle: "input-streaming" (arguments streaming in), "input-available" (arguments complete, execution started), "output-available" (result ready), "output-error" (execution failed), or "output-denied" (tool call was denied). You typically only need to render "input-available" (show a loading state) and "output-available" (show the result).
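That rendering rule can be sketched as a small pure function (the display strings and function name are illustrative, not part of Atlas):

```ts
// Map a tool part's lifecycle state to what the UI should show.
type ToolPartState =
  | "input-streaming"
  | "input-available"
  | "output-available"
  | "output-error"
  | "output-denied";

function renderToolPart(state: ToolPartState, output?: unknown): string {
  switch (state) {
    case "input-streaming":
    case "input-available":
      return "Running tool...";        // loading indicator while the tool executes
    case "output-available":
      return JSON.stringify(output);   // hand the result to your result renderer
    case "output-error":
      return "Tool failed";
    case "output-denied":
      return "Tool call denied";
  }
}
```

In a real component you would branch on `getToolName(part)` as well, so that `executeSQL` results render as a table while `explore` output renders as preformatted text.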
executeSQL result shape
```ts
// input
{ sql: string, explanation: string, connectionId?: string }

// output (success)
{
  success: true,
  explanation: string,
  columns: string[],               // e.g. ["name", "revenue", "region"]
  rows: Record<string, unknown>[], // e.g. [{ name: "Acme", revenue: 50000, region: "US" }]
  truncated: boolean               // true if the result hit the row limit
}

// output (failure)
{ success: false, error: string }
```

explore result shape
```ts
// input
{ command: string }

// output — a plain string (the stdout of the command)
"catalog.yml\nentities/\nglossary.yml\nmetrics/"
```

On error, the string starts with `"Error: "` or `"Error (exit N):"`.
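The executeSQL success shape maps directly onto a table. A minimal sketch of that conversion (the function name is illustrative):

```ts
// Turn an executeSQL success result into a markdown table string.
interface SqlResult {
  success: true;
  explanation: string;
  columns: string[];
  rows: Record<string, unknown>[];
  truncated: boolean;
}

function toMarkdownTable(result: SqlResult): string {
  const header = `| ${result.columns.join(" | ")} |`;
  const divider = `| ${result.columns.map(() => "---").join(" | ")} |`;
  // Iterate columns per row so cells stay aligned with the header order
  const body = result.rows.map(
    (row) => `| ${result.columns.map((c) => String(row[c])).join(" | ")} |`
  );
  return [header, divider, ...body].join("\n");
}
```

If `truncated` is true, you would typically append a "results truncated" note under the table.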
Concrete example
A single assistant message might contain these parts, in order:
```json
[
  { "type": "text", "text": "Let me look at the schema first." },
  {
    "type": "tool-explore",
    "toolCallId": "call_abc123",
    "state": "output-available",
    "input": { "command": "cat entities/companies.yml" },
    "output": "table: companies\ndescription: ..."
  },
  { "type": "text", "text": "Now I'll query the data." },
  {
    "type": "tool-executeSQL",
    "toolCallId": "call_def456",
    "state": "output-available",
    "input": { "sql": "SELECT name, revenue FROM companies LIMIT 5", "explanation": "Top companies" },
    "output": {
      "success": true,
      "explanation": "Top companies",
      "columns": ["name", "revenue"],
      "rows": [
        { "name": "Acme", "revenue": 50000 },
        { "name": "Globex", "revenue": 42000 }
      ],
      "truncated": false
    }
  },
  { "type": "text", "text": "Acme leads with $50k in revenue." }
]
```

Note: if your tools are registered as dynamic tools (not in a static tool set), the parts will have `type: "dynamic-tool"` with a separate `toolName` field instead. Use `isToolUIPart()` and `getToolName()` from `"ai"` to handle both cases.
Streaming vs. synchronous
| | `POST /api/chat` | `POST /api/v1/query` |
|---|---|---|
| Response | Data Stream (chunked) | JSON (single response) |
| Tool visibility | Real-time tool call parts as they execute | Aggregated in sql[] and data[] arrays |
| Client library | useChat from AI SDK adapters | Plain fetch or any HTTP client |
| Response shape | Stream of message parts (above) | { answer, sql, data, steps, usage } |
The sync endpoint runs the same agent loop but waits for completion and returns a flat JSON object:
```ts
{
  answer: string,                 // the agent's final text response
  sql: string[],                  // all SQL queries executed
  data: Array<{
    columns: string[],
    rows: Record<string, unknown>[]
  }>,
  steps: number,                  // agent steps taken
  usage: { totalTokens: number },
  conversationId?: string,        // conversation ID (when DATABASE_URL is configured)
  pendingActions?: Array<{        // present when the action framework is enabled
    id: string,
    type: string,
    target: string,
    summary: string,
    approveUrl: string,
    denyUrl: string,
  }>
}
```

Tip: the `@useatlas/sdk` package provides a typed TypeScript client for the sync query endpoint (and other Atlas APIs), so you don't need to manage `fetch` calls and response parsing manually.
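If you prefer plain `fetch` over the SDK, a typed wrapper is only a few lines. A hedged sketch (the request body field `question` is an assumption; check your API version for the expected input shape):

```ts
// Typed wrapper around the synchronous query endpoint.
interface QueryResponse {
  answer: string;
  sql: string[];
  data: { columns: string[]; rows: Record<string, unknown>[] }[];
  steps: number;
  usage: { totalTokens: number };
  conversationId?: string;
}

async function runQuery(apiUrl: string, question: string): Promise<QueryResponse> {
  const res = await fetch(`${apiUrl}/api/v1/query`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }), // request body shape is an assumption
  });
  if (!res.ok) throw new Error(`Query failed: ${res.status}`);
  return (await res.json()) as QueryResponse;
}
```

Add the auth header or `credentials: "include"` for your configured auth mode, as described in the auth section above.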
Minimal viable frontend
To build a working Atlas frontend, you only need:
- A chat loop -- `POST /api/chat` with `{ messages }`, parse the Data Stream response (or use `useChat` from an AI SDK adapter)
- Text rendering -- display `type: "text"` parts from assistant messages
- Tool result rendering -- detect tool parts using `isToolUIPart(part)` from `"ai"` and render SQL results as a table
Everything else -- conversation persistence, chart auto-detection, managed auth UI, markdown rendering -- is optional. The @atlas/web package provides all of these, but none are required. If you just want a chatbot that queries your database, the three items above are sufficient.
What @atlas/web adds
The built-in @atlas/web package adds these features on top of @ai-sdk/react. You can port any of them by reading the source in packages/web/src/ui/:
- Conversation sidebar with persistence (requires `DATABASE_URL`)
- Managed auth (Better Auth sign-in/sign-up UI)
- Chart detection and auto-visualization of SQL results
- Markdown rendering in assistant messages
- Error banners with auth-mode-aware messages
The core streaming and tool rendering work identically across all frameworks, since they all consume the same Data Stream Protocol.