# Changelog (/changelog) --- # Introduction to Atlas (/) Atlas lets you connect your database, auto-generate a semantic layer, and query your data in plain English. Every query is validated, read-only access is enforced, and it deploys anywhere. Getting Started [#getting-started] * [Hosted Quick Start](/getting-started/hosted) -- Sign up at app.useatlas.dev and start querying in minutes * [Self-Hosted Quick Start](/getting-started/quick-start) -- Deploy Atlas on your own infrastructure * [Connect Your Data](/getting-started/connect-your-data) -- Connect to PostgreSQL, MySQL, or plugin-based sources (ClickHouse, Snowflake, DuckDB, Salesforce) * [Semantic Layer](/getting-started/semantic-layer) -- Define your data schema in YAML so the agent understands your database * [Demo Datasets](/getting-started/demo-datasets) -- Try Atlas with pre-built demo data Guides [#guides] * [MCP Server](/guides/mcp) -- Use Atlas in Claude Desktop, Cursor, and other MCP clients * [Slack Integration](/guides/slack) -- Query your data from Slack with slash commands and threaded follow-ups * [Python Data Analysis](/guides/python) -- Sandboxed Python execution for charts and statistical analysis * [Scheduled Tasks](/guides/scheduled-tasks) -- Run recurring queries and deliver results via email, Slack, or webhook * [Actions Framework](/guides/actions) -- Approval-gated write operations (email, JIRA, and more) * [Admin Console](/guides/admin-console) -- Monitor connections, browse the semantic layer, and manage users * [Embedding Widget](/guides/embedding-widget) -- Add an Atlas chat widget to any website with a script tag * [Sharing Conversations](/guides/sharing-conversations) -- Share conversations via public links and embed them * [Troubleshooting](/guides/troubleshooting) -- Diagnose and fix common issues Deployment [#deployment] * [Deploy](/deployment/deploy) -- One-click deploy to Railway, Vercel, or Docker * [Authentication](/deployment/authentication) -- Configure auth modes: none, 
API key, managed, BYOT Frameworks [#frameworks] * [Bring Your Own Frontend](/frameworks/overview) -- Use Atlas with any frontend framework * Framework-specific guides for [React/Vite](/frameworks/react-vite), [Nuxt](/frameworks/nuxt), [SvelteKit](/frameworks/sveltekit), and [TanStack Start](/frameworks/tanstack-start) Security [#security] * [SQL Validation Pipeline](/security/sql-validation) -- 7-layer defense-in-depth for read-only SQL enforcement Reference [#reference] * [Environment Variables](/reference/environment-variables) -- Complete env var reference with defaults * [CLI Reference](/reference/cli) -- All commands: init, diff, query, doctor, validate, mcp, and more * [Configuration](/reference/config) -- Declarative config with `atlas.config.ts` * [SDK Reference](/reference/sdk) -- TypeScript SDK for programmatic API access * [React Hooks Reference](/reference/react) -- Headless React hooks for custom Atlas chat UIs * [API Reference](/reference/api) -- HTTP API endpoints for chat, queries, and conversations Plugins [#plugins] * [Plugin Authoring Guide](/plugins/authoring-guide) -- Build datasource, context, interaction, action, and sandbox plugins Architecture [#architecture] * [Sandbox Architecture](/architecture/sandbox) -- Platform-agnostic code execution sandboxing design Comparisons [#comparisons] * [Atlas vs Alternatives](/comparisons) -- Feature matrix comparing Atlas, WrenAI, Vanna, Metabase, Cube, and ThoughtSpot Changelog [#changelog] * [Changelog](/changelog) -- What's new in Atlas --- # Row-Level Security (/security/row-level-security) This page covers content for **workspace admins** (setting up RLS policies), **operators** (auth mode compatibility and claim resolution), and **end users** (understanding data access boundaries). Jump to the section relevant to your role, or read end-to-end for the full picture. Row-Level Security (RLS) injects WHERE conditions into every SQL query based on the authenticated user's claims. 
This ensures tenants only see their own data -- without relying on the agent to add the right filters. RLS is **fail-closed**. If the user's claims are missing or cannot be resolved, the query is blocked entirely. This is by design -- silent data leaks are worse than a blocked query. How It Works [#how-it-works] 1. You define **policies** that map JWT claims to table columns 2. When the agent generates a SQL query, Atlas resolves the claim value from the authenticated user 3. Atlas injects `WHERE table.column = 'claim_value'` (or `IN (...)` for array claims) into the query AST (not string concatenation) 4. The injection runs **after** plugin `beforeQuery` hooks -- plugins cannot strip RLS conditions 5. Claim values are SQL-escaped (single quotes doubled) before injection Policies support single or multi-column conditions, array-valued JWT claims (generates `IN (...)` instead of `=`), and configurable AND/OR logic between policies. The AST manipulation handles CTEs, subqueries, UNIONs, derived tables, and table aliases correctly. Custom validators (SOQL, GraphQL) bypass RLS and must enforce their own filtering. *** Quick Start (Environment Variables) [#quick-start-environment-variables] The simplest setup uses three environment variables for a single-policy configuration: ```bash ATLAS_RLS_ENABLED=true ATLAS_RLS_COLUMN=tenant_id # Column name to add to WHERE clauses ATLAS_RLS_CLAIM=org_id # JWT claim path — resolved from the authenticated user's token ``` This creates a wildcard policy that applies to **all tables** -- equivalent to: ```typescript rls: { enabled: true, policies: [{ tables: ["*"], column: "tenant_id", claim: "org_id" }], } ``` Both `ATLAS_RLS_COLUMN` and `ATLAS_RLS_CLAIM` are required when `ATLAS_RLS_ENABLED=true`. Atlas will fail to start if either is missing. 
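Conceptually, the env-var shorthand is a small translation step into the config object shown above, failing fast when a required variable is missing. A minimal sketch under assumed plain-object shapes (`rlsConfigFromEnv` is an illustrative name, not an Atlas API):

```typescript
type RLSPolicy = { tables: string[]; column: string; claim: string };
type RLSConfig = { enabled: boolean; policies: RLSPolicy[] };

// Translate the three-variable shorthand into the equivalent config object.
// Throws when ATLAS_RLS_ENABLED=true but a required variable is missing,
// mirroring Atlas's refusal to start.
function rlsConfigFromEnv(env: Record<string, string | undefined>): RLSConfig {
  if (env.ATLAS_RLS_ENABLED !== "true") return { enabled: false, policies: [] };
  const column = env.ATLAS_RLS_COLUMN;
  const claim = env.ATLAS_RLS_CLAIM;
  if (!column || !claim) {
    throw new Error(
      "ATLAS_RLS_ENABLED=true requires both ATLAS_RLS_COLUMN and ATLAS_RLS_CLAIM"
    );
  }
  // The shorthand always produces a single wildcard policy.
  return { enabled: true, policies: [{ tables: ["*"], column, claim }] };
}
```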
*** Advanced Configuration (atlas.config.ts) [#advanced-configuration-atlasconfigts] For multi-policy setups, use the config file: ```typescript import { defineConfig } from "@atlas/api/lib/config"; export default defineConfig({ rls: { enabled: true, policies: [ // Wildcard — applies tenant_id filter to every table in every query { tables: ["*"], column: "tenant_id", claim: "org_id" }, // Target specific tables — uses a nested JWT claim path (dot-delimited) { tables: ["orders", "shipments"], column: "region", claim: "app_metadata.region" }, // Schema-qualified table name — the full "schema.table" must match { tables: ["analytics.events"], column: "workspace_id", claim: "workspace" }, ], }, }); ``` Policy Schema [#policy-schema] Each policy uses either the single-condition shorthand (`column` + `claim`) or the multi-condition form (`conditions`): | Field | Type | Description | | ------------ | --------------------- | ------------------------------------------------------------------- | | `tables` | `string[]` | Table names this policy applies to. Use `["*"]` for all tables | | `column` | `string` | Column name to filter on (single-condition shorthand) | | `claim` | `string` | Claim path to resolve the filter value (single-condition shorthand) | | `conditions` | `{ column, claim }[]` | Multiple column/claim pairs — ANDed together within this policy | Use `column`+`claim` for single-condition policies, or `conditions` for multi-column policies. You cannot use both on the same policy. 
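The mutual exclusion between the two policy forms can be pictured as a small normalization step that flattens either shape into a list of column/claim pairs. A sketch with illustrative names (`policyConditions` is not an Atlas API):

```typescript
type Condition = { column: string; claim: string };
type Policy = {
  tables: string[];
  column?: string;    // single-condition shorthand
  claim?: string;     // single-condition shorthand
  conditions?: Condition[]; // multi-condition form
};

// Normalize a policy into a flat list of conditions. The shorthand
// (column + claim) and the conditions array are mutually exclusive.
function policyConditions(p: Policy): Condition[] {
  const hasShorthand = p.column !== undefined || p.claim !== undefined;
  if (hasShorthand && p.conditions) {
    throw new Error("Use either column+claim or conditions on a policy, not both");
  }
  if (p.conditions) return p.conditions;
  if (p.column && p.claim) return [{ column: p.column, claim: p.claim }];
  throw new Error("Policy needs column+claim or a conditions array");
}
```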
RLS Config Options [#rls-config-options] | Field | Type | Default | Description | | ------------- | --------------- | ------- | ------------------------------------------------- | | `enabled` | `boolean` | `false` | Whether RLS is active | | `policies` | `RLSPolicy[]` | `[]` | Array of policies | | `combineWith` | `"and" \| "or"` | `"and"` | How to combine conditions from different policies | **Validation rules:** * At least one policy is required when `enabled: true` * Column names must match `/^[a-zA-Z_][a-zA-Z0-9_]*$/` * Tables array must have at least one entry *** Claim Path Resolution [#claim-path-resolution] Claim paths support dot-delimited access into nested JWT structures: | Claim Path | JWT Claims | Resolved Value | | --------------------- | ------------------------------------------ | -------------- | | `org_id` | `{ "org_id": "acme" }` | `acme` | | `app_metadata.tenant` | `{ "app_metadata": { "tenant": "acme" } }` | `acme` | | `custom.nested.id` | `{ "custom": { "nested": { "id": 42 } } }` | `42` | Non-string values are coerced to strings. Array values generate `IN (...)` conditions (see [Array Claims](#array-claims)). If the path resolves to `undefined` or `null`, the query is blocked. Empty arrays are also blocked (fail-closed). *** Array Claims [#array-claims] When a JWT claim resolves to an array, Atlas generates an `IN (...)` condition instead of `=`: **JWT claims:** ```json { "sub": "user_123", "departments": ["engineering", "sales"] } ``` **Config:** ```typescript rls: { enabled: true, policies: [ { tables: ["*"], column: "department", claim: "departments" }, ], } ``` **Injection result:** ```sql -- Agent generates: SELECT * FROM tickets -- After RLS injection: SELECT * FROM tickets WHERE tickets.department IN ('engineering', 'sales') ``` Empty arrays are blocked (fail-closed). If a user's claim resolves to `[]`, the query is rejected to prevent accidental full-table access. 
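The resolution rules above (dot-delimited traversal, with `null` and missing paths treated as unresolvable) fit in a few lines. A sketch; `resolveClaimPath` is an illustrative name, not Atlas's actual internal function:

```typescript
// Resolve a dot-delimited claim path against a JWT claims object.
// Returns undefined when any segment is missing or non-traversable;
// callers treat undefined as fail-closed and block the query.
function resolveClaimPath(claims: Record<string, unknown>, path: string): unknown {
  let value: unknown = claims;
  for (const segment of path.split(".")) {
    if (value === null || typeof value !== "object") return undefined;
    value = (value as Record<string, unknown>)[segment];
  }
  return value ?? undefined; // null collapses to undefined (blocked)
}
```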
*** Multi-Column Policies [#multi-column-policies] When a policy needs to filter on multiple columns simultaneously, use the `conditions` array: ```typescript rls: { enabled: true, policies: [ { tables: ["orders", "shipments"], conditions: [ { column: "tenant_id", claim: "org_id" }, { column: "region", claim: "app_metadata.region" }, ], }, ], } ``` All conditions within a policy are ANDed together: ```sql -- After injection: SELECT * FROM orders WHERE orders.tenant_id = 'org_acme' AND orders.region = 'us-east' ``` *** OR-Logic Between Policies [#or-logic-between-policies] By default, conditions from different policies are ANDed (all must match). Set `combineWith: "or"` to allow access when **any** policy matches: ```typescript rls: { enabled: true, combineWith: "or", policies: [ // Users can access data matching their org... { tables: ["*"], column: "org_id", claim: "org_id" }, // ...OR data in their assigned region { tables: ["*"], column: "region", claim: "region" }, ], } ``` **Injection result:** ```sql -- Agent generates: SELECT * FROM orders WHERE status = 'active' -- After RLS injection (OR between policies): SELECT * FROM orders WHERE status = 'active' AND (orders.org_id = 'org_acme' OR orders.region = 'us-east') ``` When using `combineWith: "or"`, the OR-combined conditions are parenthesized to prevent precedence issues with existing WHERE clauses. Within each policy, multiple conditions are still ANDed. *** Auth Mode Compatibility [#auth-mode-compatibility] RLS requires an authentication mode that provides user claims: | Auth Mode | Claims Available | RLS Compatible | | --------- | ----------------------------------------- | ----------------------- | | `none` | No | No -- queries blocked | | `api-key` | Only via `ATLAS_RLS_CLAIMS` (static JSON) | Yes, with static claims | | `managed` | Yes (Better Auth session) | Yes | | `byot` | Yes (JWT claims) | Yes -- primary use case | If `ATLAS_AUTH_MODE=none` and RLS is enabled, **all queries will be blocked**. 
This is intentional -- RLS without authentication has no user context to resolve claims from. *** Example: Multi-Tenant SaaS [#example-multi-tenant-saas] A typical setup where each user belongs to an organization: **JWT claims** (from your identity provider): ```json { "sub": "user_123", "email": "alice@acme.com", "org_id": "org_acme", "role": "analyst" } ``` **Atlas config:** ```typescript export default defineConfig({ auth: "byot", rls: { enabled: true, policies: [ { tables: ["*"], column: "org_id", claim: "org_id" }, ], }, }); ``` **What happens when Alice runs a query:** ```sql -- Agent generates: SELECT department, COUNT(*) FROM employees GROUP BY department -- After RLS injection: SELECT department, COUNT(*) FROM employees WHERE employees.org_id = 'org_acme' GROUP BY department ``` The injection is transparent to the agent and the user. The agent sees the filtered results and builds its answer from there. *** How Injection Works [#how-injection-works] RLS conditions are injected via AST manipulation (not string concatenation), which handles edge cases correctly: Table Aliases [#table-aliases] ```sql -- Before: SELECT e.name FROM employees e JOIN departments d ON e.dept_id = d.id -- After (policies: employees.org_id, departments.org_id): SELECT e.name FROM employees e JOIN departments d ON e.dept_id = d.id WHERE e.org_id = 'org_acme' AND d.org_id = 'org_acme' ``` CTEs and Subqueries [#ctes-and-subqueries] RLS conditions are injected into each SELECT that references a filtered table, including CTE definitions, derived tables in FROM clauses, and WHERE-clause subqueries. UNIONs [#unions] Each branch of a UNION gets its own RLS conditions independently. 
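At the string level, the alias qualification, quote escaping, and `=` vs `IN (...)` selection described in this section look roughly like the sketch below. This is a simplified illustration only; Atlas does the equivalent work inside the parsed AST, and every name here (`renderCondition`, `combine`, the shapes) is hypothetical:

```typescript
type ResolvedCondition = { table: string; column: string; values: string[] };

// Double single quotes -- the escaping applied to claim values before injection.
const escapeSql = (v: string) => v.replace(/'/g, "''");

// Render one condition, qualified by the table's alias when one is in scope.
// Array claims become IN (...), scalar claims become =.
function renderCondition(c: ResolvedCondition, aliases: Record<string, string>): string {
  const qualifier = aliases[c.table] ?? c.table;
  const escaped = c.values.map((v) => `'${escapeSql(v)}'`);
  return c.values.length === 1
    ? `${qualifier}.${c.column} = ${escaped[0]}`
    : `${qualifier}.${c.column} IN (${escaped.join(", ")})`;
}

// Combine per-policy condition groups. OR-combined groups are parenthesized
// so they compose safely with any existing WHERE clause.
function combine(groups: string[], combineWith: "and" | "or"): string {
  if (combineWith === "or" && groups.length > 1) return `(${groups.join(" OR ")})`;
  return groups.join(" AND ");
}
```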
*** Security Model [#security-model] * **Fail-closed** -- missing user, missing claims, unresolvable claim paths, or empty array claims block the query * **Post-plugin** -- RLS injection runs after plugin `beforeQuery` hooks, so plugins cannot strip conditions * **SQL-escaped** -- claim values have single quotes doubled before injection * **AST-based** -- conditions are injected into the parsed AST and regenerated, preventing injection attacks * **Scoped to SELECT** -- only applies to queries that pass SQL validation (SELECT-only) *** Troubleshooting [#troubleshooting] "RLS is enabled but no authenticated user is available" [#rls-is-enabled-but-no-authenticated-user-is-available] Authentication is required. Check that your auth mode provides user context. See [Authentication](/deployment/authentication). "RLS policy requires claim X but it is missing" [#rls-policy-requires-claim-x-but-it-is-missing] The JWT doesn't contain the expected claim. Verify: * The claim path is correct (check spelling, case sensitivity) * Your identity provider includes the claim in the JWT * For nested paths like `app_metadata.tenant`, ensure the full structure exists "ATLAS_RLS_ENABLED=true requires both ATLAS_RLS_COLUMN and ATLAS_RLS_CLAIM" [#atlas_rls_enabledtrue-requires-both-atlas_rls_column-and-atlas_rls_claim] Both environment variables must be set when using env-var configuration. For multi-policy setups, use `atlas.config.ts` instead. Queries return no data [#queries-return-no-data] If RLS is working but queries return empty results, the claim value may not match any rows. Check: ```bash ATLAS_LOG_LEVEL=debug ``` Look for `"RLS conditions injected into SQL"` log messages to see the exact WHERE clauses applied to each query. See [Troubleshooting](/guides/troubleshooting#row-level-security) for more diagnostic steps. 
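The fail-closed rules in the security model map directly onto the troubleshooting messages above. A minimal sketch of such a gate (shapes and names are illustrative, not Atlas internals):

```typescript
type ClaimValue = string | string[] | undefined;
type GateResult = { allowed: true } | { allowed: false; reason: string };

// Fail-closed gate: anything ambiguous (no authenticated user, an
// unresolvable claim, an empty array claim) blocks the query.
function rlsGate(
  user: { claims?: Record<string, unknown> } | null,
  claimValue: ClaimValue
): GateResult {
  if (!user || !user.claims) {
    return { allowed: false, reason: "RLS is enabled but no authenticated user is available" };
  }
  if (claimValue === undefined) {
    return { allowed: false, reason: "required claim is missing" };
  }
  if (Array.isArray(claimValue) && claimValue.length === 0) {
    return { allowed: false, reason: "claim resolved to an empty array" };
  }
  return { allowed: true };
}
```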
*** For workspace admins [#for-workspace-admins] If you manage an Atlas workspace through the admin console, here is what you need to know about RLS: **Setting up RLS policies** requires access to `atlas.config.ts` or the environment variables listed in [Quick Start](#quick-start-environment-variables). Coordinate with your platform operator to configure policies that match your tenant column structure. **What to verify:** 1. Every table your users query has a tenant column (e.g. `org_id`, `tenant_id`, `workspace_id`) 2. The JWT claims from your identity provider include the corresponding claim path 3. Test with `ATLAS_LOG_LEVEL=debug` to confirm injection — look for `"RLS conditions injected into SQL"` log messages **Admin console visibility:** RLS rejections appear in Admin > Audit Log as failed queries. Filter by error messages containing "RLS" to monitor policy enforcement. *** For operators [#for-operators] Auth mode compatibility [#auth-mode-compatibility-1] RLS depends on user claims being available at query time. See [Auth Mode Compatibility](#auth-mode-compatibility) for the full matrix. For `api-key` mode specifically, static claims are provided via `ATLAS_RLS_CLAIMS` (a JSON string parsed at startup) — suitable for service-to-service scenarios with a fixed tenant context. Claim resolution [#claim-resolution] Claims are resolved using dot-delimited paths. See [Claim Path Resolution](#claim-path-resolution) for the full resolution rules and examples. Claim values are SQL-escaped (single quotes doubled) before injection. Custom validator bypass [#custom-validator-bypass] If a connection uses a custom validator (via `ConnectionPluginMeta.validate`), RLS injection is **skipped entirely**. Custom validators for SOQL, GraphQL, or other non-SQL languages must implement their own row-level filtering. *** For end users [#for-end-users] When RLS is enabled, you will only see data that belongs to your organization or tenant. 
This happens automatically — you do not need to add any filters to your questions. **What this means in practice:** * If you ask "Show all orders," you see only your organization's orders * Aggregate queries (counts, sums, averages) only include your data * The agent does not know about other tenants' data — the filtering happens at the database level **If queries return no data:** * Your account may not have the expected claim value — contact your workspace admin * The column name in the RLS policy may not match the actual database column — this is a configuration issue, not a data issue *** See Also [#see-also] * [Authentication](/deployment/authentication) — Auth mode setup (RLS requires an auth mode that provides user claims) * [Configuration](/reference/config#rls-row-level-security) — Declarative RLS policies in `atlas.config.ts` * [Environment Variables](/reference/environment-variables#row-level-security) — Env var shorthand for a single RLS policy * [SQL Validation Pipeline](/security/sql-validation) — How RLS injection fits into the 7-layer validation pipeline * [Troubleshooting](/guides/troubleshooting#row-level-security) — Common RLS issues and fixes --- # SQL Validation Pipeline (/security/sql-validation) Atlas lets an AI agent write SQL against your production database. Without guardrails, this is dangerous — a single malicious or mistaken query could delete data, expose secrets, or lock up your database. The validation pipeline is designed to prevent this. Every SQL query the agent writes passes through **7 layers of validation** before it reaches your database. No query can bypass the pipeline — if any layer rejects, the query is blocked and the agent receives a structured error message explaining why. 
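The first-rejection-wins flow can be sketched as a chain of validators, each returning a structured result. This illustrates the design only; `runPipeline` and `emptyCheck` are hypothetical names, not Atlas's internal code:

```typescript
type ValidationResult = { ok: true } | { ok: false; error: string };
type Validator = (sql: string) => ValidationResult;

// Run validators in order; the first rejection blocks the query and is
// returned to the agent as a structured error.
function runPipeline(sql: string, layers: Validator[]): ValidationResult {
  for (const layer of layers) {
    const result = layer(sql);
    if (!result.ok) return result;
  }
  return { ok: true };
}

// Layer 0 (empty check): strip trailing semicolons, then reject
// empty or whitespace-only input.
const emptyCheck: Validator = (sql) =>
  sql.replace(/;+\s*$/, "").trim().length > 0
    ? { ok: true }
    : { ok: false, error: "Empty query" };
```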
Key terms [#key-terms] Before diving in, here are a few terms used throughout this page: | Term | Meaning | | ------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **DML** (Data Manipulation Language) | SQL statements that change data: `INSERT`, `UPDATE`, `DELETE` | | **DDL** (Data Definition Language) | SQL statements that change database structure: `CREATE TABLE`, `DROP TABLE`, `ALTER TABLE` | | **AST** (Abstract Syntax Tree) | A structured representation of a SQL query, produced by a parser. Like a grammar diagram for code — it lets Atlas understand the *structure* of a query, not just its text | | **RLS** (Row-Level Security) | A mechanism that restricts which rows a user can see by automatically adding filter conditions to every query | | **Table whitelist** | The set of tables the agent is allowed to query — derived from your [semantic layer](/getting-started/semantic-layer) entity YAML files | | **CTE** (Common Table Expression) | A temporary named result set defined with `WITH name AS (...)`. CTEs are part of the query, not real tables | | **Semantic layer** | YAML files that describe your database tables, columns, and relationships — the agent reads these to understand your data. 
See [Semantic Layer Concepts](/getting-started/concepts) | Pipeline overview [#pipeline-overview] | Layer | Name | Runs | Purpose | | ----- | -------------------- | ---------------- | ----------------------------------------------- | | 0 | Empty check | Before execution | Reject empty or whitespace-only input | | 1 | Regex mutation guard | Before execution | Quick reject of DML/DDL keywords | | 2 | AST parse | Before execution | Full SQL parse — verify single SELECT statement | | 3 | Table whitelist | Before execution | Only allow tables defined in the semantic layer | | 4 | RLS injection | During execution | Inject WHERE clauses for tenant isolation | | 5 | Auto LIMIT | During execution | Cap result set size to prevent full-table scans | | 6 | Statement timeout | During execution | Kill queries that run too long | Layers 0–3 run in `validateSQL()` before the query touches the database. Layers 4–6 are applied during execution in `executeSQL()`. Between layers 3 and 4, per-source rate limiting checks concurrency and queries-per-minute limits. The pipeline is identical across all supported databases. The only difference is the AST parser mode: `node-sql-parser` uses `PostgresQL` mode for PostgreSQL connections and `MySQL` mode for MySQL connections. Plugin-registered database types fall back to the PostgreSQL parser with a warning. Plugins can register a custom `parserDialect` via `ConnectionPluginMeta` to override the parser mode. Layer 0: Empty check [#layer-0-empty-check] **What it does:** Rejects empty strings, whitespace-only input, and bare semicolons. Trailing semicolons are stripped before processing. **Why it matters:** An empty query sent to the database would return an error, but catching it here provides a clearer error message and avoids unnecessary database round-trips. 
```sql -- Blocked "" " " ";" ``` Layer 1: Regex mutation guard [#layer-1-regex-mutation-guard] **What it does:** A fast first-pass check that scans the query text for keywords associated with data modification or administrative commands. SQL comments are stripped before testing so they cannot be used to hide dangerous keywords, but string literals are preserved — this is why keywords inside string values (like `WHERE status = 'DELETE'`) trigger false positives. **Why it matters:** This is the first line of defense against queries that try to change or destroy data. It runs before the more expensive AST parsing, catching obvious attacks cheaply. **Blocked keywords (all databases):** ``` INSERT, UPDATE, DELETE, DROP, CREATE, ALTER, TRUNCATE GRANT, REVOKE, EXEC, EXECUTE, CALL, KILL COPY, LOAD, VACUUM, REINDEX, OPTIMIZE INTO OUTFILE ``` **Additional blocked keywords (MySQL only):** ``` HANDLER, SHOW, DESCRIBE, EXPLAIN, USE ``` **Examples:** ```sql -- Blocked: DML INSERT INTO users (name) VALUES ('alice') -- Blocked: DDL DROP TABLE users -- Blocked: comment bypass attempt (comments stripped before check) /* harmless */ DELETE FROM users -- Blocked: keyword inside string literal (known false positive) SELECT * FROM logs WHERE status = 'DELETE' ``` The regex guard is intentionally broad — it prefers **false positives over false negatives**. A query like `WHERE status = 'DELETE'` is blocked because the word "DELETE" appears in the text, even though it's inside a string literal. The agent can work around this by reformulating the query (e.g. filtering by a status code instead). Plugins can register additional forbidden patterns via `ConnectionPluginMeta.forbiddenPatterns` to add database-specific protection for connection types not covered by the built-in rules. Layer 2: AST parse [#layer-2-ast-parse] **What it does:** The query is fully parsed into an Abstract Syntax Tree using [node-sql-parser](https://github.com/taozhi8833998/node-sql-parser). 
The parser mode is auto-detected based on the connection type (PostgreSQL or MySQL). This layer enforces three rules: 1. **Single statement** — No semicolon-separated batches. Exactly one statement allowed. 2. **SELECT only** — The parsed statement type must be `select`. Any other type is rejected. 3. **Must parse** — If the parser cannot understand the query, it is **rejected**. This is a critical security decision — an unparseable query could be a crafted bypass attempt. **Why it matters:** The regex guard (layer 1) catches known bad keywords, but it can't understand SQL structure. A query like `SELECT 1; DROP TABLE users` contains `SELECT`, but it also contains a second statement that drops a table. Only a full parser can detect this — the AST reveals the query has two statements, and the second one isn't a SELECT. ```sql -- Blocked: multiple statements (piggybacking attack) SELECT 1; DROP TABLE users -- Blocked: non-SELECT CREATE TABLE evil (id INT) -- Blocked: unparseable (rejected, not allowed through) SELECT * FROM users UNION WEIRD SYNTAX -- Allowed: CTEs (WITH clause) WITH active AS (SELECT * FROM users WHERE active = true) SELECT * FROM active ``` **Reject by default:** If the parser can't understand a query, Atlas blocks it. This is the opposite of how most tools work — they allow queries they don't understand. Atlas treats "I can't parse this" as a potential attack, not a syntax error to ignore. CTE names (`WITH x AS (...)`) are collected during parsing and passed to layer 3 so they aren't mistaken for real table names. Layer 3: Table whitelist [#layer-3-table-whitelist] **What it does:** Every table referenced in the query is checked against a whitelist derived from your semantic layer. Only tables that have an entity YAML file (`semantic/entities/*.yml` or `semantic/{source}/entities/*.yml`) are allowed. **Why it matters:** Even a valid SELECT can be dangerous if it reads from the wrong table. 
Your database might contain tables with sensitive data (credentials, PII, internal configuration) that the agent should never access. The whitelist ensures the agent can only query tables you've explicitly described in the semantic layer. ```sql -- Given semantic layer defines: users, orders, products -- Allowed: both tables are in the semantic layer SELECT * FROM users JOIN orders ON users.id = orders.user_id -- Blocked: "secrets" is not in the semantic layer SELECT * FROM secrets -- Blocked: schema-qualified reference requires qualified name in whitelist SELECT * FROM internal.secrets ``` **Schema-qualified queries:** If a query uses `schema.table` syntax (e.g. `analytics.orders`), the qualified name must be in the whitelist. Unqualified names cannot bypass schema restrictions. **CTE exclusion:** CTE names collected in layer 2 are excluded from the whitelist check — they are temporary result sets defined by the query, not real tables. **Disabling:** Set `ATLAS_TABLE_WHITELIST=false` to disable (not recommended for production). Layer 4: RLS injection [#layer-4-rls-injection] **What it does:** When Row-Level Security is enabled, Atlas automatically adds WHERE clauses to the query based on the authenticated user's identity claims. This is done via AST manipulation — not string concatenation — so the resulting SQL is always syntactically correct. **Why it matters:** In multi-tenant applications, different users should only see their own data. Without RLS, the agent could accidentally (or intentionally) query another tenant's rows. RLS makes this impossible by injecting filter conditions that the agent cannot remove or override. RLS can be configured via [environment variables](/reference/environment-variables#row-level-security) (single-policy shorthand) or declaratively in `atlas.config.ts` with the `rls` key (supports multiple policies targeting different tables and claims). 
```sql -- Original query SELECT * FROM orders WHERE status = 'active' -- After RLS injection (tenant_id from user's JWT org_id claim) SELECT * FROM orders WHERE status = 'active' AND orders.tenant_id = 'acme-corp' ``` RLS is **fail-closed**. If the user's claims are missing or cannot be resolved, the query is blocked entirely. A blocked query is always safer than a data leak. RLS injection runs **after** validation (layers 0–3) and **after** all plugin `beforeQuery` hooks. This ordering ensures plugins cannot strip RLS conditions. See [Row-Level Security](/security/row-level-security) for full configuration details. Layer 5: Auto LIMIT [#layer-5-auto-limit] **What it does:** A `LIMIT` clause is appended to every query that doesn't already have one. The check is text-based — if the word `LIMIT` appears anywhere in the query (including in a subquery), no additional LIMIT is appended. **Why it matters:** Without a limit, a simple `SELECT * FROM events` on a table with millions of rows could return a massive result set, consuming memory and bandwidth. Auto LIMIT caps the result set to a safe default. ```sql -- Input (no LIMIT) SELECT * FROM users -- Executed SELECT * FROM users LIMIT 1000 ``` Default: **1000 rows**. Configure with `ATLAS_ROW_LIMIT`. If the result set hits the limit, the response includes `truncated: true` so the agent knows data was cut off and can add filters or pagination. Layer 6: Statement timeout [#layer-6-statement-timeout] **What it does:** A per-query execution deadline is set on the database connection. Queries that exceed the deadline are terminated by the database engine itself. **Why it matters:** Even a valid, read-only SELECT can consume significant resources — a complex join across large tables, or a query that triggers a sequential scan, could run for minutes and degrade performance for other users. The statement timeout prevents any single query from monopolizing the database. 
* **PostgreSQL**: Session-level `statement_timeout` set on the connection * **MySQL**: Session-level `MAX_EXECUTION_TIME` set per query Default: **30 seconds** (30000ms). Configure with `ATLAS_QUERY_TIMEOUT`. Queries that exceed the timeout are terminated by the database and return an error to the agent. What gets rejected: concrete examples [#what-gets-rejected-concrete-examples] Here are real-world attack patterns and which layer catches them: 1\. SQL injection via piggybacked statement [#1-sql-injection-via-piggybacked-statement] An attacker tries to append a destructive statement after a legitimate SELECT: ```sql SELECT * FROM users; DROP TABLE users ``` **Caught at:** Layer 1 (regex guard blocks `DROP`) **and** layer 2 (AST parser detects two statements). Defense in depth — even if one layer had a bug, the other would catch it. 2\. Data exfiltration from an undocumented table [#2-data-exfiltration-from-an-undocumented-table] The agent tries to read from a table that exists in the database but isn't described in the semantic layer: ```sql SELECT api_key, secret FROM internal_credentials ``` **Caught at:** Layer 3 (table whitelist). `internal_credentials` has no entity YAML file, so it's not in the allowed set. The agent only knows about tables you've explicitly documented. 3\. Comment-wrapped mutation attempt [#3-comment-wrapped-mutation-attempt] An attacker hides a DELETE statement inside what looks like a commented-out section, hoping the validator only sees the harmless-looking parts: ```sql /* just a select */ DELETE FROM users WHERE 1=1 ``` **Caught at:** Layer 1 (regex guard). SQL comments are stripped before pattern matching, so `DELETE` is detected regardless of surrounding comments. 4\. Resource exhaustion via unlimited query [#4-resource-exhaustion-via-unlimited-query] The agent writes a query that would return millions of rows, consuming memory and network bandwidth: ```sql SELECT * FROM events ``` **Caught at:** Layer 5 (auto LIMIT). 
The query is rewritten to `SELECT * FROM events LIMIT 1000` before execution. If it still runs too long, layer 6 (statement timeout) terminates it after 30 seconds. How Atlas compares to other tools [#how-atlas-compares-to-other-tools] Most text-to-SQL tools validate queries minimally or not at all. Here's how Atlas's pipeline compares: | Validation layer | Atlas | WrenAI | Vanna | Metabase | | ------------------------- | -------------------------------- | -------------- | ------- | ----------------------- | | Empty check | Yes | — | — | — | | Regex mutation guard | Yes (DML + DDL + admin commands) | — | — | — | | AST parse (single SELECT) | Yes (reject if unparseable) | — | — | Partial (native driver) | | Table whitelist | Yes (semantic layer) | Modeling layer | — | Sandboxed permissions | | RLS / row-level filtering | Yes (AST injection, fail-closed) | — | — | Yes (sandboxing) | | Auto LIMIT | Yes (configurable) | — | — | Yes (hardcoded) | | Statement timeout | Yes (configurable) | — | — | Yes | | **Total layers** | **7** | **\~1** | **\~0** | **\~3** | This comparison reflects publicly documented validation behavior as of March 2026. WrenAI and Vanna focus on query generation accuracy rather than execution-time validation. Metabase has a mature permissions model but serves a different use case (BI dashboards vs. AI agent SQL execution). The key difference isn't just the number of layers — it's the **fail-closed philosophy**. Atlas rejects anything it doesn't understand (unparseable SQL, unknown tables, missing claims). Most tools default to allowing queries they can't validate. Plugin hooks [#plugin-hooks] When plugins are installed, two hook points extend the pipeline: * **`beforeQuery`** — Plugins can inspect or rewrite SQL before execution. If a plugin rewrites the query, the rewritten SQL goes through layers 0–3 again for re-validation. This prevents plugins from introducing DML or bypassing the whitelist. 
* **`afterQuery`** — Plugins receive the query results for logging, transformation, or side effects. **Custom validators:** For non-SQL datasources (SOQL, GraphQL, MQL), a plugin can provide a `customValidator` via `ConnectionPluginMeta.validate`. When a connection has a custom validator registered, it **completely replaces** the built-in validation layers 0–3 — `validateSQL` is not called. RLS injection (layer 4) and auto LIMIT (layer 5) are also skipped, since non-SQL languages may not support standard WHERE/LIMIT syntax. The custom validator is responsible for providing equivalent safety guarantees, including any row-level isolation. Error handling [#error-handling] When a query fails at any validation layer, the response includes a structured error: ```json { "success": false, "error": "Table \"secrets\" is not in the allowed list. Check catalog.yml for available tables." } ``` Database errors that might expose connection details or internal state (passwords, connection strings, SSL certificates) are automatically scrubbed before being returned to the agent. The full error is logged server-side. All queries — successful and failed — are recorded in the [audit log](/deployment/authentication#audit-logging) with user attribution, timing, and error details. What this means for you [#what-this-means-for-you] This page covers content for **end users** (querying data), **operators** (configuring and auditing the pipeline), and **developers** (building on the SDK or plugins). For end users [#for-end-users] Your queries are safe. Every question you ask goes through 7 layers of validation before it reaches the database. You cannot accidentally delete, modify, or corrupt data — the pipeline only allows read-only SELECT queries. If a query is blocked, the agent will explain why and suggest an alternative approach. 
Common reasons include referencing a table that hasn't been added to the semantic layer, or using a keyword that looks like a data modification command (even inside a string literal). For operators [#for-operators] Configuration [#configuration] Three environment variables control the configurable layers: | Variable | Default | What it controls | | ----------------------- | ------------- | -------------------------------------------------- | | `ATLAS_ROW_LIMIT` | `1000` | Maximum rows returned per query (layer 5) | | `ATLAS_QUERY_TIMEOUT` | `30000` (30s) | Query execution deadline in milliseconds (layer 6) | | `ATLAS_TABLE_WHITELIST` | `true` | Whether the table whitelist is enforced (layer 3) | Layers 0-2 (empty check, regex guard, AST parse) are always active and cannot be disabled. This is intentional — they form the core security boundary. The table whitelist (layer 3) can be turned off with `ATLAS_TABLE_WHITELIST`, but it should stay enabled outside of development. Testing the pipeline [#testing-the-pipeline] Validate a query without executing it using the SDK or REST API: ```bash curl -X POST https://your-api.example.com/api/v1/validate-sql \ -H "Content-Type: application/json" \ -d '{"sql": "SELECT * FROM users"}' ``` This runs layers 0-3 and returns the validation result. Use this to test edge cases or verify that your semantic layer whitelist is correct. 
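To make the fail-closed composition concrete, here is a minimal TypeScript sketch of how layers 0-3 chain together. This is illustrative only -- the regex, the table extraction, and all names below are simplified assumptions, not Atlas's actual rules (the real layer 2 uses a proper AST parser, and the real whitelist comes from the semantic layer):

```typescript
// Minimal fail-closed sketch of validation layers 0-3 (illustrative only:
// the keyword list, table extraction, and names are assumptions, not
// Atlas's real implementation -- the real layer 2 uses an AST parser).
type ValidationResult = { success: true } | { success: false; error: string };

// Layer 1 keyword list (simplified subset for illustration).
const FORBIDDEN = /\b(insert|update|delete|drop|alter|truncate|grant)\b/i;

function stripComments(sql: string): string {
  // Comments are removed before pattern matching, so keywords cannot
  // hide inside /* ... */ or -- ... blocks (layer 1 behavior).
  return sql.replace(/\/\*[\s\S]*?\*\//g, " ").replace(/--[^\n]*/g, " ");
}

function validate(sql: string, allowedTables: Set<string>): ValidationResult {
  if (sql.trim().length === 0) {
    return { success: false, error: "Empty query" }; // layer 0
  }
  const stripped = stripComments(sql).trim();
  if (FORBIDDEN.test(stripped)) {
    return { success: false, error: "Mutation keyword detected" }; // layer 1
  }
  if (!/^select\b/i.test(stripped) || stripped.includes(";")) {
    return { success: false, error: "Single SELECT statements only" }; // layer 2
  }
  const from = stripped.match(/\bfrom\s+([a-z_][a-z0-9_.]*)/i);
  if (!from || !allowedTables.has(from[1].toLowerCase())) {
    return { success: false, error: "Table not in the allowed list" }; // layer 3
  }
  return { success: true };
}
```

Note how every branch rejects by default: an empty query, a flagged keyword, anything that isn't a single SELECT, and any table outside the allowed set are all refused rather than passed through.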
Security audit [#security-audit] The pipeline follows a **fail-closed** threat model: | Threat | Mitigation | Layer | | ------------------------------------------ | --------------------------------------------------- | ----- | | SQL injection (piggybacked statements) | AST parse rejects multi-statement input | 2 | | Data exfiltration from undocumented tables | Table whitelist restricts to semantic layer | 3 | | Mutation via DML/DDL keywords | Regex guard + AST type check (SELECT only) | 1, 2 | | Comment-wrapped bypass attempts | Comments stripped before regex check | 1 | | Unparseable SQL used to evade validation | Reject-by-default on parse failure | 2 | | Resource exhaustion (full-table scans) | Auto LIMIT + statement timeout | 5, 6 | | Cross-tenant data access | RLS WHERE injection (fail-closed on missing claims) | 4 | Review query patterns in Admin > Audit Log — see [Error handling](#error-handling) for what gets recorded. For developers [#for-developers] If you're building plugins or custom tools, see [Plugin hooks](#plugin-hooks) above for how `beforeQuery`/`afterQuery` hooks and custom validators interact with the pipeline. Custom validators **completely replace** layers 0-3, so your validator must provide equivalent safety guarantees. *** Test coverage [#test-coverage] The validation pipeline has \~260 unit tests across 12 test files covering edge cases including: * Comment-based bypass attempts * CTE name exclusion * Schema-qualified table references * MySQL-specific forbidden patterns * Semicolon injection * Unparseable SQL rejection * String literal false positives See [`packages/api/src/lib/tools/__tests__/sql.test.ts`](https://github.com/AtlasDevHQ/atlas/blob/main/packages/api/src/lib/tools/__tests__/sql.test.ts) for the full test suite. Run the tests: `bun run test` (isolated per-file runner). 
*** See Also [#see-also] * [Row-Level Security](/security/row-level-security) — Automatic WHERE clause injection based on user claims (layer 4) * [Connect Your Data](/getting-started/connect-your-data#safety-configuration) — Configuring row limits, query timeouts, and table whitelists * [Semantic Layer](/getting-started/semantic-layer) — How entity YAML files define the table whitelist (layer 3) * [Schema Evolution](/deployment/schema-evolution) — Keeping the semantic layer in sync with database changes * [Environment Variables](/reference/environment-variables#security) — `ATLAS_ROW_LIMIT`, `ATLAS_QUERY_TIMEOUT`, and `ATLAS_TABLE_WHITELIST` * [Plugin Authoring Guide](/plugins/authoring-guide) — Extending validation with custom forbidden patterns and validators * [Atlas vs Raw MCP](/comparisons/raw-mcp) — Why connecting AI directly to your database via MCP skips all of this --- # Sandbox Architecture (/architecture/sandbox) > Design doc for platform-agnostic code execution sandboxing in Atlas. Problem [#problem] Atlas needs isolated code execution for two purposes: 1. **Explore tool** -- run shell commands (`ls`, `cat`, `grep`) against the semantic layer YAML files. Read-only, no network, no secrets needed. 2. **Python execution tool** -- run agent-generated Python to analyze data retrieved via SQL. Needs a runtime (Python + pandas/numpy/matplotlib), must not have direct access to secrets. The explore tool works on Vercel (Firecracker VM) and on Linux with nsjail. It fails on **Railway** because the platform runs shared-kernel containers that block `clone()` with namespace flags -- no `CAP_SYS_ADMIN`, no unprivileged user namespaces. This is a fundamental platform limitation, not a configuration issue. Threat Model: Who Needs What [#threat-model-who-needs-what] Not every deployment needs the same level of sandbox isolation. 
The right backend depends on your trust model: Self-hosted / single-tenant [#self-hosted--single-tenant] The agent and all its users are employees operating within the same trust boundary. In this model: * **Prompt injection is the main risk** -- a crafted value in the database could influence the agent's behavior. But the agent's tools are already scoped: `executeSQL` is SELECT-only, and `explore` only reads YAML files. * **nsjail or the sidecar is plenty** -- you're defending against accidental damage, not hostile tenants. * **just-bash is acceptable** -- if you run Atlas behind a VPN with API key auth. Multi-tenant SaaS / public-facing [#multi-tenant-saas--public-facing] Now you have real trust boundaries. User A should not be able to influence User B's queries or data. In this model: * **Sandbox isolation is critical** -- generated code must run in its own security context. * **Firecracker (Vercel Sandbox, E2B) is the right answer** -- hardware-level VM isolation, ephemeral per execution. 
Security Model [#security-model] Four Actors [#four-actors] | Actor | Atlas equivalent | Trust level | | ------------------------ | ------------------------------------------------ | ----------------------------- | | Agent harness | Hono API + `streamText` loop | Trusted (deployed via SDLC) | | Agent secrets | `ATLAS_DATASOURCE_URL`, API keys, `DATABASE_URL` | Must never enter sandbox | | Generated code | `explore` commands, `executePython` code | Untrusted (prompt-injectable) | | Filesystem / environment | Host OS, `semantic/` directory | Protected from generated code | Architecture [#architecture] ``` +--------------------------------------------------+ | Agent Harness (Hono API server) | | +----------+ +-----------+ +----------------+ | | | explore | |executeSQL | |executePython | | | | (sandbox)| | (in-proc) | | (sandbox) | | | +----+-----+ +-----------+ +-------+--------+ | | | | | | | no network | no network| | | no secrets | no secrets| | | read-only fs | data via | | | | stdin only| | | | Secrets: ATLAS_DATASOURCE_URL, | | API keys, DATABASE_URL | | (never enter any sandbox) | +---------------------------------------------------+ ``` Sandbox Backends [#sandbox-backends] Built-in Backends [#built-in-backends] The five built-in backends, in priority order (see [Backend Selection Priority](#backend-selection-priority) for the full table including plugin backends): ``` Priority 1: Vercel Sandbox -- Firecracker VM, deny-all network Priority 2: nsjail (explicit) -- ATLAS_SANDBOX=nsjail, hard-fail if unavailable Priority 3: Sidecar service -- HTTP-isolated container with no secrets (Railway) Priority 4: nsjail (auto) -- nsjail found on PATH, graceful fallback on failure Priority 5: just-bash -- JS-level OverlayFs (in-memory writes), path-traversal protection ``` Sandbox Plugins [#sandbox-plugins] Two additional sandbox backends are available as plugins: | Plugin | Priority | Isolation | Install | | ----------- | -------- | ------------------------------ | 
------------------------ | | **E2B** | 90 | Firecracker microVM (managed) | `bun add e2b` | | **Daytona** | 85 | Cloud-hosted ephemeral sandbox | `bun add @daytonaio/sdk` | Plugin backends are tried before any built-in backend (they sit at the top of the priority chain), except when `ATLAS_SANDBOX=nsjail` is set, which skips plugin backends entirely. The `priority` field only determines ordering among multiple plugins -- higher values are tried first. Plugins default to priority 60 (`SANDBOX_DEFAULT_PRIORITY`) if not specified. If all plugins fail to create a backend, the built-in chain (Vercel > nsjail > sidecar > just-bash) takes over. ```typescript // atlas.config.ts import { defineConfig } from "@atlas/api/lib/config"; import { e2bSandboxPlugin } from "@useatlas/e2b"; export default defineConfig({ plugins: [ e2bSandboxPlugin({ apiKey: process.env.E2B_API_KEY! }), ], }); ``` What Each Backend Supports [#what-each-backend-supports] | Capability | Vercel Sandbox | E2B / Daytona | nsjail | Sidecar | just-bash | | -------------------------- | -------------- | ------------- | ------ | ------- | --------- | | `explore` (shell) | Yes | Yes | Yes | Yes | Yes | | `executePython` | Yes | Yes | Yes | Yes | No | | VM-level isolation | Yes | Yes | No | No | No | | Kernel namespace isolation | N/A | N/A | Yes | No | No | | No secrets in sandbox | Yes | Yes | Yes | Yes | No | | No network | Yes | Yes | Yes | Yes | No | **Note on just-bash:** The just-bash backend uses the [`just-bash`](https://www.npmjs.com/package/just-bash) npm package and shares the same process as the API server. It uses `just-bash`'s `OverlayFs` class -- a JavaScript-level virtual filesystem overlay that intercepts writes in memory -- not Linux kernel OverlayFS. Because it runs in-process, host secrets (environment variables like `ATLAS_DATASOURCE_URL` and API keys) are accessible in the same memory space. The table above marks secret isolation as "No" for this reason. For multi-tenant deployments, use a higher-priority backend. 
Python execution is not supported under just-bash; `executePython` requires a sandbox backend (sidecar, Vercel sandbox, or nsjail) and returns an error without one. Platform Capabilities [#platform-capabilities] | Platform | nsjail | Sidecar | Best backend | | ---------------------- | ----------------- | ----------- | --------------------------- | | **Vercel** | N/A | N/A | Vercel Sandbox (priority 1) | | **Railway** | No | Required\* | Sidecar (priority 3) | | **Self-hosted Docker** | With capabilities | Optional | nsjail (priority 2) | | **Self-hosted VM** | Yes | Optional | nsjail (priority 2) | | **Local dev** | Varies | Default\*\* | Sidecar (priority 3) | \*On Railway, nsjail is unavailable (no `CAP_SYS_ADMIN` / user namespaces). Without the sidecar, the explore tool falls back to just-bash (no secret isolation, no Python support). The sidecar is effectively **required** for production isolation on Railway. \*\*`bun run db:up` starts a sidecar container alongside Postgres. `.env.example` enables `ATLAS_SANDBOX_URL=http://localhost:8080` by default, so local dev matches production isolation out of the box. Sidecar Service Design [#sidecar-service-design] Since no kernel-level sandbox works on Railway, isolation comes from **process/network separation** -- a separate service with its own filesystem and no access to the main service's secrets. ``` +----------------------------------+ +-----------------------------+ | Main Service (Hono API) | | Sandbox Sidecar | | | | | | ENV: | | ENV: | | ATLAS_DATASOURCE_URL=... | | SIDECAR_AUTH_TOKEN=... | | ANTHROPIC_API_KEY=... | | (no DB creds, no API | | DATABASE_URL=... 
| | keys, no secrets) | | | | | | Agent loop calls: | | FILES: | | POST sidecar:8080/exec |---->| /semantic/**/*.yml | | POST sidecar:8080/exec-python |---->| | | | | ENDPOINTS: | | Receives: | | GET /health | | { stdout, stderr, exitCode } |<----| POST /exec | | or PythonResult |<----| POST /exec-python | +----------------------------------+ +-----------------------------+ Railway private network ``` The sidecar enforces a **concurrency limit of 10** (`MAX_CONCURRENT = 10`) across both `/exec` and `/exec-python` endpoints. Requests beyond this limit receive HTTP 429. Backend Selection Priority [#backend-selection-priority] At startup, the explore tool selects the highest-priority backend available. The selection is evaluated top-to-bottom — the first match wins: | Priority | Backend | Condition | | -------- | ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | 0 | **Sandbox plugin** | A sandbox plugin is registered via `atlas.config.ts` (skipped when `ATLAS_SANDBOX=nsjail`). **Explore tool only** — `executePython` does not check plugin backends | | 1 | **Vercel Sandbox** | `ATLAS_RUNTIME=vercel` or `VERCEL` env var is present | | 2 | **nsjail (explicit)** | `ATLAS_SANDBOX=nsjail` is set. Hard-fails if the nsjail binary is not found — no fallback | | 3 | **Sidecar** | `ATLAS_SANDBOX_URL` is set. Skips nsjail auto-detection entirely | | 4 | **nsjail (auto-detect)** | nsjail binary found on `PATH` (no explicit config needed). Falls back gracefully on failure | | 5 | **just-bash** | Fallback. JS-level `OverlayFs` + path-traversal protection only | Key behaviors: * **Explicit nsjail is strict.** Setting `ATLAS_SANDBOX=nsjail` means "nsjail or nothing" — if the binary is missing or initialization fails, the explore tool returns an error rather than falling back. 
Plugin-provided sandbox backends are skipped entirely when this flag is set. * **Sidecar skips nsjail auto-detection.** When `ATLAS_SANDBOX_URL` is set, nsjail auto-detection is completely skipped. This avoids noisy namespace warnings on platforms like Railway where `clone()` with namespace flags is blocked. * **Auto-detected nsjail is graceful.** If nsjail is found on `PATH` but fails to initialize (e.g. missing kernel capabilities), the backend falls back to just-bash with a warning. * **Plugin backends take top priority.** Sandbox plugins (E2B, Daytona, custom) are tried first and sorted by their `priority` field (highest wins). If all plugins fail, the built-in chain continues. The health endpoint (`GET /api/health`) reports which backend is active in the `explore.backend` field. > **`executePython` uses a different priority order.** See [Python Execution](#python-execution-executepython) for the full priority chain. Key difference: sidecar is priority 1, plugin backends are skipped, and just-bash is not a fallback. 
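The selection behaviors above can be sketched as a single first-match-wins function. This is an illustrative TypeScript sketch, not Atlas's internals -- the type and function names are hypothetical; only the precedence rules come from the table above:

```typescript
// Illustrative first-match-wins backend selection for the explore tool.
// Names (SandboxEnv, selectExploreBackend) are hypothetical.
interface SandboxEnv {
  hasPlugins: boolean;    // sandbox plugin registered in atlas.config.ts
  vercel: boolean;        // ATLAS_RUNTIME=vercel or VERCEL env var present
  sandboxEnvVar?: string; // ATLAS_SANDBOX (e.g. "nsjail")
  sidecarUrl?: string;    // ATLAS_SANDBOX_URL
  nsjailOnPath: boolean;  // nsjail binary found on PATH
}

function selectExploreBackend(env: SandboxEnv): string {
  const explicitNsjail = env.sandboxEnvVar === "nsjail";
  // Plugins sit at the top of the chain, but are skipped entirely
  // when ATLAS_SANDBOX=nsjail is set ("nsjail or nothing").
  if (env.hasPlugins && !explicitNsjail) return "plugin";
  if (env.vercel) return "vercel-sandbox";
  if (explicitNsjail) return "nsjail"; // hard-fails later if binary missing
  // Sidecar skips nsjail auto-detection (avoids namespace warnings).
  if (env.sidecarUrl) return "sidecar";
  if (env.nsjailOnPath) return "nsjail"; // graceful fallback on init failure
  return "just-bash";
}
```

In a real deployment you would never need to call anything like this directly -- the point is only that evaluation is strictly top-to-bottom, so setting `ATLAS_SANDBOX_URL` short-circuits nsjail auto-detection even when the binary is present.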
Configuration [#configuration] | Variable | Default | Description | | --------------------------- | ----------- | --------------------------------------------------- | | `ATLAS_SANDBOX` | auto-detect | Force sandbox backend: `nsjail` | | `ATLAS_SANDBOX_URL` | -- | Sidecar service URL (enables sidecar backend) | | `SIDECAR_AUTH_TOKEN` | -- | Shared secret for sidecar auth | | `ATLAS_NSJAIL_PATH` | -- | Explicit path to nsjail binary | | `ATLAS_NSJAIL_TIME_LIMIT` | `10` | nsjail per-command time limit in seconds | | `ATLAS_NSJAIL_MEMORY_LIMIT` | `256` | nsjail per-command memory limit in MB (`rlimit_as`) | nsjail Resource Limits [#nsjail-resource-limits] In addition to the configurable time and memory limits above, nsjail enforces these hard-coded resource limits per command: | Limit | Value | Description | | ----------------- | -------------------------------------------- | ------------------------------------------------------- | | `rlimit_as` | `ATLAS_NSJAIL_MEMORY_LIMIT` (default 256 MB) | Virtual memory limit | | `rlimit_fsize` | 10 MB | Max file size a process can create | | `rlimit_nproc` | 5 | Max number of processes | | `rlimit_nofile` | 64 | Max open file descriptors | | Stdout/stderr cap | 1 MB (`MAX_OUTPUT`) | Output read from stdout and stderr is truncated at 1 MB | Sidecar Timeouts [#sidecar-timeouts] Shell commands and Python execution have separate timeout configurations: **Shell commands (`/exec`):** The sidecar enforces a **10-second command timeout** (`DEFAULT_TIMEOUT_MS = 10_000`). The HTTP fetch uses a total abort signal of **15 seconds** (10s execution + 5s HTTP overhead) to account for network latency and response serialization. Shell commands that exceed the timeout return exit code 124 (matching GNU `timeout(1)` convention). **Python execution (`/exec-python`):** The default timeout is **30 seconds** (`PYTHON_DEFAULT_TIMEOUT_MS = 30_000`), configurable via the `ATLAS_PYTHON_TIMEOUT` environment variable. 
The sidecar clamps the value to a maximum of **120 seconds** (`PYTHON_MAX_TIMEOUT_MS`). The HTTP fetch adds **10 seconds** of overhead on top of the execution timeout, giving a total abort signal of **40 seconds** at the default (or up to 130s at the maximum). Python Execution (`executePython`) [#python-execution-executepython] The `executePython` tool runs agent-generated Python code for data analysis and visualization. It uses a different backend selection priority than the explore tool: ``` Priority 1: Sidecar (ATLAS_SANDBOX_URL) -- POST /exec-python Priority 2: Vercel Sandbox -- Python 3.13 Firecracker microVM Priority 3: nsjail (explicit, ATLAS_SANDBOX=nsjail) -- hard-fail if unavailable Priority 4: nsjail (auto-detect, on PATH) -- graceful fallback Priority 5: No backend — error -- just-bash is NOT a fallback ``` Key differences from the explore tool's priority chain: * **Sidecar is priority 1** (not priority 3). When `ATLAS_SANDBOX_URL` is set, Python uses the sidecar immediately without checking for Vercel sandbox first. * **Plugin sandbox backends are not checked.** Python only uses built-in backends. * **just-bash is not a fallback.** If no sandbox backend is available, `executePython` returns an error rather than running Python in the host process. Python execution has two layers of defense: an AST-based import guard (defense-in-depth, runs before execution) and the sandbox backend itself (the actual security boundary). The import guard blocks dangerous modules (`subprocess`, `os`, `socket`, etc.) and builtins (`exec`, `eval`, `open`, `__import__`, etc.). If `python3` is not available locally for AST validation, the guard is skipped -- the sandbox enforces isolation regardless. Data from a previous SQL query is injected into the sandbox as a pandas DataFrame (`df`) or raw dict (`data`). Results are returned as structured output: tables (`_atlas_table`), interactive Recharts charts (`_atlas_chart`), or PNG files (matplotlib via `chart_path()`). 
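The sidecar's Python timeout arithmetic can be sketched directly from the documented constants. The helper function below is hypothetical; only the constant values (30s default, 120s clamp, 10s HTTP overhead) come from the docs:

```typescript
// Documented sidecar constants for /exec-python.
const PYTHON_DEFAULT_TIMEOUT_MS = 30_000; // default execution timeout
const PYTHON_MAX_TIMEOUT_MS = 120_000;    // sidecar clamps to this ceiling
const HTTP_OVERHEAD_MS = 10_000;          // extra budget for the HTTP fetch

// Hypothetical helper: resolve the execution timeout and the total HTTP
// abort signal from an optional ATLAS_PYTHON_TIMEOUT override.
function pythonTimeouts(requestedMs?: number) {
  const execution = Math.min(
    requestedMs ?? PYTHON_DEFAULT_TIMEOUT_MS,
    PYTHON_MAX_TIMEOUT_MS,
  );
  return { execution, abortSignal: execution + HTTP_OVERHEAD_MS };
}
```

At the default this yields a 40-second abort signal; an `ATLAS_PYTHON_TIMEOUT` of 300000 would be clamped to 120 seconds of execution, for a 130-second abort signal.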
--- # CLI Reference (/reference/cli) The CLI is used for self-hosted deployments to profile databases, generate the semantic layer, and manage configuration from the terminal. Hosted users at [app.useatlas.dev](https://app.useatlas.dev) can skip this — the platform handles database profiling, semantic layer generation, and configuration through the web UI. The Atlas CLI (`atlas`) profiles databases, generates semantic layers, validates configuration, and queries data from the terminal. ```bash # Run via bun workspace bun run atlas -- [options] # Or directly (if installed globally) atlas [options] ``` init [#init] Profile a database and generate semantic layer YAML files. ```bash bun run atlas -- init [options] ``` | Flag | Description | | -------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | | `--tables ` | Profile only specific tables/views (comma-separated) | | `--schema ` | PostgreSQL schema name (default: `public`) | | `--source ` | Write to `semantic/{name}/` subdirectory (per-source layout). Mutually exclusive with `--connection` | | `--connection ` | Profile a named datasource from `atlas.config.ts`. Mutually exclusive with `--source` | | `--csv ` | Load CSV files via DuckDB (no DB server needed). Requires `@duckdb/node-api` | | `--parquet ` | Load Parquet files via DuckDB. Requires `@duckdb/node-api` | | `--enrich` | Add LLM-enriched descriptions and query patterns (requires API key) | | `--no-enrich` | Explicitly skip LLM enrichment | | `--force` | Continue even if more than 20% of tables fail to profile | | `--demo [simple\|cybersec\|ecommerce]` | Load a demo dataset then profile (default: `simple`) | | `--org ` | Write to `semantic/.orgs/{orgId}/` and auto-import to DB (org-scoped mode). Requires managed auth (`DATABASE_URL` + `BETTER_AUTH_SECRET`) | | `--no-import` | Skip auto-import to DB in org-scoped mode (write disk only). 
Only meaningful with `--org` | **Examples:** ```bash # Profile all tables in the default schema bun run atlas -- init # Profile specific tables only bun run atlas -- init --tables users,orders,products # Profile a non-public schema bun run atlas -- init --schema analytics # Profile with LLM enrichment bun run atlas -- init --enrich # Load the cybersec demo dataset (62 tables, ~500K rows) bun run atlas -- init --demo cybersec # Profile a named connection from atlas.config.ts bun run atlas -- init --connection warehouse # Profile CSV files directly (no database needed) bun run atlas -- init --csv sales.csv,products.csv # Per-source layout (writes to semantic/warehouse/) bun run atlas -- init --source warehouse # Org-scoped: writes to semantic/.orgs/org-123/ and imports to DB bun run atlas -- init --org org-123 # Org-scoped, disk only (skip DB import) bun run atlas -- init --org org-123 --no-import ``` `--demo` without an argument loads the simple dataset (3 tables, \~330 rows). `--demo cybersec` loads the cybersec dataset (62 tables, \~500K rows). `--demo ecommerce` loads the e-commerce dataset (52 tables, \~480K rows). `--connection` and `--source` cannot be used together. Here's how they differ: * **No flag** — Profiles the default datasource (`ATLAS_DATASOURCE_URL`) and writes output to `semantic/entities/`. * **`--source `** — Writes output to `semantic//entities/` (for multi-source layouts where you organize by source). Does not change which datasource is profiled. * **`--connection `** — Profiles a named datasource defined in `atlas.config.ts` (e.g. `datasources.warehouse`) and automatically writes output to the matching `semantic//` subdirectory. If more than 20% of tables fail to profile, `init` exits with an error — this usually indicates a systemic issue like wrong credentials or missing permissions. Use `--force` to override this threshold. In TTY mode (interactive terminal), `init` presents a table picker. 
Pass `--tables` to skip the picker for scripted/CI usage. **What it generates:** * `semantic/entities/*.yml` — One file per table/view with columns, types, sample values, joins, measures, virtual dimensions, and query patterns * `semantic/metrics/*.yml` — Atomic and breakdown metrics per table * `semantic/glossary.yml` — Ambiguous terms, FK relationships, enum definitions * `semantic/catalog.yml` — Table catalog with `use_for` and `common_questions` diff [#diff] Compare the database schema against the existing semantic layer. Exits with code 1 if drift is detected. ```bash bun run atlas -- diff [options] ``` | Flag | Description | | ------------------ | ---------------------------------------------------------------------- | | `--tables ` | Diff only specific tables/views | | `--schema ` | PostgreSQL schema. Falls back to `ATLAS_SCHEMA` env var, then `public` | | `--source ` | Read from `semantic/{name}/` subdirectory | **Examples:** ```bash # Check all tables for schema drift bun run atlas -- diff # Check specific tables only bun run atlas -- diff --tables users,orders # CI usage: fail the build if schema drifted bun run atlas -- diff || echo "Schema drift detected!" ``` query [#query] Ask a natural language question and get an answer. Calls `POST /api/v1/query` on a running Atlas API server — only `bun run dev:api` is needed (the full Next.js stack is not required). 
```bash bun run atlas -- query "your question" [options] ``` | Flag | Description | | ------------------- | ----------------------------------------- | | `--json` | Raw JSON output (pipe-friendly) | | `--csv` | CSV output (headers + rows, no narrative) | | `--quiet` | Data only — no narrative, SQL, or stats | | `--auto-approve` | Auto-approve any pending actions | | `--connection ` | Query a specific datasource | **Environment:** | Variable | Default | Description | | --------------- | ----------------------- | -------------------------- | | `ATLAS_API_URL` | `http://localhost:3001` | API server URL | | `ATLAS_API_KEY` | — | API key for authentication | **Examples:** ```bash # Default: formatted table output with narrative explanation bun run atlas -- query "How many users signed up last month?" # JSON: structured response for piping to jq or other scripts bun run atlas -- query "top 10 customers by revenue" --json # CSV: headers + rows only, redirect to file for reports bun run atlas -- query "monthly revenue by product" --csv > report.csv # Quiet: raw data with no narrative, SQL, or stats bun run atlas -- query "active users today" --quiet # Target a named datasource from atlas.config.ts bun run atlas -- query "warehouse inventory" --connection warehouse ``` doctor [#doctor] Alias for [`validate`](#validate) with relaxed exit codes. Runs the same config, semantic layer, and connectivity checks, but Sandbox and Internal DB failures do not cause exit 1 — they still appear in the output but are excluded from the exit code calculation. ```bash bun run atlas -- doctor ``` No flags. Use `doctor` in environments where Sandbox or Internal DB are intentionally absent but other services (datasource, LLM provider) are available. Use `validate --offline` for fast CI checks that skip all connectivity. Use `validate` when you want strict exit codes for all failures. 
**Exit codes:** 0 = all pass (Sandbox/Internal DB failures excluded), 1 = any non-excluded failure, 2 = warnings only. **Example output:** ``` Config ✓ atlas.config.ts Valid (defineConfig) Semantic Layer ✓ semantic/entities/ 5 entities parsed ✓ semantic/glossary.yml Valid (12 terms) ✓ semantic/catalog.yml Valid ✓ semantic/metrics/ 2 metrics parsed Connectivity ✓ ATLAS_DATASOURCE_URL Set (postgresql://…@localhost:5432/atlas) ✓ Database connectivity Connected (PostgreSQL 16.1) ✓ LLM provider anthropic (ANTHROPIC_API_KEY set) ✓ Sandbox nsjail available ✓ Internal DB Connected ``` validate [#validate] Validate config, semantic layer, and connectivity. Checks that configuration is correct, semantic layer YAML files are valid, and all services are reachable. ```bash bun run atlas -- validate [options] ``` | Flag | Description | | ----------- | ---------------------------------------------------------- | | `--offline` | Skip connectivity checks (no database or API key required) | **Checks:** * **Config** — `atlas.config.ts` presence and structure * **Semantic layer** — YAML syntax, required fields, column types, join references, metric SQL, glossary entries, cross-references * **Connectivity** (unless `--offline`) — datasource, database, LLM provider, sandbox, internal DB **Exit codes:** 0 = all pass, 1 = any failure, 2 = warnings only. Useful in CI to catch errors before deployment. Use `--offline` for fast, local-only validation. mcp [#mcp] Start an MCP (Model Context Protocol) server for use with Claude Desktop, Cursor, and other MCP-compatible clients. 
```bash bun run atlas -- mcp [options] ``` | Flag | Default | Description | | -------------------------- | ------- | --------------------------------------------------------- | | `--transport ` | `stdio` | Transport type | | `--port ` | `8080` | Port for SSE transport (only used with `--transport sse`) | **When to use each transport:** * **`stdio`** (default) — For local MCP clients that launch the server as a subprocess (Claude Desktop, Cursor, Windsurf). The client manages the process lifecycle. This is the most common setup. * **`sse`** — For remote or containerized MCP servers where the client connects over HTTP. Use this when the MCP server runs in a Docker container, on a remote host, or when multiple clients need to share one server instance. Clients connect via `http://host:port/mcp`. **Examples:** ```bash # Start MCP server on stdio (default, for Claude Desktop) bun run atlas -- mcp # Start with SSE transport on a custom port bun run atlas -- mcp --transport sse --port 9090 ``` **Claude Desktop configuration** (`claude_desktop_config.json`): ```json { "mcpServers": { "atlas": { "command": "bun", "args": ["run", "atlas", "--", "mcp"], "env": { "ATLAS_DATASOURCE_URL": "postgresql://user:pass@host:5432/db", "ATLAS_PROVIDER": "anthropic", "ANTHROPIC_API_KEY": "sk-ant-..." } } } } ``` eval [#eval] Run the evaluation pipeline against demo schemas. Used to measure text-to-SQL accuracy. Test cases are YAML files in `eval/cases/`, organized by dataset (`simple/`, `cybersec/`, `ecommerce/`). Each case specifies `id`, `question`, `schema`, `difficulty`, `category`, `gold_sql`, and optionally `expected_rows` and `tags`. Results are written to JSONL files and can be compared against baselines for regression detection. 
```bash bun run atlas -- eval [options] ``` | Flag | Description | | ---------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | | `--schema ` | Filter by demo dataset name (e.g. `simple`, `cybersec`, `ecommerce`). This is the eval dataset name, not a PostgreSQL schema | | `--category ` | Filter by category | | `--difficulty ` | Filter by difficulty | | `--id ` | Run a single case | | `--limit ` | Max cases to evaluate | | `--resume ` | Resume from existing JSONL results file | | `--baseline` | Save results as new baseline | | `--compare ` | Diff against baseline (exit 1 on regression) | | `--csv` | CSV output | | `--json` | JSON summary output | smoke [#smoke] Run end-to-end smoke tests against a running Atlas deployment. ```bash bun run atlas -- smoke [options] ``` | Flag | Default | Description | | ----------------- | ----------------------- | ------------------------------------ | | `--target ` | `http://localhost:3001` | API base URL | | `--api-key ` | — | Bearer auth token | | `--timeout ` | `30000` | Per-check timeout | | `--verbose` | — | Show full response bodies on failure | | `--json` | — | Machine-readable JSON output | **Environment:** Flags can also be set via environment variables. `--target` falls back to `ATLAS_API_URL`, and `--api-key` falls back to `ATLAS_API_KEY`. Explicit flags take precedence. | Variable | Default | Description | | --------------- | ----------------------- | --------------------------------------------- | | `ATLAS_API_URL` | `http://localhost:3001` | API base URL (overridden by `--target`) | | `ATLAS_API_KEY` | — | Bearer auth token (overridden by `--api-key`) | plugin [#plugin] Manage Atlas plugins. plugin list [#plugin-list] List installed plugins from `atlas.config.ts`. ```bash bun run atlas -- plugin list ``` plugin create [#plugin-create] Scaffold a new plugin. 
```bash
bun run atlas -- plugin create --type <type>
```

| Flag | Description |
| --- | --- |
| `--type <type>` | Plugin type: `datasource`, `context`, `interaction`, `action`, `sandbox` (required) |

plugin add [#plugin-add]

Install a plugin package.

```bash
bun run atlas -- plugin add <package>
```

migrate [#migrate]

Generate or apply plugin schema migrations.

```bash
bun run atlas -- migrate [options]
```

| Flag | Description |
| --- | --- |
| `--apply` | Execute migrations against internal database (default: dry-run) |

index [#index]

Rebuild the semantic index from current YAML files, or print index statistics. The semantic index is a pre-computed text summary of the semantic layer that the agent receives as context — it condenses all entities, columns, metrics, and glossary terms so the agent can find relevant tables without reading every YAML file via the explore tool.

```bash
bun run atlas -- index [options]
```

| Flag | Description |
| --- | --- |
| `--stats` | Print current index statistics without rebuilding |

**Examples:**

```bash
# Rebuild the semantic index
bun run atlas -- index

# Check index stats without rebuilding
bun run atlas -- index --stats
```

`--stats` prints a one-line summary: entity count, dimensions, measures, metrics, glossary terms, and total keywords. Use this to verify the index covers your semantic layer after adding or removing entity files.

A full rebuild loads all YAML files under `semantic/` (including per-source subdirectories) and validates that they parse correctly. The command prints the number of indexed entities, dimensions, measures, keyword count, and elapsed time on success.

import [#import]

Import semantic layer YAML files from disk into the internal database for the active organization.
Calls `POST /api/v1/admin/semantic/org/import` on a running Atlas API server. This is used in multi-tenant (org-scoped) deployments where the semantic layer is stored in the internal database rather than read from disk at runtime. After running `atlas init --org <orgId>` to generate YAML files on disk, use `atlas import` to sync them into the database.

```bash
bun run atlas -- import [options]
```

| Flag | Description |
| --- | --- |
| `--connection <name>` | Associate imported entities with a named datasource |

**Environment:**

| Variable | Default | Description |
| --- | --- | --- |
| `ATLAS_API_URL` | `http://localhost:3001` | API server URL |
| `ATLAS_API_KEY` | — | API key for authentication (required if auth is enabled) |

**Requires** a running Atlas API server (`bun run dev:api` or a production deployment).

**Examples:**

```bash
# Import all semantic layer files for the active org
bun run atlas -- import

# Import and associate entities with a specific datasource
bun run atlas -- import --connection warehouse
```

**Output:** The command reports how many entities were imported, how many were skipped (already up to date), the total count, and any errors encountered during import.

learn [#learn]

Analyze the audit log and propose semantic layer YAML improvements. This is Atlas's offline learning loop — it examines real query patterns and suggests additions to your entity files, joins, and glossary.

```bash
bun run atlas -- learn [options]
```

| Flag | Description |
| --- | --- |
| `--apply` | Write proposed changes to YAML files (default: dry-run) |
| `--limit <n>` | Max audit log entries to analyze (default: 1000) |
| `--since <date>` | Only analyze queries after this date (ISO 8601, e.g. `2026-03-01`) |
| `--source <name>` | Read from/write to the `semantic/{name}/` subdirectory |
| `--suggestions` | Generate query suggestions from the audit log (stored in the `query_suggestions` table). Can be combined with `--apply` and other flags |

**Requires** `DATABASE_URL` to be set (the audit log lives in the internal database).

Proposals include:

* **Query patterns** — frequently used SQL not yet documented in entity YAML
* **Join discoveries** — table pairs queried together but with no join defined
* **Glossary terms** — column aliases used often enough to warrant a glossary entry

```bash
# Preview proposals (dry run, the default)
atlas learn

# Apply changes to YAML files
atlas learn --apply

# Only analyze recent queries
atlas learn --since 2026-03-01 --limit 500

# Generate query suggestions from audit log
atlas learn --suggestions
```

benchmark [#benchmark]

Run the [BIRD benchmark](https://bird-bench.github.io/) for text-to-SQL accuracy evaluation. This is a developer tool for measuring Atlas's query generation quality.

BIRD is an external academic benchmark dataset (~1500 questions across 11 SQLite databases) and is **not included in the Atlas repository**. You must download the BIRD dev set separately from the [BIRD website](https://bird-bench.github.io/) and point `--bird-path` to the extracted directory.

```bash
bun run atlas -- benchmark [options]
```

| Flag | Description |
| --- | --- |
| `--bird-path <path>` | Path to the downloaded BIRD dev directory (required) |
| `--limit <n>` | Max questions to evaluate |
| `--db <name>` | Filter to a single database |
| `--csv` | CSV output |
| `--resume <file>` | Resume from existing JSONL results file |

export [#export]

Export workspace data to a portable migration bundle (JSON). Reads from the internal database. Used as the first step in a self-hosted → SaaS migration workflow. See [Migration guide](/guides/migration) for the full workflow.
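Putting `export` together with `migrate-import` (documented below), the end-to-end migration is a two-step workflow. This is a sketch — the bundle file name and API key are illustrative:

```bash
# 1. On the self-hosted instance: bundle workspace data (requires DATABASE_URL)
bun run atlas -- export --output atlas-export.json

# 2. Push the bundle into the hosted workspace (API key value is illustrative)
ATLAS_API_KEY=sk-example bun run atlas -- migrate-import --bundle atlas-export.json
```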
```bash
bun run atlas -- export [options]
```

| Flag | Description |
| --- | --- |
| `--output <path>` | Output file path (default: `./atlas-export-{date}.json`) |
| `-o <path>` | Alias for `--output` |
| `--org <orgId>` | Export data for a specific org (default: global/unscoped) |

**Requires** `DATABASE_URL` to be set (reads from the internal database).

**Examples:**

```bash
# Export all workspace data to a timestamped file
bun run atlas -- export

# Export to a specific file
bun run atlas -- export --output backup.json

# Export a specific organization's data
bun run atlas -- export --org org_abc123
```

migrate-import [#migrate-import]

Import an export bundle into a hosted Atlas instance. Used for self-hosted → SaaS migration. Calls the target instance's internal migration API endpoint.

```bash
bun run atlas -- migrate-import --bundle <path> [options]
```

| Flag | Description |
| --- | --- |
| `--bundle <path>` | Path to the export bundle JSON file (required) |
| `--target <url>` | Target Atlas API URL (default: `https://app.useatlas.dev`) |
| `--api-key <key>` | API key for the target workspace (or set `ATLAS_API_KEY`) |

**Examples:**

```bash
# Import into the hosted SaaS (default target)
bun run atlas -- migrate-import --bundle atlas-export-2026-04-02.json

# Import into a self-hosted instance
bun run atlas -- migrate-import --bundle backup.json --target https://atlas.internal.company.com

# Use env var for the API key
ATLAS_API_KEY=sk-... bun run atlas -- migrate-import --bundle backup.json
```

completions [#completions]

Output a shell completion script for bash, zsh, or fish. Completions cover all commands and their flags.

```bash
bun run atlas -- completions <shell>
```

No flags. The only argument is the target shell. If the shell argument is missing or unsupported, the command prints usage instructions and exits with code 1.
**Installation** (assumes `atlas` is installed globally or aliased — substitute `bun run atlas --` if running from the monorepo)**:** | Shell | Setup | | ----- | ------------------------------------------------------------------------- | | bash | Add `eval "$(atlas completions bash)"` to `~/.bashrc` | | zsh | Add `eval "$(atlas completions zsh)"` to `~/.zshrc` | | fish | Run `atlas completions fish > ~/.config/fish/completions/atlas.fish` once | **Examples:** ```bash # Generate and activate bash completions for the current session eval "$(bun run atlas -- completions bash)" # Persist zsh completions across sessions echo 'eval "$(atlas completions zsh)"' >> ~/.zshrc # Install fish completions (saved to file, auto-loaded by fish) bun run atlas -- completions fish > ~/.config/fish/completions/atlas.fish ``` *** See Also [#see-also] * [Schema Evolution](/deployment/schema-evolution) — Detecting database drift with `atlas diff` and updating entity YAMLs * [MCP Server](/guides/mcp) — Using `atlas mcp` with Claude Desktop and Cursor * [Environment Variables](/reference/environment-variables) — Variables that affect CLI behavior (`ATLAS_DATASOURCE_URL`, `ATLAS_SCHEMA`, etc.) * [Configuration](/reference/config) — Declarative config file that the CLI reads and validates * [Troubleshooting](/guides/troubleshooting) — Diagnostic steps when CLI commands fail --- # Error Codes (/reference/error-codes) Atlas uses structured error codes across its API, SDK, and chat interface. Every error response includes a `code` field identifying the failure, a human-readable `message`, and a `retryable` flag indicating whether the client should retry. Error codes are defined in [`@useatlas/types/errors`](https://github.com/AtlasDevHQ/atlas/blob/main/packages/types/src/errors.ts) and shared across all Atlas packages. *** Server Error Codes (ChatErrorCode) [#server-error-codes-chaterrorcode] These codes are returned by the Atlas API in JSON error responses. 
Each maps to an HTTP status code and a retryable classification. Retryable Errors [#retryable-errors] Transient failures where retrying the same request may succeed. Use exponential backoff for retries. | Code | HTTP Status | Description | Common Cause | Fix | | ---------------------- | ----------- | ----------------------------- | ---------------------------------------------------------------------- | ------------------------------------------------------------------------------------ | | `rate_limited` | 429 | Too many requests | Client exceeding the per-user rate limit | Wait for `retryAfterSeconds` then retry. Reduce request frequency | | `provider_rate_limit` | 503 | AI provider rate limited | Upstream LLM provider (Anthropic, OpenAI, etc.) throttling requests | Wait and retry with backoff. Consider upgrading your provider plan | | `provider_timeout` | 504 | AI provider timed out | LLM took too long to respond, or query exceeded `ATLAS_QUERY_TIMEOUT` | Retry with a simpler question. Increase `ATLAS_QUERY_TIMEOUT` if queries are complex | | `provider_unreachable` | 503 | Cannot reach the AI provider | Network issue between Atlas and the LLM provider, or provider outage | Check provider status page. Retry after a short delay | | `provider_error` | 502 | AI provider returned an error | Unexpected error from the LLM provider (500, malformed response, etc.) | Retry after a short delay. Check provider status if persistent | | `internal_error` | 500 | Server error | Unhandled exception, database connection failure, or pool exhaustion | Retry after a short delay. Check server logs using the `requestId` | Permanent Errors [#permanent-errors] Retrying these will not help — the request or configuration must be changed. 
| Code | HTTP Status | Description | Common Cause | Fix | | -------------------------- | ----------- | --------------------------------- | --------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- | | `auth_error` | 401 | Authentication failed | Invalid API key or revoked token | Check your API key or sign in again. See [Authentication](/deployment/authentication) | | `session_expired` | 401 | Session expired | Session token has expired or been revoked | Sign in again to get a new session | | `forbidden` | 403 | Access denied | User lacks the required role (e.g., admin endpoints require `admin` role) | Request the appropriate role from your Atlas administrator | | `forbidden_role` | 403 | Admin role required | Non-admin user tried to access an admin endpoint | Sign in with an admin account or request the admin role | | `org_not_found` | 400 | No active organization | Request requires an active organization but none is selected | Select an organization in the org switcher and try again | | `configuration_error` | 400 | Atlas is not fully configured | Missing environment variables, invalid config file, or startup diagnostics failed | Run `atlas doctor` to identify the issue. See [Troubleshooting](/guides/troubleshooting) | | `no_datasource` | 400 | No datasource configured | `ATLAS_DATASOURCE_URL` is not set | Set `ATLAS_DATASOURCE_URL` in your environment. See [Environment Variables](/reference/environment-variables) | | `invalid_request` | 400 | Invalid request | Malformed JSON body or missing required fields | Check the request body format. 
See [API Reference](/reference/api) | | `validation_error` | 422 | Request validation failed | Request body doesn't match the expected schema (wrong types, missing fields) | Check the `details` field for specific field errors | | `not_found` | 404 | Resource not found | Conversation, entity, or other resource doesn't exist or isn't owned by the caller | Verify the resource ID. The resource may have been deleted | | `plan_limit_exceeded` | 429 | Plan limit exceeded | Workspace has exceeded its plan's query or token limit (including the 10% grace buffer) | Upgrade your plan or wait until the next billing period | | `provider_model_not_found` | 400 | AI model not found | The model specified in `ATLAS_MODEL` doesn't exist at the configured provider | Check `ATLAS_MODEL` and `ATLAS_PROVIDER` values. See [Environment Variables](/reference/environment-variables) | | `provider_auth_error` | 503 | AI provider authentication failed | Invalid or expired LLM provider API key (e.g., `ANTHROPIC_API_KEY`) | Verify your provider API key. Regenerate it if expired | Billing & Workspace Errors [#billing--workspace-errors] These codes are returned by billing enforcement and workspace status middleware. They block requests before the agent loop runs. 
| Code | HTTP Status | Description | Common Cause | Fix | | ------------------------ | ----------- | ---------------------- | ---------------------------------------------- | ----------------------------------------------------------------------------------------------------- | | `trial_expired` | 403 | Trial has expired | 14-day SaaS trial ended | Upgrade to a paid plan | | `billing_check_failed` | 503 | Billing check failed | Failed to fetch plan data from internal DB | Retry — transient infrastructure issue | | `workspace_check_failed` | 503 | Workspace check failed | Failed to verify workspace status | Retry — transient infrastructure issue | | `workspace_throttled` | 429 | Workspace throttled | Workspace triggered abuse detection thresholds | Wait for the throttle delay to pass and retry. See [Abuse Prevention](/platform-ops/abuse-prevention) | | `workspace_suspended` | 403 | Workspace suspended | Admin suspended the workspace | Contact your workspace administrator | | `workspace_deleted` | 404 | Workspace deleted | Workspace has been permanently deleted | Create a new workspace | *** Client Error Codes (ClientErrorCode) [#client-error-codes-clienterrorcode] These codes are detected **client-side** by the SDK and chat UI before parsing a server response. They represent network-level failures or HTTP status patterns. 
| Code | Description | Common Cause | Fix | Retryable | | ------------------- | ------------------------------------- | ---------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- | --------- | | `api_unreachable` | Cannot connect to the Atlas API | Network failure, DNS resolution error, server not running, or CORS issue | Check the API URL configuration and ensure the server is running | Yes | | `auth_failure` | HTTP 401 detected before JSON parsing | API key not sent, expired session token, or wrong auth header format | Check your API key or sign in again | No | | `rate_limited_http` | HTTP 429 detected before JSON parsing | Rate limit hit — the response wasn't valid JSON (e.g., from a reverse proxy) | Wait 30 seconds and retry | Yes | | `server_error` | HTTP 5xx detected before JSON parsing | Server crashed or upstream proxy error (502, 503) | Retry after a short delay | Yes | | `offline` | Browser reports no network connection | Device is offline (detected via `navigator.onLine === false`) | Reconnect to the network — the client may auto-retry when connectivity is restored | Yes | Client error codes appear in the `clientCode` field of `ChatErrorInfo`, while server error codes appear in the `code` field. When the server returns valid JSON with a known error code, the server code takes precedence. *** SDK Error Codes [#sdk-error-codes] The `@useatlas/sdk` includes additional codes in the `AtlasErrorCode` type beyond `ChatErrorCode`. Three are client-side codes detected by the SDK itself (never returned by the server). One (`not_available`) is returned by server admin and conversation endpoints. 
| Code | Description | Origin | Retryable | | ------------------ | ------------------------------------------------------------------------------ | -------------------------- | --------- | | `network_error` | `fetch()` threw an error (connection refused, DNS failure, stream interrupted) | SDK client-side | Yes | | `invalid_response` | Server returned a 2xx status but the body wasn't valid JSON | SDK client-side | No | | `unknown_error` | Server returned an error response with an unrecognized error code | SDK client-side | No | | `not_available` | Feature not available (e.g., conversation history without `DATABASE_URL`) | Server (admin/CRUD routes) | No | *** Error Response Format [#error-response-format] All Atlas API error responses follow a consistent JSON structure: ```json { "error": "rate_limited", "message": "Too many requests. Please wait before trying again.", "retryAfterSeconds": 12, "retryable": true, "requestId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890" } ``` | Field | Type | Description | | ------------------- | ----------- | ---------------------------------------------------------------------------------------------- | | `error` | `string` | The error code (one of the codes listed above) | | `message` | `string` | Human-readable description of the error | | `retryable` | `boolean` | Whether retrying the same request may succeed | | `retryAfterSeconds` | `number?` | Seconds to wait before retrying (only for `rate_limited`; client-side parsing clamps to 0–300) | | `requestId` | `string?` | Server-assigned UUID for log correlation. 
Quote this when reporting issues |
| `details` | `object?` | Additional context (e.g., Zod validation issues for `validation_error`) |
| `diagnostics` | `object[]?` | Startup diagnostic results (only for `configuration_error`) |

***

How Server Codes Map to SDK Errors [#how-server-codes-map-to-sdk-errors]

When you use the `@useatlas/sdk`, server error responses are automatically parsed into `AtlasError` instances:

```
Server JSON Response          →  SDK AtlasError
─────────────────────            ──────────────────
{ error: "rate_limited" }     →  error.code = "rate_limited"
{ message: "Too many..." }    →  error.message = "Too many..."
HTTP 429                      →  error.status = 429
{ retryable: true }           →  error.retryable = true
{ retryAfterSeconds: 12 }     →  error.retryAfterSeconds = 12
```

If the server response isn't valid JSON, or the `error` field isn't a known code, the SDK classifies the error as `network_error`, `invalid_response`, or `unknown_error` depending on the failure mode.

***

Error Handling with Retry Logic [#error-handling-with-retry-logic]

Use the `retryable` flag on `AtlasError` to build generic retry logic without hard-coding error codes:

```typescript
import { AtlasError, createAtlasClient, type QueryResponse } from "@useatlas/sdk";

const atlas = createAtlasClient({
  baseUrl: "https://api.example.com",
  apiKey: "your-api-key",
});

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function queryWithRetry(
  question: string,
  maxRetries = 3,
): Promise<QueryResponse> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await atlas.query(question);
    } catch (error) {
      if (!(error instanceof AtlasError)) throw error;

      // Permanent error — retrying won't help
      if (!error.retryable) throw error;

      // Last attempt — no more retries
      if (attempt === maxRetries) throw error;

      // Rate limited — use the server-provided delay
      if (error.code === "rate_limited" && error.retryAfterSeconds) {
        console.log(`Rate limited — waiting ${error.retryAfterSeconds}s`);
        await sleep(error.retryAfterSeconds * 1000);
        continue;
      }

      // Other transient errors — exponential backoff
      const delay = Math.min(1000 * 2 ** attempt, 30_000);
      console.log(`${error.code} — retrying in ${delay}ms (attempt ${attempt + 1}/${maxRetries})`);
      await sleep(delay);
    }
  }
  // Unreachable, but satisfies TypeScript
  throw new Error("Retry loop exited unexpectedly");
}
```

Handling Specific Codes [#handling-specific-codes]

Refine behavior for specific error codes after checking `retryable`:

```typescript
try {
  await atlas.query("Revenue by region");
} catch (error) {
  if (!(error instanceof AtlasError)) throw error;

  switch (error.code) {
    case "auth_error":
    case "session_expired":
      // Redirect to login or prompt for a new API key
      redirectToLogin();
      break;
    case "forbidden_role":
      // User doesn't have admin access
      showPermissionDenied(error.message);
      break;
    case "configuration_error":
    case "no_datasource":
      // Show setup instructions — the server isn't ready
      showSetupGuide(error.message);
      break;
    case "provider_model_not_found":
    case "provider_auth_error":
      // Admin needs to fix server config
      showAdminAlert(error.message);
      break;
    default:
      if (error.retryable) {
        // Generic transient error — show retry UI
        showRetryButton(error.message);
      } else {
        // Permanent error — show the message to the user
        showError(error.message);
      }
  }
}
```

***

See Also [#see-also]

* [Troubleshooting](/guides/troubleshooting) — Startup errors, connection issues, and diagnostic codes
* [SDK Reference](/reference/sdk#error-handling) — Full SDK API with error handling examples
* [React Hooks Reference](/reference/react) — `ChatErrorInfo` and client-side error parsing in `useAtlasChat`
* [Rate Limiting & Retry](/guides/rate-limiting) — Rate limit configuration, 429 handling, and backoff patterns
* [Environment Variables](/reference/environment-variables) — Configuration that affects error behavior
* [API Overview](/reference/api#error-responses) — HTTP error response format

---

# React Hooks Reference
(/reference/react)

The `@useatlas/react` package provides headless React hooks and a pre-built chat component for embedding Atlas into React applications. Use the hooks to build a fully custom UI, or drop in the `AtlasChat` component for a ready-made experience.

Installation [#installation]

```bash
bun add @useatlas/react
```

Peer Dependencies [#peer-dependencies]

The hooks require these peer dependencies:

```bash
bun add react react-dom ai @ai-sdk/react
```

The pre-built `AtlasChat` component additionally uses `lucide-react`, `react-syntax-highlighter`, `recharts`, `tailwindcss`, and `xlsx` (all optional peer deps).

Imports [#imports]

The package has two entry points:

```typescript
// Pre-built AtlasChat component + UI provider
import { AtlasChat, AtlasUIProvider } from "@useatlas/react";

// Headless hooks only (smaller bundle, no UI components)
import { AtlasProvider, useAtlasChat, useAtlasAuth } from "@useatlas/react/hooks";
```

The main entry point (`@useatlas/react`) exports `AtlasUIProvider` for use with the pre-built `AtlasChat` component. The hooks entry point (`@useatlas/react/hooks`) exports `AtlasProvider`, a lightweight provider for headless hooks.

***

AtlasChat [#atlaschat]

A pre-built, full-featured chat component. Drop it into your app for a ready-made Atlas experience with markdown rendering, code highlighting, charts, conversation history, and schema exploration.

```tsx
import { AtlasChat, AtlasUIProvider } from "@useatlas/react";

function App() {
  return (
    <AtlasUIProvider>
      {/* apiUrl value is illustrative — point it at your Atlas API server */}
      <AtlasChat apiUrl="https://atlas.example.com" />
    </AtlasUIProvider>
  );
}
```

AtlasChat Props [#atlaschat-props]

| Prop | Type | Default | Description |
| --- | --- | --- | --- |
| `apiUrl` | `string` | — | **Required.** Atlas API server URL. Use `""` for same-origin deployments. |
| `apiKey` | `string` | — | API key for `simple-key` auth mode. Sent as Bearer token.
Also loaded from `sessionStorage` (`atlas-api-key`) on mount. | | `theme` | `"light" \| "dark" \| "system"` | `"system"` | Color theme. Applied on mount via CSS variables and persisted to localStorage. | | `sidebar` | `boolean` | `false` | Show the conversation history sidebar. Lazy-loads conversations after auth resolves. | | `schemaExplorer` | `boolean` | `false` | Show the schema explorer button and modal for browsing tables and columns. | | `authClient` | `AtlasAuthClient` | — | Custom auth client for `managed` auth mode. Provides session management (login, signup, logout, `useSession`). | | `toolRenderers` | `ToolRenderers` | — | Custom renderers for tool results. Map tool names to React components (see [Tool Result Types](#tool-result-types) below). | | `chatEndpoint` | `string` | `"/api/chat"` | Custom chat API endpoint path. Combined with `apiUrl` for the full URL. | | `conversationsEndpoint` | `string` | `"/api/v1/conversations"` | Custom conversations API endpoint path. Used for listing and loading conversations. | Auth Mode Detection [#auth-mode-detection] `AtlasChat` auto-detects the server's auth mode by calling `/api/health` on mount: | Auth Mode | Behavior | | -------------------- | ----------------------------------------------------------------------------------- | | `managed` | Uses `authClient` for session-based auth. Shows password change dialog if required. | | `simple-key` | Sends `apiKey` as Bearer token. Falls back to `sessionStorage`. | | `none` | No auth headers sent. | | Health check failure | Defaults to `none`. Shows a persistent warning banner. | Brand Color [#brand-color] If the health endpoint returns a `brandColor` value (OKLCH format), it's applied as the `--atlas-brand` CSS custom property. See [Custom Styling](#custom-styling) for override options. *** AtlasProvider [#atlasprovider] Wraps your app and supplies API URL, auth credentials, and an optional auth client to all Atlas hooks. 
Must be an ancestor of any component that uses Atlas hooks.

```tsx
import { AtlasProvider } from "@useatlas/react/hooks";

function App() {
  return (
    // Wrap your app — all Atlas hooks read config from this provider
    <AtlasProvider apiUrl="https://atlas.example.com" apiKey="your-api-key">
      <ChatUI />
    </AtlasProvider>
  );
}
```

Props [#props]

| Prop | Type | Required | Description |
| --- | --- | --- | --- |
| `apiUrl` | `string` | Yes | Atlas API server URL. Use `""` for same-origin deployments. |
| `apiKey` | `string` | No | API key for `simple-key` auth mode. Sent as Bearer token. |
| `authClient` | `AtlasAuthClient` | No | Custom auth client for `managed` auth mode (better-auth compatible). |

The provider automatically detects cross-origin requests and configures credential handling accordingly.

***

useAtlasChat [#useatlaschat]

Manages chat state, message streaming, and conversation tracking. Wraps the AI SDK's `useChat` with Atlas-specific transport configuration.

```tsx
import { useAtlasChat } from "@useatlas/react/hooks";

function ChatUI() {
  const {
    messages,        // All messages in the conversation (AI SDK UIMessage format)
    sendMessage,     // Send a text message to the agent
    input,           // Controlled input value
    setInput,        // Update the input field
    status,          // "ready" | "submitted" | "streaming" | "error"
    isLoading,       // True while the agent is working
    error,           // Last error, if any
    conversationId,  // Server-assigned conversation ID
  } = useAtlasChat();

  return (
    <form onSubmit={(e) => { e.preventDefault(); sendMessage(input); }}>
      {/* Render each message's text parts */}
      {messages.map((msg) => (
        <div key={msg.id}>
          {msg.parts?.map((p, i) =>
            p.type === "text" ? <span key={i}>{p.text}</span> : null
          )}
        </div>
      ))}
      <input value={input} onChange={(e) => setInput(e.target.value)} />
    </form>
  );
}
```

useAtlasChat options [#useatlaschat-options]

```typescript
useAtlasChat({
  initialConversationId: "existing-id",     // Resume an existing conversation
  onConversationIdChange: (id) => { ... },  // Called when server assigns a conversation ID
})
```

useAtlasChat return value [#useatlaschat-return-value]

| Field | Type | Description |
| --- | --- | --- |
| `messages` | `UIMessage[]` | Array of chat messages (AI SDK format) |
| `setMessages` | `(msgs \| updater) => void` | Replace all messages, or update via callback |
| `sendMessage` | `(text: string) => Promise<void>` | Send a text message. Rejects on failure. |
| `input` | `string` | Current input value (managed by the hook) |
| `setInput` | `(input: string) => void` | Update the input value |
| `status` | `AtlasChatStatus` | `"ready"`, `"submitted"`, `"streaming"`, or `"error"` |
| `isLoading` | `boolean` | `true` when `status` is `"submitted"` or `"streaming"` |
| `error` | `Error \| null` | Last error, if any |
| `conversationId` | `string \| null` | Current conversation ID (set by server via `x-conversation-id` header) |
| `setConversationId` | `(id: string \| null) => void` | Manually set the conversation ID |

***

useAtlasAuth [#useatlasauth]

Detects the server's auth mode, manages authentication state, and provides login/signup/logout for managed auth.

```tsx
import { useAtlasAuth } from "@useatlas/react/hooks";

// Gate component — shows login form or children based on auth state
function AuthGate({ children }: { children: React.ReactNode }) {
  const { authMode, isAuthenticated, isLoading, login } = useAtlasAuth();

  if (isLoading) return <div>Loading...</div>;

  // Only show login for managed auth — other modes handle auth externally
  if (!isAuthenticated && authMode === "managed") {
    // <LoginForm> is a placeholder for your own form component wired to login()
    return <LoginForm onSubmit={login} />;
  }

  return <>{children}</>;
}
```

useAtlasAuth return value [#useatlasauth-return-value]

| Field | Type | Description |
| --- | --- | --- |
| `authMode` | `AuthMode \| null` | Detected auth mode: `"none"`, `"simple-key"`, `"byot"`, or `"managed"`. `null` while loading. |
| `isAuthenticated` | `boolean` | Whether the user is authenticated based on auth mode and credentials |
| `session` | `{ user?: { email?: string } } \| null` | Session data for managed auth |
| `isLoading` | `boolean` | `true` while the health check or session loading is in progress |
| `error` | `Error \| null` | Error from health check or auth operations |
| `login` | `(email, password) => Promise<{ error?: string }>` | Sign in with email/password (managed auth) |
| `signup` | `(email, password, name) => Promise<{ error?: string }>` | Sign up (managed auth) |
| `logout` | `() => Promise<{ error?: string }>` | Sign out (managed auth) |

Auth mode detection works by calling the `/api/health` endpoint on mount. The hook retries once on failure (2 total attempts) before falling back to `"none"`.

***

useAtlasConversations [#useatlasconversations]

Lists, loads, deletes, and stars conversations. Auth credentials are automatically wired from `AtlasProvider`.
```tsx import { useAtlasConversations } from "@useatlas/react/hooks"; function ConversationList() { const { conversations, // Array of conversation metadata (no messages) total, // Total count for pagination isLoading, selectedId, // Currently highlighted conversation setSelectedId, // Select a conversation by ID loadConversation, // Fetch full message history for a conversation deleteConversation, starConversation, // Set star status (true to pin, false to unpin) } = useAtlasConversations(); if (isLoading) return
<div>Loading...</div>;

  return (
    <ul>
      {conversations.map((c) => (
        <li key={c.id} onClick={() => setSelectedId(c.id)}>
          {c.title ?? "Untitled"}
          {/* Toggle star — second arg is the desired starred state */}
          <button onClick={() => starConversation(c.id, !c.starred)}>★</button>
        </li>
      ))}
    </ul>
  );
}
```

useAtlasConversations options [#useatlasconversations-options]

```typescript
useAtlasConversations({
  enabled: true, // When false, refresh() becomes a no-op (default: true)
})
```

useAtlasConversations return value [#useatlasconversations-return-value]

| Field | Type | Description |
| ----- | ---- | ----------- |
| `conversations` | `Conversation[]` | Array of conversation metadata (no messages) |
| `total` | `number` | Total count of conversations |
| `isLoading` | `boolean` | Whether the conversation list is loading |
| `available` | `boolean` | Whether the conversation API is available (requires internal DB) |
| `selectedId` | `string \| null` | Currently selected conversation ID |
| `setSelectedId` | `(id: string \| null) => void` | Set the selected conversation |
| `refresh` | `() => Promise<void>` | Manually refresh the conversation list |
| `loadConversation` | `(id: string) => Promise<UIMessage[]>` | Load a conversation's messages (returns AI SDK format) |
| `deleteConversation` | `(id: string) => Promise<void>` | Delete a conversation |
| `starConversation` | `(id: string, starred: boolean) => Promise<void>` | Star or unstar a conversation |

***

useAtlasContext [#useatlascontext]

Provides raw access to the `AtlasProvider` context value. Throws if called outside `<AtlasProvider>`. Useful for building custom hooks or components that need direct access to the API URL, credentials, and cross-origin status.

```tsx
import { useAtlasContext } from "@useatlas/react/hooks";

function CustomFetcher() {
  const { apiUrl, apiKey, isCrossOrigin } = useAtlasContext();

  // Build a custom fetch call using provider-supplied credentials
  async function fetchCustomEndpoint() {
    const headers: Record<string, string> = {};
    if (apiKey) headers["Authorization"] = `Bearer ${apiKey}`;
    const res = await fetch(`${apiUrl}/api/v1/custom`, {
      headers,
      credentials: isCrossOrigin ?
"include" : "same-origin", }); return res.json(); } // ... } ``` useAtlasContext return value [#useatlascontext-return-value] | Field | Type | Description | | --------------- | --------------------- | -------------------------------------------------------------------------- | | `apiUrl` | `string` | Atlas API server URL | | `apiKey` | `string \| undefined` | API key for `simple-key` auth mode | | `authClient` | `AtlasAuthClient` | Auth client instance (noop if not provided to `AtlasProvider`) | | `isCrossOrigin` | `boolean` | Whether the API URL is cross-origin (derived from `apiUrl` at render time) | *** useConversations [#useconversations] Lower-level conversation management hook that accepts explicit auth configuration instead of reading from `AtlasProvider`. Use this when you need full control over request headers and credentials, or when integrating outside the `AtlasProvider` context. `useAtlasConversations` wraps this hook with context-derived credentials. ```tsx import { useConversations } from "@useatlas/react"; function ConversationManager() { const { conversations, total, loading, available, fetchError, selectedId, setSelectedId, fetchList, loadConversation, deleteConversation, starConversation, refresh, } = useConversations({ apiUrl: "https://your-atlas-api.example.com", enabled: true, getHeaders: () => ({ Authorization: "Bearer sk-..." }), getCredentials: () => "same-origin", }); // ... } ``` useConversations options [#useconversations-options] ```typescript interface UseConversationsOptions { apiUrl: string; enabled: boolean; getHeaders: () => Record; getCredentials: () => RequestCredentials; /** Custom conversations API endpoint path. Defaults to "/api/v1/conversations". 
   */
  conversationsEndpoint?: string;
}
```

useConversations return value [#useconversations-return-value]

| Field | Type | Description |
| ----- | ---- | ----------- |
| `conversations` | `Conversation[]` | Array of conversation metadata |
| `total` | `number` | Total count of conversations |
| `loading` | `boolean` | Whether the conversation list is loading |
| `available` | `boolean` | Whether the conversation API is available |
| `fetchError` | `string \| null` | Error message from the last fetch attempt |
| `selectedId` | `string \| null` | Currently selected conversation ID |
| `setSelectedId` | `(id: string \| null) => void` | Set the selected conversation |
| `fetchList` | `() => Promise<void>` | Fetch the conversation list |
| `loadConversation` | `(id: string) => Promise<UIMessage[]>` | Load a conversation's messages (returns AI SDK format) |
| `deleteConversation` | `(id: string) => Promise<void>` | Delete a conversation |
| `starConversation` | `(id: string, starred: boolean) => Promise<void>` | Star or unstar a conversation |
| `refresh` | `() => Promise<void>` | Alias for `fetchList` |

***

useAtlasConfig [#useatlasconfig]

Provides access to the `AtlasUIProvider` context value. This is the provider-level equivalent for the pre-built `AtlasChat` component tree (as opposed to `useAtlasContext`, which reads from `AtlasProvider`). Throws if called outside `<AtlasUIProvider>`.

```tsx
import { useAtlasConfig } from "@useatlas/react";

function CustomWidget() {
  const { apiUrl, authClient, isCrossOrigin } = useAtlasConfig();
  // ...
}
```

useAtlasConfig return value [#useatlasconfig-return-value]

| Field | Type | Description |
| ----- | ---- | ----------- |
| `apiUrl` | `string` | Atlas API server URL |
| `authClient` | `AtlasAuthClient` | Auth client instance |
| `isCrossOrigin` | `boolean` | Whether the API URL is cross-origin |

***

Utilities [#utilities]

setTheme [#settheme]

Sets the theme mode globally without needing a hook. Useful for server-side rendering or setting the theme before React hydrates.

```typescript
import { setTheme } from "@useatlas/react";

// Set theme before React renders (e.g. in a layout component)
setTheme("dark");
```

Signature [#signature]

```typescript
function setTheme(mode: ThemeMode): void
```

buildThemeInitScript / THEME_STORAGE_KEY [#buildthemeinitscript--theme_storage_key]

Server-side utilities for flicker-free theme initialization. `buildThemeInitScript` returns a `<script>` tag string that applies the persisted theme before React hydrates; `THEME_STORAGE_KEY` is the storage key the script reads the saved mode from.

---

# API Key Management (/guides/api-keys)

SaaS vs Self-Hosted [#saas-vs-self-hosted]

| Behavior | Self-Hosted | SaaS |
| -------- | ----------- | ---- |
| Key scope | Global (single workspace) | Per-workspace |
| Who can manage | Any admin | Workspace admins |
| Storage | Internal database | Internal database |

The UI and workflow are identical in both modes. In SaaS deployments, keys are automatically scoped to the active workspace.

***

API Endpoints [#api-endpoints]

All endpoints require admin authentication.
| Method | Path | Description |
| ------ | ---- | ----------- |
| `GET` | `/api/auth/api-key/list` | List all API keys |
| `POST` | `/api/auth/api-key/create` | Create a new key (returns full key once) |
| `POST` | `/api/auth/api-key/delete` | Revoke a key by ID |

***

See Also [#see-also]

* [SDK Reference](/reference/sdk) — Use API keys with the TypeScript SDK
* [MCP Server](/guides/mcp) — Configure Atlas as an MCP tool provider
* [Embedding Widget](/guides/embedding-widget) — Embed Atlas in your application
* [Authentication](/deployment/authentication) — Auth modes and configuration

---

# Admin Console (/guides/admin-console)

The admin console is a built-in web UI for managing your Atlas deployment. Access it at `/admin` in the Atlas web app.

**Prerequisites:**

* [Managed auth](/deployment/authentication#managed-auth) enabled (or auth mode `none` for local dev)
* A user with the `admin` role
* When auth mode is `none` (local development), all users have implicit admin access

In **SaaS mode**, workspace admins see a subset of admin pages scoped to their workspace. Platform admins see all pages including cross-tenant management. In **self-hosted mode**, all admins see the full admin console.

***

Dashboard [#dashboard]

**Route:** `/admin`

The dashboard shows deployment health at a glance:

* **Connections** — Number of registered datasources
* **Entities** — Tables and views in the semantic layer
* **Plugins** — Installed plugins and their health status
* **Overall health** — Aggregated status (healthy, degraded, unhealthy)

**Component health checks** show live status for each subsystem — Datasource, Internal DB, LLM Provider, Scheduler, and Sandbox — with latency, last-checked timestamp, and model/backend details where applicable.

***

Connections [#connections]

**Route:** `/admin/connections`

Lists all registered datasource connections with their database type, description, health status, and latency.
Connection URLs are encrypted at rest when `BETTER_AUTH_SECRET` or `ATLAS_ENCRYPTION_KEY` is set. **Actions:** * **Add connection** — Create a new datasource with URL, database type, and optional description. For PostgreSQL, an optional schema field is also available. The URL field supports a show/hide toggle for credential safety * **Edit connection** — Update an existing connection's URL, type, or description * **Delete connection** — Remove a connection (with confirmation dialog). The default connection cannot be deleted * **Test connection** — Runs a health check query against the datasource and reports latency and status Pool Stats [#pool-stats] A collapsible **Pool Stats** section shows real-time connection pool metrics for each datasource: * **Active / Idle / Total** connections with a visual bar * **Queries** — total queries executed through this pool * **Errors** — total errors and consecutive failure count * **Avg time** — average query execution time in milliseconds * **Last drain** — timestamp of the most recent pool drain **Drain & Recreate** — closes all connections in a pool and creates a fresh one. Use this when a connection becomes stale or unresponsive. A 30-second cooldown prevents drain storms. Pool warmup and auto-drain thresholds are configurable via [`ATLAS_POOL_WARMUP`](/reference/environment-variables) and [`ATLAS_POOL_DRAIN_THRESHOLD`](/reference/environment-variables) *** Semantic Layer Browser [#semantic-layer-browser] **Route:** `/admin/semantic` A two-panel browser for exploring the semantic layer: * **Left sidebar** — File tree showing entities, metrics, glossary, and catalog * **Right panel** — Detail view with a **Pretty** / **YAML** toggle In **Pretty** mode, entities show parsed dimensions, joins, measures, and query patterns with formatted tables. In **YAML** mode, the raw YAML source is displayed with syntax highlighting. Browse entities by clicking them in the sidebar. The catalog shows `use_for` tags and `common_questions`. 
The glossary highlights ambiguous terms. Metrics display SQL, aggregation type, and objectives.

In **SaaS mode**, admins can create, edit, and delete entities directly from the UI with schema-aware autocomplete and full version history. See [Semantic Editor](/guides/semantic-editor) for the full guide.

See [Semantic Layer](/getting-started/semantic-layer) for the YAML format reference.

***

Schema Diff [#schema-diff]

**Route:** `/admin/schema-diff`

Compares the live database schema against the semantic layer YAML files to detect drift:

* **Summary cards** — New tables (in DB, not in YAML), removed tables (in YAML, not in DB), changed tables (column-level drift), unchanged tables (in sync)
* **New tables** — Green cards listing tables that exist in the database but have no corresponding entity YAML. Shows a suggestion to run `atlas init --update`
* **Removed tables** — Red cards listing entity YAMLs that reference tables no longer in the database. Indicates stale entities that should be cleaned up
* **Changed tables** — Amber expandable cards with a column-level diff table showing added columns, removed columns, and type mismatches between DB and YAML
* **In-sync state** — When no drift is detected, shows a green success message

**Multiple connections:** If multiple datasource connections are registered, a connection selector dropdown appears in the page header. The diff is re-computed when you switch connections.

**API endpoint:** `GET /api/v1/admin/semantic/diff?connection=<name>` — returns structured JSON with `newTables`, `removedTables`, `tableDiffs`, `unchangedCount`, and a `summary` object.

***

Audit Log [#audit-log]

**Route:** `/admin/audit`

Requires an internal database (`DATABASE_URL`). Without it, this page shows a feature gate message.

Two tabs: **Log** and **Analytics**.
Log [#log] Shows every SQL query executed by the agent with: * **Stats** — Total queries, total errors, error rate * **Log table** — Timestamp, user, SQL query, tables accessed, duration, row count, success/failure **Data classification tags:** Each audit entry includes `tables_accessed` and `columns_accessed` arrays extracted from the validated SQL AST. These enable compliance-grade filtering — answer questions like "which queries touched the `users` table?" without parsing SQL text. **Filters:** * **Table** — Filter by table name (dropdown populated from audit data). Uses JSONB array matching on the `tables_accessed` column * **Column** — Filter by column name (dropdown populated from audit data). Uses JSONB array matching on the `columns_accessed` column * Date range (from/to) * Connection * Status (success/error) * Free-text search (SQL, user, error) * Pagination Analytics [#analytics] Charts and tables for query performance insights: * **Query volume** — Line chart showing queries and errors per day over the selected date range * **Slowest queries** — Top 20 queries ranked by average duration, with max duration and execution count * **Most frequent queries** — Top 20 queries ranked by execution count, with average duration and error count * **Error breakdown** — Bar chart grouping errors by message pattern * **Per-user activity** — Table showing query count, average duration, and error rate per user *** Actions [#actions] **Route:** `/admin/actions` Requires `ATLAS_ACTIONS_ENABLED=true`. Without it, this page shows a feature gate message. 
Lists action log entries with status filtering: * **Tabs** — Pending, Executed, Denied, Failed, All * **Table** — Relative timestamps (hover for absolute), action type with icon badge, target, summary, status badge * **Expandable rows** — Full payload, timestamps, error details, and a link to the originating conversation (when available) * **Empty states** — Context-aware messages per filter tab **Actions:** * **Approve** / **Deny** individual pending actions * **Bulk approve / deny** — Select multiple pending actions with checkboxes and approve or deny them in one operation. A "select all" toggle selects every pending action in the current view See [Actions Framework](/guides/actions) for details on approval modes. *** Scheduled Tasks [#scheduled-tasks] **Route:** `/admin/scheduled-tasks` Requires `ATLAS_SCHEDULER_ENABLED=true`. Manages recurring queries: * **Filter tabs** — All, Enabled, Disabled * **Table** — Name, question, cron expression, delivery channel, next run time, enabled status * **Expandable rows** — Recent run history with status, timing, and token usage **Actions:** * **Create task** — Form dialog with task name, natural language question, cron schedule (presets or custom expression with human-readable preview and next-run times), delivery channel (email, Slack, webhook), approval mode, and connection selector * **Edit task** — Same form, pre-populated with existing values * **Toggle enabled/disabled** * **Run now** — Trigger immediate execution Run History [#run-history] **Route:** `/admin/scheduled-tasks/runs` Cross-task run history page showing all executions across all tasks (newest first). 
* **Filters** — Task name, status (running, success, failed, skipped), date range * **Table** — Task name, status badge, started timestamp, duration, token usage, error preview * **Expandable detail** — Full error message, timestamps, duration, token usage, links to conversation (and action, when applicable) See [Scheduled Tasks](/guides/scheduled-tasks) for setup and configuration. *** Token Usage [#token-usage] **Route:** `/admin/token-usage` Requires an internal database (`DATABASE_URL`). Without it, this page shows a feature gate message. Tracks LLM token consumption across all agent interactions: * **Summary cards** — Total tokens, prompt tokens, completion tokens, total requests (with per-request averages) * **Token usage over time** — Area chart showing prompt and completion tokens per day * **Top users** — Table ranking users by total token consumption with prompt/completion breakdown **Filters:** * Date range (from/to) — defaults to last 30 days Token data is recorded automatically for every agent interaction (chat and query endpoints). When no internal database is configured, token usage is logged via pino only. *** Plugins [#plugins] **Route:** `/admin/plugins` Manage installed plugins — enable/disable, health checks, and configuration. In **SaaS mode**, the page includes a tabbed interface with **Installed** and **Available** tabs, providing a full plugin marketplace for browsing, installing, configuring, and uninstalling plugins. See [Plugin Marketplace](/guides/plugin-marketplace) for the full guide. In **self-hosted mode**, the page shows config-loaded plugins with: * **Plugin cards** — Each card shows name, type(s), version, health status, and enabled state * **Health check** — Trigger a live health probe for any plugin * **Enable/disable toggle** — Toggle a plugin on or off without restarting. 
Disabled plugins are skipped during agent execution (queries, hooks, tool registration) * **Configure** — Opens a config dialog with form inputs generated from the plugin's config schema. Secret values (API keys, tokens) are masked in the UI Plugin enable/disable state and config overrides are stored in the internal database (`DATABASE_URL`). Without it, all plugins are enabled and configuration is read-only. **How it works:** * Toggling a plugin immediately takes effect — disabled plugins are excluded from `getByType()` and `getAllHealthy()` calls in the agent loop * Config changes are saved to the `plugin_settings` table and take effect on next restart * Plugins that implement `getConfigSchema()` expose a typed form in the UI. Plugins without it show current config as read-only JSON *** Settings [#settings] **Route:** `/admin/settings` Manage application configuration from the UI. Settings follow a three-tier resolution: **DB override > env var > default**. * **Grouped sections** — Query Limits, Rate Limiting, Security, Sessions, Sandbox, Platform, Agent, Appearance, Secrets * **Source badges** — Each setting shows where its current value comes from: * **override** (blue) — value saved in the internal database * **workspace-override** (purple) — value saved per-workspace * **env** (green) — value from an environment variable * **default** (gray) — built-in default * **Edit** — Override any non-secret setting. Changes are saved to the internal database * **Live vs Restart** — Each setting shows whether changes take effect immediately (**Live**) or require a server restart (**Requires restart**). Query Limits, Rate Limiting, Sessions, Sandbox, and Appearance are live; Security, Platform, and Agent settings require a restart * **Reset** — Remove a DB override to revert to the environment variable or default value * **Secrets** — Sensitive settings (API keys, database URLs) are masked and read-only. 
Manage these via environment variables.
* **SaaS mode** — Workspace admins only see settings marked as workspace-visible. Platform admins see all settings. The response includes a `deployMode` field and a `manageable` flag indicating whether settings can be persisted.

On **app.useatlas.dev** (SaaS mode), most settings are hot-reloadable — changes are picked up on the next request without a server restart. A few infrastructure settings (LLM provider, model) may still require redeployment to fully take effect. The restart requirement mainly applies to self-hosted deployments.

Settings overrides require an internal database (`DATABASE_URL`). Without it, the settings page is read-only — all values come from environment variables.

**Available settings:**

| Section | Setting | Description | Scope |
| ------- | ------- | ----------- | ----- |
| Query Limits | `ATLAS_ROW_LIMIT` | Maximum rows returned per query (default: 1000) | workspace |
| Query Limits | `ATLAS_QUERY_TIMEOUT` | Query timeout in ms (default: 30000) | workspace |
| Rate Limiting | `ATLAS_RATE_LIMIT_RPM` | Max requests per minute per user (0 = disabled) | workspace |
| Security | `ATLAS_RLS_ENABLED` | Enable row-level security | platform |
| Security | `ATLAS_RLS_COLUMN` | Column name for RLS filtering | platform |
| Security | `ATLAS_RLS_CLAIM` | JWT claim path for RLS value | platform |
| Security | `ATLAS_TABLE_WHITELIST` | Only allow semantic layer tables (default: true) | platform |
| Security | `ATLAS_CORS_ORIGIN` | Allowed CORS origin (default: \*) | platform |
| Sessions | `ATLAS_SESSION_IDLE_TIMEOUT` | Seconds of inactivity before session invalidation (0 = disabled) | workspace |
| Sessions | `ATLAS_SESSION_ABSOLUTE_TIMEOUT` | Maximum session lifetime in seconds (0 = disabled) | workspace |
| Sandbox | `ATLAS_SANDBOX_BACKEND` | Sandbox backend for explore/Python tool isolation | workspace |
| Sandbox | `ATLAS_SANDBOX_URL` | Custom sidecar service URL (sidecar backend only) | workspace |
| Platform | `ATLAS_DEPLOY_MODE` | Deployment mode: auto, saas, or self-hosted | platform |
| Agent | `ATLAS_AGENT_MAX_STEPS` | Maximum tool-call steps per agent run (1–100, default: 25) | workspace |
| Agent | `ATLAS_PROVIDER` | LLM provider selection | platform |
| Agent | `ATLAS_MODEL` | Model ID override | platform |
| Agent | `ATLAS_LOG_LEVEL` | Application log level | platform |
| Appearance | `ATLAS_BRAND_COLOR` | Primary brand color in oklch format | platform |
| Secrets | `ANTHROPIC_API_KEY` | Anthropic provider API key (masked, read-only) | platform |
| Secrets | `OPENAI_API_KEY` | OpenAI provider API key (masked, read-only) | platform |
| Secrets | `DATABASE_URL` | Internal database connection string (masked, read-only) | platform |
| Secrets | `ATLAS_DATASOURCE_URL` | Analytics datasource connection string (masked, read-only) | platform |

***

Learned Patterns [#learned-patterns]

**Route:** `/admin/learned-patterns`

Requires an internal database (`DATABASE_URL`). Patterns are proposed by the agent during conversations or by the `atlas learn` CLI command.
Review, approve, and manage query patterns the agent has learned: * **Stats** — Total patterns, pending review, approved, rejected * **Status filter** — Tabs to filter by All / Pending / Approved / Rejected * **Entity filter** — Dropdown to filter by source entity (table) * **Pattern table** — Status badge, SQL pattern (monospace, truncated), description, source entity, confidence (progress bar), repetition count, source (Agent or CLI), created date **Detail sheet** — Click any row to open a side panel with: * Full SQL pattern in a monospace block * Metadata (entity, source, confidence, repetitions, timestamps) * Review history (who reviewed, when) * Source queries that originated the pattern **Actions (per pattern):** * **Approve** — Mark a pattern as approved * **Reject** — Mark a pattern as rejected * **Delete** — Permanently remove a pattern (with confirmation dialog) **Bulk actions:** * Select multiple patterns with checkboxes * **Approve selected** / **Reject selected** — Update all selected patterns at once (max 100 per operation) All status changes record the reviewer and timestamp. Approve/reject use optimistic UI updates — the table updates immediately, reverting on failure. *** Prompt Library [#prompt-library] **Route:** `/admin/prompts` Requires an internal database (`DATABASE_URL`). Atlas ships with three built-in collections; admins can create custom collections for their organization. Manage curated prompt collections — pre-written questions organized by industry that help users get started with common analyses. Built-in Collections [#built-in-collections] Atlas seeds three built-in collections on first boot: * **SaaS Metrics** — MRR, churn, LTV, CAC, ARPU, and growth indicators * **E-commerce KPIs** — GMV, AOV, conversion rates, inventory, and fulfillment metrics * **Cybersecurity Compliance** — vulnerability tracking, incident response, and compliance scores Built-in collections are read-only and visible to all users. 
Custom Collections [#custom-collections] Admins can create custom prompt collections scoped to their organization: 1. Click **Create Collection** and fill in the name, industry, and description 2. Add prompt items with questions, descriptions, and categories 3. Reorder items within a collection 4. Custom collections are only visible to members of the admin's organization Chat Integration [#chat-integration] Users can access the prompt library from the chat interface via the book icon in the header. Clicking a prompt immediately sends it as a question. *** Cache [#cache] **Route:** `/admin/cache` Query caching is enabled by default. Set `ATLAS_CACHE_ENABLED=false` to disable it. When disabled, the page shows zeroed stats with an inline notice. Displays query result cache statistics and provides a manual flush control: * **Hit Rate** — Cache hit percentage with a visual progress bar, plus hit and miss counts * **Storage** — Current entry count vs max size, fill percentage, and TTL * **Flush Cache** — Removes all cached entries with a confirmation dialog. After flushing, subsequent queries hit the database directly until the cache repopulates **API endpoints:** | Method | Path | Description | | ------ | --------------------------- | ------------------------------------------ | | `GET` | `/api/v1/admin/cache/stats` | Cache hit rate, entry count, max size, TTL | | `POST` | `/api/v1/admin/cache/flush` | Flush all cached entries | *** Sessions [#sessions] **Route:** `/admin/sessions` Requires managed auth with an internal database (`DATABASE_URL`). Not available in simple-key, BYOT, or no-auth modes. 
Manage active user sessions across all users: * **Stats** — Total sessions, active sessions, unique users * **Search** — Filter by email or IP address * **Session table** — User, created time, last active, IP address, user agent **Actions:** * **Revoke session** — Immediately invalidate a specific session (user gets 401 on next request) * **Revoke all for user** — Force logout a user across all devices * **Bulk revoke** — Select multiple sessions and revoke them at once Session Timeouts [#session-timeouts] Configure automatic session timeout policies via environment variables or `atlas.config.ts`: | Setting | Env Var | Description | Default | | ---------------- | -------------------------------- | ----------------------------------------- | -------------- | | Idle timeout | `ATLAS_SESSION_IDLE_TIMEOUT` | Seconds of inactivity before invalidation | `0` (disabled) | | Absolute timeout | `ATLAS_SESSION_ABSOLUTE_TIMEOUT` | Max session lifetime in seconds | `0` (disabled) | ```typescript // atlas.config.ts export default defineConfig({ session: { idleTimeout: 3600, // 1 hour idle timeout absoluteTimeout: 86400, // 24 hour absolute timeout }, }); ``` These can also be changed at runtime via Admin > Settings without a server restart. User Self-Service [#user-self-service] Non-admin users can manage their own sessions via `GET /api/v1/sessions` and `DELETE /api/v1/sessions/:id`. Users can only see and revoke their own sessions. *** User Management [#user-management] **Route:** `/admin/users` Requires managed auth with the Better Auth admin plugin. Not available in simple-key, BYOT, or no-auth modes. 
Full user management with stats and controls: * **Stats** — Total users, admins, analysts, viewers * **Search** — Filter by email * **Role filter** — Filter by role **Actions (per user):** * **Change role** — Set to viewer, analyst, or admin * **Ban/unban** — Temporarily or permanently ban a user * **Revoke sessions** — Force logout across all devices * **Delete** — Permanently remove user and sessions **Invite users:** * **Invite user** button opens a dialog with email input and role selector * If `RESEND_API_KEY` is configured, an invitation email is sent automatically * If no email delivery is configured, the dialog shows a one-time invite link to copy and share manually * Invited users are assigned the selected role automatically on signup * **Pending invitations** section shows outstanding invitations with status (pending, accepted, expired, revoked) and expiry date * Admins can revoke pending invitations **Safety guards:** * Cannot change your own role * Cannot ban or delete yourself * Cannot demote or delete the last admin *** Workspace Management [#workspace-management] **Route:** `/admin/organizations` Requires managed auth with an internal database (`DATABASE_URL`). Platform-wide view — admins see all organizations, not just their own. Manage workspaces (organizations) across the platform: * **Organization list** — Name, slug, status badge (active/suspended/deleted), plan tier, member count, creation date * **Detail view** — Members with roles, pending invitations, creation date, and metadata * **Stats** — Member count, conversation count, query count per organization Workspace Status [#workspace-status] | Status | Description | | ------------- | ------------------------------------------------------------------------------------------- | | **Active** | Normal operation — queries and agent interactions are allowed | | **Suspended** | All queries blocked. Connection pools drained. Can be reactivated | | **Deleted** | Soft-deleted with cascading cleanup. 
Pools drained, caches flushed, data marked for removal | Actions [#actions-1] * **Suspend** — Block all queries for a workspace. Drains the org's connection pools immediately. Use this for billing holds, abuse prevention, or maintenance windows * **Activate** — Reactivate a suspended workspace. Queries resume immediately * **Delete** — Soft-delete with cascading cleanup: drain pools, flush caches, cascade database cleanup, mark as deleted. This is not reversible from the UI * **Change plan** — Set a workspace's plan tier: `free`, `team`, or `enterprise` Health Status [#health-status] The organization status endpoint (`GET /admin/organizations/:id/status`) returns workspace health metrics: * **Members** — Total member count * **Conversations** — Total conversations * **Queries last 24h** — Recent query volume * **Connections** — Registered datasource count * **Scheduled tasks** — Active task count * **Pool metrics** — Connection pool stats (if org-scoped pooling is enabled) Admin API [#admin-api] | Method | Path | Description | | -------- | ----------------------------- | -------------------------------------------- | | `GET` | `/organizations` | List all organizations | | `GET` | `/organizations/:id` | Get org details with members and invitations | | `GET` | `/organizations/:id/stats` | Org stats (members, conversations, queries) | | `GET` | `/organizations/:id/status` | Workspace health summary | | `PATCH` | `/organizations/:id/suspend` | Suspend workspace | | `PATCH` | `/organizations/:id/activate` | Reactivate workspace | | `DELETE` | `/organizations/:id` | Soft-delete workspace | | `PATCH` | `/organizations/:id/plan` | Update plan tier | See [Usage Metering](/guides/usage-metering) for per-workspace usage tracking. *** API Keys [#api-keys] **Route:** `/admin/api-keys` Create and manage API keys for programmatic access via the SDK, MCP server, or embeddable widget. Keys are shown once at creation and cannot be retrieved later. 
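For scripted provisioning, keys can also be created through the `POST /api/auth/api-key/create` endpoint documented on the API Key Management page. A minimal sketch — the request body field (`name`) and Bearer-token admin auth are assumptions; only the path and method are documented:

```typescript
// Build the request for POST /api/auth/api-key/create.
// Assumptions (not documented): the body takes a { name } field and
// admin credentials are passed as a Bearer token.
function buildCreateKeyRequest(apiUrl: string, adminToken: string, name: string) {
  return {
    url: `${apiUrl}/api/auth/api-key/create`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${adminToken}`,
      },
      body: JSON.stringify({ name }),
    },
  };
}

// const { url, init } = buildCreateKeyRequest("https://atlas.example.com", adminToken, "ci-bot");
// const res = await fetch(url, init);
```

Because the full key is returned only once, persist it (for example, into your secrets manager) in the same step that creates it.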
See [API Key Management](/guides/api-keys) for the full guide. *** Integrations [#integrations] **Route:** `/admin/integrations` Connect and manage external platform integrations — Slack, Microsoft Teams, Discord, Telegram, and webhooks — from a single page. Each integration shows its connection status and provides connect/disconnect actions. See [Integrations Hub](/guides/integrations) for the full guide. *** Sandbox / Execution Environment [#sandbox--execution-environment] **Route:** `/admin/sandbox` In **self-hosted** mode, configure which sandbox backend the explore and Python tools use via a dropdown selector. In **SaaS** mode, the page shows an integration card grid where workspace admins connect their own cloud sandbox providers (Vercel, E2B, Daytona) alongside the managed Atlas Cloud Sandbox. See [Sandbox Configuration](/guides/sandbox) for the full guide. *** Data Residency [#data-residency] **Route:** `/admin/residency` Data residency is available on [app.useatlas.dev](https://app.useatlas.dev). Self-hosted deployments can configure regions via the platform admin API, but the admin UI is optimized for the SaaS experience. Control where your workspace data is stored and request region migrations. Viewing Your Region [#viewing-your-region] When a region is assigned, the page displays: * **Current region** — Name, region ID, and compliance badge (e.g. "GDPR compliant" for EU regions, "SOC 2 compliant" for US regions) * **Assignment date** — When the region was first assigned * **Status** — Active badge confirming the region is serving traffic Requesting a Region Migration [#requesting-a-region-migration] Migration moves all workspace data between regions. Some features may be temporarily unavailable during the process. To migrate to a different region: 1. Click **Change Region** on the Data Region card 2. Select the target region from the available options — your current region is shown for reference 3. 
Review the migration summary (current region → target region with compliance badges) 4. Click **Request Migration**, then confirm in the confirmation dialog Migration Phases [#migration-phases] Once submitted, a migration progresses through these phases: | Phase | Status | What Happens | | --------------- | --------- | ----------------------------------------------------------------------------------------------------------- | | **Pending** | Queued | Migration request is queued for processing. You can cancel at this stage | | **In Progress** | Migrating | Data is being replicated from the source to the target region. Some features may be temporarily unavailable | | **Completed** | Done | All data has been moved to the target region. The workspace is now serving from the new region | | **Failed** | Error | The migration could not be completed. You can retry or contact support | | **Cancelled** | Stopped | The migration was cancelled before processing began | What to Expect During Migration [#what-to-expect-during-migration] * A status banner at the top of the page shows real-time migration progress * **Pending migrations** can be cancelled before processing begins * **Failed migrations** can be retried with a single click * Only one migration can be active at a time — the "Change Region" button is disabled while a migration is pending or in progress * After completion, the Data Region card updates to reflect the new region Residency Admin API [#residency-admin-api] | Method | Path | Description | | ------ | -------------------------------------------- | --------------------------------------------------- | | `GET` | `/api/v1/admin/residency` | Current region status and available regions | | `PUT` | `/api/v1/admin/residency` | Assign initial region (body: `{ region }`) | | `GET` | `/api/v1/admin/residency/migration` | Current migration status | | `POST` | `/api/v1/admin/residency/migrate` | Request region migration (body: `{ targetRegion }`) | | `POST` | 
`/api/v1/admin/residency/migrate/:id/retry` | Retry a failed migration | | `POST` | `/api/v1/admin/residency/migrate/:id/cancel` | Cancel a pending migration | *** Custom Domain [#custom-domain] **Route:** `/admin/custom-domain` Custom domains require an Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev). The page shows an upgrade prompt if your workspace is on a lower plan. Serve Atlas from your own domain (e.g. `data.acme.com`) with automatic TLS certificate provisioning. Adding a Custom Domain [#adding-a-custom-domain] 1. Navigate to **Admin Console → Custom Domain** 2. Enter your desired subdomain (e.g. `data.acme.com`) — use a subdomain, not a root domain 3. Click **Add Domain** Atlas generates a CNAME target that you need to configure in your DNS provider. DNS Verification [#dns-verification] After adding a domain, a DNS configuration card appears with the required CNAME record: | Field | Value | | --------- | ----------------------------------------------------------- | | **Type** | `CNAME` | | **Name** | Your domain (e.g. `data.acme.com`) | | **Value** | The generated CNAME target (copy with the clipboard button) | Add this record to your DNS provider (Cloudflare, Route 53, Google Cloud DNS, etc.). DNS propagation may take up to 48 hours. Checking Verification Status [#checking-verification-status] Click **Check Status** to trigger a verification check. The domain progresses through these statuses: | Status | Badge | Meaning | | ------------ | -------------------- | -------------------------------------------------------------------------- | | **Pending** | Pending Verification | CNAME record not yet detected — DNS may still be propagating | | **Verified** | Active | CNAME verified and TLS certificate issued — domain is serving traffic | | **Failed** | Failed | Verification failed — check that your CNAME record is correctly configured | Removing a Domain [#removing-a-domain] Click **Remove Domain** and confirm in the dialog. 
This immediately stops serving traffic on the custom domain. The action cannot be undone — you would need to add and verify the domain again. Domain Admin API [#domain-admin-api] | Method | Path | Description | | -------- | ----------------------------- | ---------------------------------------- | | `GET` | `/api/v1/admin/domain` | Get current domain configuration | | `POST` | `/api/v1/admin/domain` | Add a custom domain (body: `{ domain }`) | | `POST` | `/api/v1/admin/domain/verify` | Trigger DNS verification check | | `DELETE` | `/api/v1/admin/domain` | Remove custom domain | *** API Endpoints [#api-endpoints] All admin endpoints are at `/api/v1/admin/` and require the `admin` role. See the [SDK](/reference/sdk) for typed client methods. | Method | Path | Description | | -------- | ---------------------------------------- | ------------------------------------------------------------------------------------------------------ | | `GET` | `/overview` | Dashboard stats + component health | | `GET` | `/connections` | List connections | | `GET` | `/connections/:id` | Get connection detail | | `POST` | `/connections` | Create connection | | `PUT` | `/connections/:id` | Update connection | | `DELETE` | `/connections/:id` | Delete connection | | `POST` | `/connections/test` | Test connection by URL | | `POST` | `/connections/:id/test` | Test saved connection | | `GET` | `/semantic/entities` | List entity summaries | | `GET` | `/semantic/entities/:name` | Get entity detail | | `GET` | `/semantic/metrics` | List all metrics | | `GET` | `/semantic/glossary` | Get glossary | | `GET` | `/semantic/catalog` | Get catalog | | `GET` | `/semantic/stats` | Semantic coverage stats | | `GET` | `/semantic/diff` | Schema diff (DB vs YAML) | | `PUT` | `/semantic/entities/edit/:name` | Create/update entity (structured JSON) | | `DELETE` | `/semantic/entities/edit/:name` | Delete entity | | `GET` | `/semantic/columns/:tableName` | Column metadata for autocomplete | | `GET` | 
`/semantic/entities/:name/versions` | Entity version history | | `GET` | `/semantic/entities/versions/:versionId` | Version detail with YAML | | `POST` | `/semantic/entities/:name/rollback` | Rollback to previous version | | `GET` | `/semantic/org/entities` | List org-scoped entities | | `GET` | `/semantic/org/entities/:name` | Get org entity detail | | `PUT` | `/semantic/org/entities/:name` | Create/update org entity | | `DELETE` | `/semantic/org/entities/:name` | Delete org entity | | `GET` | `/cache/stats` | Cache statistics (hit rate, entries, TTL) | | `POST` | `/cache/flush` | Flush all cached entries | | `GET` | `/audit` | Query audit log (supports `table` and `column` filters) | | `GET` | `/audit/stats` | Audit aggregates | | `GET` | `/audit/facets` | Distinct tables and columns for filter dropdowns | | `GET` | `/audit/analytics/volume` | Query volume per day | | `GET` | `/audit/analytics/slow` | Slowest queries | | `GET` | `/audit/analytics/frequent` | Most frequent queries | | `GET` | `/audit/analytics/errors` | Error breakdown | | `GET` | `/audit/analytics/users` | Per-user activity | | `GET` | `/tokens/summary` | Token usage summary | | `GET` | `/tokens/by-user` | Top users by token consumption | | `GET` | `/tokens/trends` | Daily token usage time series | | `GET` | `/settings` | List all settings with values and sources | | `PUT` | `/settings/:key` | Set or update a settings override | | `DELETE` | `/settings/:key` | Remove override (revert to env/default) | | `GET` | `/plugins` | List plugins | | `POST` | `/plugins/:id/health` | Plugin health check | | `POST` | `/plugins/:id/enable` | Enable a plugin | | `POST` | `/plugins/:id/disable` | Disable a plugin | | `GET` | `/plugins/:id/schema` | Get config schema and values | | `PUT` | `/plugins/:id/config` | Update plugin config | | `GET` | `/plugins/marketplace/available` | List available marketplace plugins | | `POST` | `/plugins/marketplace/install` | Install marketplace plugin | | `DELETE` | 
`/plugins/marketplace/:id` | Uninstall marketplace plugin | | `PUT` | `/plugins/marketplace/:id/config` | Update marketplace plugin config | | `GET` | `/sessions` | List all sessions | | `GET` | `/sessions/stats` | Session count stats | | `DELETE` | `/sessions/:id` | Revoke a session | | `DELETE` | `/sessions/user/:userId` | Revoke all sessions for user | | `GET` | `/users` | List users | | `GET` | `/users/stats` | User stats | | `PATCH` | `/users/:id/role` | Change role | | `POST` | `/users/:id/ban` | Ban user | | `POST` | `/users/:id/unban` | Unban user | | `POST` | `/users/:id/revoke` | Revoke sessions | | `DELETE` | `/users/:id` | Delete user | | `POST` | `/users/invite` | Invite user by email | | `GET` | `/users/invitations` | List invitations | | `DELETE` | `/users/invitations/:id` | Revoke invitation | | `GET` | `/learned-patterns` | List learned patterns (supports `status`, `source_entity`, `min_confidence`, `max_confidence` filters) | | `GET` | `/learned-patterns/:id` | Get single learned pattern | | `PATCH` | `/learned-patterns/:id` | Update pattern (description, status) | | `DELETE` | `/learned-patterns/:id` | Delete learned pattern | | `POST` | `/learned-patterns/bulk` | Bulk approve/reject (body: `{ ids, status }`, max 100) | | `GET` | `/prompts` | List prompt collections | | `POST` | `/prompts` | Create prompt collection | | `PATCH` | `/prompts/:id` | Update prompt collection | | `DELETE` | `/prompts/:id` | Delete prompt collection | | `POST` | `/prompts/:id/items` | Add item to collection | | `PATCH` | `/prompts/:collectionId/items/:itemId` | Update item | | `DELETE` | `/prompts/:collectionId/items/:itemId` | Delete item | | `PUT` | `/prompts/:id/reorder` | Reorder items (body: `{ itemIds }`) | | `GET` | `/suggestions` | List query suggestions (supports `table`, `min_frequency`, `limit`, `offset` filters) | | `DELETE` | `/suggestions/:id` | Delete a query suggestion | | `GET` | `/organizations` | List all organizations | | `GET` | 
`/organizations/:id` | Get org details with members | | `GET` | `/organizations/:id/stats` | Org stats | | `GET` | `/organizations/:id/status` | Workspace health summary | | `PATCH` | `/organizations/:id/suspend` | Suspend workspace | | `PATCH` | `/organizations/:id/activate` | Reactivate workspace | | `DELETE` | `/organizations/:id` | Soft-delete workspace | | `PATCH` | `/organizations/:id/plan` | Update plan tier | | `GET` | `/usage` | Current period usage summary | | `GET` | `/usage/history` | Historical usage summaries | | `GET` | `/usage/breakdown` | Per-user usage breakdown | | `GET` | `/sso/providers` | List SSO providers | | `GET` | `/sso/providers/:id` | Get SSO provider detail | | `POST` | `/sso/providers` | Create SSO provider | | `PATCH` | `/sso/providers/:id` | Update SSO provider | | `DELETE` | `/sso/providers/:id` | Delete SSO provider | | `GET` | `/integrations/status` | Get all integration statuses | | `DELETE` | `/integrations/slack` | Disconnect Slack | | `DELETE` | `/integrations/teams` | Disconnect Teams | | `DELETE` | `/integrations/discord` | Disconnect Discord | | `POST` | `/integrations/telegram` | Connect Telegram (validates bot token) | | `DELETE` | `/integrations/telegram` | Disconnect Telegram | | `GET` | `/sandbox/status` | Get sandbox configuration, backends, and connected providers | | `POST` | `/sandbox/connect/{provider}` | Validate and save BYOC sandbox credentials | | `DELETE` | `/sandbox/disconnect/{provider}` | Remove BYOC sandbox credentials | | `GET` | `/residency` | Current region status and available regions | | `PUT` | `/residency` | Assign initial region | | `GET` | `/residency/migration` | Current migration status | | `POST` | `/residency/migrate` | Request region migration | | `POST` | `/residency/migrate/:id/retry` | Retry a failed migration | | `POST` | `/residency/migrate/:id/cancel` | Cancel a pending migration | | `GET` | `/domain` | Get custom domain configuration | | `POST` | `/domain` | Add a custom domain | | 
`POST` | `/domain/verify` | Trigger DNS verification | | `DELETE` | `/domain` | Remove custom domain | *** Troubleshooting [#troubleshooting] Admin page shows "not enabled" message [#admin-page-shows-not-enabled-message] **Cause:** The feature requires a dependency that isn't configured — Audit Log and Token Usage need `DATABASE_URL`, Actions needs `ATLAS_ACTIONS_ENABLED=true`, Scheduled Tasks needs `ATLAS_SCHEDULER_ENABLED=true`. **Fix:** Set the required environment variable and restart the server. The feature gate message on each page tells you exactly what's missing. Cannot access `/admin` — 403 or redirect to login [#cannot-access-admin--403-or-redirect-to-login] **Cause:** Your user account doesn't have the `admin` role, or authentication isn't configured. **Fix:** In managed auth, have an existing admin promote your account via Admin > Users. In BYOT mode, ensure your JWT includes the correct role claim. In `none` auth mode, all users have implicit admin access. Settings changes don't take effect [#settings-changes-dont-take-effect] **Cause:** Some settings (Security, Agent) require a server restart, while others (Query Limits, Rate Limiting) apply immediately. The settings page shows a **Live** or **Requires restart** badge on each setting. **Fix:** Check the badge next to the setting you changed. If it says **Requires restart**, restart the Atlas server for the change to apply. On **app.useatlas.dev** (SaaS mode), most settings are picked up on the next request without a restart — though infrastructure settings like LLM provider and model may require redeployment. For more, see [Troubleshooting](/guides/troubleshooting). 
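The admin endpoints listed above can be exercised with any HTTP client. Below is a minimal sketch using Python's standard library — the host, port, and API key are placeholders for your own deployment, not real values:

```python
import json
import urllib.request

BASE = "http://localhost:3001/api/v1/admin"  # substitute your deployment URL
API_KEY = "sk-your-admin-api-key"            # placeholder admin API key

# GET /overview returns dashboard stats plus component health
req = urllib.request.Request(
    f"{BASE}/overview",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(json.load(resp))
except OSError as err:
    # No server running at BASE, auth failure, etc.
    print(f"request failed: {err}")
```

The same pattern (bearer token in the `Authorization` header, JSON response body) applies to every endpoint in the table; the [SDK](/reference/sdk) wraps these calls in typed client methods.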
*** See Also [#see-also] * [Authentication](/deployment/authentication#managed-auth) — Set up managed auth (required for multi-user admin) * [Multi-Datasource Routing](/deployment/multi-datasource) — Configure and monitor multiple datasource connections * [SDK Reference](/reference/sdk#admin) — Programmatic access to admin endpoints * [Scheduled Tasks](/guides/scheduled-tasks) — Configure recurring queries managed via the admin console * [Actions](/guides/actions) — Approval-gated write operations visible in the admin console * [API Key Management](/guides/api-keys) — Create and manage API keys * [Integrations Hub](/guides/integrations) — Connect Slack, Teams, and webhooks * [Sandbox Configuration](/guides/sandbox) — Configure sandbox backends * [Semantic Editor](/guides/semantic-editor) — Create and edit entities from the UI * [Plugin Marketplace](/guides/plugin-marketplace) — Browse and install plugins * [Self-Serve Signup](/guides/signup#region-selection) — Region selection during onboarding * [White-Labeling](/guides/white-labeling) — Custom branding for your workspace --- # Slack Integration (/guides/slack) Atlas integrates with Slack via a slash command (`/atlas`) and threaded follow-ups. Ask questions in any channel, get answers with formatted data tables, and continue the conversation in a thread. On [app.useatlas.dev](https://app.useatlas.dev), the Atlas Slack app is pre-configured. Install it to your workspace from **Admin > Integrations**. For custom Slack app setup (e.g., your own branding), contact support. 
* Atlas API server running and accessible over HTTPS (required by Slack) * A [Slack workspace](https://api.slack.com/apps) with permissions to create apps * `SLACK_SIGNING_SECRET` from your Slack app's Basic Information page * For multi-workspace OAuth: internal database (`DATABASE_URL`) Setup Modes [#setup-modes] | Mode | Env Vars | Best For | | ----------------------- | ------------------------------------------------------------------ | ----------------------------------- | | Single-workspace | `SLACK_BOT_TOKEN` + `SLACK_SIGNING_SECRET` | One Slack workspace | | Multi-workspace (OAuth) | `SLACK_CLIENT_ID` + `SLACK_CLIENT_SECRET` + `SLACK_SIGNING_SECRET` | Distributing to multiple workspaces | Both modes require `SLACK_SIGNING_SECRET` for request signature verification. *** Single-Workspace Setup (Self-Hosted) [#single-workspace-setup-self-hosted] SaaS users do not need to create their own Slack app. Use the pre-configured Atlas app from **Admin > Integrations**. The sections below are for self-hosted operators. The simplest setup -- no OAuth flow, no database storage for tokens. 1\. Create a Slack App [#1-create-a-slack-app] Go to [api.slack.com/apps](https://api.slack.com/apps) and create a new app from scratch. 2\. Configure Scopes [#2-configure-scopes] Under **OAuth & Permissions**, add these **Bot Token Scopes**: * `commands` -- Register slash commands * `chat:write` -- Post messages * `app_mentions:read` -- Read mentions (for thread follow-ups) 3\. Create the Slash Command [#3-create-the-slash-command] Under **Slash Commands**, create a new command: * **Command:** `/atlas` * **Request URL:** `https://your-api-host/api/v1/slack/commands` * **Description:** Ask a data question 4\. Enable Events [#4-enable-events] Under **Event Subscriptions**: * **Request URL:** `https://your-api-host/api/v1/slack/events` * **Subscribe to bot events:** `message.channels`, `message.groups` 5\. 
Install and Configure [#5-install-and-configure] Install the app to your workspace, then copy the **Bot User OAuth Token** and **Signing Secret**: ```bash # .env — copy these from your Slack app's settings page SLACK_BOT_TOKEN=xoxb-your-bot-token # OAuth & Permissions → Bot User OAuth Token SLACK_SIGNING_SECRET=your-signing-secret # Basic Information → Signing Secret ``` *** Multi-Workspace OAuth Setup (Self-Hosted) [#multi-workspace-oauth-setup-self-hosted] SaaS users do not need to configure OAuth. Use the pre-configured Atlas app from **Admin > Integrations**. This section is for self-hosted operators only. For distributing Atlas to multiple Slack workspaces. Requires an [internal database](/reference/environment-variables) (`DATABASE_URL`) to store per-workspace bot tokens. 1\. Configure OAuth [#1-configure-oauth] In your Slack app settings, under **OAuth & Permissions**: * **Redirect URL:** `https://your-api-host/api/v1/slack/callback` 2\. Set Environment Variables [#2-set-environment-variables] ```bash # .env — OAuth credentials from your Slack app settings SLACK_CLIENT_ID=your-client-id SLACK_CLIENT_SECRET=your-client-secret SLACK_SIGNING_SECRET=your-signing-secret DATABASE_URL=postgresql://... # Required — stores per-workspace bot tokens ``` 3\. Install Flow [#3-install-flow] Direct users to `https://your-api-host/api/v1/slack/install`. This redirects to Slack's OAuth authorize page. After approval, the callback stores the bot token in the `slack_installations` table. *** How It Works [#how-it-works] Slash Command [#slash-command] ``` /atlas What was last month's revenue? ``` 1. Slack sends the command to `POST /api/v1/slack/commands` 2. Atlas verifies the request signature (HMAC-SHA256, 5-minute timestamp window) 3. Atlas immediately acknowledges within 3 seconds (Slack requirement) 4. In the background, Atlas posts a "Thinking..." 
message, runs the agent, and updates the message with the formatted response Thread Follow-ups [#thread-follow-ups] When a user replies in a thread started by Atlas: 1. Atlas receives the message via the Events API 2. Loads the conversation history from the original thread 3. Runs the agent with the full conversation context 4. Posts the response in the same thread This provides multi-turn conversations within Slack threads. Action Approvals [#action-approvals] When the [action framework](/guides/actions) is enabled and a query produces pending actions, Atlas posts approval buttons visible only to the requesting user. Clicking **Approve** or **Deny** triggers `POST /api/v1/slack/interactions`. *** Message Formatting [#message-formatting] Atlas formats responses using Slack Block Kit: * **Answer text** -- The agent's narrative response * **SQL** -- The executed query in a code block * **Data table** -- Column-aligned text table (max 20 rows displayed, with a "Showing X of Y" note) * **Metadata** -- Steps taken, tokens used Block Kit limits: max 50 blocks per message, max 3,000 characters per text block. *** Environment Variables [#environment-variables] | Variable | Required | Description | | ---------------------- | ---------------- | ------------------------------ | | `SLACK_SIGNING_SECRET` | Yes | Request signature verification | | `SLACK_BOT_TOKEN` | Single-workspace | Bot OAuth token | | `SLACK_CLIENT_ID` | Multi-workspace | OAuth app client ID | | `SLACK_CLIENT_SECRET` | Multi-workspace | OAuth app client secret | See [Environment Variables](/reference/environment-variables) for the full reference. 
*** Deployment Notes [#deployment-notes] * The API must be accessible over **HTTPS** for Slack to deliver events and commands * Slash commands have a **3-second response deadline** -- Atlas acknowledges immediately and processes asynchronously * Rate limiting is per Slack user per team for commands, and per team for thread follow-ups * Error messages sent to Slack are scrubbed to prevent leaking connection strings, API keys, or stack traces * Without an internal database (`DATABASE_URL`), thread conversation mappings and OAuth tokens cannot be persisted. Single-workspace mode with `SLACK_BOT_TOKEN` still works, but thread follow-ups won't have conversation history *** Troubleshooting [#troubleshooting] **Events not being delivered** -- Verify the Request URL in your Slack app settings is HTTPS and publicly reachable. Slack retries failed deliveries but will disable the subscription after repeated failures. **OAuth callback fails** -- Ensure the Redirect URL in your Slack app settings matches `https://your-api-host/api/v1/slack/callback` exactly. Check that `DATABASE_URL` is set (required for storing OAuth tokens). **Bot doesn't respond in threads** -- The bot needs `message.channels` and `message.groups` event subscriptions. Also verify `DATABASE_URL` is set for conversation history persistence. See [Troubleshooting](/guides/troubleshooting) for general diagnostic steps. --- # Python Data Analysis (/guides/python) Atlas can execute Python code in a sandboxed environment for data analysis, statistical computation, and chart generation. The agent writes Python after running SQL queries to produce visualizations and deeper analysis. * Atlas server running (`bun run dev`) * A sandbox backend configured: `ATLAS_SANDBOX_URL` (sidecar), nsjail, or Vercel sandbox * Python will not work without an isolated sandbox — there is no `just-bash` fallback Enable [#enable] ```bash ATLAS_PYTHON_ENABLED=true ``` Python execution requires a sandbox backend. 
Set `ATLAS_SANDBOX_URL` (sidecar) or use nsjail/Vercel sandbox. Unlike the explore tool, there is no `just-bash` fallback — Python will not work without a sandbox. *** Sandbox Backends [#sandbox-backends] Python runs through the same sandbox infrastructure as the `explore` tool, with the following priority: | Priority | Backend | How to enable | | -------- | -------------------- | ---------------------------------------------- | | 1 | Sidecar | Set `ATLAS_SANDBOX_URL` | | 2 | Vercel sandbox | Set `ATLAS_RUNTIME=vercel` (Python 3.13) | | 3 | nsjail (explicit) | Set `ATLAS_SANDBOX=nsjail` | | 4 | nsjail (auto-detect) | nsjail binary on `PATH` or `ATLAS_NSJAIL_PATH` | Unlike the `explore` tool, there is no `just-bash` fallback for Python. If no isolated backend is available, Python execution is rejected with an error. *** Available Libraries [#available-libraries] The following libraries are available in the sandbox: * **pandas** -- DataFrames and data manipulation * **numpy** -- Numerical computation * **matplotlib** -- Static charts and plots * **plotly** -- Interactive charts The exact set of available libraries depends on the sandbox backend and its Python environment. *** Chart Rendering [#chart-rendering] When Python code generates charts, they appear inline in the chat UI. Two output formats are supported: Static charts (matplotlib) [#static-charts-matplotlib] The agent calls `plt.savefig(chart_path(0))` to save a chart. The sandbox provides a `chart_path(n)` helper that returns the correct output path for the nth chart. Multiple charts can be generated in a single execution (`chart_path(0)`, `chart_path(1)`, etc.). Charts are returned as base64-encoded PNG images. 
Interactive charts (Recharts) [#interactive-charts-recharts] The agent sets a special `_atlas_chart` variable with structured chart data: ```python _atlas_chart = { "type": "line", # "line", "bar", or "pie" "data": [...], # Array of data points "categoryKey": "month", # X-axis key "valueKeys": ["revenue", "cost"], # Y-axis keys } ``` Recharts output renders natively in the web UI as interactive charts with hover tooltips and legends. Multiple charts can be returned as a list. *** Security Model [#security-model] Python execution is isolated at multiple layers: 1. **Import guard** -- Defense-in-depth blocking of dangerous modules. Configurable via [`atlas.config.ts`](/reference/config#python-import-guard) — you can allow specific modules (e.g., `requests` for sandboxed API calls) or block additional ones. Critical modules (`os`, `subprocess`, `sys`, `shutil`) can never be unblocked 2. **No filesystem writes** -- The sandbox environment is read-only 3. **No network access** -- Outbound connections are blocked by the sandbox 4. **No shell access** -- `os.system()`, `subprocess.run()`, and similar are blocked 5. **Timeout enforcement** -- Each execution has a configurable time limit *** Environment Variables [#environment-variables] | Variable | Default | Description | | ---------------------- | ------- | ----------------------------------------------------------- | | `ATLAS_PYTHON_ENABLED` | — | Set to `true` to enable Python execution | | `ATLAS_PYTHON_TIMEOUT` | `30000` | Per-execution timeout in milliseconds (default: 30 seconds) | The sandbox itself is configured via the standard sandbox variables. See [Environment Variables](/reference/environment-variables) and [Sandbox Architecture](/architecture/sandbox). *** How It Works [#how-it-works] 1. The agent runs a SQL query via `executeSQL` and gets tabular data 2. The agent writes Python code using the query results as input (passed as a `data` parameter with `columns` and `rows`) 3. 
Atlas validates the code against blocked imports and builtins 4. The code runs in the sandbox backend with the data available 5. Output (text, tables, or charts) is returned to the agent and displayed in the chat UI The agent decides when to use Python based on the question -- statistical analysis, trend detection, and visualization requests typically trigger Python execution. Streaming output [#streaming-output] When using the sidecar backend, Python output streams progressively to the chat UI — stdout appears line-by-line as the script runs, and matplotlib charts render inline as soon as `savefig` is called, rather than waiting for the entire script to complete. Recharts and plotly output is delivered after the script finishes. This uses the sidecar's NDJSON streaming endpoint (`/exec-python-stream`). If the sidecar does not support the streaming endpoint (older versions), Atlas falls back to the non-streaming sidecar endpoint. Non-sidecar backends (Vercel sandbox, nsjail) always use the standard execution path. *** Troubleshooting [#troubleshooting] "Python execution requires a sandbox" [#python-execution-requires-a-sandbox] **Cause:** No sandbox backend is configured. Unlike the `explore` tool, Python has no `just-bash` fallback — it requires an isolated sandbox. **Fix:** Set `ATLAS_SANDBOX_URL` to point to the sidecar (`bun run db:up` starts one), or configure nsjail or Vercel sandbox. See [Sandbox Backends](#sandbox-backends) above. Chart doesn't render in the chat UI [#chart-doesnt-render-in-the-chat-ui] **Cause:** The Python code generated a chart but used an unsupported output method, or the sandbox timed out before the chart was written. **Fix:** Ensure matplotlib charts use `plt.savefig(chart_path(0))` — the sandbox provides a `chart_path()` helper that returns the correct output path. Check `ATLAS_PYTHON_TIMEOUT` if the script is compute-heavy. 
Charts rendered in Recharts format appear as interactive components; matplotlib charts appear as static images. "Blocked import" error [#blocked-import-error] **Cause:** The Python code tried to import a module on the default blocklist (e.g., `subprocess`, `os`, `socket`). This is a defense-in-depth guard — the sandbox would block these anyway. **Fix:** Rewrite the code to avoid blocked modules. For data analysis, use `pandas`, `numpy`, `matplotlib`, and `plotly`. Some modules like `requests` can be selectively unblocked via [`atlas.config.ts`](/reference/config#python-import-guard) for sandboxed environments. For more, see [Troubleshooting](/guides/troubleshooting). --- # Migrating to Hosted Atlas (/guides/migration) Overview [#overview] Atlas provides built-in migration tooling to move your workspace data from a self-hosted instance to the hosted SaaS (or between any two Atlas instances). The migration preserves: * **Conversations** with all messages, metadata, and timestamps * **Semantic entities** (DB-backed YAML definitions) * **Learned patterns** (approved, pending, and rejected) * **Settings** (org-scoped key/value pairs) The import is **idempotent** — running it multiple times skips already-imported data, so you can safely re-run after fixing errors or adding new data. Prerequisites [#prerequisites] * A running self-hosted Atlas instance with `DATABASE_URL` configured * An API key for the target hosted workspace (generate one in **Admin > API Keys**) * The Atlas CLI installed (`bun install` in your Atlas repo, or via `create-atlas`) Migration workflow [#migration-workflow] Export from self-hosted [#export-from-self-hosted] Run `atlas export` against your self-hosted instance. This reads directly from the internal database (no running API server required). 
```bash # Basic export (global/unscoped data) atlas export # Export a specific org's data atlas export --org org_abc123 # Custom output path atlas export --output my-backup.json ``` The command produces a JSON bundle file (e.g. `atlas-export-2026-04-02.json`) containing all workspace data with a manifest header. Review the bundle [#review-the-bundle] The export bundle is a plain JSON file you can inspect: ```bash # Check the manifest cat atlas-export-2026-04-02.json | jq '.manifest' ``` ```json { "version": 1, "exportedAt": "2026-04-02T12:00:00.000Z", "source": { "label": "self-hosted", "apiUrl": "http://localhost:3001" }, "counts": { "conversations": 42, "messages": 380, "semanticEntities": 15, "learnedPatterns": 8, "settings": 3 } } ``` Import into hosted Atlas [#import-into-hosted-atlas] Send the bundle to the target instance using `atlas migrate-import`: ```bash # Import to app.useatlas.dev (default) atlas migrate-import \ --bundle atlas-export-2026-04-02.json \ --api-key sk-your-admin-api-key # Import to a custom instance atlas migrate-import \ --bundle atlas-export-2026-04-02.json \ --target https://atlas.internal.company.com \ --api-key sk-your-admin-api-key ``` You can also set the API key via environment variable: ```bash export ATLAS_API_KEY=sk-your-admin-api-key atlas migrate-import --bundle atlas-export-2026-04-02.json ``` Verify the import [#verify-the-import] The CLI prints a summary table showing imported and skipped counts: ``` Import complete! Entity Imported Skipped ──────────────── ──────── ─────── Conversations 42 0 Semantic entities 15 0 Learned patterns 8 0 Settings 3 0 ``` Log into the hosted instance and verify your conversations and semantic layer are present. CLI reference [#cli-reference] `atlas export` [#atlas-export] Exports workspace data from the internal database to a portable JSON bundle. 
| Flag | Description | Default | | ----------------- | ------------------------------ | ---------------------------- | | `--output <path>` | Output file path | `./atlas-export-{date}.json` | | `-o <path>` | Alias for `--output` | | | `--org <org-id>` | Export data for a specific org | Global (unscoped) | **Requires:** `DATABASE_URL` environment variable pointing to the Atlas internal database. `atlas migrate-import` [#atlas-migrate-import] Sends an export bundle to a hosted Atlas instance for import. | Flag | Description | Default | | ----------------- | ------------------------------------ | -------------------------- | | `--bundle <path>` | Path to the export bundle (required) | | | `--target <url>` | Target Atlas API URL | `https://app.useatlas.dev` | | `--api-key <key>` | Admin API key for the target | `ATLAS_API_KEY` env var | `POST /api/v1/admin/migrate/import` [#post-apiv1adminmigrateimport] The API endpoint that receives and processes the bundle. Requires admin authentication and an active organization context. **Idempotency rules:** * Conversations: skipped if a conversation with the same ID already exists * Semantic entities: skipped if an entity with the same (type, name) exists * Learned patterns: skipped if a pattern with identical SQL already exists * Settings: skipped if the key already has a value (won't overwrite) Troubleshooting [#troubleshooting] Ensure your API key has admin role access. Generate a new key at **Admin > API Keys** in the target workspace. If the import fails with a 413 error, your bundle exceeds the request size limit. Export a subset of data using `--org` or contact support for bulk import assistance. The import is idempotent — previously imported items are skipped automatically. Fix the issue and re-run the same command. --- # Usage Metering (/guides/usage-metering) Atlas tracks usage events — queries executed, tokens consumed, and user logins — per workspace.
Admins can view current-period summaries, historical trends, and per-user breakdowns through the admin API. Usage data powers billing integrations and capacity planning. * Internal database configured (`DATABASE_URL`) * Admin role required for usage endpoints * Usage is recorded automatically — no configuration needed *** What's Tracked [#whats-tracked] Atlas records three types of usage events: | Event Type | What It Counts | When It's Recorded | | ---------- | -------------------- | --------------------------------- | | `query` | SQL queries executed | After each `executeSQL` tool call | | `token` | LLM tokens consumed | After each agent step | | `login` | User sign-ins | (Reserved — not yet emitted) | Each event includes: * **Workspace ID** — scoped to the user's active organization * **User ID** — who triggered the event * **Quantity** — how many (1 for queries, token count for tokens) * **Metadata** — optional context (e.g., model name, query duration) *** Admin API Endpoints [#admin-api-endpoints] Usage endpoints are mounted at `/api/v1/admin/usage`. All require the `admin` role and are scoped to the admin's organization. | Method | Path | Description | | ------ | ------------ | --------------------------------------- | | `GET` | `/` | Current period summary (real-time) | | `GET` | `/history` | Historical summaries (daily or monthly) | | `GET` | `/breakdown` | Per-user usage breakdown | Current Period Summary [#current-period-summary] Returns real-time counts from raw usage events for the current billing period. ```bash curl https://your-atlas.example.com/api/v1/admin/usage \ -H "Authorization: Bearer <token>" ``` Response: ```json { "workspaceId": "org_abc123", "queryCount": 1842, "tokenCount": 245000, "activeUsers": 12, "periodStart": "2026-03-01T00:00:00.000Z", "periodEnd": "2026-04-01T00:00:00.000Z" } ``` Historical Summaries [#historical-summaries] Returns aggregated summaries over a date range, grouped by day or month.
Triggers aggregation on read — summaries are computed from raw events and cached in the `usage_summaries` table. ```bash curl "https://your-atlas.example.com/api/v1/admin/usage/history?period=daily&startDate=2026-03-01&endDate=2026-03-19&limit=30" \ -H "Authorization: Bearer <token>" ``` | Parameter | Type | Default | Description | | ----------- | -------------------- | --------- | ----------------------- | | `period` | `daily` \| `monthly` | `monthly` | Aggregation granularity | | `startDate` | ISO 8601 | — | Start of date range | | `endDate` | ISO 8601 | — | End of date range | | `limit` | number | `90` | Max summaries to return | Per-User Breakdown [#per-user-breakdown] Shows usage by individual user within a date range. ```bash curl "https://your-atlas.example.com/api/v1/admin/usage/breakdown?startDate=2026-03-01&endDate=2026-03-19" \ -H "Authorization: Bearer <token>" ``` | Parameter | Type | Default | Description | | ----------- | -------- | ------- | ----------------------------- | | `startDate` | ISO 8601 | — | Start of date range | | `endDate` | ISO 8601 | — | End of date range | | `limit` | number | `100` | Max users to return (max 500) | Response: ```json { "workspaceId": "org_abc123", "users": [ { "user_id": "user_1", "query_count": 450, "token_count": 62000, "login_count": 0 }, { "user_id": "user_2", "query_count": 312, "token_count": 48000, "login_count": 0 } ] } ``` *** How It Works [#how-it-works] Event Logging [#event-logging] Usage events are logged via `logUsageEvent()` — a fire-and-forget function that writes to the internal database asynchronously. If the internal DB is not configured, usage logging is silently skipped. A circuit breaker protects against database failures: after 5 consecutive write errors, the circuit trips and events are dropped until the database recovers. This prevents usage logging from impacting query latency. Aggregation [#aggregation] Historical summaries are computed on-demand when the `/history` endpoint is called.
The aggregation: 1. Groups raw events by workspace, period, and event type 2. Upserts results into the `usage_summaries` table 3. Uses `ON CONFLICT ... DO UPDATE` for concurrent safety 4. Returns the aggregated rows for the requested date range *** See Also [#see-also] * [Admin Console](/guides/admin-console) — Token usage UI and workspace management * [Environment Variables](/reference/environment-variables) — Full variable reference * [Observability](/platform-ops/observability) — Logging and monitoring setup --- # Self-Hosted Models (/guides/self-hosted-models) This guide is for operators running their own Atlas instance who want to use local inference servers instead of cloud LLM providers. On [app.useatlas.dev](https://app.useatlas.dev), the LLM provider is managed by the Atlas platform — no model hosting is required. Atlas works with any OpenAI-compatible inference server. This guide covers setting up Ollama, vLLM, and TGI, choosing the right model, and troubleshooting common issues. Atlas requires models with **tool calling** (function calling) support. The agent loop depends on `executeSQL` and `explore` tools — models without tool calling cannot run Atlas queries. See the [compatibility matrix](#compatibility-matrix) for tested models. *** Quick Start [#quick-start] The fastest way to run Atlas with a local model: ```bash # 1. Install and start Ollama curl -fsSL https://ollama.com/install.sh | sh ollama pull llama3.1:8b # 2. Configure Atlas ATLAS_PROVIDER=ollama ATLAS_MODEL=llama3.1:8b OLLAMA_BASE_URL=http://localhost:11434/v1 # 3. Start Atlas bun run dev ``` Or use Docker Compose for a fully containerized setup: ```bash # From repo root — starts Atlas + Postgres + Ollama docker compose -f examples/docker/docker-compose.ollama.yml up ``` *** Providers [#providers] Atlas supports two provider modes for self-hosted models: `ollama` — Ollama preset [#ollama--ollama-preset] Preconfigured for Ollama's default endpoint. No API key needed. 
```bash ATLAS_PROVIDER=ollama ATLAS_MODEL=llama3.1:8b # Optional: override if Ollama is on a different host OLLAMA_BASE_URL=http://localhost:11434/v1 ``` `openai-compatible` — Any OpenAI-compatible server [#openai-compatible--any-openai-compatible-server] Works with vLLM, TGI, LiteLLM, LocalAI, and any server that implements the OpenAI Chat Completions API with tool calling. ```bash ATLAS_PROVIDER=openai-compatible ATLAS_MODEL=llama3.1 # Model name as served by your server OPENAI_COMPATIBLE_BASE_URL=http://localhost:8000/v1 # Required # Optional: API key if your server requires one OPENAI_COMPATIBLE_API_KEY=your-key ``` `ATLAS_MODEL` is **required** for `openai-compatible` — there is no default. Set it to the model name as reported by your server's `/v1/models` endpoint. *** Inference Servers [#inference-servers] Ollama [#ollama] The easiest way to run models locally. Handles model downloading, quantization, and GPU management automatically. ```bash # Install curl -fsSL https://ollama.com/install.sh | sh # Pull a model (downloads ~4.7 GB for 8B Q4) ollama pull llama3.1:8b # Verify it's running curl http://localhost:11434/api/tags ``` **Pros:** Simple setup, automatic GPU detection, built-in model management, good for development. **Cons:** Lower throughput than vLLM even with continuous batching, limited serving options. **Atlas config:** ```bash ATLAS_PROVIDER=ollama ATLAS_MODEL=llama3.1:8b ``` vLLM [#vllm] High-throughput serving with continuous batching. Best for production self-hosted deployments. ```bash # Install pip install vllm # Serve with tool calling enabled (required for Atlas) vllm serve meta-llama/Llama-3.1-8B-Instruct \ --served-model-name llama3.1 \ --enable-auto-tool-choice \ --tool-call-parser hermes \ --max-model-len 8192 # Verify curl http://localhost:8000/v1/models ``` **Pros:** Highest throughput (continuous batching, PagedAttention), production-grade, tensor parallelism for multi-GPU. 
**Cons:** Requires NVIDIA GPU, longer startup (model loading), more complex configuration. vLLM requires `--enable-auto-tool-choice` and a `--tool-call-parser` for Atlas to work. Without these flags, tool calls will fail silently or return malformed responses. **Atlas config:** ```bash ATLAS_PROVIDER=openai-compatible ATLAS_MODEL=llama3.1 OPENAI_COMPATIBLE_BASE_URL=http://localhost:8000/v1 ``` Text Generation Inference (TGI) [#text-generation-inference-tgi] Hugging Face's inference server. Good middle ground between Ollama and vLLM. ```bash # Run with Docker (recommended) docker run --gpus all -p 8080:80 \ -v tgi_data:/data \ ghcr.io/huggingface/text-generation-inference:latest \ --model-id meta-llama/Llama-3.1-8B-Instruct \ --max-input-tokens 4096 \ --max-total-tokens 8192 # Verify curl http://localhost:8080/v1/models ``` **Pros:** Good throughput, Hugging Face ecosystem integration, Flash Attention support. **Cons:** Tool calling support varies by model — not all models work reliably. Check the [compatibility matrix](#compatibility-matrix). **Atlas config:** ```bash ATLAS_PROVIDER=openai-compatible ATLAS_MODEL=meta-llama/Llama-3.1-8B-Instruct OPENAI_COMPATIBLE_BASE_URL=http://localhost:8080/v1 ``` *** Model Selection [#model-selection] Which model should I use? [#which-model-should-i-use] Atlas needs models that can: 1. **Call tools reliably** — generate structured JSON for `executeSQL` and `explore` tool calls 2. **Write SQL** — translate natural language to correct SQL for your schema 3. **Follow system prompts** — respect the semantic layer context injected into the system prompt Not all models do this well. Larger models are significantly better at tool calling and SQL generation. 
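Requirement 1 above is where small models most often fail: they emit a tool call whose `arguments` field is not valid JSON. As an illustrative sketch (the OpenAI-style `tool_calls` message shape is standard; the exact `executeSQL` argument schema here is an assumption, not Atlas's published contract), a validity check looks like:

```python
import json

def is_valid_tool_call(message: dict) -> bool:
    """Return True if the message carries a well-formed executeSQL call."""
    for call in message.get("tool_calls", []):
        fn = call.get("function", {})
        if fn.get("name") != "executeSQL":
            continue
        try:
            args = json.loads(fn.get("arguments", ""))
        except json.JSONDecodeError:
            return False  # malformed JSON arguments — a common small-model failure
        return isinstance(args.get("sql"), str)
    return False  # no tool call at all: the model answered in prose instead

# A well-formed response from a capable model:
good = {"tool_calls": [{"function": {
    "name": "executeSQL",
    "arguments": '{"sql": "SELECT count(*) FROM users"}'}}]}
# A typical failure: truncated/invalid JSON in the arguments string.
bad = {"tool_calls": [{"function": {
    "name": "executeSQL", "arguments": '{"sql": "SELECT'}}]}

print(is_valid_tool_call(good))  # True
print(is_valid_tool_call(bad))   # False
```

Agent frameworks typically retry on the malformed case, which is why smaller models burn more tokens per query (see the token efficiency figures later in this guide).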
Recommended Models [#recommended-models] | Model | Parameters | Quality | Speed | Best For | | ----------------- | ---------- | --------- | -------- | ----------------------------------------------------- | | **Llama 3.1 70B** | 70B | High | Moderate | Production self-hosted — best quality-to-cost ratio | | **Qwen 2.5 72B** | 72B | High | Moderate | Production — strong tool calling and multilingual SQL | | **Mistral Large** | 123B | Very High | Slow | Maximum quality when latency is acceptable | | **Llama 3.1 8B** | 8B | Moderate | Fast | Development and testing — quick iteration | | **Qwen 2.5 7B** | 7B | Moderate | Fast | Development — good tool calling for its size | | **Mistral 7B** | 7B | Low | Fast | Not recommended — unreliable tool calling | | **DeepSeek V3** | 671B (MoE) | Very High | Moderate | Multi-GPU setups with ample VRAM | **Minimum viable model for text-to-SQL:** 8B parameter models (Llama 3.1 8B, Qwen 2.5 7B) can handle simple queries against small schemas (\< 20 tables). For complex joins, subqueries, or large schemas, use 70B+ models. Quality Tiers [#quality-tiers] **Tier 1 — Production ready (70B+):** Reliable tool calling, accurate SQL generation for complex queries, handles large schemas. Comparable to GPT-4o for most text-to-SQL tasks. **Tier 2 — Development viable (7-8B):** Works for simple queries (single-table SELECTs, basic aggregations). Tool calling works but may require retries. Struggles with multi-table joins and complex WHERE clauses. **Tier 3 — Not recommended (\< 7B):** Unreliable tool calling, frequent SQL syntax errors, poor schema comprehension. Use only for testing the pipeline, not for actual queries. 
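The sizing guidance above can be condensed into a rule of thumb. This is just the tiers expressed as a function, not an official Atlas API:

```python
def recommended_model_size(table_count: int, complex_queries: bool) -> str:
    """Suggest a parameter class for text-to-SQL, per the quality tiers above."""
    if table_count < 20 and not complex_queries:
        return "7-8B"   # development viable: single-table SELECTs, basic aggregations
    return "70B+"       # production: multi-table joins, subqueries, large schemas

print(recommended_model_size(10, complex_queries=False))  # 7-8B
print(recommended_model_size(35, complex_queries=True))   # 70B+
```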
*** Hardware Requirements [#hardware-requirements] GPU Memory (VRAM) [#gpu-memory-vram] | Model | FP16 | Q8 | Q4 | Minimum GPU | | ---------------------- | ---------- | --------- | --------- | --------------------------- | | Llama 3.1 8B | 16 GB | 9 GB | 5 GB | RTX 3090 / A10 | | Qwen 2.5 7B | 14 GB | 8 GB | 5 GB | RTX 3090 / A10 | | Mistral 7B | 14 GB | 8 GB | 5 GB | RTX 3090 / A10 | | Llama 3.1 70B | 140 GB | 75 GB | 40 GB | 2× A100 80GB / 1× A100 (Q4) | | Qwen 2.5 72B | 144 GB | 77 GB | 42 GB | 2× A100 80GB / 1× A100 (Q4) | | Mistral Large (123B) | 246 GB | 131 GB | 72 GB | 4× A100 80GB | | DeepSeek V3 (671B MoE) | \~130 GB\* | \~70 GB\* | \~40 GB\* | 2× A100 80GB (FP8) | \* DeepSeek V3 uses Mixture-of-Experts — only active parameters are loaded, so VRAM is lower than the total parameter count suggests. System Requirements [#system-requirements] | Component | Minimum | Recommended | | --------- | --------------------- | --------------------------------------------- | | **RAM** | Model VRAM × 1.5 | Model VRAM × 2 | | **Disk** | Model size + 20 GB | SSD with 100+ GB free | | **CPU** | 4 cores | 8+ cores (for vLLM continuous batching) | | **GPU** | CUDA 11.8+ compatible | NVIDIA Ampere or newer (A100, H100, RTX 4090) | **CPU-only inference** is possible with Ollama for 7-8B models (Q4 quantization) but is 10-50× slower than GPU. Not recommended for interactive use — the agent loop's built-in step timeout (30s per tool call) may kill requests before the model finishes generating. 
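The VRAM figures above follow from a back-of-the-envelope rule: parameters × bytes per parameter. This sketch estimates weights only — KV cache and activations typically add another 10–25%, which is why the quantized figures in the table run slightly higher:

```python
# Approximate bytes per parameter for common precisions/quantizations.
BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def weights_vram_gb(params_billions: float, quant: str) -> float:
    """Weights-only VRAM estimate in GB (excludes KV cache and activations)."""
    return params_billions * BYTES_PER_PARAM[quant]

print(weights_vram_gb(8, "fp16"))  # 16.0 — matches the Llama 3.1 8B FP16 row
print(weights_vram_gb(70, "q4"))   # 35.0 — plus overhead ≈ the 40 GB listed
```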
Quantization Trade-offs [#quantization-trade-offs] | Quantization | VRAM Savings | Quality Impact | Recommendation | | ------------ | --------------- | ----------------------------- | ------------------------------------------------- | | **FP16** | Baseline | None | Best quality, if you have VRAM | | **Q8** | \~45% reduction | Minimal (\< 1% accuracy loss) | Good default for production | | **Q4** | \~70% reduction | Noticeable on complex queries | Acceptable for development, risky for production | | **Q2** | \~85% reduction | Significant degradation | Not recommended — tool calling becomes unreliable | *** Compatibility Matrix [#compatibility-matrix] Tested model and inference server combinations for Atlas. **Tool calling** is the critical requirement — without it, Atlas cannot function. Legend [#legend] * ✅ Works — tool calling, streaming, and SQL generation all function correctly * ⚠️ Partial — works but with known limitations (see notes) * ❌ Fails — tool calling broken or too unreliable for use Ollama [#ollama-1] | Model | Tool Calling | Streaming | Notes | | ------------------ | ------------ | --------- | --------------------------------------------------------- | | Llama 3.1 70B | ✅ | ✅ | Best self-hosted option for Ollama | | Llama 3.1 8B | ✅ | ✅ | Good for development | | Qwen 2.5 72B | ✅ | ✅ | Strong tool calling | | Qwen 2.5 7B | ✅ | ✅ | Good tool calling for its size | | Mistral Large | ✅ | ✅ | Requires significant VRAM | | Mistral 7B (v0.3) | ⚠️ | ✅ | Tool calling works but sometimes malformed — retries help | | DeepSeek V3 | ⚠️ | ✅ | Requires Ollama 0.5+; large VRAM requirement | | Phi-3 Medium (14B) | ⚠️ | ✅ | Tool calling inconsistent — not recommended for Atlas | | CodeLlama 34B | ❌ | ✅ | No tool calling support | | Llama 2 (any size) | ❌ | ✅ | No tool calling support | vLLM [#vllm-1] | Model | Tool Calling | Streaming | Notes | | ----------------- | ------------ | --------- | ---------------------------------------------------------- | | Llama 3.1 
70B | ✅ | ✅ | Best production option — use `--tool-call-parser hermes` | | Llama 3.1 8B | ✅ | ✅ | Use `--tool-call-parser hermes` | | Qwen 2.5 72B | ✅ | ✅ | Use `--tool-call-parser hermes` | | Qwen 2.5 7B | ✅ | ✅ | Use `--tool-call-parser hermes` | | Mistral Large | ✅ | ✅ | Use `--tool-call-parser mistral` | | Mistral 7B (v0.3) | ⚠️ | ✅ | Tool calling less reliable than 70B+ models | | DeepSeek V3 | ✅ | ✅ | Requires FP8 or multi-GPU; use `--tool-call-parser hermes` | vLLM **requires** `--enable-auto-tool-choice` and a `--tool-call-parser` flag. The parser must match the model's chat template. Most Llama and Qwen models use `hermes`; Mistral models use `mistral`. TGI (Text Generation Inference) [#tgi-text-generation-inference] | Model | Tool Calling | Streaming | Notes | | ------------- | ------------ | --------- | --------------------------------------------- | | Llama 3.1 70B | ✅ | ✅ | Requires TGI v2.0+ | | Llama 3.1 8B | ✅ | ✅ | Requires TGI v2.0+ | | Qwen 2.5 72B | ⚠️ | ✅ | Tool calling works but output format can vary | | Qwen 2.5 7B | ⚠️ | ✅ | Same as 72B — format inconsistencies | | Mistral Large | ✅ | ✅ | Good TGI support | | Mistral 7B | ⚠️ | ✅ | Inconsistent tool calling | *** Docker Compose Profiles [#docker-compose-profiles] Pre-built Docker Compose files for common self-hosted setups. All include Atlas API + Postgres + demo data. Ollama [#ollama-2] ```bash # Start with default model (Llama 3.1 8B) docker compose -f examples/docker/docker-compose.ollama.yml up # Use a different model OLLAMA_MODEL=qwen2.5:72b docker compose -f examples/docker/docker-compose.ollama.yml up ``` **Included services:** Postgres, Ollama (with GPU passthrough), model auto-pull, Atlas API. For CPU-only: remove the `deploy` block from the `ollama` service in the compose file. vLLM [#vllm-2] ```bash # Start with default model (Llama 3.1 8B Instruct) HUGGING_FACE_HUB_TOKEN=hf_... 
docker compose -f examples/docker/docker-compose.vllm.yml up # Use a different model HUGGING_FACE_HUB_TOKEN=hf_... \ VLLM_MODEL=meta-llama/Llama-3.1-70B-Instruct \ VLLM_SERVED_NAME=llama3.1-70b \ docker compose -f examples/docker/docker-compose.vllm.yml up ``` **Included services:** Postgres, vLLM (with tool calling enabled), Atlas API. A Hugging Face token is required for gated models (Llama, Mistral). Create one at [huggingface.co/settings/tokens](https://huggingface.co/settings/tokens). *** Benchmark Results [#benchmark-results] Expected performance ranges for self-hosted models with Atlas. Results vary by hardware, quantization, schema complexity, and query type. These benchmarks reflect expected ranges based on model architecture and published benchmarks. Actual performance depends heavily on hardware, quantization, context length, and schema complexity. Run your own benchmarks against your schema for production sizing. Latency [#latency] Estimated for a single A100 80GB GPU with Q8 quantization, 10-table demo schema. | Model | TTFT (simple) | TTFT (complex) | Total (simple) | Total (complex) | | ------------- | ------------- | -------------- | -------------- | --------------- | | Llama 3.1 8B | 0.3–0.5s | 0.5–1.0s | 2–4s | 5–10s | | Qwen 2.5 7B | 0.3–0.5s | 0.5–1.0s | 2–4s | 5–10s | | Llama 3.1 70B | 1–2s | 2–4s | 5–10s | 15–30s | | Qwen 2.5 72B | 1–2s | 2–4s | 5–10s | 15–30s | | Mistral Large | 2–3s | 3–6s | 8–15s | 20–45s | **Simple:** Single-table query, 1 tool call (e.g., "How many users signed up this week?"). **Complex:** Multi-table join, 2–3 tool calls, aggregation (e.g., "What's the average resolution time by severity for tickets assigned to the top 5 agents?"). Accuracy [#accuracy] Approximate success rates on representative query suites. "Success" means the generated SQL executes without error and returns correct results. 
| Model | Simple Queries | Complex Queries | Tool Calling Reliability | | ------------- | -------------- | --------------- | ------------------------ | | Llama 3.1 70B | 90–95% | 70–80% | 95%+ | | Qwen 2.5 72B | 90–95% | 70–80% | 95%+ | | Llama 3.1 8B | 80–90% | 50–65% | 85–90% | | Qwen 2.5 7B | 80–90% | 50–65% | 85–90% | | Mistral 7B | 70–80% | 35–50% | 70–80% | Token Efficiency [#token-efficiency] Average tokens consumed per successful query (system prompt + tool calls + response). | Model | Simple Query | Complex Query | | ----------- | ------------ | ------------- | | 70B models | 1,500–2,500 | 3,000–5,000 | | 7-8B models | 1,800–3,000 | 4,000–7,000 | Smaller models tend to use more tokens due to retries and less efficient tool call formatting. *** Tuning Tips [#tuning-tips] Temperature [#temperature] Atlas sets temperature to `0.2` by default — a good starting point for SQL generation. This is applied by the agent loop regardless of your inference server's default. If you see inconsistent SQL output, the issue is more likely model size or quantization than temperature. Context Length [#context-length] Atlas injects the semantic layer into the system prompt. Large schemas (20+ tables) can consume 4,000–8,000 tokens of context. Ensure your model's context window can accommodate this plus the conversation history. | Schema Size | System Prompt Tokens | Recommended Min Context | | ------------ | -------------------- | ----------------------- | | \< 10 tables | 1,000–2,000 | 4,096 | | 10–20 tables | 2,000–4,000 | 8,192 | | 20–50 tables | 4,000–8,000 | 16,384 | | 50+ tables | 8,000+ | 32,768 | For vLLM, set `--max-model-len` to match. For Ollama, set `num_ctx` in the Modelfile or via the API. Agent Max Steps [#agent-max-steps] Smaller models may need more steps to complete complex queries (they retry more). 
Consider increasing the step limit: ```bash # Default: 25 — increase for smaller models ATLAS_AGENT_MAX_STEPS=40 ``` *** Troubleshooting [#troubleshooting] Tool calling failures [#tool-calling-failures] **Symptom:** Atlas responds with text instead of executing SQL. The agent describes what it would query but never calls `executeSQL`. **Causes:** * Model doesn't support tool calling (check the [compatibility matrix](#compatibility-matrix)) * vLLM missing `--enable-auto-tool-choice` or wrong `--tool-call-parser` * Model too small — 7B models sometimes "forget" to use tools on complex queries **Fixes:** 1. Verify tool calling works: `curl http://localhost:8000/v1/chat/completions -d '{"model":"llama3.1","messages":[{"role":"user","content":"Call the get_weather function for NYC"}],"tools":[{"type":"function","function":{"name":"get_weather","parameters":{"type":"object","properties":{"city":{"type":"string"}}}}}]}'` 2. For vLLM, ensure both `--enable-auto-tool-choice` and `--tool-call-parser` are set 3. Try a larger model — 70B models are dramatically more reliable at tool calling than 7B Streaming issues [#streaming-issues] **Symptom:** Responses appear all at once instead of streaming, or the connection times out. **Causes:** * Reverse proxy buffering (nginx, Cloudflare) * Inference server not configured for streaming * Connection timeout too low **Fixes:** 1. Check that your inference server returns `Transfer-Encoding: chunked` 2. If behind nginx, add: `proxy_buffering off;` and `proxy_http_version 1.1;` 3. Note that the agent loop has built-in timeouts (5s per chunk, 30s per step) — very slow models may exceed these limits Context length exceeded [#context-length-exceeded] **Symptom:** Error messages about maximum context length, or the model produces garbage output mid-response. **Causes:** * Large semantic layer exhausting the context window * Long conversation history **Fixes:** 1. 
Enable the semantic index (`ATLAS_SEMANTIC_INDEX_ENABLED=true`, default) — it compresses the semantic layer summary 2. Increase model context: vLLM `--max-model-len`, Ollama `num_ctx` 3. For very large schemas (50+ tables), use a 70B+ model with 32K+ context Slow first response [#slow-first-response] **Symptom:** First query after startup takes 30+ seconds. **Causes:** * Model loading into GPU memory (normal for large models) * KV cache allocation (vLLM pre-allocates based on `--gpu-memory-utilization`) **Fixes:** 1. This is expected on cold start — subsequent queries will be fast 2. For vLLM, reduce `--gpu-memory-utilization` if startup is OOM-killed (default 0.9) 3. Use Ollama's `keep_alive` to prevent model unloading: `ollama run llama3.1 --keepalive 24h` Quantization quality issues [#quantization-quality-issues] **Symptom:** SQL has subtle errors (wrong column names, incorrect join conditions) that don't appear with larger quantizations. **Causes:** * Aggressive quantization (Q2, Q3) degrades the model's ability to follow schemas precisely **Fixes:** 1. Use Q8 for production — best balance of VRAM savings and quality 2. Avoid Q2/Q3 for any text-to-SQL use case 3. If VRAM is limited, use a smaller model at higher quantization rather than a larger model at Q4 *** See Also [#see-also] * [Environment Variables](/reference/environment-variables) — All provider and model configuration * [Configuration](/reference/config) — Declarative `atlas.config.ts` * [Deploy](/deployment/deploy) — Docker deployment guide * [Troubleshooting](/guides/troubleshooting) — General Atlas troubleshooting --- # Compliance Reporting (/guides/compliance-reporting) Atlas provides compliance reports built on top of audit log data, PII classifications, and user session history. Reports help answer audit questions like "who accessed what data, when, and how often" — a requirement for SOC2, HIPAA, and similar frameworks. 
Compliance reporting is included with Enterprise plans on [app.useatlas.dev](https://app.useatlas.dev). Contact your account team to enable it, or visit **Admin > Billing** to upgrade. * Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) * Admin role required for compliance report endpoints * [Audit logging](/guides/audit-retention) active (queries must be logged to generate reports) * [Managed auth](/deployment/authentication#managed-auth) enabled * Internal database configured (`DATABASE_URL`) *** Report Types [#report-types] Data Access Report [#data-access-report] Answers: **Who queried what tables, when, and how often?** Each row represents a unique (table, user) pair within the selected date range: | Field | Description | | --------------- | --------------------------------------------------- | | `tableName` | The database table that was queried | | `userId` | The user who ran the queries | | `userEmail` | User email (resolved from auth) | | `userRole` | Role within the organization (admin, owner, member) | | `queryCount` | Number of queries touching this table | | `uniqueColumns` | Columns accessed across all queries | | `hasPII` | Whether the table has PII classifications | | `firstAccess` | Earliest query timestamp in the range | | `lastAccess` | Latest query timestamp in the range | The report summary includes total queries, unique users, unique tables, and PII tables accessed. 
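The summary fields relate to the rows in a straightforward way. A minimal sketch, computed client-side over rows shaped like the table above (the sample data is illustrative; the real summary is produced server-side):

```python
# Sample data-access rows, using the field names from the table above.
rows = [
    {"tableName": "orders",    "userId": "user_1", "queryCount": 12, "hasPII": False},
    {"tableName": "customers", "userId": "user_1", "queryCount": 5,  "hasPII": True},
    {"tableName": "customers", "userId": "user_2", "queryCount": 3,  "hasPII": True},
]

summary = {
    "totalQueries":      sum(r["queryCount"] for r in rows),
    "uniqueUsers":       len({r["userId"] for r in rows}),
    "uniqueTables":      len({r["tableName"] for r in rows}),
    "piiTablesAccessed": len({r["tableName"] for r in rows if r["hasPII"]}),
}
print(summary)
# {'totalQueries': 20, 'uniqueUsers': 2, 'uniqueTables': 2, 'piiTablesAccessed': 1}
```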
User Activity Report [#user-activity-report] Answers: **What has each user been doing?** Each row represents a single user: | Field | Description | | ---------------- | -------------------------------------- | | `userId` | The user ID | | `userEmail` | User email | | `role` | Organization role | | `totalQueries` | Total queries in the date range | | `tablesAccessed` | List of tables queried | | `lastActiveAt` | Most recent query timestamp | | `lastLoginAt` | Most recent login (from session table) | *** API Endpoints [#api-endpoints] Both endpoints are mounted under `/api/v1/admin/compliance/reports/`. GET /reports/data-access [#get-reportsdata-access] ```bash curl -H "Authorization: Bearer $TOKEN" \ "https://your-atlas.com/api/v1/admin/compliance/reports/data-access?startDate=2026-01-01&endDate=2026-03-01" ``` GET /reports/user-activity [#get-reportsuser-activity] ```bash curl -H "Authorization: Bearer $TOKEN" \ "https://your-atlas.com/api/v1/admin/compliance/reports/user-activity?startDate=2026-01-01&endDate=2026-03-01" ``` Query Parameters [#query-parameters] | Parameter | Type | Required | Description | | ----------- | ----------------- | -------- | ------------------------------------- | | `startDate` | string (ISO 8601) | Yes | Start of the reporting period | | `endDate` | string (ISO 8601) | Yes | End of the reporting period | | `userId` | string | No | Filter to a specific user | | `role` | string | No | Filter by role (admin, owner, member) | | `table` | string | No | Filter to a specific table | | `format` | `json` \| `csv` | No | Response format (default: `json`) | *** Export Formats [#export-formats] JSON (default) [#json-default] Returns a structured JSON object with `rows`, `summary`, `filters`, and `generatedAt` fields. CSV [#csv] Set `format=csv` to download as a CSV file. The response includes `Content-Disposition` headers for browser download. CSV follows RFC 4180 escaping rules. 
```bash # Download CSV curl -H "Authorization: Bearer $TOKEN" \ -o data-access-report.csv \ "https://your-atlas.com/api/v1/admin/compliance/reports/data-access?startDate=2026-01-01&endDate=2026-03-01&format=csv" ``` *** Admin Console [#admin-console] The compliance page in the admin console (`/admin/compliance`) has two tabs: 1. **PII Classifications** — Review and manage detected PII columns (see [PII Masking guide](/guides/pii-masking)) 2. **Reports** — Generate compliance reports with a visual interface The Reports tab provides: * Date range picker (defaults to last 30 days) * Report type selector (Data Access / User Activity) * Filter controls for user, role, and table * Results table with detailed breakdown * Export buttons for CSV and JSON download All filter state is persisted in the URL via query parameters, so reports are shareable and bookmarkable. *** How Reports Query Data [#how-reports-query-data] Reports run pure SQL against the internal database. No external services are required. * **Data Access Report** queries `audit_log` with a `CROSS JOIN LATERAL` on `tables_accessed` (JSONB array), joined with the `user` table for email resolution. Role data is enriched from the `member` table, and PII status is enriched from `pii_column_classifications`, both via separate concurrent queries. * **User Activity Report** queries `audit_log` grouped by user, joined with the `user` table for email. Last login timestamp is enriched from the `session` table, and role information from the `member` table, both via separate concurrent queries. Both reports only include successful queries (`success = true`) and respect the org isolation boundary (`org_id`). Reports are bounded by a `LIMIT 10000` (data access) or `LIMIT 5000` (user activity) to prevent excessive memory usage. For very large audit logs, narrow the date range or apply filters. 
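The `CROSS JOIN LATERAL` expansion described above is easiest to see in miniature: each audit row fans out into one output row per table it touched, then rows are grouped per (table, user) pair. The following is the same logic in Python over hypothetical audit rows (the shapes here are illustrative, not the exact `audit_log` schema):

```python
from collections import Counter

# Each audit row records who ran a query, whether it succeeded, and which
# tables it touched (stored as a JSONB array in Postgres).
audit_log = [
    {"user_id": "user_1", "success": True,  "tables_accessed": ["orders", "customers"]},
    {"user_id": "user_1", "success": True,  "tables_accessed": ["orders"]},
    {"user_id": "user_2", "success": False, "tables_accessed": ["orders"]},  # excluded
]

counts = Counter(
    (table, row["user_id"])
    for row in audit_log if row["success"]  # reports include successful queries only
    for table in row["tables_accessed"]     # lateral expansion: one row per table
)
print(counts[("orders", "user_1")])     # 2
print(("orders", "user_2") in counts)   # False — the failed query is filtered out
```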
*** Related [#related] * [PII Detection & Masking](/guides/pii-masking) — auto-detect and mask PII columns * [Audit Log Retention](/guides/audit-retention) — configure retention policies and export raw audit logs * [Custom Roles](/guides/custom-roles) — define roles for role-based report filtering --- # Onboarding Emails (/guides/onboarding-emails) Onboarding emails are an automated drip campaign sent to new users after signup. Each email is triggered by a milestone (e.g., connecting a database, running a first query) or sent on a time-based fallback if the milestone is not reached. Onboarding emails are pre-enabled on [app.useatlas.dev](https://app.useatlas.dev) — new users receive the welcome sequence automatically. No configuration needed. * Internal database configured (`DATABASE_URL`) * Email delivery configured (`RESEND_API_KEY` or `ATLAS_SMTP_URL`) * Feature flag enabled (`ATLAS_ONBOARDING_EMAILS_ENABLED=true`) *** Email Sequence [#email-sequence] The onboarding sequence consists of five milestone-driven emails: | Step | Trigger Milestone | Fallback | Description | | ------------------ | ---------------------- | ------------------- | ----------------------------------------------------------------- | | `welcome` | `signup_completed` | Immediate | Welcome email with getting-started links | | `connect_database` | `database_connected` | 24h after signup | Prompts the user to connect their first datasource | | `first_query` | `first_query_executed` | 72h after signup | Encourages the user to ask their first question | | `invite_team` | `team_member_invited` | 7 days after signup | Suggests inviting colleagues | | `explore_features` | `feature_explored` | 7 days after signup | Highlights advanced features (notebook, scheduled tasks, actions) | Each email is sent at most once per user. If the user completes the milestone before the fallback timer fires, the email is sent immediately on milestone completion. 
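The scheduling rule above — fire on the milestone, or on the fallback timer, and never more than once per user — can be sketched as follows. The step definitions mirror the table; the function itself is illustrative, not Atlas's actual implementation:

```python
from datetime import datetime, timedelta

# (step, trigger milestone, time-based fallback after signup) — from the table above.
SEQUENCE = [
    ("welcome",          "signup_completed",     timedelta(0)),
    ("connect_database", "database_connected",   timedelta(hours=24)),
    ("first_query",      "first_query_executed", timedelta(hours=72)),
    ("invite_team",      "team_member_invited",  timedelta(days=7)),
    ("explore_features", "feature_explored",     timedelta(days=7)),
]

def due_steps(signed_up_at, now, milestones, already_sent):
    """Steps to send now: milestone reached, or fallback timer elapsed."""
    return [
        step for step, milestone, fallback in SEQUENCE
        if step not in already_sent  # each email is sent at most once per user
        and (milestone in milestones or now - signed_up_at >= fallback)
    ]

t0 = datetime(2026, 4, 1)
# 48h after signup: the user connected a database but hasn't run a query yet.
print(due_steps(t0, t0 + timedelta(hours=48),
                {"signup_completed", "database_connected"}, {"welcome"}))
# ['connect_database'] — first_query waits for its 72h fallback
```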
*** Configuration (Self-Hosted) [#configuration-self-hosted] On [app.useatlas.dev](https://app.useatlas.dev), onboarding emails are enabled by default with no additional setup. Skip this section. Self-hosted setup [#self-hosted-setup] Enable onboarding emails via environment variable: ```bash ATLAS_ONBOARDING_EMAILS_ENABLED=true RESEND_API_KEY=re_... # or ATLAS_SMTP_URL for HTTP webhook-based delivery (POST JSON) ATLAS_EMAIL_FROM=Atlas # optional, defaults to "Atlas " ``` See [Environment Variables](/reference/environment-variables#email-delivery) for all email configuration options. *** Admin Management [#admin-management] Admins can view onboarding email status for their organization via the admin API: * **List statuses** — `GET /api/v1/admin/onboarding-emails` returns per-user progress (sent steps, pending steps, unsubscribe status) * **View sequence** — `GET /api/v1/admin/onboarding-emails/sequence` returns the configured sequence steps with triggers and timing See the [API Reference](/docs/api-reference/admin-onboarding-emails/getAdminOnboardingEmails) for full endpoint documentation. *** Unsubscribe & Resubscribe [#unsubscribe--resubscribe] Every onboarding email includes an unsubscribe link. Users can: * **Unsubscribe** — click the link in any onboarding email (`GET /api/v1/onboarding-emails/unsubscribe?userId=...`) * **Resubscribe** — `POST /api/v1/onboarding-emails/resubscribe` with `{ userId }` in the body Unsubscribed users will not receive any further onboarding emails. Resubscribing resumes the sequence from the next unsent step. *** See Also [#see-also] * [Environment Variables — Email Delivery](/reference/environment-variables#email-delivery) * [Guided Tour](/guides/guided-tour) — In-app walkthrough complementing the email sequence * [Signup](/guides/signup) — User registration flow that triggers the welcome email --- # Choosing an Integration (/guides/choosing-an-integration) Atlas offers multiple ways to integrate and multiple authentication modes. 
These tables help you choose the right combination for your use case. Widget vs SDK vs API [#widget-vs-sdk-vs-api] Atlas has four integration surfaces. Each serves a different audience and customization level. | Dimension | Script Tag Widget | React Package (`@useatlas/react`) | TypeScript SDK (`@useatlas/sdk`) | REST API | | --------------------------- | --------------------------------------------------------------- | ------------------------------------------------------- | ------------------------------------------------------- | ------------------------------------------------- | | **Use case** | Drop-in chat bubble for any website | Custom chat UI in a React/Next.js app | Server-side or Node.js automation | Any language, any platform | | **Setup effort** | \~2 minutes — one `<script>` tag | npm install `@useatlas/react` | npm install `@useatlas/sdk` | Any HTTP client | **Script tag widget** — one tag on any page: ```html <!-- Illustrative embed snippet — see the Embedding Widget guide for the exact tag --> <script src="https://your-atlas.example.com/widget.js" async></script> ``` **React package** — wrap your app in the provider: ```tsx import { AtlasProvider, AtlasChat } from "@useatlas/react"; export default function App() { return ( <AtlasProvider> <AtlasChat /> </AtlasProvider> ); } ``` **TypeScript SDK** — programmatic queries: ```ts import { createAtlasClient } from "@useatlas/sdk"; const atlas = createAtlasClient({ baseUrl: "https://your-api.example.com", apiKey: "sk-...", }); const result = await atlas.query("How many users signed up last week?"); ``` **REST API** — any language: ```bash curl -X POST https://your-api.example.com/api/v1/query \ -H "Authorization: Bearer sk-..." \ -H "Content-Type: application/json" \ -d '{"question": "How many users signed up last week?"}' ``` Auth Modes [#auth-modes] Atlas supports four authentication modes. The right choice depends on your deployment environment and user management needs.
| Dimension | None | Simple Key (`api-key`) | Managed (`managed`) | BYOT (`byot`) | | ------------------------------ | -------------------------------------- | ---------------------------------------------- | -------------------------------------------------------- | ------------------------------------------------------------------------- | | **Best for** | Local dev, internal tools behind a VPN | Headless integrations, CLI tools, SDKs | Multi-user web apps with login | Enterprises with existing SSO (Auth0, Clerk, Okta) | | **Session management** | None — all requests are anonymous | None — stateless key per request | Server-side sessions (7-day expiry, renewed on activity) | Stateless — JWT verified per request via JWKS | | **User isolation** | No — all queries share one identity | No — single shared identity per key | Yes — each user has their own account and history | Yes — user identity extracted from JWT `sub` claim | | **Role support** | No roles | Single role for the key (`ATLAS_API_KEY_ROLE`) | Per-user roles (viewer, analyst, admin) | Role extracted from JWT claim (configurable path) | | **RLS (row-level security)** | Not available | Static claims only (`ATLAS_RLS_CLAIMS`) | Per-user claims from session | Per-user claims from JWT payload | | **Setup complexity** | Zero config | One env var (`ATLAS_API_KEY`) | 2-3 env vars + internal Postgres (`DATABASE_URL`) | 2-4 env vars (JWKS URL + issuer required, audience + role claim optional) | | **Requires internal database** | No | No | Yes (`DATABASE_URL`) | No | | **Recommended for** | Getting started, demos | Automated pipelines, single-user tools | Production apps with user accounts | Production apps with existing identity provider | **Not sure?** Use **none** for local development. When you're ready for production, choose **managed** if you want Atlas to handle user accounts, or **BYOT** if you already have an identity provider. 
Auth mode configuration examples [#auth-mode-configuration-examples] **None** — no configuration needed: ```bash # Just start the server — no auth env vars required bun run dev ``` **Simple Key**: ```bash ATLAS_API_KEY=your-secret-key-here ATLAS_API_KEY_ROLE=analyst # optional: viewer, analyst (default), admin ``` **Managed**: ```bash BETTER_AUTH_SECRET=your-random-secret-at-least-32-characters-long DATABASE_URL=postgresql://user:pass@host:5432/atlas ATLAS_ADMIN_EMAIL=admin@example.com # optional: first admin account ``` **BYOT**: ```bash ATLAS_AUTH_JWKS_URL=https://your-idp.com/.well-known/jwks.json ATLAS_AUTH_ISSUER=https://your-idp.com/ ATLAS_AUTH_AUDIENCE=your-atlas-api # optional but recommended ATLAS_AUTH_ROLE_CLAIM=app_metadata.role # optional, defaults to checking "role" then "atlas_role" ``` Auto-detection [#auto-detection] When `ATLAS_AUTH_MODE` is not set, Atlas auto-detects from environment variables in this order: 1. `ATLAS_AUTH_JWKS_URL` present → **BYOT** 2. `BETTER_AUTH_SECRET` present → **Managed** 3. `ATLAS_API_KEY` present → **Simple Key** 4. None of the above → **None** Set `ATLAS_AUTH_MODE` explicitly to override auto-detection. Valid values: `none`, `api-key`, `managed`, `byot`. You can also set `auth` in `atlas.config.ts`. *** Related [#related] * [Embedding Widget](/guides/embedding-widget) — full widget configuration and customization guide * [Authentication](/deployment/authentication) — detailed auth setup, roles, rate limiting, and audit logging * [Environment Variables](/reference/environment-variables) — all configuration variables with defaults * [SDK documentation](https://www.npmjs.com/package/@useatlas/sdk) — `@useatlas/sdk` on npm --- # PII Detection & Masking (/guides/pii-masking) Atlas can auto-detect personally identifiable information (PII) in your database columns and mask sensitive values in query results. Detection runs during profiling, and masking is applied at query time based on user role. 
PII detection and masking is included with Enterprise plans on [app.useatlas.dev](https://app.useatlas.dev). Contact your account team to enable it, or visit **Admin > Billing** to upgrade. * Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) * Admin role required for managing PII classifications * [Managed auth](/deployment/authentication#managed-auth) enabled * Internal database configured (`DATABASE_URL`) *** How It Works [#how-it-works] Detection [#detection] PII detection runs during database profiling (`atlas init` or the semantic layer wizard). For each column, the detector checks: 1. **Sample values** (high confidence) — regex patterns match common PII formats (email, phone, SSN, credit card, IP address, date of birth) 2. **Column names** (medium confidence) — heuristic name matching (e.g., `email`, `phone_number`, `ssn`, `first_name`) 3. **Column types** (low confidence) — type-based guesses (e.g., `inet` → IP address) Detected PII is stored as column classifications in the internal database. Masking [#masking] When a query is executed, Atlas checks the result columns against PII classifications and applies masking based on the user's role: | Role | Behavior | | ------------------- | -------------------------------------------- | | **Admin / Owner** | See raw (unmasked) values | | **Analyst** | See partial masks (e.g., `a***@example.com`) | | **Viewer / Member** | See full masks (`***`) | Masking happens after query execution and before results are returned — the underlying SQL is never modified. 
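The role table above can be read as a post-execution transform over result values: the SQL runs untouched, and masking is chosen per cell from the viewer's role. A minimal sketch assuming a `partial` email mask; the function names and role union are illustrative, not Atlas's internal API.

```typescript
// Apply role-based masking to a result value after query execution.
// Role tiers follow the table above; the masking functions are sketches.
type Role = "admin" | "owner" | "analyst" | "viewer" | "member";

// Partial mask: keep the first character of the local part and the domain.
function maskEmailPartial(value: string): string {
  const [local, domain] = value.split("@");
  return `${local[0]}***@${domain}`;
}

function maskValue(value: string, role: Role): string {
  if (role === "admin" || role === "owner") return value; // raw values
  if (role === "analyst") return maskEmailPartial(value); // partial mask
  return "***"; // viewer/member: full mask
}
```

Because this runs on results rather than on the query, cached and live results can share the same masking path.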
*** PII Categories [#pii-categories] | Category | Example Pattern | Detection Method | | ---------------- | --------------------- | -------------------------- | | `email` | `user@example.com` | Regex + column name | | `phone` | `555-123-4567` | Regex + column name | | `ssn` | `123-45-6789` | Regex + column name | | `credit_card` | `4111-1111-1111-1111` | Regex + column name | | `name` | First/last/full name | Column name only | | `ip_address` | `192.168.1.1` | Regex + column name + type | | `date_of_birth` | `1990-01-15` | Regex + column name + type | | `address` | Street/postal/zip | Column name only | | `passport` | `AB1234567` | Regex + column name | | `driver_license` | `D123-456-789` | Regex + column name | *** Masking Strategies [#masking-strategies] Each PII column can be configured with a masking strategy: | Strategy | Example Output | Use Case | | --------- | ------------------ | --------------------------------- | | `full` | `***` | Maximum privacy — no visible data | | `partial` | `a***@example.com` | Preserve structure for debugging | | `hash` | `a1b2c3d4e5f67890` | Consistent pseudonymization | | `redact` | `[REDACTED]` | Explicit redaction marker | Partial Masking Examples [#partial-masking-examples] * **Email**: `alice@example.com` → `a***@example.com` * **SSN**: `123-45-6789` → `***-**-6789` * **Credit card**: `4111-1111-1111-1111` → `****-****-****-1111` * **Phone**: `555-123-4567` → `555-***-4567` * **Generic**: `John Smith` → `Jo***th` *** Admin UI [#admin-ui] Navigate to **Admin → PII Compliance** to manage classifications: * **Review detections** — see all detected PII columns with confidence levels * **Edit classifications** — change the PII category or masking strategy * **Dismiss false positives** — mark incorrect detections as dismissed * **Bulk review** — mark all pending detections as reviewed *** Semantic Layer Integration [#semantic-layer-integration] When PII is detected during profiling, the column's entity YAML is tagged: ```yaml 
dimensions: - name: email sql: email type: string description: Customer email address pii: email pii_confidence: high ``` These tags are informational — masking rules are stored in the internal database and managed via the admin UI. *** API Reference [#api-reference] List Classifications [#list-classifications] ``` GET /api/v1/admin/compliance/classifications ``` Query parameters: * `connectionId` (optional) — filter by datasource connection Update Classification [#update-classification] ``` PUT /api/v1/admin/compliance/classifications/:id ``` Body: ```json { "category": "email", "maskingStrategy": "partial", "reviewed": true, "dismissed": false } ``` Delete Classification [#delete-classification] ``` DELETE /api/v1/admin/compliance/classifications/:id ``` *** Configuration [#configuration] PII detection and masking is enabled automatically when enterprise features are active. No additional configuration is required. The masking applies to all query results returned by the `executeSQL` tool, including cached results. If the enterprise module is unavailable, PII masking is silently skipped and unmasked results are returned. This ensures non-enterprise deployments are unaffected. --- # Enterprise SSO (/guides/enterprise-sso) Atlas supports enterprise SSO via SAML 2.0 and OpenID Connect (OIDC). Admins register identity providers through the admin API, map email domains to providers, and users are auto-provisioned on first login. Enterprise SSO is available on [app.useatlas.dev](https://app.useatlas.dev) Enterprise plans. Self-hosted deployments do not include enterprise features. * [Managed auth](/deployment/authentication#managed-auth) enabled * Internal database configured (`DATABASE_URL`) * Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) * Admin role required for all SSO management endpoints *** How It Works [#how-it-works] 1. An admin registers an SSO provider (SAML or OIDC) via the admin API 2. 
The provider is linked to an email domain (e.g., `acme.com`) 3. When a user with that email domain signs in, Atlas routes them to the configured IdP 4. On successful IdP authentication, the user is auto-provisioned into the workspace Each organization can have multiple SSO providers, each mapped to a different email domain. Domain uniqueness is enforced globally — no two providers across any organization can claim the same domain. *** SAML Configuration [#saml-configuration] SAML providers require three pieces of information from your identity provider: | Field | Description | | ---------------- | ----------------------------------------------------------------- | | `idpEntityId` | The IdP's entity identifier (issuer URI) | | `idpSsoUrl` | The IdP's single sign-on URL (HTTP-Redirect or HTTP-POST binding) | | `idpCertificate` | The IdP's signing certificate in PEM format | Optional fields for service provider (SP) configuration: | Field | Description | | ------------ | ------------------------------------------------------- | | `spEntityId` | Custom SP entity ID (defaults to Atlas-generated value) | | `spAcsUrl` | Custom Assertion Consumer Service URL | Register a SAML Provider [#register-a-saml-provider] ```bash curl -X POST https://your-atlas.example.com/api/v1/admin/sso/providers \ -H "Content-Type: application/json" \ -H "Authorization: Bearer $TOKEN" \ -d '{ "type": "saml", "issuer": "https://idp.acme.com", "domain": "acme.com", "enabled": true, "config": { "idpEntityId": "https://idp.acme.com/entity", "idpSsoUrl": "https://idp.acme.com/sso", "idpCertificate": "-----BEGIN CERTIFICATE-----\nMIIC...\n-----END CERTIFICATE-----" } }' ``` *** OIDC Configuration [#oidc-configuration] OIDC providers require: | Field | Description | | -------------- | ------------------------------------------------ | | `clientId` | OAuth client ID from your IdP | | `clientSecret` | OAuth client secret (encrypted at rest) | | `discoveryUrl` | The IdP's `.well-known/openid-configuration` URL
| Register an OIDC Provider [#register-an-oidc-provider] ```bash curl -X POST https://your-atlas.example.com/api/v1/admin/sso/providers \ -H "Content-Type: application/json" \ -H "Authorization: Bearer $TOKEN" \ -d '{ "type": "oidc", "issuer": "https://accounts.google.com", "domain": "acme.com", "enabled": true, "config": { "clientId": "your-client-id", "clientSecret": "your-client-secret", "discoveryUrl": "https://accounts.google.com/.well-known/openid-configuration" } }' ``` Client secrets are encrypted at rest using `BETTER_AUTH_SECRET` or `ATLAS_ENCRYPTION_KEY`. They are never returned in API responses — the provider detail endpoint redacts secret fields. The `enabled` field defaults to `false` if omitted — providers must be explicitly enabled. *** Admin API Endpoints [#admin-api-endpoints] All SSO endpoints require the `admin` role and an active enterprise license. They are mounted at `/api/v1/admin/sso`. | Method | Path | Description | | -------- | ---------------- | -------------------------------------------- | | `GET` | `/providers` | List SSO provider summaries (config omitted) | | `GET` | `/providers/:id` | Get a single provider (secrets redacted) | | `POST` | `/providers` | Register a new SSO provider | | `PATCH` | `/providers/:id` | Update a provider (partial config merge) | | `DELETE` | `/providers/:id` | Delete a provider | | `GET` | `/enforcement` | Get SSO enforcement status | | `PUT` | `/enforcement` | Enable or disable SSO enforcement | Error Responses [#error-responses] | Status | Code | When | | ------ | --------------------- | -------------------------------------------------------- | | 400 | `validation` | Invalid config fields, malformed domain | | 400 | `no_provider` | Trying to enable enforcement with no active SSO provider | | 403 | `enterprise_required` | Enterprise license not active | | 404 | `not_found` | Provider ID doesn't exist | | 409 | `conflict` | Domain already claimed by another SSO provider | *** Domain Auto-Provisioning
[#domain-auto-provisioning] When a user signs in with an email matching a configured SSO domain: 1. Atlas calls `findProviderByDomain()` to locate the matching, enabled SSO provider 2. The user is redirected to the IdP for authentication 3. On successful authentication, Atlas creates or updates the user account 4. The user is added to the organization that owns the SSO provider Domain format requirements: * Lowercase letters, numbers, hyphens, and dots * Must be a valid DNS domain (e.g., `acme.com`, `engineering.acme.com`) * Validated against the pattern: `^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)+$` *** SSO Enforcement [#sso-enforcement] By default, SSO is an **optional** login method — users with a matching email domain are routed to the IdP, but password login remains available. SSO enforcement changes this: when enabled, **password and email-based login is blocked** for all users whose email domain matches a configured SSO provider. They must sign in through the identity provider. Enforcement is an **organization-level** setting. When toggled, it updates the `sso_enforced` flag on all SSO providers belonging to that organization. What Happens When Enforcement Is On [#what-happens-when-enforcement-is-on] 1. A user attempts to sign in with email `alice@acme.com` 2. Atlas detects that `acme.com` has an SSO provider with enforcement enabled 3. The password/session auth is blocked with a `403` response: ```json { "error": "SSO is required for this workspace. Please sign in via your identity provider.", "ssoRedirectUrl": "https://idp.acme.com/sso" } ``` 4. The response includes `ssoRedirectUrl` so the client can redirect the user to the IdP **Break-glass bypass:** API key authentication (`simple-key` mode) is not affected by SSO enforcement. If an admin is locked out of the UI, they can still access the API using an API key to disable enforcement. 
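The domain pattern and enforcement behavior above can be sketched together: validate a domain against the documented regex, and block password sign-in (with a redirect hint) when an enabled, enforced provider claims the email's domain. The provider shape and function names are illustrative, not Atlas internals.

```typescript
// Domain validation pattern exactly as documented above.
const DOMAIN_RE = /^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)+$/;

function isValidSsoDomain(domain: string): boolean {
  return DOMAIN_RE.test(domain);
}

// Illustrative provider shape — not Atlas's internal record.
type Provider = { domain: string; enabled: boolean; ssoEnforced: boolean; ssoUrl: string };

// Password sign-in gate: blocked when an enabled, enforced provider
// claims the email's domain; mirrors the documented 403 behavior.
function checkPasswordLogin(
  email: string,
  providers: Provider[],
): { allowed: boolean; ssoRedirectUrl?: string } {
  const domain = (email.split("@")[1] ?? "").toLowerCase();
  const enforced = providers.find(
    (p) => p.enabled && p.ssoEnforced && p.domain === domain,
  );
  return enforced ? { allowed: false, ssoRedirectUrl: enforced.ssoUrl } : { allowed: true };
}
```

API-key requests would bypass this gate entirely, which is what makes `simple-key` the break-glass path.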
Check Enforcement Status [#check-enforcement-status] ```bash curl https://your-atlas.example.com/api/v1/admin/sso/enforcement \ -H "Authorization: Bearer $TOKEN" ``` **Response (200):** ```json { "enforced": false, "orgId": "org_abc123" } ``` Enable Enforcement [#enable-enforcement] ```bash curl -X PUT https://your-atlas.example.com/api/v1/admin/sso/enforcement \ -H "Content-Type: application/json" \ -H "Authorization: Bearer $TOKEN" \ -d '{ "enforced": true }' ``` **Response (200):** ```json { "enforced": true, "orgId": "org_abc123" } ``` Disable Enforcement [#disable-enforcement] ```bash curl -X PUT https://your-atlas.example.com/api/v1/admin/sso/enforcement \ -H "Content-Type: application/json" \ -H "Authorization: Bearer $TOKEN" \ -d '{ "enforced": false }' ``` **Before enabling enforcement**, ensure at least one SSO provider is configured and enabled for your organization. Atlas will reject the request with a `400` error (`no_provider`) if no active provider exists: ```json { "error": "no_provider", "message": "Cannot enforce SSO without at least one active SSO provider. Create and enable a SAML or OIDC provider first."
} ``` Enforcement and Existing Users [#enforcement-and-existing-users] When you enable SSO enforcement: * **Active sessions are not terminated.** Users already signed in continue to have access until their session expires * **New sign-in attempts via password are blocked.** Users must use the SSO flow on their next login * **API key access is unaffected.** The `simple-key` auth mode bypasses SSO enforcement, providing a break-glass path for administrators When you disable SSO enforcement: * Password login is immediately available again for all users * SSO login continues to work — disabling enforcement does not remove SSO providers *** SAML vs OIDC [#saml-vs-oidc] | Feature | SAML 2.0 | OIDC | | -------------------- | --------------------------------- | ---------------------------------------- | | **Protocol** | XML-based, HTTP-POST binding | OAuth 2.0 / JWT | | **Setup complexity** | Higher (certificates, entity IDs) | Lower (client ID/secret + discovery URL) | | **Common IdPs** | Okta, OneLogin, ADFS | Google Workspace, Azure AD, Auth0 | | **Secret storage** | Certificate (public) | Client secret (encrypted at rest) | Choose SAML when your IdP mandates it or for compliance requirements. Choose OIDC for simpler setup with modern identity providers. *** See Also [#see-also] * [IP Allowlisting](/guides/ip-allowlisting) — Restrict access by IP range * [Authentication](/deployment/authentication) — Auth mode setup and configuration * [Admin Console](/guides/admin-console) — Manage users and sessions * [Environment Variables](/reference/environment-variables) — Full variable reference --- # Plugin Marketplace (/guides/plugin-marketplace) The plugin marketplace lets workspace admins discover and install plugins without editing configuration files. Browse available plugins, install them with one click, configure settings, and uninstall when no longer needed. 
* [Managed auth](/deployment/authentication#managed-auth) enabled * An internal database configured (`DATABASE_URL`) * A user with the `admin` role * **SaaS mode** — the marketplace is available on app.useatlas.dev. In self-hosted mode, manage plugins via `atlas.config.ts` *** Accessing the Marketplace [#accessing-the-marketplace] Navigate to **Admin > Plugins** (`/admin/plugins`). In SaaS mode, the page shows two tabs: * **Installed** — Plugins currently active in your workspace (both config-loaded and marketplace-installed) * **Available** — Plugins in the platform catalog that you haven't installed yet In self-hosted mode, only config-loaded plugins are shown with a note to manage them via `atlas.config.ts`. *** Browsing Available Plugins [#browsing-available-plugins] Switch to the **Available** tab to see plugins you can install. Each card shows: * **Name** and **type badge** (Datasource, Context, Interaction, Action, Sandbox) * **Description** of what the plugin does * **Enterprise badge** — if the plugin requires an enterprise plan * **Install button** Use the **search bar** to filter by name or the **type dropdown** to filter by plugin category. Plugins are filtered by your workspace's plan tier. Each plugin has a minimum plan requirement — only plugins your tier qualifies for are shown. *** Installing a Plugin [#installing-a-plugin] 1. Find the plugin in the **Available** tab 2. Click **Install** 3. If the plugin has a config schema, a dialog appears with configuration fields — fill them in 4. If no config is needed, a confirmation dialog appears 5. Click **Install** to confirm The plugin is immediately available in your workspace. It appears in the **Installed** tab with a **marketplace** badge. *** Configuring Installed Plugins [#configuring-installed-plugins] Marketplace Plugins [#marketplace-plugins] 1. In the **Installed** tab, find the marketplace plugin (marked with a "marketplace" badge) 2. Click the **gear icon** (Configure) 3. 
If the plugin defines a config schema, typed form fields appear (text, number, boolean). If not, a JSON editor is shown 4. Edit the configuration values 5. Click **Save** Changes take effect immediately. Config-Loaded Plugins [#config-loaded-plugins] Config-loaded plugins (from `atlas.config.ts`) also appear in the Installed tab. Click the **gear icon** to open the config dialog: * If the plugin exposes a `getConfigSchema()`, a typed form is shown * If not, current config is displayed as read-only JSON * Config changes are saved to the internal database and take effect on restart Without an internal database, plugin configuration is read-only — all values come from `atlas.config.ts`. *** Managing Plugin State [#managing-plugin-state] For config-loaded plugins: * **Enable/disable toggle** — Toggle a plugin on or off without restarting. Disabled plugins are excluded from the agent loop * **Health check** — Click **Health** to run a live probe and see the plugin's status The enable/disable toggle requires an internal database (`DATABASE_URL`). Without it, all plugins are always enabled. *** Uninstalling a Plugin [#uninstalling-a-plugin] 1. In the **Installed** tab, find the marketplace plugin 2. Click the **trash icon** (Uninstall) 3. Confirm in the dialog The plugin is removed from your workspace. You can reinstall it later from the Available tab. Config-loaded plugins cannot be uninstalled from the UI — remove them from `atlas.config.ts`. *** API Endpoints [#api-endpoints] | Method | Path | Description | | -------- | ---------------------------------------------- | ------------------------------------------------- | | `GET` | `/api/v1/admin/plugins/marketplace/available` | List available plugins for this workspace | | `POST` | `/api/v1/admin/plugins/marketplace/install` | Install a plugin (body: `{ catalogId, config? 
}`) | | `DELETE` | `/api/v1/admin/plugins/marketplace/:id` | Uninstall a marketplace plugin | | `PUT` | `/api/v1/admin/plugins/marketplace/:id/config` | Update marketplace plugin config | | `GET` | `/api/v1/admin/plugins` | List all installed plugins (config + marketplace) | | `POST` | `/api/v1/admin/plugins/:id/health` | Run health check | | `POST` | `/api/v1/admin/plugins/:id/enable` | Enable a plugin | | `POST` | `/api/v1/admin/plugins/:id/disable` | Disable a plugin | | `GET` | `/api/v1/admin/plugins/:id/schema` | Get config schema and values | | `PUT` | `/api/v1/admin/plugins/:id/config` | Update config-loaded plugin config | *** See Also [#see-also] * [Admin Console — Plugins](/guides/admin-console#plugins) — Overview of the plugins admin page * [Plugin Catalog](/platform-ops/plugin-catalog) — Platform operator guide for managing the catalog * [Plugin Authoring Guide](/plugins/authoring-guide) — Build custom plugins --- # Sandbox Configuration (/guides/sandbox) The sandbox is the isolated execution environment that Atlas uses to run the **explore** tool (file browsing, grep, find on the semantic layer) and **Python** tool (data analysis scripts). The admin page lets you select which backend to use and — in SaaS mode — connect your own cloud sandbox providers. * [Managed auth](/deployment/authentication#managed-auth) enabled * A user with the `admin` role *** Overview [#overview] **Route:** `/admin/sandbox` The page adapts based on deploy mode: * **Self-hosted** — A dropdown selector for backend IDs with sidecar URL configuration * **SaaS** — An integration card grid where workspace admins connect their own sandbox providers *** SaaS Mode — Execution Environment [#saas-mode--execution-environment] In SaaS mode the page is titled **Execution Environment** and presents a card grid with four sandbox providers: Atlas Cloud Sandbox [#atlas-cloud-sandbox] Managed container service with HTTP isolation. No credentials needed and no setup required. 
This is the default for all workspaces and is marked **(Recommended)**. * **Active state** — highlighted border with "Active" badge * **Inactive state** — click **Select** to make it active Vercel Sandbox [#vercel-sandbox] Firecracker microVM with network isolation. Bring your own Vercel account. **Credentials:** * **Access Token** — Vercel API access token * **Team ID** — Vercel team identifier (e.g. `team_...`) Credentials are validated against the Vercel API before saving. The team name is displayed after connection. E2B [#e2b] Bring your own E2B account for ephemeral cloud sandboxes with sub-second startup. **Credentials:** * **API Key** — E2B API key Validated against the E2B sandboxes API before saving. Daytona [#daytona] Bring your own Daytona account for cloud-hosted development sandboxes. **Credentials:** * **API Key** — Daytona API key * **API URL** (optional) — Custom API endpoint (defaults to `https://api.daytona.io`) Validated against the Daytona health endpoint before saving. Connect / Disconnect Flow [#connect--disconnect-flow] 1. Click **Connect** on a provider card 2. Enter credentials in the dialog 3. Click **Validate & Connect** — credentials are verified against the provider API 4. On success, the card shows as **Connected** with the account name and connection date 5. Click **Select** to make a connected provider the active sandbox 6. Click **Disconnect** to remove credentials (reverts to Atlas Cloud Sandbox, the platform default, if this was the active provider) Credentials are stored per-workspace in the `sandbox_credentials` table. Each workspace can connect different providers independently. 
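The credential requirements on the provider cards above can be summarized as a small lookup used before calling the connect endpoint. The camelCase field names (`accessToken`, `teamId`, `apiKey`) are assumptions based on the card labels, and the validator itself is illustrative, not the actual endpoint logic.

```typescript
// Required credential fields per BYO sandbox provider, per the cards
// above. Field names are camelCase assumptions; `apiUrl` for Daytona
// is optional (defaults to https://api.daytona.io), so it is omitted.
const REQUIRED_FIELDS: Record<string, string[]> = {
  vercel: ["accessToken", "teamId"],
  e2b: ["apiKey"],
  daytona: ["apiKey"],
};

// Return the names of required fields that are missing or blank.
function missingFields(provider: string, creds: Record<string, string>): string[] {
  return (REQUIRED_FIELDS[provider] ?? []).filter((field) => !creds[field]?.trim());
}
```

A client could run this before **Validate & Connect** to surface missing fields without a round-trip to the provider API.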
*** Self-Hosted Mode — Backend Selector [#self-hosted-mode--backend-selector] In self-hosted mode, the page shows the original dropdown UI: * **Platform default** — The backend selected at the platform level * **Active backend** — The backend currently in use * **Workspace override** — A per-workspace selection, if set Selecting a Backend [#selecting-a-backend] 1. Open `/admin/sandbox` 2. Choose a backend from the **Backend** dropdown 3. Click **Save** To return to the platform default, select **Use platform default** from the dropdown, or click **Reset to default**. Sidecar Configuration [#sidecar-configuration] When the **Sidecar** backend is selected, an additional **Sidecar URL** field appears. This is the URL of the `@atlas/sandbox-sidecar` service. * Set the URL to your sidecar deployment (e.g. `http://sandbox:3002`) * Leave empty to use the platform default The sidecar is a separate service that runs explore and Python commands in an isolated container. See the [Docker deployment example](/deployment/deploy#deploy-with-docker) for a production sidecar setup. The admin UI dropdown only shows API-enumerable backends (Vercel Sandbox, Sidecar, and plugin backends). nsjail and just-bash are resolved by the priority chain but are not selectable from the dropdown. *** Available Backends [#available-backends] Atlas supports multiple sandbox backends. 
The active backend is resolved using a priority chain: | Backend | Type | Description | | ------------------ | -------- | ---------------------------------------------------------------- | | **Plugin** | plugin | Custom sandbox provided by a plugin (highest priority) | | **Vercel Sandbox** | built-in | Vercel's secure sandbox environment (used on Vercel deployments) | | **nsjail** | built-in | Linux namespace isolation via nsjail binary | | **Sidecar** | built-in | External sandbox sidecar service (`@atlas/sandbox-sidecar`) | | **just-bash** | built-in | Direct shell execution (development fallback only) | The **just-bash** backend provides no isolation and should only be used in local development. Production deployments should use Vercel Sandbox, nsjail, or the sidecar. Priority Resolution [#priority-resolution] When no explicit override is set, Atlas auto-detects the best available backend using this default priority chain: 1. Plugin backends (always highest priority) 2. Vercel Sandbox (when running on Vercel) 3. nsjail explicit (when `ATLAS_SANDBOX=nsjail` is set) 4. Sidecar (when `ATLAS_SANDBOX_URL` is set) 5. nsjail auto-detect (when nsjail binary is found on PATH) 6. just-bash (development fallback) You can override this chain with the `ATLAS_SANDBOX_PRIORITY` environment variable or via `sandbox.priority` in `atlas.config.ts`. 
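The six-step default chain above can be sketched as a straightforward cascade. The environment-probe shape here is illustrative — only `ATLAS_SANDBOX` and `ATLAS_SANDBOX_URL` are real variables from this page; the boolean flags stand in for detection Atlas performs internally.

```typescript
// Sketch of the default sandbox backend priority chain described above.
type SandboxEnv = {
  hasPluginBackend: boolean;
  onVercel: boolean;
  ATLAS_SANDBOX?: string;
  ATLAS_SANDBOX_URL?: string;
  nsjailOnPath: boolean;
};

function resolveBackend(env: SandboxEnv): string {
  if (env.hasPluginBackend) return "plugin"; // 1. plugin backends always win
  if (env.onVercel) return "vercel-sandbox"; // 2. Vercel deployments
  if (env.ATLAS_SANDBOX === "nsjail") return "nsjail"; // 3. explicit nsjail
  if (env.ATLAS_SANDBOX_URL) return "sidecar"; // 4. sidecar URL configured
  if (env.nsjailOnPath) return "nsjail"; // 5. nsjail auto-detected on PATH
  return "just-bash"; // 6. development fallback (no isolation)
}
```

Note that an explicit `ATLAS_SANDBOX=nsjail` outranks a configured sidecar URL, while auto-detected nsjail does not.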
*** SaaS vs Self-Hosted [#saas-vs-self-hosted] | Behavior | Self-Hosted | SaaS | | ------------------ | ---------------------------------------------------------- | -------------------------------------------- | | Page title | Sandbox | Execution Environment | | UI | Backend dropdown + sidecar URL | Provider card grid with connect/disconnect | | Providers | Not applicable | Atlas Cloud, Vercel, E2B, Daytona | | Credential storage | Not applicable | Per-workspace in `sandbox_credentials` table | | Active selection | Dropdown + Save in admin UI (sets `ATLAS_SANDBOX_BACKEND`) | Click **Select** on a connected provider | | Default fallback | Auto-detect priority chain | Atlas Cloud Sandbox (sidecar) | | Sidecar URL | Configurable via admin UI | Managed by platform | *** Environment Variables [#environment-variables] | Variable | Description | | ------------------------ | --------------------------------------------- | | `ATLAS_SANDBOX_URL` | Sidecar service URL (implies sidecar backend) | | `ATLAS_SANDBOX_PRIORITY` | Comma-separated backend priority list | The priority list can also be set via `sandbox.priority` in `atlas.config.ts`. The sidecar URL is configured via the `ATLAS_SANDBOX_URL` environment variable or the admin Settings page. *** API Endpoints [#api-endpoints] All endpoints require admin authentication. 
| Method | Path | Description | | -------- | ---------------------------------------------- | ---------------------------------------------------------------------- | | `GET` | `/api/v1/admin/sandbox/status` | Get sandbox configuration, available backends, and connected providers | | `POST` | `/api/v1/admin/sandbox/connect/{provider}` | Validate and save provider credentials (vercel, e2b, daytona) | | `DELETE` | `/api/v1/admin/sandbox/disconnect/{provider}` | Remove provider credentials | | `PUT` | `/api/v1/admin/settings/ATLAS_SANDBOX_BACKEND` | Set workspace sandbox backend | | `PUT` | `/api/v1/admin/settings/ATLAS_SANDBOX_URL` | Set workspace sidecar URL | | `DELETE` | `/api/v1/admin/settings/ATLAS_SANDBOX_BACKEND` | Reset to platform default | *** See Also [#see-also] * [Admin Console](/guides/admin-console) — Overview of all admin pages * [Integrations Hub](/guides/integrations) — Connect Slack, Teams, and other platforms * [Environment Variables](/reference/environment-variables) — Full variable reference * [Docker Deployment](/deployment/deploy#deploy-with-docker) — Sidecar setup in Docker --- # Workspace Model Routing (/guides/model-routing) Atlas supports workspace-level model routing. Each workspace can configure its own LLM provider and API key, overriding the platform default. This enables enterprise customers to use their own Anthropic, OpenAI, Azure OpenAI, or custom OpenAI-compatible endpoints. Model routing is available on [app.useatlas.dev](https://app.useatlas.dev) Enterprise plans. Self-hosted deployments configure the model provider globally via environment variables. 
* [Managed auth](/deployment/authentication#managed-auth) enabled * Internal database configured (`DATABASE_URL`) * Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) * Admin role required for all model config endpoints * Encryption key configured (`ATLAS_ENCRYPTION_KEY` or `BETTER_AUTH_SECRET`) for API key storage *** How It Works [#how-it-works] Workspace model routing is **opt-in per workspace**. When no custom configuration exists, the workspace uses the platform default provider configured via `ATLAS_PROVIDER` and `ATLAS_MODEL` environment variables. Resolution Order [#resolution-order] When the agent loop starts a new conversation: 1. **Workspace config** (internal DB) — checked first if the user has an active organization 2. **Platform env vars** (`ATLAS_PROVIDER` / `ATLAS_MODEL`) — fallback when no workspace config exists Supported Providers [#supported-providers] | Provider | Value | Description | | ------------ | -------------- | -------------------------------------------------- | | Anthropic | `anthropic` | Claude models via `api.anthropic.com` | | OpenAI | `openai` | GPT models via `api.openai.com` | | Azure OpenAI | `azure-openai` | Azure-hosted OpenAI models (requires base URL) | | Custom | `custom` | Any OpenAI-compatible endpoint (requires base URL) | Security [#security] * API keys are encrypted at rest using AES-256-GCM (same pattern as connection URLs) * API keys are never returned in API responses — only a masked version (last 4 characters) is shown * The encryption key is derived from `ATLAS_ENCRYPTION_KEY` or `BETTER_AUTH_SECRET` *** Admin UI [#admin-ui] Navigate to **Admin > AI Provider** in the admin console. 
From here you can: * View the current configuration (custom or platform default) * Set a custom provider, model, and API key * Test the connection before saving * Reset to platform default Configuration Fields [#configuration-fields] | Field | Required | Description | | -------- | ---------------- | --------------------------------------------------- | | Provider | Yes | One of: Anthropic, OpenAI, Azure OpenAI, Custom | | Model | Yes | Model identifier (e.g. `claude-opus-4-6`, `gpt-4o`) | | API Key | Yes | Provider API key (encrypted at rest) | | Base URL | For Azure/Custom | Endpoint URL for Azure OpenAI or custom providers | *** API Reference [#api-reference] All endpoints require admin role and enterprise license. Mounted at `/api/v1/admin/model-config`. Get Configuration [#get-configuration] ```http GET /api/v1/admin/model-config ``` Returns the workspace's custom model configuration, or `null` if using platform defaults. **Response:** ```json { "config": { "id": "550e8400-e29b-41d4-a716-446655440000", "orgId": "org_abc123", "provider": "anthropic", "model": "claude-opus-4-6", "baseUrl": null, "apiKeyMasked": "***********api0", "createdAt": "2026-03-22T00:00:00.000Z", "updatedAt": "2026-03-22T00:00:00.000Z" } } ``` Set Configuration [#set-configuration] ```http PUT /api/v1/admin/model-config Content-Type: application/json { "provider": "anthropic", "model": "claude-opus-4-6", "apiKey": "sk-ant-...", "baseUrl": null } ``` Creates or updates the workspace model configuration. The API key is encrypted before storage. Delete Configuration [#delete-configuration] ```http DELETE /api/v1/admin/model-config ``` Removes the workspace's custom configuration. The workspace reverts to using the platform default. 
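Per the Configuration Fields table above, `baseUrl` is only mandatory for Azure OpenAI and custom providers. A hedged client-side pre-flight check before calling `PUT /api/v1/admin/model-config` (illustrative only; the server performs its own validation):

```typescript
// Illustrative pre-flight validation mirroring the Configuration Fields table.
type Provider = "anthropic" | "openai" | "azure-openai" | "custom";

interface ModelConfigInput {
  provider: Provider;
  model: string;
  apiKey: string;
  baseUrl?: string | null;
}

// Returns a list of problems; an empty list means the input satisfies the field rules.
function validateModelConfig(input: ModelConfigInput): string[] {
  const errors: string[] = [];
  if (!input.model) errors.push("model is required");
  if (!input.apiKey) errors.push("apiKey is required");
  // Base URL is required only for Azure OpenAI and custom endpoints.
  const needsBaseUrl = input.provider === "azure-openai" || input.provider === "custom";
  if (needsBaseUrl && !input.baseUrl) {
    errors.push(`baseUrl is required for provider "${input.provider}"`);
  }
  return errors;
}
```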
Test Configuration [#test-configuration] ```http POST /api/v1/admin/model-config/test Content-Type: application/json { "provider": "anthropic", "model": "claude-opus-4-6", "apiKey": "sk-ant-..." } ``` Tests a model configuration by making a minimal API call to the provider. Does not save the configuration. **Response:** ```json { "success": true, "message": "Connection successful.", "modelName": "claude-opus-4-6" } ``` *** Database Schema [#database-schema] The `workspace_model_config` table stores one row per workspace: ```sql CREATE TABLE workspace_model_config ( id UUID PRIMARY KEY DEFAULT gen_random_uuid(), org_id TEXT NOT NULL UNIQUE, provider TEXT NOT NULL, model TEXT NOT NULL, api_key_encrypted TEXT NOT NULL, base_url TEXT, created_at TIMESTAMPTZ NOT NULL DEFAULT now(), updated_at TIMESTAMPTZ NOT NULL DEFAULT now() ); ``` The `org_id` column has a unique constraint — each workspace can have at most one custom model configuration. *** Troubleshooting [#troubleshooting] "Enterprise features are not enabled" [#enterprise-features-are-not-enabled] Model routing requires an active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev). Contact support if you see this error on an Enterprise workspace. API key decryption fails [#api-key-decryption-fails] If you see "Failed to decrypt workspace API key" in logs, the encryption key may have changed since the key was stored. Re-save the configuration with the current encryption key. Workspace still uses platform default [#workspace-still-uses-platform-default] Verify the user has an active organization set (the `activeOrganizationId` must be present in the session). Model routing only applies when the agent loop has org context. --- # Sharing Conversations (/guides/sharing-conversations) Atlas lets you share conversations via links with configurable expiry and access controls. Public shares are visible to anyone with the link. Organization-scoped shares require authentication. 
Shared conversations include rich OpenGraph metadata for social previews. * Internal database (`DATABASE_URL`) — conversation history and sharing are unavailable without it Creating a Shared Link [#creating-a-shared-link] From the UI [#from-the-ui] 1. Open a conversation in the Atlas chat interface. 2. Click the **Share** button in the conversation header. 3. Choose an **expiry duration** (1 hour, 24 hours, 7 days, 30 days, or never). 4. Optionally toggle **Organization only** to restrict access to authenticated users. 5. Click **Create share link** — Atlas generates a unique share link. Copy it to your clipboard. The share dialog also provides an iframe embed snippet for embedding the conversation in other pages. To revoke a shared link, open the share dialog again and click **Remove share link**. The public link will immediately stop working. From the API [#from-the-api] Create and revoke share links programmatically: ```bash # Share a conversation (with expiry and share mode) curl -X POST https://your-atlas-api.example.com/api/v1/conversations/{id}/share \ -H "Authorization: Bearer your-api-key" \ -H "Content-Type: application/json" \ -d '{"expiresIn": "7d", "shareMode": "public"}' # Response: # { # "token": "abc123...", # "url": "https://your-atlas-api.example.com/shared/abc123...", # "expiresAt": "2026-03-19T00:00:00.000Z", # "shareMode": "public" # } ``` Available `expiresIn` values: `"1h"`, `"24h"`, `"7d"`, `"30d"`, `"never"` (default: `"never"`). Available `shareMode` values: `"public"` (anyone with link), `"org"` (authenticated users only). 
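The `expiresIn` values map to fixed durations. A sketch of how a client could compute the resulting expiry timestamp (the `expiresAt` returned by the server is authoritative):

```typescript
// Illustrative mapping of expiresIn values to expiry timestamps.
type ExpiresIn = "1h" | "24h" | "7d" | "30d" | "never";

const HOUR_MS = 60 * 60 * 1000;
const DURATIONS_MS: Record<Exclude<ExpiresIn, "never">, number> = {
  "1h": HOUR_MS,
  "24h": 24 * HOUR_MS,
  "7d": 7 * 24 * HOUR_MS,
  "30d": 30 * 24 * HOUR_MS,
};

// Returns the expiry Date, or null for links that never expire.
function expiresAt(expiresIn: ExpiresIn, from: Date = new Date()): Date | null {
  if (expiresIn === "never") return null;
  return new Date(from.getTime() + DURATIONS_MS[expiresIn]);
}
```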
```bash # Revoke sharing curl -X DELETE https://your-atlas-api.example.com/api/v1/conversations/{id}/share \ -H "Authorization: Bearer your-api-key" ``` From the SDK [#from-the-sdk] ```typescript import { createAtlasClient } from "@useatlas/sdk"; const atlas = createAtlasClient({ baseUrl: "https://your-atlas-api.example.com", apiKey: "your-api-key", }); // Generate a share link (options like expiresIn and shareMode are available via the REST API) const { token, url } = await atlas.conversations.share("conversation-id"); // Revoke the share link — it stops working immediately await atlas.conversations.unshare("conversation-id"); ``` Public View [#public-view] Shared conversations are visible at `/shared/{token}`. The page displays: * **Header** — Atlas branding, "Shared conversation" label, and creation date * **Conversation title** — derived from the first user message * **Messages** — user and assistant messages rendered in a clean, read-only format (system and tool messages are hidden) No authentication is required to view a shared conversation. The public endpoint has its own rate limiter to prevent abuse. Assistant messages containing only non-text parts (such as tool call results) will appear empty in the shared view. OpenGraph & Social Previews [#opengraph--social-previews] Shared conversation pages include OpenGraph and Twitter Card metadata for rich social previews when links are shared on Slack, Twitter, LinkedIn, etc. | Meta Tag | Value | | ---------------- | --------------------------------------------------------------------------------------------------------- | | `og:title` | First user message (truncated to 60 characters), e.g. "Atlas: What were our top 10 customers by revenue?" 
| | `og:description` | First assistant response (truncated to 160 characters) | | `og:type` | `article` | | `og:site_name` | `Atlas` | | `twitter:card` | `summary` | When the conversation has no user messages, the title falls back to the conversation title or "Atlas — Shared Conversation". When the assistant response is only tool calls (no text), the description uses a generic fallback. Iframe Embedding [#iframe-embedding] Embed a shared conversation in another page using the `/embed` variant (illustrative snippet; the share dialog provides the exact embed code for your deployment): ```html <iframe src="https://your-atlas-api.example.com/shared/{token}/embed" width="100%" height="600" style="border:0" title="Atlas shared conversation"></iframe> ``` The embed view is a compact, chromeless version of the shared conversation — no header, smaller avatars, and a "Powered by Atlas" footer. It supports both light and dark modes (follows system preference). This is ideal for embedding conversation transcripts in documentation, blog posts, or dashboards. API Reference [#api-reference] Share [#share] ``` POST /api/v1/conversations/:id/share ``` Generates a cryptographically random share token and returns the public URL. Requires authentication. If the conversation is already shared, a new token replaces the old one. **Request body** (optional): ```json { "expiresIn": "7d", "shareMode": "public" } ``` | Field | Type | Default | Description | | ----------- | --------------------------------------------------- | ---------- | ------------------------------------------------------------- | | `expiresIn` | `"1h"` \| `"24h"` \| `"7d"` \| `"30d"` \| `"never"` | `"never"` | When the link expires | | `shareMode` | `"public"` \| `"org"` | `"public"` | `public` = anyone with link; `org` = authenticated users only | **Response:** ```json { "token": "abc123...", "url": "https://your-atlas-api.example.com/shared/abc123...", "expiresAt": "2026-03-19T00:00:00.000Z", "shareMode": "public" } ``` Unshare [#unshare] ``` DELETE /api/v1/conversations/:id/share ``` Revokes the share token. The public link stops working immediately. Returns `204 No Content` on success. 
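A token with the properties the Share endpoint describes (cryptographically random, URL-safe) can be produced with Node's `crypto` module: 21 random bytes encode to exactly 28 base64url characters with no padding. A sketch of the approach, not Atlas's exact implementation:

```typescript
import { randomBytes } from "node:crypto";

// 21 bytes -> (21 / 3) * 4 = 28 base64url characters, no '=' padding.
function generateShareToken(): string {
  return randomBytes(21).toString("base64url");
}
```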
View Shared Conversation [#view-shared-conversation] ``` GET /api/public/conversations/:token ``` Returns the conversation with messages. Public shares require no authentication. Organization-scoped shares (`shareMode: "org"`) require authentication — unauthenticated requests receive `403`. Expired links return `410 Gone`. Rate-limited per IP (60 requests per minute). **Response:** ```json { "title": "Revenue Analysis", "surface": "web", "createdAt": "2026-03-12T00:00:00Z", "messages": [ { "role": "user", "content": "What was last month's revenue?", "createdAt": "2026-03-12T00:00:01Z" }, { "role": "assistant", "content": "Last month's revenue was $1.2M...", "createdAt": "2026-03-12T00:00:02Z" } ] } ``` Security [#security] * **Tokens are random** — share tokens are 28-character base64url strings generated with `crypto.randomBytes(21)`. They cannot be guessed. * **Expiry enforcement** — expired links are checked on every access (not just at creation). Expired tokens return `410 Gone`. A background cleanup task also removes expired tokens from the database hourly. * **Organization-scoped sharing** — links with `shareMode: "org"` require the viewer to be authenticated. Unauthenticated requests receive `403`. * **Revocation is instant** — unsharing clears the token from the database. Cached responses (Next.js 60s revalidate) will expire shortly after. * **No write access** — shared conversation views are completely read-only. Viewers cannot modify or interact with the conversation. * **Rate limited** — the public endpoint has a per-IP rate limiter (60 requests per minute, separate from the authenticated rate limiter) to prevent scraping. *** Troubleshooting [#troubleshooting] Share button doesn't appear [#share-button-doesnt-appear] **Cause:** Conversation sharing requires an internal database (`DATABASE_URL`). Without it, conversation history and sharing are unavailable. **Fix:** Set `DATABASE_URL` to a PostgreSQL connection string and restart the server. 
The share button appears in the conversation header once the internal database is connected. Shared link returns 410 Gone [#shared-link-returns-410-gone] **Cause:** The share link has expired. Links have a configurable expiry (1 hour to 30 days, or never). **Fix:** Create a new share link with a longer expiry. When sharing via the API, set `expiresIn` to `"never"` for permanent links, or `"30d"` for longer-lived ones. Social preview shows generic title [#social-preview-shows-generic-title] **Cause:** The conversation has no user messages, or the first assistant response contains only tool calls (no text). OpenGraph metadata falls back to generic values in these cases. **Fix:** Ensure the conversation has at least one user message before sharing. The OG title is derived from the first user message (truncated to 60 characters). For more, see [Troubleshooting](/guides/troubleshooting). --- # Custom Roles (/guides/custom-roles) Atlas supports custom role definitions with granular permission flags. Move beyond the default admin/member/owner hierarchy by creating named roles with specific permission sets. Custom roles are available on [app.useatlas.dev](https://app.useatlas.dev) Enterprise plans. Self-hosted deployments use the built-in admin/member roles. * [Managed auth](/deployment/authentication#managed-auth) enabled * Internal database configured (`DATABASE_URL`) * Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) * Admin role required for all role management endpoints *** How It Works [#how-it-works] 1. Atlas ships with three built-in roles: **admin**, **analyst**, and **viewer** 2. Admins create custom roles via the admin API or the admin UI 3. Each role is a set of permission flags (additive model) 4. Users are assigned roles via the organization membership system 5. 
Permission checks happen at request time, falling back to legacy behavior when enterprise is not enabled *** Permission Flags [#permission-flags] | Permission | Description | | ------------------- | ----------------------------------------- | | `query` | Can send chat queries | | `query:raw_data` | Can see raw row data (vs aggregates only) | | `admin:users` | Can manage users | | `admin:connections` | Can manage data connections | | `admin:settings` | Can manage application settings | | `admin:audit` | Can view audit logs | | `admin:roles` | Can manage roles | | `admin:semantic` | Can edit the semantic layer | Permissions are **additive** — a role is the union of its flags. There is no "deny" mechanism. *** Built-in Roles [#built-in-roles] Three roles are seeded automatically and cannot be modified or deleted: | Role | Permissions | Use Case | | --------- | ---------------------------------------- | -------------------------------------------------------- | | `admin` | All permissions | Full administrative access | | `analyst` | `query`, `query:raw_data`, `admin:audit` | Data team members who need raw data and audit visibility | | `viewer` | `query` | Stakeholders who only need aggregate query results | *** Admin UI [#admin-ui] Navigate to **Admin Console > Roles** to manage roles visually: * View built-in and custom roles with their permission badges * Create new roles with a permission checkbox dialog * Edit custom role descriptions and permissions * Delete custom roles (built-in roles show a lock icon) *** API Reference [#api-reference] All endpoints are mounted under `/api/v1/admin/roles` and require admin authentication. List Roles [#list-roles] ```http GET /api/v1/admin/roles ``` Returns all roles (built-in + custom) for the active organization, along with the list of available permissions. 
```json { "roles": [ { "id": "...", "orgId": "...", "name": "admin", "description": "Full access to all features and administration", "permissions": ["query", "query:raw_data", "admin:users", "..."], "isBuiltin": true, "createdAt": "2026-01-01T00:00:00Z", "updatedAt": "2026-01-01T00:00:00Z" } ], "permissions": ["query", "query:raw_data", "admin:users", "..."], "total": 3 } ``` Create Role [#create-role] ```http POST /api/v1/admin/roles Content-Type: application/json { "name": "data-engineer", "description": "Can query and manage connections", "permissions": ["query", "query:raw_data", "admin:connections"] } ``` Role names must be lowercase, start with a letter, and contain only letters, numbers, hyphens, or underscores (1-63 characters). Update Role [#update-role] ```http PUT /api/v1/admin/roles/:id Content-Type: application/json { "description": "Updated description", "permissions": ["query", "admin:connections", "admin:semantic"] } ``` Built-in roles cannot be modified. Attempting to update a built-in role returns `403`. Delete Role [#delete-role] ```http DELETE /api/v1/admin/roles/:id ``` Built-in roles cannot be deleted. Attempting to delete a built-in role returns `403`. List Role Members [#list-role-members] ```http GET /api/v1/admin/roles/:id/members ``` Returns users assigned to the specified role in the organization. Assign Role to User [#assign-role-to-user] ```http PUT /api/v1/admin/roles/users/:userId/role Content-Type: application/json { "role": "data-engineer" } ``` The role name must match an existing role (built-in or custom) in the organization. *** Backward Compatibility [#backward-compatibility] Custom roles are fully backward compatible with the existing auth system: * **No enterprise license?** The legacy `admin`/`member`/`owner` roles continue to work exactly as before. `admin` and `owner` get all permissions; `member` gets query permissions. 
* **Enterprise enabled but no custom roles assigned?** Users fall back to their existing role's default permissions. * **Mixed deployment?** The permission resolver checks for custom roles first, then falls back to the legacy mapping. *** Integration with Permission Checks [#integration-with-permission-checks] The `checkPermission` function can be used in route handlers to enforce fine-grained access: ```typescript import { checkPermission } from "ee/src/auth/roles"; // In a route handler: const denied = await checkPermission(user, "admin:connections", requestId); if (denied) { return c.json(denied.body, denied.status); } ``` The `hasPermission` function returns a boolean for simpler conditional checks: ```typescript import { hasPermission } from "ee/src/auth/roles"; if (await hasPermission(user, "admin:semantic")) { // User can edit semantic layer } ``` --- # Self-Serve Signup (/guides/signup) Atlas includes a self-serve signup flow that walks new users through account creation, workspace setup, and database connection — all from the browser. Once complete, users land in the chat UI ready to query. On [app.useatlas.dev](https://app.useatlas.dev), signup is available immediately at [app.useatlas.dev/signup](https://app.useatlas.dev/signup). Social login (Google, GitHub, Microsoft) is pre-configured. Skip to [How It Works](#how-it-works) for the user experience. * [Managed auth](/deployment/authentication#managed-auth) enabled (`ATLAS_AUTH_MODE=managed`) * Internal database configured (`DATABASE_URL`) * At least one LLM provider configured (`ATLAS_PROVIDER` + API key) *** How It Works [#how-it-works] The signup wizard is a five-step flow at `/signup`: | Step | Route | What happens | | ----------------------- | ------------------- | --------------------------------------------------- | | 1. **Create account** | `/signup` | Email/password or OAuth (Google, GitHub, Microsoft) | | 2. 
**Name workspace** | `/signup/workspace` | Workspace name + auto-generated slug (max 48 chars) | | 3. **Choose region** | `/signup/region` | Select where workspace data is stored (SaaS only) | | 4. **Connect database** | `/signup/connect` | Paste a PostgreSQL or MySQL URL, test it, save | | 5. **Done** | `/signup/success` | Confirmation with next steps | Each step validates before allowing the user to proceed. The database URL is tested against the live server before being saved. *** Region Selection [#region-selection] Region selection appears only when data residency is configured on the platform. Self-hosted deployments and platforms without residency configured skip this step automatically. In step 3, new users choose a data region for their workspace. This determines where all workspace data (conversations, audit logs, cached results) is physically stored. Available Regions [#available-regions] Atlas supports three regions at launch: | Region | ID | Compliance | | ---------------- | -------------- | ------------------------- | | **US East** | `us-east` | SOC 2 compliant | | **EU West** | `eu-west` | GDPR compliant | | **Asia Pacific** | `ap-southeast` | Regional data sovereignty | The platform's default region is pre-selected. Users can choose a different region based on their compliance requirements. Why It Matters [#why-it-matters] * **Data residency** — All workspace data stays in the selected region. This is critical for organizations subject to GDPR, CCPA, or other data sovereignty regulations * **Latency** — Choosing a region close to your users and database reduces round-trip time * **Permanence** — Region assignment is permanent at signup. To change regions later, workspace admins can request a migration from the [Data Residency](/guides/admin-console#data-residency) page in the admin console How It Works [#how-it-works-1] 1. The page loads available regions from `GET /api/v1/onboarding/regions` 2. 
If residency is not configured (`configured: false`), the step is skipped and the user proceeds directly to database connection 3. The user selects a region and clicks **Continue** 4. The selection is saved via `POST /api/v1/onboarding/assign-region` 5. The user proceeds to the database connection step *** Enabling Social Login [#enabling-social-login] On [app.useatlas.dev](https://app.useatlas.dev), Google, GitHub, and Microsoft login are pre-configured. This section is for self-hosted operators only. Social OAuth buttons appear automatically when the corresponding environment variables are set. Each provider requires both a client ID and client secret: | Provider | Environment Variables | | --------- | ------------------------------------------------ | | Google | `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET` | | GitHub | `GITHUB_CLIENT_ID`, `GITHUB_CLIENT_SECRET` | | Microsoft | `MICROSOFT_CLIENT_ID`, `MICROSOFT_CLIENT_SECRET` | ```bash # .env — enable Google and GitHub OAuth GOOGLE_CLIENT_ID=your-google-client-id GOOGLE_CLIENT_SECRET=your-google-client-secret GITHUB_CLIENT_ID=your-github-client-id GITHUB_CLIENT_SECRET=your-github-client-secret ``` The signup page checks which providers are available via `GET /api/v1/onboarding/social-providers` and renders OAuth buttons accordingly. If no social providers are configured, only email/password signup is shown. See [Social Providers](/guides/social-providers) for detailed OAuth setup instructions per provider. *** Database Connection [#database-connection] In step 4, the user pastes a database URL (`postgresql://` or `mysql://`). The wizard: 1. **Validates the URL scheme** — only PostgreSQL and MySQL are supported 2. **Tests connectivity** — creates a temporary connection, runs a health check, reports latency 3. **Persists on success** — encrypts the URL at rest and saves it scoped to the user's workspace Connection URLs are encrypted using `BETTER_AUTH_SECRET` or `ATLAS_ENCRYPTION_KEY` before storage. 
The URL is never stored in plaintext. The connection is registered with a default ID of `"default"`. If you need multiple datasources, add them later via the [Admin Console](/guides/admin-console#connections). Connection ID Rules [#connection-id-rules] Custom connection IDs (optional) must be: * 2–64 characters * Lowercase alphanumeric, hyphens, or underscores * Must start with a lowercase letter (`a-z`) * Must end with a lowercase letter or digit *** API Endpoints [#api-endpoints] The onboarding API is mounted at `/api/v1/onboarding`. All endpoints require managed auth mode. | Method | Path | Auth | Description | | ------ | ------------------- | ------- | --------------------------------------------------------------------------------------- | | `GET` | `/social-providers` | None | List enabled OAuth providers | | `GET` | `/regions` | Session | List available data regions (returns `configured`, `defaultRegion`, `availableRegions`) | | `POST` | `/assign-region` | Session | Assign workspace to a data region (body: `{ region }`) | | `POST` | `/test-connection` | Session | Test a database URL without saving | | `POST` | `/complete` | Session | Save connection and finalize workspace setup | Test Connection [#test-connection] ```bash curl -X POST https://your-atlas.example.com/api/v1/onboarding/test-connection \ -H "Content-Type: application/json" \ -H "Cookie: your-session-cookie" \ -d '{ "url": "postgresql://user:pass@host:5432/mydb" }' ``` Response: ```json { "status": "healthy", "latencyMs": 42, "dbType": "postgresql", "maskedUrl": "postgresql://user:****@host:5432/mydb" } ``` Complete Onboarding [#complete-onboarding] ```bash curl -X POST https://your-atlas.example.com/api/v1/onboarding/complete \ -H "Content-Type: application/json" \ -H "Cookie: your-session-cookie" \ -d '{ "url": "postgresql://user:pass@host:5432/mydb" }' ``` Response (201): ```json { "connectionId": "default", "dbType": "postgresql", "maskedUrl": "postgresql://user:****@host:5432/mydb" } ``` *** 
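The Connection ID Rules listed above collapse into a single pattern. An illustrative check (the server enforces its own validation):

```typescript
// Matches the connection ID rules: 2-64 characters, lowercase alphanumeric
// plus hyphens/underscores, starts with a-z, ends with a letter or digit.
const CONNECTION_ID = /^[a-z][a-z0-9_-]{0,62}[a-z0-9]$/;

function isValidConnectionId(id: string): boolean {
  return CONNECTION_ID.test(id);
}
```

The bounded middle group (`{0,62}`) plus the mandatory first and last characters yields the 2 to 64 character range.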
Configuration (Self-Hosted) [#configuration-self-hosted] On [app.useatlas.dev](https://app.useatlas.dev), auth and datasource configuration is handled automatically during signup. This section is for self-hosted operators only. ```typescript // atlas.config.ts export default defineConfig({ auth: "managed", datasources: { default: { url: process.env.ATLAS_DATASOURCE_URL! }, }, }); ``` For most self-serve deployments, the datasource URL comes from the onboarding flow rather than a static config file. The `atlas.config.ts` is primarily used for self-hosted deployments with pre-configured connections. *** After Signup [#after-signup] Once a user completes onboarding: 1. Their workspace is active with one datasource connection 2. The semantic layer whitelist cache is flushed, so new tables are queryable immediately 3. They can start chatting, or run `atlas init` to generate a semantic layer from the connected database For semantic layer setup, see [Semantic Layer](/getting-started/semantic-layer). *** See Also [#see-also] * [Authentication](/deployment/authentication) — Auth mode setup and configuration * [Social Providers](/guides/social-providers) — Detailed OAuth setup per provider * [Admin Console](/guides/admin-console) — Manage connections and users after signup * [Data Residency](/guides/admin-console#data-residency) — Manage region assignment and migration after signup * [Environment Variables](/reference/environment-variables) — Full variable reference --- # Guided Tour (/guides/guided-tour) The guided tour is an interactive overlay that walks new users through Atlas's key features. It highlights four areas of the interface — chat, notebook, admin console, and semantic layer — with tooltips explaining each one. *** How It Works [#how-it-works] The tour auto-starts the first time a user visits Atlas. Returning users who have already completed (or skipped) the tour are not shown it again. Completion state is tracked in two layers: 1. 
**Local storage** (`atlas-tour-completed`) — checked first for instant decisions without a network round-trip 2. **Server-side** (`user_onboarding` table) — persists across browsers and devices when managed auth is enabled If the server is unreachable, the tour falls back to local storage only. *** Tour Steps [#tour-steps] | Step | Title | Description | | ---- | ------------------- | -------------------------------------------------------------------------------------------------- | | 1 | Chat with your data | Ask questions in natural language. Atlas translates them to SQL and returns results | | 2 | Notebook | Build multi-step analyses with cells you can re-run, reorder, and export | | 3 | Admin console | Manage connections, users, roles, and monitor usage (**admin-only** — skipped for non-admin users) | | 4 | Semantic layer | Entity definitions, metrics, and glossary terms that help Atlas understand your data | Non-admin users see 3 steps (the admin console step is filtered out automatically). *** Keyboard Navigation [#keyboard-navigation] | Key | Action | | ----------------- | ------------------------- | | `→` (Arrow Right) | Next step | | `←` (Arrow Left) | Previous step | | `Escape` | Skip and dismiss the tour | Each step also has clickable **Back**, **Next** (or **Done** on the last step), and **Skip** buttons. *** Replaying the Tour [#replaying-the-tour] To re-trigger the tour after completing it, open the **help menu** (the `?` icon in the top navigation bar) and click **Replay guided tour**. This clears the completion state and restarts from step 1. *** API Endpoints [#api-endpoints] Three endpoints under `/api/v1/onboarding` provide programmatic control over tour state. All require managed auth (`ATLAS_AUTH_MODE=managed`) and an authenticated session. GET `/api/v1/onboarding/tour-status` [#get-apiv1onboardingtour-status] Check whether the current user has completed the tour. 
```json { "tourCompleted": false, "tourCompletedAt": null } ``` POST `/api/v1/onboarding/tour-complete` [#post-apiv1onboardingtour-complete] Mark the tour as completed. Idempotent — safe to call multiple times. ```json { "tourCompleted": true, "tourCompletedAt": "2026-03-22T12:00:00.000Z" } ``` POST `/api/v1/onboarding/tour-reset` [#post-apiv1onboardingtour-reset] Clear the completion flag so the tour can be replayed. ```json { "tourCompleted": false, "tourCompletedAt": null } ``` These endpoints return `404` when managed auth is not configured or when no internal database is available. The frontend gracefully falls back to local storage in this case. *** See Also [#see-also] * [Admin Console](/guides/admin-console) — The admin step highlighted during the tour * [Notebook View](/guides/notebook) — Cell-based analysis interface --- # Actions Framework (/guides/actions) The action framework lets the Atlas agent perform write operations -- sending emails, creating JIRA tickets, and more -- with configurable approval gates. Actions go through a request-approve-execute lifecycle that gives humans control over what the agent does. * Authentication enabled (any mode except `none`) * Internal database (`DATABASE_URL`) recommended for persistent action log (in-memory fallback available but lost on restart) * For email actions: `RESEND_API_KEY` * For JIRA actions: `JIRA_BASE_URL`, `JIRA_EMAIL`, `JIRA_API_TOKEN` Enable [#enable] ```bash ATLAS_ACTIONS_ENABLED=true ``` Actions require authentication (any mode except `none`) and an internal database (`DATABASE_URL`) for the action log. *** Approval Modes [#approval-modes] Each action has an approval mode that controls whether human approval is required before execution. 
| Mode | Behavior | Who can approve | | ------------ | --------------------------------------- | -------------------- | | `auto` | Execute immediately, no approval needed | N/A | | `manual` | Queue for approval | `analyst` or `admin` | | `admin-only` | Queue for approval, admin required | `admin` only | In `admin-only` mode, the user who requested the action cannot approve their own request (separation of duties). *** Action Lifecycle [#action-lifecycle] ``` Agent requests action → pending ↓ ┌────────────────┼────────────────┐ ↓ ↓ ↓ auto_approved approved denied ↓ ↓ executed executed ↓ ↓ success/failed/ success/failed/ timed_out timed_out ↓ ↓ rolled_back* rolled_back* * reversible actions only ``` * **auto** mode: `pending` → `auto_approved` → `executed` (or `failed` / `timed_out`) * **manual** / **admin-only** mode: `pending` → `approved` / `denied` → `executed` (or `failed` / `timed_out`) * **rollback** (reversible actions): `executed` / `auto_approved` → `rolled_back` When execution exceeds the configured timeout, the action transitions to `timed_out` instead of remaining in-flight. The timeout duration is logged in the audit trail. *** Rollback [#rollback] Reversible actions can be undone after execution. When an action plugin declares `reversible: true` and its execute handler returns `rollbackInfo`, the action can be rolled back via the API or the admin console. How Rollback Works [#how-rollback-works] 1. The action plugin declares `reversible: true` in its `PluginAction` definition 2. The execute handler returns a `rollbackInfo` object containing the method and parameters needed to undo the action 3. Atlas stores `rollbackInfo` in the action log alongside the execution result 4. An admin triggers rollback via the API or admin console 5. The action transitions to `rolled_back` status via compare-and-swap (CAS) 6. 
The rollback handler is dispatched using the stored `rollbackInfo` Status Transition [#status-transition] Rollback is only available from the `executed` or `auto_approved` statuses: ``` executed / auto_approved → rolled_back ``` Actions in any other status (`pending`, `denied`, `failed`, `timed_out`, or already `rolled_back`) cannot be rolled back. API [#api] ```bash # Rollback an executed action curl -X POST http://localhost:3001/api/v1/actions/<action-id>/rollback \ -H "Authorization: Bearer <token>" ``` **Request:** No body required. **Response (success):** ```json { "id": "action-uuid", "status": "rolled_back", "action_type": "jira:create", "target": "PROJ-123", "summary": "Created JIRA issue PROJ-123", "rollback_info": { "method": "transition", "params": { "issueKey": "PROJ-123" } }, "error": null } ``` **Response (rollback handler error):** ```json { "id": "action-uuid", "status": "rolled_back", "action_type": "jira:create", "target": "PROJ-123", "summary": "Created JIRA issue PROJ-123", "rollback_info": { "method": "transition", "params": { "issueKey": "PROJ-123" } }, "error": "JIRA API returned 403", "warning": "Rollback status updated but the rollback handler reported an error. The side-effect may not have been reversed." } ``` When the rollback handler fails, the status still transitions to `rolled_back`, but the response includes `warning` and `error` fields so operators know the side-effect may not have been reversed. Admin Console [#admin-console] Admins can roll back actions from the admin console at `/admin/actions`. Executed reversible actions show a **Rollback** button.
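The status gate described above — only `executed` or `auto_approved` actions may transition, enforced with a compare-and-swap so concurrent requests cannot double-fire — can be sketched as follows. The type and function names here are illustrative, not Atlas's internal API:

```typescript
// Illustrative sketch of the rollback status gate (hypothetical names).
type ActionStatus =
  | "pending" | "auto_approved" | "approved" | "denied"
  | "executed" | "failed" | "timed_out" | "rolled_back";

// Only these states may transition to rolled_back
const ROLLBACKABLE = new Set<ActionStatus>(["executed", "auto_approved"]);

interface ActionRecord { id: string; status: ActionStatus; }

// Compare-and-swap: re-check the status at transition time so two
// concurrent rollback requests cannot both dispatch the handler.
function tryMarkRolledBack(action: ActionRecord): boolean {
  if (!ROLLBACKABLE.has(action.status)) return false;
  action.status = "rolled_back";
  return true;
}
```

A second call on the same record returns `false`; the HTTP layer would surface that as a conflict rather than re-running the rollback handler.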
Error Cases [#error-cases] | Scenario | HTTP Status | Error Code | | ------------------------------------------------------- | ----------- | ---------------------------------------------- | | No internal database configured | 404 | `not_available` | | Invalid action ID format | 400 | `invalid_request` | | Action not found | 404 | `not_found` | | Action not reversible (no `rollback_info`) | 409 | `conflict` | | Action already rolled back or not in rollbackable state | 409 | `conflict` | | Insufficient role | 403 | `forbidden` | | Unexpected server error | 500 | `internal_error` | | Rollback handler failed | 200 | Response includes `warning` and `error` fields | Declaring Actions as Reversible [#declaring-actions-as-reversible] In your action plugin, set `reversible: true` and return `rollbackInfo` from the execute handler: ```typescript import { z } from "zod"; import { tool } from "@useatlas/plugin-sdk/ai"; import type { PluginAction } from "@useatlas/plugin-sdk"; const action: PluginAction = { name: "createJiraIssue", description: "Create a JIRA issue", tool: tool({ description: "Create a JIRA issue from analysis findings", inputSchema: z.object({ summary: z.string(), project: z.string() }), execute: async ({ summary, project }) => { const issue = await createIssue({ summary, project }); return { issueKey: issue.key, url: issue.url, // Return rollbackInfo so the action can be undone rollbackInfo: { method: "transition", params: { issueKey: issue.key }, }, }; }, }), actionType: "jira:create", reversible: true, // Enables the rollback button/API defaultApproval: "manual", requiredCredentials: ["JIRA_API_TOKEN"], }; ``` The `rollbackInfo.method` and `rollbackInfo.params` are stored in the action log and passed to the rollback handler when triggered. 
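How the stored `rollbackInfo` might drive dispatch can be sketched as a registry keyed by `method`. This is a hedged illustration — handler registration names and shapes are assumptions, and the real wiring lives inside action plugins:

```typescript
// Hypothetical sketch: dispatch a rollback using the rollbackInfo stored
// in the action log. Registry and handler names are illustrative.
interface RollbackInfo {
  method: string;
  params: Record<string, unknown>;
}

type RollbackHandler = (params: Record<string, unknown>) => Promise<void>;

const rollbackHandlers = new Map<string, RollbackHandler>();

// e.g. a JIRA plugin could register a "transition" handler that closes
// the created issue (best-effort, as the docs note).
rollbackHandlers.set("transition", async (params) => {
  // await jiraClient.transitionIssue(String(params.issueKey), "Closed");
});

async function dispatchRollback(info: RollbackInfo): Promise<{ error: string | null }> {
  const handler = rollbackHandlers.get(info.method);
  if (!handler) return { error: `no rollback handler for method "${info.method}"` };
  try {
    await handler(info.params);
    return { error: null };
  } catch (e) {
    // The action still transitions to rolled_back; the error is surfaced
    // to operators as a warning on the API response.
    return { error: e instanceof Error ? e.message : String(e) };
  }
}
```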
SDK [#sdk] ```typescript const result = await atlas.rollbackAction("action-uuid"); console.log(result.status); // "rolled_back" if (result.warning) { console.warn(result.warning); // Handler error — side-effect may persist } ``` See [SDK Reference](/reference/sdk#rollback) for the full method signature. *** Built-in Actions [#built-in-actions] Email (`email:send`) [#email-emailsend] Send email reports via [Resend](https://resend.com). **Default approval:** `admin-only` **Required credentials:** * `RESEND_API_KEY` -- Resend API token **Optional:** * `ATLAS_EMAIL_FROM` -- From address (default: `Atlas `) * `ATLAS_EMAIL_ALLOWED_DOMAINS` -- Comma-separated domain whitelist for recipients **Input:** * `to` -- Recipient email address(es) * `subject` -- Email subject line * `body` -- Email body (HTML) JIRA (`jira:create`) [#jira-jiracreate] Create JIRA issues from data insights. **Default approval:** `manual` **Required credentials:** * `JIRA_BASE_URL` -- JIRA instance URL (e.g., `https://myco.atlassian.net`) * `JIRA_EMAIL` -- Authentication email * `JIRA_API_TOKEN` -- API token **Optional:** * `JIRA_DEFAULT_PROJECT` -- Default project key when not specified **Input:** * `summary` -- Issue title (max 255 chars) * `description` -- Issue description * `project` -- Project key (optional, falls back to `JIRA_DEFAULT_PROJECT`) * `labels` -- Optional labels This action is **reversible** -- on rollback, Atlas transitions the created issue to "Closed" (best-effort, depends on JIRA workflow). 
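The recipient whitelist for the email action above (`ATLAS_EMAIL_ALLOWED_DOMAINS`, a comma-separated list of domains) might be applied like this. A minimal sketch only — the function name and exact matching semantics are assumptions, not Atlas source:

```typescript
// Sketch: check a recipient against a comma-separated domain whitelist
// like ATLAS_EMAIL_ALLOWED_DOMAINS. Name and semantics are illustrative.
function isRecipientAllowed(to: string, allowedDomains?: string): boolean {
  if (!allowedDomains) return true; // unset => no restriction
  const at = to.lastIndexOf("@");
  if (at < 0) return false; // not an email address
  const domain = to.slice(at + 1).toLowerCase();
  return allowedDomains
    .split(",")
    .map((d) => d.trim().toLowerCase())
    .includes(domain);
}
```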
*** Configuration [#configuration] Via Environment Variables [#via-environment-variables] ```bash ATLAS_ACTIONS_ENABLED=true ATLAS_ACTION_TIMEOUT=300000 # 5 minute execution timeout (ms) ``` Via atlas.config.ts [#via-atlasconfigts] Override approval modes, role requirements, and timeouts per action: ```typescript import { defineConfig } from "@atlas/api/lib/config"; export default defineConfig({ actions: { // Default approval mode and timeout for all actions unless overridden defaults: { approval: "manual", timeout: 300000, // 5 minute default (ms) }, // Per-action overrides — key is the action type identifier "email:send": { approval: "admin-only", // Only admins can approve email sends requiredRole: "admin", timeout: 30000, // 30 second timeout for email sends credentials: { RESEND_API_KEY: { env: "RESEND_API_KEY" }, // Validated at startup }, }, "jira:create": { approval: "manual", // Any analyst or admin can approve requiredRole: "analyst", }, }, }); ``` **Timeout resolution:** With a config file, per-action `timeout` overrides `defaults.timeout`. The `ATLAS_ACTION_TIMEOUT` env var is only used when no config file is present (it populates `defaults.timeout` automatically). When no timeout is configured, actions run without a time limit. See [Configuration](/reference/config) for the full config schema. *** Approving and Denying Actions [#approving-and-denying-actions] Web UI [#web-ui] Pending actions appear in the chat UI with **Approve** and **Deny** buttons. Admins can also manage actions from the [Admin Console](/guides/admin-console) at `/admin/actions`. 
API [#api-1] ```bash # Approve a pending action curl -X POST http://localhost:3001/api/v1/actions/<action-id>/approve \ -H "Authorization: Bearer <token>" # Deny with an optional reason (recorded in the action log) curl -X POST http://localhost:3001/api/v1/actions/<action-id>/deny \ -H "Authorization: Bearer <token>" \ -H "Content-Type: application/json" \ -d '{"reason": "Not relevant"}' ``` Slack [#slack] When using the [Slack integration](/guides/slack), pending actions show as ephemeral messages with **Approve** and **Deny** buttons visible only to the requesting user. *** API Endpoints [#api-endpoints] | Method | Path | Description | | ------ | ------------------------------ | ----------------------------------------------------- | | `GET` | `/api/v1/actions` | List actions (filter by `status`, default: `pending`) | | `GET` | `/api/v1/actions/:id` | Get action details | | `POST` | `/api/v1/actions/:id/approve` | Approve a pending action | | `POST` | `/api/v1/actions/:id/deny` | Deny a pending action | | `POST` | `/api/v1/actions/:id/rollback` | Roll back an executed reversible action | *** Role Requirements [#role-requirements] | Role | Can request actions | Can approve `manual` | Can approve `admin-only` | | --------- | ------------------- | -------------------- | ------------------------ | | `viewer` | Yes | No | No | | `analyst` | Yes | Yes | No | | `admin` | Yes | Yes | Yes | See [Authentication](/deployment/authentication#roles) for role configuration. *** Building Custom Actions [#building-custom-actions] Action plugins follow the same pattern as built-in actions. See [Plugin Authoring Guide](/plugins/authoring-guide) for the `action` plugin type, which registers custom tools with approval gates and credential validation. *** Troubleshooting [#troubleshooting] **Actions not appearing** -- Verify `ATLAS_ACTIONS_ENABLED=true` and that authentication is configured (any mode except `none`). Without `DATABASE_URL`, actions still work but use in-memory storage.
**Approval stuck** -- Check that the approving user has the required role. In `admin-only` mode, the requester cannot self-approve. See [Troubleshooting](/guides/troubleshooting#action-framework) for more diagnostic steps. *** See Also [#see-also] * [Plugin Authoring Guide](/plugins/authoring-guide) — Building custom action plugins * [Configuration](/reference/config#actions) — Declarative action configuration in `atlas.config.ts` * [Authentication](/deployment/authentication#roles) — Role-based approval (actions require auth) * [Admin Console](/guides/admin-console) — Monitor and manage actions from the web UI * [Environment Variables](/reference/environment-variables#actions) — `ATLAS_ACTIONS_ENABLED` and per-action settings --- # Multi-Tenancy (/guides/multi-tenancy) Atlas supports multi-tenancy through [Better Auth organizations](https://www.better-auth.com/docs/plugins/organization). Each organization gets isolated semantic layers, connection pools, and query caches — preventing data leakage and noisy-neighbor issues between tenants. On [app.useatlas.dev](https://app.useatlas.dev), multi-tenancy is built in. Your workspace is an organization — manage members and roles from **Admin > Organization**. The infrastructure sections below (per-org pooling, scaling config) are handled automatically. 
* [Managed auth](/deployment/authentication#managed-auth) enabled (`BETTER_AUTH_SECRET` set) * An internal database (`DATABASE_URL`) — organizations and their data are stored here * At least one analytics datasource (`ATLAS_DATASOURCE_URL`) *** Overview [#overview] In a multi-tenant Atlas deployment, each organization is a fully isolated tenant: | Resource | Isolation | | -------------------- | ---------------------------------------------------------------------------------------------- | | **Semantic layer** | Per-org entity YAMLs stored in the internal DB and synced to `semantic/.orgs/{orgId}/` on disk | | **Connection pools** | Optional per-org pool instances (separate connection limits per tenant) | | **Query cache** | Cache keys include `orgId` — same SQL in different orgs produces different cache entries | | **Conversations** | Scoped to the org via `org_id` column in the conversations table | | **Audit log** | Queries are attributed to both user and org | Without organizations enabled, Atlas operates in single-tenant mode — all users share the same semantic layer, connections, and cache namespace. *** Enabling organizations [#enabling-organizations] Organizations are automatically available when managed auth is configured. 
The Better Auth `organization` plugin is included in Atlas's auth server with a three-tier role hierarchy: | Resource | member | admin | owner | | ---------------- | ------------ | ---------------------------- | ---------------------------- | | **organization** | — | — | update, delete | | **member** | — | create, read, update, delete | create, read, update, delete | | **connection** | read | create, read, update, delete | create, read, update, delete | | **conversation** | create, read | create, read, delete | create, read, delete | | **semantic** | read | read, update | read, update | | **settings** | read | read, update | read, update | Create the first organization [#create-the-first-organization] Use the Better Auth API to create an organization. The creating user becomes the owner: ```bash # Create an organization (authenticated as a signed-in user) curl -X POST https://your-atlas.com/api/auth/organization/create \ -H "Content-Type: application/json" \ -H "Cookie: better-auth.session_token=<session-token>" \ -d '{ "name": "Acme Corp", "slug": "acme-corp" }' ``` Or via the Better Auth React client: ```typescript import { authClient } from "@/lib/auth/client"; await authClient.organization.create({ name: "Acme Corp", slug: "acme-corp", }); ``` Set the active organization [#set-the-active-organization] A user can belong to multiple organizations. The **active organization** determines which semantic layer, connection pool, and cache namespace are used for their queries: ```typescript // Switch to an organization await authClient.organization.setActive({ organizationId: "org-id-here" }); ``` The `activeOrganizationId` is stored in the user's session. All subsequent API requests use this org context until the user switches. *** Org-scoped semantic layers [#org-scoped-semantic-layers] Each organization has its own semantic layer stored in the internal database (`semantic_entities` table, auto-created by Atlas migrations).
The DB is the source of truth; Atlas syncs entities to disk at `semantic/.orgs/{orgId}/` for the explore tool (the agent reads files via `ls`, `cat`, and `grep`). How it works [#how-it-works] ``` semantic/ ├── entities/ # Default (no-org) entities │ ├── users.yml │ └── orders.yml └── .orgs/ ├── org-abc/ # Org "abc" entities (synced from DB) │ └── entities/ │ ├── users.yml │ └── custom_metrics.yml └── org-xyz/ # Org "xyz" entities └── entities/ └── events.yml ``` When an `activeOrganizationId` is present in the session, the agent reads from the org-specific directory instead of the top-level `semantic/entities/`. Admin API for org entities [#admin-api-for-org-entities] Manage org-scoped entities via the admin API. The session must have an active organization set (via `organization.setActive()`) before calling these endpoints: ```bash # List entities for the active org curl https://your-atlas.com/api/v1/admin/semantic/org/entities \ -H "Cookie: better-auth.session_token=<session-token>" # Create or update an entity curl -X PUT https://your-atlas.com/api/v1/admin/semantic/org/entities/users \ -H "Content-Type: application/json" \ -H "Cookie: better-auth.session_token=<session-token>" \ -d '{ "yamlContent": "table: users\ndescription: Application users\ndimensions:\n - name: id\n sql: id\n type: string" }' # Delete an entity curl -X DELETE https://your-atlas.com/api/v1/admin/semantic/org/entities/users \ -H "Cookie: better-auth.session_token=<session-token>" ``` Entity changes are written to the DB first, then synced to disk atomically (write-to-temp + rename to prevent partial reads by the explore tool). Dual-write sync [#dual-write-sync] The sync layer (`semantic-sync.ts`) maintains two directions: 1. **DB → disk** — Admin API entity CRUD writes to the DB, then syncs to `semantic/.orgs/{orgId}/` 2.
**Disk → DB** — `atlas init --org` writes to disk, then imports into the DB If the on-disk directory is empty on first access (e.g., after a container restart), Atlas rebuilds it from the DB automatically before building the semantic index. *** Org-scoped connections (Self-Hosted) [#org-scoped-connections-self-hosted] On [app.useatlas.dev](https://app.useatlas.dev), per-org connection pooling is managed automatically. This section is for self-hosted operators only. By default, all organizations share the same database connection pools. For SaaS deployments where tenant isolation is critical, enable **per-org pool isolation** — each org gets its own connection pool instances with independent limits. Configuration [#configuration] ```typescript // atlas.config.ts import { defineConfig } from "@atlas/api/lib/config"; export default defineConfig({ datasources: { default: { url: process.env.ATLAS_DATASOURCE_URL! }, }, pool: { perOrg: { maxConnections: 5, // Connections per org per datasource (default: 5) idleTimeoutMs: 30000, // Idle connection timeout (default: 30000) maxOrgs: 50, // Max concurrent org pools before LRU eviction (default: 50) warmupProbes: 2, // Health probes on pool creation (default: 2) drainThreshold: 5, // Consecutive failures before auto-drain (default: 5) }, }, // Top-level — hard cap across all pools (base + org-scoped) maxTotalConnections: 100, // Default: 100 }); ``` How it works [#how-it-works-1] When `pool.perOrg` is configured and a request has an `activeOrganizationId`: 1. The `ConnectionRegistry` creates an isolated pool for that org+datasource pair on first access 2. The pool uses the same database URL as the base connection but with org-specific limits 3. Pools are lazily created and cached — subsequent requests from the same org reuse the pool 4. When `maxOrgs` concurrent org pools exist, the **least recently used** org's pools are evicted 5. 
A hard capacity check prevents exceeding `maxTotalConnections` across all pools. Without `pool.perOrg`, all orgs share the base connection pool. This is fine for small deployments but risks noisy-neighbor issues at scale — one org running expensive queries can exhaust the shared pool. Pool monitoring [#pool-monitoring] Monitor org pool health via the admin API: ```bash # Get org pool metrics curl https://your-atlas.com/api/v1/admin/connections \ -H "Cookie: better-auth.session_token=<session-token>" ``` The admin console's **Connections** page shows pool stats including active/idle connections, query counts, error rates, and drain history for both base and org-scoped pools. *** Org-scoped caching [#org-scoped-caching] Query result caching is automatically org-aware. Cache keys are computed from: * The normalized SQL query * The connection ID * The `orgId` (from the session's `activeOrganizationId`) * The user's claims (for RLS differentiation) This means the same SQL query executed in two different organizations produces **different cache entries** — there is no cross-org cache leakage. Per-org cache flush [#per-org-cache-flush] The admin cache flush endpoint clears all cached entries. There is no per-org flush API — flushing is global. In practice this is rarely needed, since entries expire naturally per the configured TTL.
```bash # Flush all cached entries (admin only) curl -X POST https://your-atlas.com/api/v1/admin/cache/flush \ -H "Cookie: better-auth.session_token=<session-token>" ``` Configure cache behavior in `atlas.config.ts`: ```typescript export default defineConfig({ cache: { enabled: true, // Default: true ttl: 300_000, // Milliseconds (default: 300000 = 5 minutes) maxSize: 1000, // Max entries (default: 1000) }, }); ``` *** Member management [#member-management] Invite members [#invite-members] Org admins and owners can invite users by email: ```typescript await authClient.organization.inviteMember({ email: "analyst@example.com", role: "member", // "member" | "admin" | "owner" organizationId: "org-id", }); ``` Email delivery for invitations is not configured by default. Atlas logs a warning with the invite details — share the invite link manually, or configure an email plugin for automatic delivery. Manage roles [#manage-roles] ```typescript // Update a member's role await authClient.organization.updateMemberRole({ memberId: "member-id", role: "admin", organizationId: "org-id", }); // Remove a member await authClient.organization.removeMember({ memberId: "member-id", organizationId: "org-id", }); ``` Platform admin view [#platform-admin-view] Platform admins (users with the `admin` role at the application level) can manage all organizations via the admin API: ```bash # List all organizations with member counts curl https://your-atlas.com/api/v1/admin/organizations \ -H "Cookie: better-auth.session_token=<session-token>" # Get org details with members and invitations curl https://your-atlas.com/api/v1/admin/organizations/org-id \ -H "Cookie: better-auth.session_token=<session-token>" # Get org stats (conversations, members, queries) curl https://your-atlas.com/api/v1/admin/organizations/org-id/stats \ -H "Cookie: better-auth.session_token=<session-token>" ``` *** Org switcher UI [#org-switcher-ui] When a user belongs to multiple organizations, the Atlas web UI displays an **org switcher** in the sidebar. The switcher: 1.
Lists all organizations the user belongs to 2. Shows the currently active organization with a check mark 3. Switches the active org on click (reloads the page to pick up the new org context) 4. Hides automatically when the user belongs to only one (or zero) organizations The org switcher component is at `packages/web/src/ui/components/org-switcher.tsx`. It uses the Better Auth React client to fetch orgs and switch the active org. *** `atlas init` with organizations (Self-Hosted) [#atlas-init-with-organizations-self-hosted] On [app.useatlas.dev](https://app.useatlas.dev), semantic layer initialization is handled through the admin console. This section is for self-hosted operators using the CLI. The CLI supports org-scoped initialization via the `--org` flag: ```bash # Profile the datasource and write to org-specific directory + import to DB bun run atlas -- init --org org-abc ``` This performs a dual-write: 1. Writes entity YAMLs to `semantic/.orgs/org-abc/entities/` 2. Imports the entities into the `semantic_entities` DB table for that org You can also use the `ATLAS_ORG_ID` environment variable instead of the flag: ```bash ATLAS_ORG_ID=org-abc bun run atlas -- init ``` To write entities to disk without importing to the DB (useful for reviewing before committing), pass `--no-import`: ```bash bun run atlas -- init --org org-abc --no-import ``` Without `--org`, `atlas init` writes to the top-level `semantic/entities/` directory (single-tenant mode). *** Troubleshooting [#troubleshooting] "No internal database configured" when creating orgs [#no-internal-database-configured-when-creating-orgs] **Cause:** Organizations require `DATABASE_URL` to be set. The Better Auth organization plugin stores org data in the internal Postgres database. 
**Fix:** Set `DATABASE_URL` to a PostgreSQL connection string and restart: ```bash DATABASE_URL=postgresql://user:pass@host:5432/atlas ``` User sees empty semantic layer after switching orgs [#user-sees-empty-semantic-layer-after-switching-orgs] **Cause:** The new org doesn't have any entities yet, or the disk sync hasn't completed. **Fix:** 1. Check if entities exist for the org: `curl /api/v1/admin/semantic/org/entities` 2. If empty, run `atlas init --org <org-id>` to profile the datasource for that org 3. If entities exist in DB but the agent can't find them, the disk sync may have failed — check logs for `semantic-sync` errors. Atlas retries the sync on next access PoolCapacityExceededError [#poolcapacityexceedederror] **Cause:** Creating a new org pool would exceed `maxTotalConnections`. This happens when many orgs are active simultaneously and the total pool slots (maxOrgs × maxConnections × datasources) exceed the hard cap. **Fix:** Adjust pool configuration: ```typescript pool: { perOrg: { maxConnections: 3, // Reduce per-org connections maxOrgs: 30, // Reduce max concurrent orgs }, }, maxTotalConnections: 200, // Increase total cap ``` LRU eviction automatically frees pools for inactive orgs, but under sustained load from many concurrent orgs, you may need to increase the cap or reduce per-org limits. Conversations scoped to the active org [#conversations-scoped-to-the-active-org] **This is expected behavior.** Conversations are scoped to the org via the `org_id` column. All members of an org share the same conversation history. Switching orgs shows a different set of conversations. Cache not isolated between orgs [#cache-not-isolated-between-orgs] **Cause:** This shouldn't happen — cache keys include the `orgId` by design. If you observe cross-org cache hits, check that the session's `activeOrganizationId` is correctly set. **Fix:** Verify the active org is set correctly: 1. Check the session: the `activeOrganizationId` field should be present 2.
Ensure the user called `organization.setActive()` after switching orgs 3. If using the API directly, confirm the session token belongs to a user with an active org *** See Also [#see-also] * [Authentication](/deployment/authentication#managed-auth) — Set up managed auth (required for organizations) * [Admin Console](/guides/admin-console) — Manage organizations, members, and connections via the web UI * [Multi-Datasource Routing](/deployment/multi-datasource) — Configure multiple databases per deployment * [Configuration Reference](/reference/config) — All `atlas.config.ts` fields including `pool.perOrg` * [Troubleshooting](/guides/troubleshooting) — General diagnostic steps --- # MCP Server (/guides/mcp) Atlas exposes a [Model Context Protocol](https://modelcontextprotocol.io/) (MCP) server that gives any MCP-compatible client access to your semantic layer and validated SQL execution. * Atlas project set up (`bun install`) * `ATLAS_DATASOURCE_URL` pointing to your analytics database * An MCP-compatible client (Claude Desktop, Cursor, or any stdio/SSE client) * LLM provider API key configured in the **client** (the MCP server itself does not call an LLM) Quick Start [#quick-start] ```bash bun run mcp # Start MCP server on stdio bun run dev:mcp # Start with hot reload bun run atlas -- mcp # Same as bun run mcp ``` *** Claude Desktop [#claude-desktop] Add Atlas to your Claude Desktop configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows): ```jsonc { "mcpServers": { "atlas": { "command": "bun", "args": ["run", "mcp"], "cwd": "/path/to/your/atlas/project", // Must point to your Atlas project root "env": { "ATLAS_DATASOURCE_URL": "postgresql://user:pass@host:5432/db", "ATLAS_PROVIDER": "anthropic", "ANTHROPIC_API_KEY": "sk-ant-..." 
// Your LLM provider API key } } } } ``` *** Cursor [#cursor] In Cursor settings, add an MCP server with: * **Command:** `bun` * **Args:** `["run", "mcp"]` * **Working directory:** Your Atlas project root *** Other MCP Clients [#other-mcp-clients] Any client that supports the stdio transport can use Atlas. The server reads JSON-RPC on stdin and writes responses to stdout. Logs go to stderr. *** SSE Transport [#sse-transport] For remote or containerized setups, use the SSE (Server-Sent Events) transport: ```bash # Start SSE transport on default port 8080 bun ./packages/mcp/bin/serve.ts --transport sse # Custom port for SSE transport bun ./packages/mcp/bin/serve.ts --transport sse --port 9090 # Restrict CORS to a specific origin (default: *) bun ./packages/mcp/bin/serve.ts --transport sse --cors-origin "https://example.com" ``` | Flag | Default | Description | | --------------- | ---------------------------- | ---------------------- | | `--transport` | `stdio` | `stdio` or `sse` | | `--port` | `8080` | Port for SSE transport | | `--cors-origin` | `*` (or `ATLAS_CORS_ORIGIN`) | CORS allowed origin | The SSE endpoint is available at `http://localhost:8080/mcp`. *** Available Tools [#available-tools] The MCP server exposes two tools: explore [#explore] Read files from the semantic layer directory. Accepts bash commands scoped to the `semantic/` directory. ``` command: "cat catalog.yml" command: "grep -r revenue entities/" command: "ls entities/" ``` This is the same `explore` tool the agent uses -- read-only, path-traversal protected. executeSQL [#executesql] Execute a validated SQL query against the analytics datasource. ``` sql: "SELECT COUNT(*) FROM orders" explanation: "Count total orders" connectionId: "warehouse" (optional) ``` All SQL validation applies: SELECT-only, table whitelist, auto-LIMIT, statement timeout, and audit logging. See [SQL Validation Pipeline](/security/sql-validation). 
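For intuition, two of those checks (SELECT-only enforcement and auto-LIMIT) can be approximated in a few lines. This is a toy sketch only — Atlas's actual pipeline is the seven-layer validator, which parses SQL rather than pattern-matching it:

```typescript
// Toy sketch of SELECT-only enforcement plus auto-LIMIT. NOT Atlas's
// validator — the real pipeline parses the statement; this just illustrates
// the two behaviors named above.
function guardSql(sql: string, defaultLimit = 1000): string {
  const stmt = sql.trim().replace(/;+\s*$/, ""); // drop trailing semicolons
  if (!/^select\b/i.test(stmt)) {
    throw new Error("Only SELECT statements are allowed");
  }
  // Append a LIMIT when the query doesn't already end with one
  return /\blimit\s+\d+\s*$/i.test(stmt) ? stmt : `${stmt} LIMIT ${defaultLimit}`;
}
```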
*** Available Resources [#available-resources] The MCP server exposes the semantic layer as read-only resources: | URI | Description | | ---------------------------------- | ----------------------------------------------- | | `atlas://semantic/catalog` | Data catalog (`catalog.yml`) | | `atlas://semantic/glossary` | Business glossary (`glossary.yml`) | | `atlas://semantic/entities/{name}` | Entity schema (e.g., `entities/orders.yml`) | | `atlas://semantic/metrics/{name}` | Metric definitions (e.g., `metrics/orders.yml`) | Entity and metric resources are listable -- clients can discover all available files. *** Configuration [#configuration] The MCP server uses the same configuration as the main API server: * `atlas.config.ts` (if present) * Environment variables: `ATLAS_DATASOURCE_URL`, `ATLAS_PROVIDER`, provider API keys, etc. See [Environment Variables](/reference/environment-variables) for the full reference. *** Programmatic Usage [#programmatic-usage] Create an MCP server instance in your own code: ```typescript import { createAtlasMcpServer } from "@atlas/mcp/server"; // Creates a fully configured MCP server with tools and resources registered const server = await createAtlasMcpServer(); // Connect to any transport: server.connect(stdioTransport) or server.connect(sseTransport) ``` The server factory initializes configuration, registers tools and resources, and returns a ready-to-use `McpServer` instance. Connect it to any transport (stdio, SSE, or custom). *** Troubleshooting [#troubleshooting] **"ATLAS\_DATASOURCE\_URL is not set"** -- The MCP server requires the same environment variables as the main API. Set them in your MCP client config's `env` block or ensure they're available in the shell. **Tools not appearing in Claude Desktop** -- Restart Claude Desktop after changing `claude_desktop_config.json`. Check the `cwd` path points to your Atlas project root (where `package.json` and `semantic/` live). 
**SSE connection refused** -- The MCP server defaults to stdio, not SSE. Verify you started with `--transport sse` (e.g., `bun packages/mcp/bin/serve.ts --transport sse`). Check the port is not already in use (default 8080) and firewall rules allow the connection. See [Troubleshooting](/guides/troubleshooting) for general diagnostic steps. *** Atlas MCP vs Raw Database MCP [#atlas-mcp-vs-raw-database-mcp] Wondering why you'd use Atlas's MCP server instead of connecting Claude Desktop directly to your database with a tool like DBHub? Atlas's MCP server routes every query through the [SQL validation pipeline](/security/sql-validation) and gives the AI your semantic layer for context — not just raw schema metadata. See [Atlas vs Raw MCP](/comparisons/raw-mcp) for a full comparison. *** See Also [#see-also] * [CLI Reference](/reference/cli#mcp) — `atlas mcp` command flags and transport options * [Environment Variables](/reference/environment-variables) — Variables used by the MCP server * [SQL Validation Pipeline](/security/sql-validation) — How queries from MCP clients are validated * [Sandbox Architecture](/architecture/sandbox) — Design doc for code execution isolation across platforms * [Troubleshooting](/guides/troubleshooting) — General diagnostic steps --- # SCIM Directory Sync (/guides/scim) Atlas supports SCIM 2.0 (RFC 7643/7644) for automated user provisioning from enterprise identity providers like Okta, Azure AD (Entra ID), and OneLogin. When users are added, updated, or deactivated in your corporate directory, changes are automatically synced to Atlas. SCIM directory sync is available on [app.useatlas.dev](https://app.useatlas.dev) Enterprise plans. Self-hosted deployments do not include enterprise features. 
* [Managed auth](/deployment/authentication#managed-auth) enabled * Internal database configured (`DATABASE_URL`) * Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) * [SSO configured](/guides/enterprise-sso) (recommended but not required) * [Custom roles](/guides/custom-roles) configured (for group mapping) *** How It Works [#how-it-works] 1. An admin generates a SCIM bearer token via the Atlas API 2. The token is configured in your identity provider's SCIM provisioning settings 3. When users change in the IdP (create, update, deactivate), the IdP calls Atlas SCIM endpoints 4. Atlas automatically creates, updates, or suspends user accounts 5. SCIM groups from the IdP can be mapped to Atlas custom roles The SCIM 2.0 endpoints are served by the Better Auth SCIM plugin at `/api/auth/scim/v2/*`. *** Setup [#setup] 1\. Generate a SCIM Token [#1-generate-a-scim-token] Generate a SCIM bearer token for your organization. The token is used by your IdP to authenticate SCIM requests. ```bash curl -X POST https://your-atlas.example.com/api/auth/scim/generate-token \ -H "Content-Type: application/json" \ -H "Authorization: Bearer <admin-token>" \ -d '{ "providerId": "okta-prod", "organizationId": "org_abc123" }' ``` The response includes the SCIM token. **Store it securely** — it is only shown once. 2\. Configure Your IdP [#2-configure-your-idp] Point your IdP's SCIM provisioning to your Atlas instance: | Setting | Value | | --------------------- | ------------------------------------------------- | | **SCIM Base URL** | `https://your-atlas.example.com/api/auth/scim/v2` | | **Authentication** | Bearer Token (use the token from step 1) | | **Unique identifier** | `userName` (maps to email) | 3\. Configure Group Mappings (Optional) [#3-configure-group-mappings-optional] Map SCIM groups from your IdP to Atlas custom roles so provisioned users get the correct permissions.
```bash curl -X POST https://your-atlas.example.com/api/v1/admin/scim/group-mappings \ -H "Content-Type: application/json" \ -H "Authorization: Bearer <admin-token>" \ -d '{ "scimGroupName": "Engineers", "roleName": "analyst" }' ``` *** SCIM 2.0 Endpoints [#scim-20-endpoints] The following RFC 7644 endpoints are available at `/api/auth/scim/v2`: Discovery [#discovery] | Method | Path | Description | | ------ | ------------------------ | -------------------------- | | `GET` | `/ServiceProviderConfig` | SCIM server capabilities | | `GET` | `/Schemas` | Supported SCIM schemas | | `GET` | `/Schemas/:id` | Specific schema definition | | `GET` | `/ResourceTypes` | Supported resource types | User Management [#user-management] | Method | Path | Description | | -------- | ------------ | ------------------------------------ | | `GET` | `/Users` | List users (supports SCIM filtering) | | `GET` | `/Users/:id` | Get a specific user | | `POST` | `/Users` | Create (provision) a user | | `PUT` | `/Users/:id` | Replace a user | | `PATCH` | `/Users/:id` | Partially update a user | | `DELETE` | `/Users/:id` | Deactivate a user | Token Management [#token-management] | Method | Path | Description | | ------ | ------------------------------------------- | ----------------------------- | | `POST` | `/api/auth/scim/generate-token` | Generate SCIM bearer token | | `GET` | `/api/auth/scim/list-provider-connections` | List SCIM connections | | `GET` | `/api/auth/scim/get-provider-connection` | Get connection by provider ID | | `POST` | `/api/auth/scim/delete-provider-connection` | Delete a connection | *** Admin API Endpoints [#admin-api-endpoints] Administrative endpoints for managing SCIM connections and group mappings. All require the admin role and an active Enterprise license. Mounted at `/api/v1/admin/scim`.
| Method | Path | Description | | -------- | --------------------- | -------------------------------- | | `GET` | `/` | List connections and sync status | | `DELETE` | `/connections/:id` | Revoke a SCIM connection | | `GET` | `/group-mappings` | List group → role mappings | | `POST` | `/group-mappings` | Create a group → role mapping | | `DELETE` | `/group-mappings/:id` | Delete a group → role mapping | Error Responses [#error-responses] | Status | Code | When | | ------ | --------------------- | --------------------------------- | | 400 | `validation` | Invalid group name or role name | | 403 | `enterprise_required` | Enterprise license not active | | 404 | `not_found` | Connection/mapping/role not found | | 409 | `conflict` | Duplicate group mapping | *** User Attribute Mapping [#user-attribute-mapping] The Better Auth SCIM plugin maps IdP user attributes to Atlas: | SCIM Attribute | Atlas Field | | ------------------------------------ | ---------------------------------- | | `userName` | User email (primary identifier) | | `name.givenName` + `name.familyName` | Display name | | `emails[primary]` | Email address | | `active` | Account status (false = suspended) | | `externalId` | Linked account ID in Better Auth | User Deactivation [#user-deactivation] When a user is deactivated in your IdP (`active: false`), the SCIM PATCH/PUT updates the user's status in Atlas. The user's account is suspended but **not deleted** — this preserves the audit trail and allows reactivation. *** IdP-Specific Setup [#idp-specific-setup] Okta [#okta] 1. In Okta Admin Console, go to **Applications → Add Application** 2. Search for "SCIM 2.0 Test App (Header Auth)" or create a custom app 3. Under **Provisioning → Integration**, set: * SCIM connector base URL: `https://your-atlas.example.com/api/auth/scim/v2` * Unique identifier field: `userName` * Authentication Mode: HTTP Header * Authorization: `Bearer <scim-token>` 4.
Enable **Push New Users**, **Push Profile Updates**, and **Push Groups** Azure AD (Entra ID) [#azure-ad-entra-id] 1. In Azure Portal, go to **Enterprise Applications → New Application** 2. Create a non-gallery application 3. Under **Provisioning**, set mode to **Automatic** 4. Set: * Tenant URL: `https://your-atlas.example.com/api/auth/scim/v2` * Secret Token: `<scim-token>` 5. Test the connection, then enable provisioning OneLogin [#onelogin] 1. In OneLogin Admin, go to **Applications → Add App** 2. Search for "SCIM Provisioner with SAML (SCIM v2 Enterprise)" 3. Under **Configuration**, set: * SCIM Base URL: `https://your-atlas.example.com/api/auth/scim/v2` * SCIM Bearer Token: `<scim-token>` 4. Under **Provisioning**, enable Create, Update, and Deactivate *** Admin Console [#admin-console] The SCIM settings page in the Admin Console (`/admin/scim`) provides: * **Sync Status** — Active connections, provisioned user count, last sync time * **SCIM Connections** — View and revoke active IdP connections * **Group Mappings** — Create and manage SCIM group → Atlas role mappings *** Troubleshooting [#troubleshooting] Token generation fails [#token-generation-fails] Ensure the authenticated user has the `admin` role. Only admins can generate SCIM tokens. IdP cannot connect [#idp-cannot-connect] 1. Verify the SCIM base URL ends with `/api/auth/scim/v2` (not `/scim/v2`) 2. Check that the bearer token was copied correctly (no leading/trailing whitespace) 3. Ensure the Atlas API is accessible from your IdP's network Users not being provisioned [#users-not-being-provisioned] 1. Check that the SCIM connection is active in `/admin/scim` 2. Verify the IdP is sending `userName` as the email address 3. Check Atlas server logs for SCIM-related errors Group mappings not applied [#group-mappings-not-applied] 1. Ensure custom roles exist in `/admin/roles` before creating mappings 2. Verify the SCIM group name in the mapping matches the IdP exactly (case-sensitive) 3.
Group mapping applies at provisioning time — existing users need manual role reassignment *** See Also [#see-also] * [Enterprise SSO](/guides/enterprise-sso) — Configure SAML/OIDC identity providers * [Custom Roles](/guides/custom-roles) — Define granular permission-based roles * [IP Allowlisting](/guides/ip-allowlisting) — Restrict access by IP range * [Admin Console](/guides/admin-console) — Manage users and workspace settings --- # Demo Mode (/guides/demo-mode) Demo mode is for operators running their own Atlas instance. On [app.useatlas.dev](https://app.useatlas.dev), contact the Atlas team to discuss demo and trial options for your organization. Demo mode lets you expose a public, email-gated preview of Atlas at `/demo`. Visitors enter their email to start a sandboxed chat session — no account creation required. It's designed for lead capture and product evaluation with built-in rate limiting and reduced agent capabilities. * `ATLAS_DEMO_ENABLED=true` * `BETTER_AUTH_SECRET` set (used to sign demo tokens) * A datasource configured (`ATLAS_DATASOURCE_URL`) *** How It Works [#how-it-works] 1. Visitor navigates to `/demo` and enters their email address 2. Atlas validates the email and issues an HMAC-SHA256 signed token (24-hour TTL) 3. The visitor chats with the agent using the demo token for authentication 4. Sessions are rate-limited and capped at fewer agent steps than production Demo sessions use a separate auth path from the main application. The demo token is signed with a key derived from `BETTER_AUTH_SECRET` plus a `:demo` suffix — it does not grant access to any authenticated endpoints. Lead Capture [#lead-capture] Every demo start captures: * Email address (used as the demo user identifier) * IP address and user agent * Session count and whether the user is returning User IDs are deterministic hashes (`demo:<hash>`) — the raw email is stored only in the lead capture record, not in conversation metadata.
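The token flow described above can be sketched in a few lines of Node.js. This is an illustration of the technique only, not Atlas's internal format: the payload layout, base64url encoding, and hash truncation below are assumptions.

```typescript
import { createHmac, createHash } from "node:crypto";

const DEMO_TTL_MS = 24 * 60 * 60 * 1000; // 24-hour TTL

// Sign a demo token with a key derived from the main secret plus ":demo",
// so demo tokens can never validate against the main auth secret.
function signDemoToken(secret: string, email: string, now = Date.now()): string {
  const payload = Buffer.from(JSON.stringify({ email, exp: now + DEMO_TTL_MS })).toString("base64url");
  const sig = createHmac("sha256", `${secret}:demo`).update(payload).digest("base64url");
  return `${payload}.${sig}`;
}

function verifyDemoToken(secret: string, token: string, now = Date.now()): { email: string } | null {
  const [payload, sig] = token.split(".");
  const expected = createHmac("sha256", `${secret}:demo`).update(payload).digest("base64url");
  if (sig !== expected) return null; // tampered, or signed with a different secret
  const { email, exp } = JSON.parse(Buffer.from(payload, "base64url").toString());
  return now < exp ? { email } : null; // expired tokens are rejected
}

// Deterministic demo user ID: the raw email never appears in conversation metadata
function demoUserId(email: string): string {
  return `demo:${createHash("sha256").update(email).digest("hex").slice(0, 16)}`;
}
```

The same email always yields the same user ID, which is how returning visitors are recognized without storing the email outside the lead record.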
*** Configuration [#configuration] Environment Variables [#environment-variables] | Variable | Default | Description | | --------------------------- | ------- | ------------------------------------------------ | | `ATLAS_DEMO_ENABLED` | `false` | Enable demo mode and the `/demo` route | | `ATLAS_DEMO_MAX_STEPS` | `10` | Maximum agent steps per demo chat (1–100) | | `ATLAS_DEMO_RATE_LIMIT_RPM` | `10` | Requests per minute per demo user (0 = disabled) | ```bash # .env — enable demo mode with custom limits ATLAS_DEMO_ENABLED=true ATLAS_DEMO_MAX_STEPS=15 ATLAS_DEMO_RATE_LIMIT_RPM=20 ``` *** Demo vs Production [#demo-vs-production] | Feature | Demo | Production | | ----------------- | ------------------------------- | ---------------------------------- | | **Auth** | Email-gated token (24h TTL) | Managed auth, API key, or BYOT | | **Agent steps** | Reduced (default 10) | Full (default 25) | | **Tools** | Default tools only | All tools + plugin tools + actions | | **Rate limit** | Separate per-user limiter | Main rate limiter | | **Conversations** | Persisted with `surface="demo"` | Persisted with standard surface | | **Token expiry** | 24 hours | Session-based | Demo mode uses the default datasource and semantic layer. Ensure the demo datasource does not contain sensitive production data. *** API Endpoints [#api-endpoints] Demo endpoints are mounted at `/api/v1/demo`. 
| Method | Path | Auth | Description | | ------ | -------------------- | ------------------------- | -------------------------------------------------------- | | `POST` | `/start` | None (rate-limited by IP) | Email gate — validates email, signs token, captures lead | | `POST` | `/chat` | Bearer demo token | Stream a chat response (mirrors main chat) | | `GET` | `/conversations` | Bearer demo token | List the demo user's conversations | | `GET` | `/conversations/:id` | Bearer demo token | Get a conversation with messages | Start a Demo Session [#start-a-demo-session] ```bash curl -X POST https://your-atlas.example.com/api/v1/demo/start \ -H "Content-Type: application/json" \ -d '{ "email": "visitor@example.com" }' ``` Response: ```json { "token": "eyJlbWFpbCI6InZpc2l0b3JAZ...", "expiresAt": 1742565600000, "returning": false, "conversationCount": 0 } ``` Chat via Demo Token [#chat-via-demo-token] ```bash curl -X POST https://your-atlas.example.com/api/v1/demo/chat \ -H "Content-Type: application/json" \ -H "Authorization: Bearer eyJlbWFpbCI6InZpc2l0b3JAZ..." \ -d '{ "messages": [{ "role": "user", "content": "Show me revenue by month" }] }' ``` *** Limitations [#limitations] * **No plugin tools** — demo sessions use default tools only (executeSQL, explore). 
Custom actions and plugin-provided tools are not available * **No write operations** — actions are disabled in demo mode * **Reduced steps** — the agent has fewer steps to work with, which may limit complex multi-step analyses * **Token expiry** — demo tokens expire after 24 hours; the user must re-enter their email to continue * **Rate limiting** — each demo user is rate-limited independently from production users *** See Also [#see-also] * [Environment Variables](/reference/environment-variables) — Full variable reference * [Rate Limiting](/guides/rate-limiting) — Configure rate limits for production * [Embedding Widget](/guides/embedding-widget) — Alternative: embed Atlas in your own site --- # Approval Workflows (/guides/approval-workflows) Atlas supports approval workflows for sensitive queries. Admins define rules that intercept queries before execution, requiring sign-off from designated approvers. This adds a governance layer for compliance-sensitive environments. Approval workflows are available on [app.useatlas.dev](https://app.useatlas.dev) Enterprise plans. Self-hosted deployments do not include enterprise features. * [Managed auth](/deployment/authentication#managed-auth) enabled * Internal database configured (`DATABASE_URL`) * Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) * Admin role required for all approval workflow endpoints *** How It Works [#how-it-works] When a user runs a query, the SQL validation pipeline checks the query against approval rules **after** validation passes but **before** execution. If a rule matches, the query is queued for approval instead of executing. Flow [#flow] 1. User submits a query via the chat agent 2. SQL passes validation ([7-layer pipeline](/security/sql-validation)) 3. Approval rules are checked against the validated query's tables and columns 4. If a rule matches: query is queued and the agent tells the user approval is required 5. An admin reviews the request in the admin console (or via API) 6.
Once approved, the user can re-submit the same query — it executes normally Rule Types [#rule-types] | Type | Match Logic | Example | | ---------- | --------------------------------------------------- | ----------------------------------------------------------------------- | | **Table** | Triggers when a query accesses a specific table | `users` — any query touching the `users` table requires approval | | **Column** | Triggers when a query accesses a specific column | `ssn` — any query reading the `ssn` column requires approval | | **Cost** | Triggers when estimated row count exceeds threshold | `100000` — queries expected to return more than 100K rows need approval | Approval Statuses [#approval-statuses] | Status | Description | | ---------- | ----------------------------------------------- | | `pending` | Awaiting review | | `approved` | Approved by a reviewer | | `denied` | Denied by a reviewer | | `expired` | Auto-expired after the configured expiry window | *** Configuration [#configuration] Creating Rules [#creating-rules] Rules are managed via the admin console at `/admin/approval` or the API. ```bash # Create a table-based approval rule curl -X POST http://localhost:3001/api/v1/admin/approval/rules \ -H "Content-Type: application/json" \ -d '{ "name": "Require approval for PII tables", "ruleType": "table", "pattern": "users", "enabled": true }' ``` Reviewing Requests [#reviewing-requests] Pending requests appear in the admin console's Approval Queue tab. Reviewers can approve or deny with an optional comment. ```bash # Approve a pending request curl -X POST http://localhost:3001/api/v1/admin/approval/queue/{requestId} \ -H "Content-Type: application/json" \ -d '{ "action": "approve", "comment": "Approved for quarterly audit" }' ``` Auto-Expiry [#auto-expiry] Pending requests expire automatically after 24 hours by default. Configure via the `ATLAS_APPROVAL_EXPIRY_HOURS` environment variable. 
```bash # Set approval expiry to 48 hours ATLAS_APPROVAL_EXPIRY_HOURS=48 ``` *** API Reference [#api-reference] All endpoints are under `/api/v1/admin/approval` and require admin role. Rules [#rules] | Method | Path | Description | | -------- | ------------ | ----------------------- | | `GET` | `/rules` | List all approval rules | | `POST` | `/rules` | Create a new rule | | `PUT` | `/rules/:id` | Update an existing rule | | `DELETE` | `/rules/:id` | Delete a rule | Queue [#queue] | Method | Path | Description | | ------ | ---------------- | --------------------------------------------------- | | `GET` | `/queue` | List approval requests (optional `?status=pending`) | | `GET` | `/queue/:id` | Get a single request | | `POST` | `/queue/:id` | Approve or deny a request | | `POST` | `/expire` | Manually expire stale requests | | `GET` | `/pending-count` | Count of pending requests | *** Audit Trail [#audit-trail] All approval decisions are logged in the internal database's `approval_queue` table with: * Requester identity and email * The SQL query text * Tables and columns accessed * Reviewer identity and email * Review timestamp and comment * Final status (approved/denied/expired) The standard audit log also records queries that were blocked pending approval. *** Troubleshooting [#troubleshooting] Approval check not triggering [#approval-check-not-triggering] * Verify the workspace has an active Enterprise plan * Check that the internal database is configured (`DATABASE_URL`) * Ensure the rule is enabled and the pattern matches (case-insensitive) * Table rules match both bare names (`users`) and schema-qualified names (`public.users`) Requests expiring too quickly [#requests-expiring-too-quickly] Increase the expiry window: ```bash ATLAS_APPROVAL_EXPIRY_HOURS=72 ``` --- # Billing & Plans (/guides/billing-and-plans) Atlas uses a tiered billing model. SaaS workspaces on [app.useatlas.dev](https://app.useatlas.dev) start with a 14-day trial, then require a paid plan. 
Self-hosted deployments are free with no enforcement. Billing includes automatic usage metering and graceful overage handling. On [app.useatlas.dev](https://app.useatlas.dev), billing is fully managed. View your usage, manage your plan, and update payment methods from the admin dashboard — no Stripe setup required on your end. * Internal database configured (`DATABASE_URL`) — required for usage tracking * `STRIPE_SECRET_KEY` set — enables billing routes and Stripe integration * Admin role required for the usage dashboard *** Plan Tiers [#plan-tiers] | | Trial | Team | Enterprise | Self-Hosted | | -------------------------- | --------- | --------- | ---------- | ----------- | | **Queries / month** | 10,000 | 10,000 | Unlimited | Unlimited | | **Tokens / month** | 5,000,000 | 5,000,000 | Unlimited | Unlimited | | **Members** | 25 | 25 | Unlimited | Unlimited | | **Datasource connections** | 5 | 5 | Unlimited | Unlimited | | **Duration** | 14 days | — | — | — | | **Stripe required** | No | Yes | Yes | No | * **Trial** — Auto-assigned on SaaS signup. Same limits as Team. Expires after 14 days, then queries are blocked until the workspace upgrades. * **Team** — Standard paid tier. Monthly or annual billing via Stripe. * **Enterprise** — Custom contracts. No metered limits. Includes SSO, SCIM, and priority support. * **Self-Hosted** (free) — No billing enforcement. All features available, no Stripe needed. *** BYOT (Bring Your Own Token) [#byot-bring-your-own-token] BYOT lets workspaces supply their own LLM API keys instead of using Atlas-provided tokens. 
When enabled: * The workspace is billed at a lower rate on the Team plan (BYOT is a configuration flag, not a separate Stripe product) * Token usage is still metered for analytics, but token limits are not enforced * Admin or owner role is required to toggle BYOT Toggle via the billing API: ```bash curl -X POST https://your-atlas.example.com/api/v1/billing/byot \ -H "Content-Type: application/json" \ -H "Authorization: Bearer <admin-token>" \ -d '{ "enabled": true }' ``` *** Usage Dashboard [#usage-dashboard] The admin usage dashboard is available at **Admin > Usage** (`/admin/usage`). It shows: * **Query count** — total SQL queries executed this billing period, with progress bar toward limit * **Token count** — total LLM tokens consumed, with progress bar toward limit * **Active users** — unique users who logged in this period (requires login event instrumentation; may show 0 if not yet configured) * **Current plan** — tier name and trial expiry date (if applicable) * **Daily usage chart** — queries and tokens over the last 30 days * **Per-user breakdown** — sortable table of usage by individual user (top 50) If the workspace has a Stripe subscription, a **Manage Plan** button opens the Stripe Customer Portal for self-service plan changes. API Endpoints [#api-endpoints] Usage data is also available programmatically via the admin API. See [Usage Metering](/guides/usage-metering) for endpoint details. *** Overage Handling [#overage-handling] Atlas uses a 4-tier degradation model instead of a hard cutoff. Enforcement runs before every agent execution, checking query and token usage against plan limits. | Tier | Usage | Behavior | | -------------- | -------- | ----------------------------------------------------------- | | **OK** | 0–79% | No warning. Request proceeds normally | | **Warning** | 80–99% | Request proceeds. Warning metadata attached to the response | | **Soft limit** | 100–109% | 10% grace buffer.
Request allowed with overage warning | | **Hard limit** | 110%+ | Request blocked with HTTP 429. Upgrade CTA in error message | The worst status across all metered dimensions (queries and tokens) determines the outcome. For example, if token usage is at 85% (warning) and query usage is at 105% (soft limit), the soft limit behavior applies. Enforcement Skipped [#enforcement-skipped] Billing enforcement is skipped entirely when: * No internal database is configured (self-hosted without managed auth) * The user is not in an organization * The workspace is on the **free** or **enterprise** tier Trial Expiry [#trial-expiry] Trial workspaces have a separate check: if the trial has expired (14 days from creation or past the `trial_ends_at` date), all requests are blocked with HTTP 403 and a message to upgrade. Metering Failures [#metering-failures] If workspace or plan data cannot be fetched, the request is **blocked** with HTTP 503 as a security precaution. If usage metering data specifically cannot be read (e.g., metering database is temporarily unavailable), the request is **allowed** with a warning that usage tracking may be inaccurate. Enforcement is best-effort for metering — Atlas prefers availability over strict enforcement. *** Customer Portal [#customer-portal] Workspaces with a Stripe subscription can access the Stripe Customer Portal for self-service plan management: * Upgrade or downgrade plans * Update payment methods * View invoices and billing history * Cancel subscriptions Available features depend on your [Stripe Customer Portal configuration](https://docs.stripe.com/customer-management/activate-no-code-customer-portal). 
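The 4-tier degradation model from the Overage Handling section above reduces to two small pure functions. This is a sketch, assuming usage is expressed as a percentage of the plan limit per metered dimension; the tier names are illustrative, not Atlas's internal identifiers.

```typescript
// Map a usage percentage to a degradation tier, then take the worst tier
// across all metered dimensions (queries and tokens).
type Tier = "ok" | "warning" | "soft_limit" | "hard_limit";

const ORDER: Tier[] = ["ok", "warning", "soft_limit", "hard_limit"];

function tierFor(usagePct: number): Tier {
  if (usagePct >= 110) return "hard_limit"; // blocked with HTTP 429, upgrade CTA
  if (usagePct >= 100) return "soft_limit"; // 10% grace buffer, overage warning
  if (usagePct >= 80) return "warning";     // proceeds, warning metadata attached
  return "ok";                              // proceeds normally
}

function enforcementStatus(dimensions: Record<string, number>): Tier {
  let worst: Tier = "ok";
  for (const pct of Object.values(dimensions)) {
    const t = tierFor(pct);
    if (ORDER.indexOf(t) > ORDER.indexOf(worst)) worst = t; // worst status wins
  }
  return worst;
}
```

For the example given earlier (tokens at 85%, queries at 105%), `enforcementStatus({ tokens: 85, queries: 105 })` resolves to the soft limit.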
Access via the **Manage Plan** button on the usage dashboard, or programmatically: ```bash curl -X POST https://your-atlas.example.com/api/v1/billing/portal \ -H "Content-Type: application/json" \ -H "Authorization: Bearer <admin-token>" \ -d '{ "returnUrl": "https://your-atlas.example.com/admin/usage" }' ``` The response contains a `url` field — redirect the user to this URL to open the portal. *** Stripe Setup (Self-Hosted) [#stripe-setup-self-hosted] On [app.useatlas.dev](https://app.useatlas.dev), Stripe is pre-configured. Skip this section — manage your subscription from **Admin > Usage** instead. Set these environment variables to enable billing: | Variable | Description | | ----------------------------- | ----------------------------------------------------------------- | | `STRIPE_SECRET_KEY` | Stripe secret key (test or live). Enables billing routes when set | | `STRIPE_WEBHOOK_SECRET` | Stripe webhook signing secret for verifying events | | `STRIPE_TEAM_PRICE_ID` | Price ID for the Team plan (monthly) | | `STRIPE_TEAM_ANNUAL_PRICE_ID` | Price ID for the Team plan (annual discount) | | `STRIPE_ENTERPRISE_PRICE_ID` | Price ID for the Enterprise plan | When `STRIPE_SECRET_KEY` is set: * Billing routes mount at `/api/v1/billing` * The Better Auth Stripe plugin handles checkout, webhooks, and subscription lifecycle at `/api/auth/stripe/*` * Plan changes from Stripe webhooks automatically update the workspace tier Webhook Configuration [#webhook-configuration] Point your Stripe webhook endpoint to: ``` https://your-atlas.example.com/api/auth/stripe/webhook ``` For local development, use [Stripe CLI](https://docs.stripe.com/stripe-cli) to forward webhooks: ```bash stripe listen --forward-to localhost:3001/api/auth/stripe/webhook ``` The webhook signing secret (`STRIPE_WEBHOOK_SECRET`) must match the secret shown by `stripe listen` or configured in your Stripe dashboard. Mismatched secrets cause silent webhook failures.
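The "mismatched secrets" failure mode is easy to see from how a Stripe signature is checked. The sketch below is simplified (production code should use the Stripe SDK's built-in webhook verification): the `t=<timestamp>,v1=<hmac>` header format is Stripe's documented scheme, but timestamp-tolerance checks and multiple-signature handling are omitted here.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a Stripe-Signature header: v1 is HMAC-SHA256 over "<timestamp>.<rawBody>"
// keyed with the webhook signing secret. A wrong STRIPE_WEBHOOK_SECRET makes
// every delivery fail this check, which surfaces as silent webhook failures.
function verifyStripeSignature(secret: string, header: string, rawBody: string): boolean {
  const parts = Object.fromEntries(header.split(",").map((kv) => kv.split("=") as [string, string]));
  const expected = createHmac("sha256", secret).update(`${parts.t}.${rawBody}`).digest("hex");
  const given = Buffer.from(parts.v1 ?? "", "hex");
  const want = Buffer.from(expected, "hex");
  return given.length === want.length && timingSafeEqual(given, want); // constant-time compare
}

// Helper mirroring what Stripe does when signing a delivery (useful for local tests)
function signStripePayload(secret: string, ts: number, rawBody: string): string {
  const v1 = createHmac("sha256", secret).update(`${ts}.${rawBody}`).digest("hex");
  return `t=${ts},v1=${v1}`;
}
```

Comparing with `timingSafeEqual` rather than `===` avoids leaking signature bytes through timing differences.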
*** Self-Hosted Deployments [#self-hosted-deployments] Self-hosted Atlas has **no billing enforcement**. The `free` tier is assigned by default with unlimited queries, tokens, members, and connections. No Stripe configuration is needed. To run Atlas without any billing: 1. Omit all `STRIPE_*` environment variables 2. The billing routes will not mount 3. The usage dashboard still works for monitoring (if `DATABASE_URL` is configured) but shows "Unlimited" for all limits *** Troubleshooting [#troubleshooting] Stripe webhooks not arriving [#stripe-webhooks-not-arriving] * Verify `STRIPE_WEBHOOK_SECRET` matches the signing secret from your Stripe dashboard or `stripe listen` output * Ensure the webhook endpoint (`/api/auth/stripe/webhook`) is publicly reachable * Check Stripe dashboard **Developers > Webhooks** for failed delivery attempts * For local development, confirm `stripe listen` is running and forwarding to the correct port Plan not syncing after payment [#plan-not-syncing-after-payment] * Stripe webhook events update the workspace tier. Check that webhooks are being received (see above) * The plan cache has a 60-second TTL — changes may take up to a minute to propagate * Call `GET /api/v1/billing` to check the current billing status for the workspace Usage not updating [#usage-not-updating] * Usage events are fire-and-forget with a circuit breaker. 
After 5 consecutive write failures, events are dropped for 60 seconds before the circuit breaker resets and retries * Verify `DATABASE_URL` is set and the internal database is accessible * Historical summaries are aggregated on-demand when the dashboard or `/history` endpoint is accessed Trial expired unexpectedly [#trial-expired-unexpectedly] * Trial duration is 14 days from workspace creation (or from `trial_ends_at` if set) * Check the workspace's creation date via `GET /api/v1/billing` — the `trialEndsAt` field shows the expiry *** See Also [#see-also] * [Usage Metering](/guides/usage-metering) — Detailed metering API and aggregation * [Environment Variables](/reference/environment-variables) — Full variable reference including Stripe config * [Enterprise SSO](/guides/enterprise-sso) — SAML/OIDC single sign-on (Enterprise tier) * [Stripe Docs](https://docs.stripe.com) — Stripe API and dashboard reference --- # Semantic Layer Setup Wizard (/guides/semantic-layer-wizard) The setup wizard is a browser-based alternative to `atlas init`. It walks you through profiling your database, selecting tables, reviewing generated entity YAML, and saving everything to your workspace — all from `/wizard`. * An **admin** or **owner** role in your Atlas workspace * At least one [datasource connection](/guides/admin-console#connections) configured * [Managed auth](/deployment/authentication#managed-auth) enabled with an internal database (`DATABASE_URL`) *** How It Works [#how-it-works] The wizard is a five-step flow at `/wizard`: | Step | What happens | | ------------------------ | ------------------------------------------------------------------------------------------ | | 1. **Select datasource** | Choose a configured database connection | | 2. **Select tables** | Browse discovered tables and views, filter by name, select which to include | | 3. **Review entities** | Inspect profiling results — columns, types, relationships, flags — and edit generated YAML | | 4. 
**Preview** | Ask a test question to see how the agent would use your semantic layer (optional) | | 5. **Done** | Entities saved to your workspace, ready to query | The wizard calls the same profiler that powers `atlas init`, so you get identical heuristics: foreign key inference, enum detection, abandoned table flags, and denormalization warnings. *** Step-by-Step Walkthrough [#step-by-step-walkthrough] 1\. Select Datasource [#1-select-datasource] Navigate to `/wizard`. The first step lists all database connections configured in your workspace. Select one from the dropdown and click **Next**. If no connections appear, add one first via **Admin → Connections** or the [signup flow](/guides/signup). 2\. Select Tables [#2-select-tables] The wizard queries your database and lists all tables and views (plus materialized views for PostgreSQL). All objects are selected by default. * Use the **filter** input to narrow the list by name * Toggle individual tables or use the header checkbox to select/deselect all visible rows * The badge shows how many tables are currently selected Click **Generate Entities** to proceed. The wizard will profile each selected table — inspecting columns, data types, cardinality, sample values, primary keys, and foreign keys. 3\. Review Entities [#3-review-entities] For each profiled table, the wizard generates an entity YAML file. 
Click any table row to expand it and see: * **Columns** — name, data type, flags (PK, FK, enum-like, nullable), and sample values * **Relationships** — declared foreign keys (from constraints) and inferred foreign keys (from naming conventions) * **Flags** — warnings like "possibly abandoned" (table name matches legacy/temp patterns such as `old_*`, `temp_*`, `*_backup`) or "denormalized" (name matches reporting/cache patterns such as `*_summary`, `*_stats`, `*_cache`) * **Entity YAML** — editable in-place so you can refine descriptions, adjust column types, or add custom measures before saving * **Profiler notes** — additional observations from the heuristic analysis Review the generated YAML and make any adjustments. Click **Preview** to continue. 4\. Preview (Optional) [#4-preview-optional] Type a natural-language question (e.g., "How many orders by status?") and click the sparkle button. The wizard shows a summary of the entities available to the agent, including the table count and names. This is a static summary — it does not simulate actual agent behavior or query generation. This step is optional — you can skip straight to **Save & Finish**. 5\. Done [#5-done] Click **Save & Finish** in the preview step. The wizard persists your entities to the workspace's org-scoped semantic layer. Once saved: * Entity YAML files are written to disk under the org-scoped `semantic/` directory * The semantic layer whitelist cache is flushed, so new tables are queryable immediately You'll see a confirmation screen with links to **View Entities** in the admin console or **Start Chatting**. The wizard generates entity YAML files only. For a complete semantic layer scaffold including catalog, glossary, and metric files, use `atlas init` from the CLI. 
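As a concrete illustration of the flag heuristics from step 3, the naming-convention checks can be sketched as below. This assumes simple prefix/suffix matching; the profiler's actual pattern set may be broader.

```typescript
// Naming-convention heuristics: legacy/temp patterns flag a table as possibly
// abandoned, reporting/cache patterns flag it as denormalized.
const ABANDONED = [/^old_/, /^temp_/, /_backup$/];
const DENORMALIZED = [/_summary$/, /_stats$/, /_cache$/];

function tableFlags(name: string): string[] {
  const flags: string[] = [];
  if (ABANDONED.some((p) => p.test(name))) flags.push("possibly-abandoned");
  if (DENORMALIZED.some((p) => p.test(name))) flags.push("denormalized");
  return flags;
}
```

A table named `old_orders` or `orders_backup` would be flagged as possibly abandoned, while `revenue_summary` would be flagged as denormalized; a plain `orders` table gets no flags.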
***

What Gets Generated [#what-gets-generated]

The wizard generates entity YAML files — one per profiled table:

| File | Description |
| ---------------------- | -------------------------------------------------------------- |
| `entities/<table>.yml` | Dimensions, measures, joins, and query patterns for each table |

All files are scoped to your organization. In a multi-tenant deployment, each workspace gets its own isolated semantic layer.

For a complete scaffold including `catalog.yml`, `glossary.yml`, and `metrics/*.yml`, use `atlas init` from the CLI.

***

Wizard vs CLI [#wizard-vs-cli]

| | Wizard (`/wizard`) | CLI (`atlas init`) |
| ------------------- | -------------------------------------- | ------------------------------------------------------------- |
| **Interface** | Browser UI — point and click | Terminal — command line |
| **Auth** | Requires admin session | Uses `ATLAS_DATASOURCE_URL` directly |
| **Storage** | Org-scoped (disk) | Local `semantic/` directory |
| **Table selection** | Visual checklist with filter | `--tables` flag or interactive prompt |
| **YAML editing** | In-place editor before save | Edit files after generation |
| **Preview** | Built-in question preview step | N/A — run `atlas query` separately |
| **Best for** | SaaS deployments, non-technical admins | Self-hosted, CI pipelines, version-controlled semantic layers |

Both tools use the same profiler engine. The generated entity YAML is identical — the difference is where it's stored and how you interact with it. The CLI additionally generates catalog, glossary, and metric files.

***

Troubleshooting [#troubleshooting]

Connection not found [#connection-not-found]

The wizard resolves connections from the internal database or the runtime connection registry.
If a connection doesn't appear: * Verify it's configured in **Admin → Connections** * Ensure the connection belongs to your active workspace (org-scoped connections are filtered by org) * For self-hosted setups using `ATLAS_DATASOURCE_URL`, the `default` connection should appear automatically Profiling timeout [#profiling-timeout] Large databases with many tables or wide tables with millions of rows can cause profiling to take longer than expected. If profiling fails: * Select fewer tables at a time (start with 10–20) * Check that `ATLAS_QUERY_TIMEOUT` is set high enough (default 30s) * Ensure the database user has `SELECT` permission on `information_schema` tables MySQL backtick escaping [#mysql-backtick-escaping] MySQL identifiers use backtick escaping. If you see unexpected errors with table or column names containing special characters, verify that the generated YAML uses the correct quoting. The profiler handles this automatically, but manual YAML edits should preserve backtick-quoted identifiers for MySQL datasources. Partial results or errors [#partial-results-or-errors] If some tables fail to profile while others succeed, the wizard reports per-table errors alongside successful results. Common causes: * **Permission denied** — the database user lacks `SELECT` on specific tables * **Unsupported column types** — exotic types may not map cleanly to dimension types * **Empty tables** — tables with zero rows still generate entities but may produce less useful sample values Check the error details in the review step. You can proceed with the successfully profiled tables and re-run the wizard for failed tables after fixing the underlying issue. Unsupported database type [#unsupported-database-type] The wizard currently supports **PostgreSQL** and **MySQL**. For other databases (ClickHouse, Snowflake, DuckDB, Salesforce), use `atlas init` from the CLI or the appropriate [datasource plugin](/plugins/datasources). 
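The wizard steps can also be driven programmatically against the wizard endpoints documented below. A hypothetical sketch: the paths (`/api/v1/wizard/profile`, `/generate`, `/preview`, `/save`) come from the API reference, but the request-body fields (`connectionId`, `tables`) and the Bearer header are illustrative assumptions, not a documented contract.

```javascript
// Hypothetical helper — body fields and auth header are assumptions;
// admin authentication is required, but its exact mechanism depends on your auth mode.
function buildWizardRequest(baseUrl, step, body, adminToken) {
  return {
    url: `${baseUrl}/api/v1/wizard/${step}`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${adminToken}`,
      },
      body: JSON.stringify(body),
    },
  };
}

const req = buildWizardRequest(
  "https://your-atlas-api.example.com",
  "generate",
  { connectionId: "default", tables: ["orders", "customers"] }, // hypothetical fields
  "admin-token",
);
// fetch(req.url, req.options) would then profile the selected tables
```

The same helper works for the other steps by swapping the `step` argument.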
*** API Endpoints [#api-endpoints] The wizard API is mounted at `/api/v1/wizard`. All endpoints require admin authentication. | Method | Path | Auth | Description | | ------ | ----------- | ------------- | ------------------------------------------------- | | `POST` | `/profile` | Admin session | List tables and views from a connected datasource | | `POST` | `/generate` | Admin session | Profile selected tables and generate entity YAML | | `POST` | `/preview` | Admin session | Preview agent behavior with generated entities | | `POST` | `/save` | Admin session | Persist entities to the org-scoped semantic layer | *** See Also [#see-also] * [Semantic Layer](/getting-started/semantic-layer) — Concepts and YAML format reference * [CLI Reference](/reference/cli) — `atlas init`, `atlas diff`, and other commands * [Admin Console](/guides/admin-console) — Manage connections, entities, and users * [Multi-Datasource Routing](/deployment/multi-datasource) — Configure multiple database connections --- # Embedding Widget (/guides/embedding-widget) This page covers content for **developers** (installation, configuration, theming, and the programmatic API) and **end users** (using the widget and understanding error states). Most of this page is developer-focused — end users can skip to [Using the Widget](#using-the-widget) and [Error Handling](#error-handling). Atlas provides a drop-in chat widget that adds a floating chat bubble to any webpage. Users click the bubble to open a panel and ask questions about your data — no React or build tooling required. Not sure if the widget is the right fit? See [Choosing an Integration](/guides/choosing-an-integration) to compare the widget, React package, SDK, and REST API. 
* Atlas API server running and accessible over the network
* For React integration: `@useatlas/react` installed in your project
* For script tag: your Atlas API URL (no build tooling required)
* An API key or auth token for authentication (optional in `none` auth mode)

Quick Start [#quick-start]

Add a single `<script>` tag to your page:

```html
<!-- Adjust the URLs to point at your Atlas API server -->
<script
  src="https://your-atlas-api.example.com/widget.js"
  data-api-url="https://your-atlas-api.example.com"
></script>
```

This injects a floating chat bubble in the bottom-right corner. Clicking it opens the Atlas chat panel.

Configuration [#configuration]

Configure the widget via `data-*` attributes on the script tag:

| Attribute | Required | Default | Description |
| --------------- | -------- | ---------------- | ------------------------------------------------- |
| `data-api-url` | Yes | -- | Base URL of your Atlas API server |
| `data-api-key` | No | -- | API key for authentication (sent as Bearer token) |
| `data-theme` | No | `"light"` | `"light"` or `"dark"` |
| `data-position` | No | `"bottom-right"` | `"bottom-right"` or `"bottom-left"` |

Event Callbacks [#event-callbacks]

Bind callbacks by setting `data-on-*` attributes to the name of a global function:

| Attribute | Event | Callback Argument |
| ------------------------ | ------------------------------------------- | ------------------------------------- |
| `data-on-open` | Widget opens | `{}` |
| `data-on-close` | Widget closes | `{}` |
| `data-on-query-complete` | Query finishes (reserved — not yet emitted) | `{ sql?: string, rowCount?: number }` |
| `data-on-error` | Widget error | `{ code?: string, message?: string }` |

```html
<script>
  // Global functions referenced by name in data-on-* attributes
  function handleAtlasOpen() {
    console.log("Atlas widget opened");
  }
  function handleAtlasError(detail) {
    console.error("Atlas widget error:", detail.code, detail.message);
  }
</script>
<script
  src="https://your-atlas-api.example.com/widget.js"
  data-api-url="https://your-atlas-api.example.com"
  data-on-open="handleAtlasOpen"
  data-on-error="handleAtlasError"
></script>
```

Programmatic API [#programmatic-api]

After the script loads, `window.Atlas` exposes the following methods:

| Method | Description |
| --------------------------- | -------------------------------------------------------------------------- |
| `Atlas.open()` | Open the widget panel |
| `Atlas.close()` | Close the widget panel |
| `Atlas.toggle()` | Toggle open/close |
| `Atlas.ask(question)` | Open the widget and send a question |
| `Atlas.destroy()` | Remove widget from DOM, clean
up all listeners |
| `Atlas.on(event, handler)` | Bind an event listener (`"open"`, `"close"`, `"queryComplete"`, `"error"`) |
| `Atlas.setAuthToken(token)` | Send an auth token to the widget iframe |
| `Atlas.setTheme(theme)` | Set theme (`"light"` or `"dark"`) |

`Atlas.ask()` currently opens the panel but does not submit the query due to a message type mismatch between the loader and the widget iframe ([#324](https://github.com/AtlasDevHQ/atlas/issues/324)). As a workaround, use the iframe `postMessage` API directly with `{type: "atlas:ask", query: "..."}`.

Example [#example]

```javascript
// Open the widget and ask a question
Atlas.ask("What was last month's revenue?");

// Listen for errors
Atlas.on("error", (detail) => {
  console.error("Widget error:", detail.code, detail.message);
});

// Pass an auth token (e.g. after your app's login flow)
Atlas.setAuthToken("user-jwt-token");

// Clean up when navigating away in a SPA
Atlas.destroy();
```

Pre-load Command Queue [#pre-load-command-queue]

You can queue commands before the widget script finishes loading. Initialize `window.Atlas` as an array and push commands:

```html
<script>
  // Queue commands before widget.js loads — each entry is [methodName, ...args]
  window.Atlas = window.Atlas || [];
  window.Atlas.push(["setTheme", "dark"]);
  window.Atlas.push(["open"]);
</script>
<script
  src="https://your-atlas-api.example.com/widget.js"
  data-api-url="https://your-atlas-api.example.com"
  async
></script>
```

Queued commands are replayed in order once the widget initializes.
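The replay step can be sketched in a few lines. This mirrors, but does not reproduce, the actual loader internals — it only illustrates the tuple format `[methodName, ...args]` declared in the TypeScript types:

```javascript
// Sketch of command-queue replay — illustrative, not the actual widget.js source.
// Each queued entry is a tuple: [methodName, ...args].
function replayQueue(queue, api) {
  for (const [method, ...args] of queue) {
    if (typeof api[method] === "function") {
      api[method](...args); // e.g. ["setTheme", "dark"] → api.setTheme("dark")
    }
  }
}

// Simulate commands pushed before load, then replayed at init
const queued = [["setTheme", "dark"], ["open"]];
const calls = [];
const fakeWidget = {
  setTheme: (theme) => calls.push(`setTheme:${theme}`),
  open: () => calls.push("open"),
};
replayQueue(queued, fakeWidget);
// calls → ["setTheme:dark", "open"]
```

Because replay preserves order, you can safely queue dependent commands (e.g. `setAuthToken` before `open`).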
TypeScript Support [#typescript-support]

TypeScript declarations for `window.Atlas` are available at `/widget.d.ts`:

```bash
curl https://your-atlas-api.example.com/widget.d.ts -o atlas-widget.d.ts
```

Add it to your project to get type-safe access to the Atlas API:

```typescript
/// <reference path="./atlas-widget.d.ts" />

// Full type safety for Atlas methods and event payloads
window.Atlas?.ask("What's the churn rate?");
```

The type declarations define these interfaces:

```typescript
interface AtlasWidgetEventMap {
  /** Emitted when the widget panel opens */
  open: Record<string, never>;
  /** Emitted when the widget panel closes */
  close: Record<string, never>;
  /** Emitted when a query completes (reserved — not yet emitted) */
  queryComplete: { sql?: string; rowCount?: number };
  /** Emitted on widget errors */
  error: { code?: string; message?: string };
}

interface AtlasWidget {
  open(): void;
  close(): void;
  toggle(): void;
  ask(question: string): void;
  destroy(): void;
  on<K extends keyof AtlasWidgetEventMap>(
    event: K,
    handler: (detail: AtlasWidgetEventMap[K]) => void,
  ): void;
  setAuthToken(token: string): void;
  setTheme(theme: "light" | "dark"): void;
}

interface Window {
  /** Atlas widget API — or a command queue array before widget.js loads */
  Atlas?: AtlasWidget | Array<[string, ...unknown[]]>;
}
```

***

CSS Variables [#css-variables]

The Atlas widget renders inside an `.atlas-root` container that defines all design tokens as CSS custom properties. These use the [OKLCH color space](https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/oklch) and follow the shadcn/ui neutral base.
Light Theme Tokens [#light-theme-tokens] All variables are scoped to `.atlas-root`: | Variable | Default (OKLCH) | Purpose | | -------------------------- | --------------------------- | -------------------------------------------------------------- | | `--radius` | `0.625rem` | Base border-radius for cards, inputs, buttons | | `--background` | `oklch(1 0 0)` | Page / container background | | `--foreground` | `oklch(0.145 0 0)` | Primary text color | | `--card` | `oklch(1 0 0)` | Card surface background | | `--card-foreground` | `oklch(0.145 0 0)` | Card text color | | `--popover` | `oklch(1 0 0)` | Popover / dropdown background | | `--popover-foreground` | `oklch(0.145 0 0)` | Popover text color | | `--primary` | `oklch(0.205 0 0)` | Primary action color (buttons, links) | | `--primary-foreground` | `oklch(0.985 0 0)` | Text on primary-colored surfaces | | `--secondary` | `oklch(0.97 0 0)` | Secondary surface color | | `--secondary-foreground` | `oklch(0.205 0 0)` | Text on secondary surfaces | | `--muted` | `oklch(0.97 0 0)` | Muted / disabled backgrounds | | `--muted-foreground` | `oklch(0.556 0 0)` | Muted text (placeholders, hints) | | `--accent` | `oklch(0.97 0 0)` | Accent highlight background | | `--accent-foreground` | `oklch(0.205 0 0)` | Text on accent surfaces | | `--destructive` | `oklch(0.577 0.245 27.325)` | Error / destructive action color | | `--destructive-foreground` | `oklch(0.577 0.245 27.325)` | Text for destructive elements | | `--border` | `oklch(0.922 0 0)` | Border color for inputs, cards, dividers | | `--input` | `oklch(0.922 0 0)` | Input field border color | | `--ring` | `oklch(0.708 0 0)` | Focus ring color | | `--atlas-brand` | `oklch(0.759 0.148 167.71)` | Brand accent (Atlas teal) — drives `--primary` in the full app | Dark Theme Tokens [#dark-theme-tokens] Applied via `.dark .atlas-root`: | Variable | Default (OKLCH) | Change from Light | | -------------------------- | --------------------------- | ---------------------------- | | 
`--background` | `oklch(0.145 0 0)` | Near-black surface | | `--foreground` | `oklch(0.985 0 0)` | Near-white text | | `--card` | `oklch(0.145 0 0)` | Near-black card surface | | `--card-foreground` | `oklch(0.985 0 0)` | Near-white card text | | `--popover` | `oklch(0.145 0 0)` | Near-black popover surface | | `--popover-foreground` | `oklch(0.985 0 0)` | Near-white popover text | | `--primary` | `oklch(0.985 0 0)` | Inverted: light on dark | | `--primary-foreground` | `oklch(0.205 0 0)` | Dark text on light primary | | `--secondary` | `oklch(0.269 0 0)` | Darker gray surface | | `--secondary-foreground` | `oklch(0.985 0 0)` | Near-white text on secondary | | `--muted` | `oklch(0.269 0 0)` | Darker muted background | | `--muted-foreground` | `oklch(0.708 0 0)` | Brighter muted text | | `--accent` | `oklch(0.269 0 0)` | Darker accent background | | `--accent-foreground` | `oklch(0.985 0 0)` | Near-white accent text | | `--destructive` | `oklch(0.396 0.141 25.723)` | Darker, less saturated red | | `--destructive-foreground` | `oklch(0.637 0.237 25.331)` | Brighter red for readability | | `--border` | `oklch(0.269 0 0)` | Darker borders | | `--input` | `oklch(0.269 0 0)` | Darker input borders | | `--ring` | `oklch(0.439 0 0)` | Darker focus ring | Overriding CSS Variables [#overriding-css-variables] Override variables by targeting `.atlas-root` in your page's CSS. 
Place your overrides **after** the Atlas stylesheet loads:

```html
<!-- Atlas widget styles first, then your overrides — adjust paths to your setup -->
<link rel="stylesheet" href="https://your-atlas-api.example.com/widget/atlas-widget.css" />
<link rel="stylesheet" href="/atlas-overrides.css" />
```

```css
/* atlas-overrides.css — load after Atlas styles */
.atlas-root {
  /* Match your design system's radius */
  --radius: 0.5rem;

  /* Use your brand's primary color */
  --primary: oklch(0.55 0.2 260);
  --primary-foreground: oklch(0.98 0 0);

  /* Custom muted tones */
  --muted: oklch(0.96 0.01 260);
  --muted-foreground: oklch(0.5 0.02 260);
}

.dark .atlas-root {
  --primary: oklch(0.75 0.15 260);
  --primary-foreground: oklch(0.15 0 0);
  --muted: oklch(0.25 0.01 260);
  --muted-foreground: oklch(0.65 0.02 260);
}
```

Widget-Specific CSS Variables [#widget-specific-css-variables]

When using the script tag or iframe embed, the widget host page defines one additional variable:

| Variable | Set By | Purpose |
| ----------------------- | ------------------------------------------------------- | ----------------------------------------------------------------------- |
| `--atlas-widget-accent` | `accent` query param or `atlas:setBranding` postMessage | Overrides submit button background, input focus border, and link colors |

This is separate from the `.atlas-root` tokens — it uses `!important` overrides on specific elements to apply accent coloring without requiring CSS variable changes.

***

Theming [#theming]

Atlas supports `"light"`, `"dark"`, and `"system"` themes. The `"system"` mode follows the user's OS preference via `prefers-color-scheme`.

Theme via Script Tag [#theme-via-script-tag]

```html
<script
  src="https://your-atlas-api.example.com/widget.js"
  data-api-url="https://your-atlas-api.example.com"
  data-theme="dark"
></script>
```

The script tag loader only supports `"light"` and `"dark"`. For `"system"` theme support, use the iframe embed directly with `?theme=system`.

Theme via Programmatic API [#theme-via-programmatic-api]

```javascript
// Switch theme at runtime — e.g.
// when user toggles your app's theme
Atlas.setTheme("dark");
```

Theme via iframe postMessage [#theme-via-iframe-postmessage]

```javascript
const iframe = document.querySelector("iframe");

// Send a theme change to the iframe — only "light" and "dark" are valid
iframe.contentWindow.postMessage(
  { type: "theme", value: "dark" },
  "https://your-atlas-api.example.com",
);
```

Brand Colors [#brand-colors]

Atlas supports two brand color mechanisms depending on the embedding approach:

**For the React component (`@useatlas/react`):** The `--atlas-brand` CSS variable accepts an OKLCH color value. In the full Atlas app, `--atlas-brand` drives the `--primary` token, affecting buttons, links, and focus rings. The Atlas API's `/api/health` endpoint can return a `brandColor` field which the component applies automatically via `applyBrandColor()`.

```css
/* Override the brand color in your stylesheet */
.atlas-root {
  --atlas-brand: oklch(0.62 0.2 275); /* Purple brand */
}
```

**For the widget embed (script tag / iframe):** The `accent` parameter takes a 3- or 6-digit hex color (without `#`). This sets `--atlas-widget-accent` and overrides the submit button, input focus border, and link colors.

```html
<iframe
  src="https://your-atlas-api.example.com/widget?accent=4f46e5"
  width="400"
  height="600"
></iframe>
```

Complete Brand Theming Example [#complete-brand-theming-example]

This example combines a custom logo, accent color, welcome message, and dark theme to match a fictional "Acme Analytics" brand:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Acme Analytics</title>
    <style>
      body { margin: 0; }
      iframe { border: none; }
    </style>
  </head>
  <body>
    <!-- Logo URL is a placeholder — must be HTTPS; query params are URL-encoded -->
    <iframe
      src="https://your-atlas-api.example.com/widget?theme=dark&logo=https%3A%2F%2Facme.example.com%2Flogo.svg&accent=dc2626&welcome=Welcome%20to%20Acme%20Analytics"
      width="400"
      height="600"
    ></iframe>
  </body>
</html>
``` *** postMessage API Reference [#postmessage-api-reference] The widget communicates with its parent page via the [`postMessage`](https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage) API. There are two communication channels: 1. **Host page → Widget iframe** — control the widget from your application 2. **Widget iframe → Host page** — receive events from the widget Always specify the target origin instead of `"*"` in production. Using `"*"` allows any page to send messages to your widget, which could be exploited if the page is embedded elsewhere. Host → Widget Messages [#host--widget-messages] These messages are sent from your page to the widget iframe via `iframe.contentWindow.postMessage(message, origin)`. `theme` — Set the color theme [#theme--set-the-color-theme] ```typescript // Set the widget to dark mode iframe.contentWindow.postMessage( { type: "theme", // Message discriminator value: "dark", // "light" or "dark" — "system" is not supported via postMessage }, "https://your-atlas-api.example.com", // Target origin — must match your API URL ); ``` `auth` — Pass an authentication token [#auth--pass-an-authentication-token] ```typescript // Send a JWT or API key to the widget — it becomes the Bearer token for API requests iframe.contentWindow.postMessage( { type: "auth", // Message discriminator token: "eyJhbG...", // Your auth token string — sent as Authorization: Bearer }, "https://your-atlas-api.example.com", ); ``` `toggle` — Show or hide the widget [#toggle--show-or-hide-the-widget] ```typescript // Toggle the widget's visibility (shown/hidden) iframe.contentWindow.postMessage( { type: "toggle", // No additional fields needed }, "https://your-atlas-api.example.com", ); ``` `atlas:ask` — Send a query programmatically [#atlasask--send-a-query-programmatically] ```typescript // Programmatically type and submit a question iframe.contentWindow.postMessage( { type: "atlas:ask", // Must include the "atlas:" prefix query: "Show revenue by 
region", // The question to send }, "https://your-atlas-api.example.com", ); ``` `atlas:setBranding` — Update branding at runtime [#atlassetbranding--update-branding-at-runtime] ```typescript // Update logo, accent color, or welcome message without reloading iframe.contentWindow.postMessage( { type: "atlas:setBranding", logo: "https://example.com/logo.png", // Optional — HTTPS URLs only accent: "dc2626", // Optional — hex without #, 3 or 6 digits welcome: "Welcome! Ask me about your data.", // Optional — max 500 characters }, "https://your-atlas-api.example.com", ); ``` Each field in `atlas:setBranding` is optional — only include the fields you want to update. Logo URLs must use HTTPS or they will be silently ignored. Widget → Host Messages [#widget--host-messages] The widget sends these messages to the parent window via `window.parent.postMessage()`. Listen for them with `window.addEventListener("message", handler)`. `atlas:ready` — Widget loaded successfully [#atlasready--widget-loaded-successfully] ```typescript window.addEventListener("message", (event) => { // Always check the origin to prevent spoofed messages if (event.origin !== "https://your-atlas-api.example.com") return; if (event.data?.type === "atlas:ready") { console.log("Atlas widget is ready"); // Safe to send postMessage commands now (auth, branding, queries) } }); ``` `atlas:error` — Error occurred [#atlaserror--error-occurred] ```typescript window.addEventListener("message", (event) => { if (event.origin !== "https://your-atlas-api.example.com") return; if (event.data?.type === "atlas:error") { // Error codes: "UNCAUGHT", "UNHANDLED_REJECTION", "RENDER_FAILED", "LOAD_FAILED" console.error( `Atlas error [${event.data.code}]:`, event.data.message, ); } }); ``` TypeScript Types for postMessage [#typescript-types-for-postmessage] Use these types in your application for type-safe message handling: ```typescript // ---- Host → Widget messages ---- /** Set the widget theme */ interface AtlasThemeMessage 
{
  type: "theme";
  value: "light" | "dark"; // "system" not supported via postMessage
}

/** Pass an auth token to the widget */
interface AtlasAuthMessage {
  type: "auth";
  token: string; // Sent as Authorization: Bearer
}

/** Toggle widget visibility */
interface AtlasToggleMessage {
  type: "toggle";
}

/** Send a query to the widget */
interface AtlasAskMessage {
  type: "atlas:ask";
  query: string;
}

/** Update branding at runtime — all fields optional */
interface AtlasSetBrandingMessage {
  type: "atlas:setBranding";
  logo?: string; // HTTPS URL only
  accent?: string; // Hex color without # (3 or 6 digits)
  welcome?: string; // Max 500 characters
}

type HostToWidgetMessage =
  | AtlasThemeMessage
  | AtlasAuthMessage
  | AtlasToggleMessage
  | AtlasAskMessage
  | AtlasSetBrandingMessage;

// ---- Widget → Host messages ----

/** Widget loaded successfully */
interface AtlasReadyMessage {
  type: "atlas:ready";
}

/** Widget error — includes error code and human-readable message */
interface AtlasErrorMessage {
  type: "atlas:error";
  code: "UNCAUGHT" | "UNHANDLED_REJECTION" | "RENDER_FAILED" | "LOAD_FAILED";
  message: string;
}

type WidgetToHostMessage = AtlasReadyMessage | AtlasErrorMessage;
```

Complete postMessage Example [#complete-postmessage-example]

```html
<iframe
  id="atlas-widget"
  src="https://your-atlas-api.example.com/widget"
  width="400"
  height="600"
></iframe>
<script>
  const ATLAS_ORIGIN = "https://your-atlas-api.example.com";
  const iframe = document.getElementById("atlas-widget");

  window.addEventListener("message", (event) => {
    // Always verify the origin before trusting a message
    if (event.origin !== ATLAS_ORIGIN) return;

    if (event.data?.type === "atlas:ready") {
      // Widget is ready — authenticate, then send a query
      iframe.contentWindow.postMessage(
        { type: "auth", token: "user-jwt-token" },
        ATLAS_ORIGIN,
      );
      iframe.contentWindow.postMessage(
        { type: "atlas:ask", query: "Show revenue by region" },
        ATLAS_ORIGIN,
      );
    }

    if (event.data?.type === "atlas:error") {
      console.error(`Atlas error [${event.data.code}]:`, event.data.message);
    }
  });
</script>
```

***

Layout Options [#layout-options]

The widget supports three layout modes: floating bubble (default for the script tag), inline embed, and full-page.

Floating Bubble (Default) [#floating-bubble-default]

The script tag loader creates a floating bubble in the bottom corner. Clicking it opens a 400×600 panel.
```html
<script
  src="https://your-atlas-api.example.com/widget.js"
  data-api-url="https://your-atlas-api.example.com"
></script>
```

```html
<!-- Or anchor the bubble to the bottom-left corner -->
<script
  src="https://your-atlas-api.example.com/widget.js"
  data-api-url="https://your-atlas-api.example.com"
  data-position="bottom-left"
></script>
```

The bubble has these fixed properties:

* **Size:** 56×56px circle
* **Z-index:** 2147483646 (just below maximum)
* **Panel size:** 400×600px, capped at `calc(100vh - 108px)` height and `calc(100vw - 40px)` width
* **Animation:** Scale + opacity entrance, cubic-bezier open/close transition
* **Keyboard:** Escape key closes the panel

Inline Embed [#inline-embed]

Embed the widget directly as an iframe with explicit dimensions. This gives you full control over placement — put it in a sidebar, a modal, or anywhere in your layout.

```html
<iframe
  src="https://your-atlas-api.example.com/widget"
  width="400"
  height="600"
  style="border: none;"
></iframe>
```

With branding:

```html
<!-- Query params are URL-encoded; logo must be an HTTPS URL -->
<iframe
  src="https://your-atlas-api.example.com/widget?logo=https%3A%2F%2Fexample.com%2Flogo.png&accent=4f46e5&welcome=Ask%20me%20about%20your%20data"
  width="400"
  height="600"
  style="border: none;"
></iframe>
```

Widget Query Parameters [#widget-query-parameters]

These parameters are set on the `/widget` iframe URL:

| Parameter | Default | Description |
| -------------- | ------------- | -------------------------------------------------------------------------------------------- |
| `theme` | `"system"` | `"light"`, `"dark"`, or `"system"` — the iframe supports all three |
| `apiUrl` | iframe origin | Atlas API base URL (must be `http://` or `https://`) |
| `position` | `"inline"` | `"bottomRight"`, `"bottomLeft"`, or `"inline"` — set by the script loader |
| `logo` | -- | HTTPS URL to a custom logo image (replaces the default Atlas logo) |
| `accent` | -- | Hex color without `#` (e.g. `4f46e5`) — overrides submit button, focus ring, and link colors |
| `welcome` | -- | Welcome message shown before the first user message (max 500 chars) |
| `initialQuery` | -- | Auto-sends this query when the widget first opens (max 500 chars) |

Full-Page Mode [#full-page-mode]

Make the iframe fill the entire viewport for a dedicated analytics page:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Analytics — Powered by Atlas</title>
    <style>
      body { margin: 0; }
      iframe { border: none; width: 100vw; height: 100vh; display: block; }
    </style>
  </head>
  <body>
    <iframe src="https://your-atlas-api.example.com/widget?theme=system"></iframe>
  </body>
</html>
```

***

Common Failures [#common-failures]

CORS Errors [#cors-errors]

**Symptom:** Browser console shows `Access to fetch at 'https://your-api...' has been blocked by CORS policy`.

**Cause:** The Atlas API sets `Access-Control-Allow-Origin: *` on widget routes, but your reverse proxy or CDN may strip or override these headers.
**Fix:** Ensure your reverse proxy forwards CORS headers from the Atlas API. If you use nginx:

```nginx
# nginx — forward CORS headers from the Atlas API for widget routes
location /widget {
  proxy_pass http://atlas-api:3001;
  # Don't strip Access-Control headers set by Atlas
  proxy_pass_request_headers on;
}
```

If you override CORS at the proxy level, include these headers:

```
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Content-Type, Authorization
```

Do not combine `Access-Control-Allow-Origin: *` with `Access-Control-Allow-Credentials: true` — browsers reject this per the CORS spec. Widget routes use wildcard origin and do not require credentialed requests.

iframe Sandbox Restrictions [#iframe-sandbox-restrictions]

**Symptom:** Widget loads but shows a blank page, or JavaScript errors appear in the iframe's console.

**Cause:** If you add a `sandbox` attribute to the iframe, it blocks scripts, forms, and same-origin access by default.

**Fix:** If you must use `sandbox`, include the required permissions:

```html
<iframe
  src="https://your-atlas-api.example.com/widget"
  sandbox="allow-scripts allow-same-origin allow-forms"
  width="400"
  height="600"
></iframe>
```

`allow-scripts` and `allow-same-origin` together effectively disable sandboxing. If security isolation is your goal, host the widget on a separate subdomain instead.

Content Security Policy (CSP) [#content-security-policy-csp]

**Symptom:** Widget script or iframe blocked. Console shows `Refused to load the script` or `Refused to frame`.

**Cause:** Your site's CSP headers don't allow loading resources from the Atlas API domain.

**Fix:** Add the Atlas API domain to your CSP:

```
Content-Security-Policy: script-src 'self' https://your-atlas-api.example.com; frame-src 'self' https://your-atlas-api.example.com; connect-src 'self' https://your-atlas-api.example.com
```

If you use a `<meta>` tag for CSP:

```html
<meta
  http-equiv="Content-Security-Policy"
  content="script-src 'self' https://your-atlas-api.example.com; frame-src 'self' https://your-atlas-api.example.com; connect-src 'self' https://your-atlas-api.example.com"
/>
```

Auth Token Not Reaching the Widget [#auth-token-not-reaching-the-widget]

**Symptom:** Widget loads but shows "Unauthorized" or API key prompt even though you passed a token.
**Causes and fixes:**

Check: Token sent before widget is ready [#check-token-sent-before-widget-is-ready]

The widget iframe must finish loading before it can receive postMessage commands. Always wait for the `atlas:ready` event:

```javascript
// WRONG — widget may not be ready yet
iframe.contentWindow.postMessage({ type: "auth", token: myToken }, origin);

// CORRECT — wait for the widget to signal readiness
window.addEventListener("message", (event) => {
  if (event.origin !== ATLAS_ORIGIN) return;
  if (event.data?.type === "atlas:ready") {
    // Widget is ready — now send the token
    iframe.contentWindow.postMessage(
      { type: "auth", token: myToken },
      ATLAS_ORIGIN,
    );
  }
});
```

Check: Origin mismatch [#check-origin-mismatch]

The widget only accepts messages from `window.parent`. If your iframe is nested inside another iframe, the parent origin check will fail. Ensure your page is the direct parent of the Atlas iframe.

Also verify the origin you pass to `postMessage` matches the actual widget URL:

```javascript
// The origin must match the iframe's src domain exactly
const ATLAS_ORIGIN = "https://your-atlas-api.example.com"; // No trailing slash

iframe.contentWindow.postMessage(
  { type: "auth", token: myToken },
  ATLAS_ORIGIN,
);
```

Check: Using HTTPS [#check-using-https]

Auth tokens are transmitted via postMessage. If either the host page or the widget is served over HTTP, tokens can be intercepted. Use HTTPS for both in production.

Check: Script tag with data-api-key [#check-script-tag-with-data-api-key]

If using the script tag loader, verify the `data-api-key` attribute is set on the correct `<script>` tag:

```html
<!-- data-api-key belongs on the widget loader tag, not on other scripts -->
<script
  src="https://your-atlas-api.example.com/widget.js"
  data-api-url="https://your-atlas-api.example.com"
  data-api-key="your-api-key"
></script>
```

Widget Shows "Unable to Load Atlas Chat" [#widget-shows-unable-to-load-atlas-chat]

**Symptom:** Widget displays a gray error message instead of the chat interface.
**Causes:** * The widget JS bundle is not built — run `bun run build` in `packages/react/` * The `/widget/atlas-widget.js` or `/widget/atlas-widget.css` assets return 404 * A JavaScript error occurred during initialization (check the iframe's console) **Diagnosis:** Open browser DevTools, switch to the iframe's context, and check the console for errors. The widget logs all errors with the `[Atlas Widget]` prefix and sends them to the parent via `atlas:error` postMessage. Error Handling [#error-handling] The widget handles errors automatically with contextual UI. This section documents what happens for each error type and how to listen for errors from your host page. This is primarily for **developers** integrating the widget. Error States [#error-states] | Condition | User-facing message | Icon | Recovery | | ------------------ | ----------------------------------------------------- | ------------- | --------------------------------------------- | | API unreachable | "Unable to connect to Atlas." | ServerCrash | Retry button (manual) | | Auth failure | Auth-mode-specific (e.g. "Your session has expired.") | ShieldAlert | Re-authenticate | | Browser offline | "You appear to be offline." | WifiOff | Auto-retries when `navigator.onLine` restores | | Rate limited | "Too many requests." + countdown | Clock | Auto-retries after countdown reaches 0 | | Server error (5xx) | "Something went wrong on our end." | ServerCrash | Retry button (manual) | | Unknown error | "Something went wrong. Please try again." | AlertTriangle | Retry button (manual) | Errors are classified client-side before the server response is parsed. 
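The conditions in the table above can be approximated in code. A hedged sketch of the client-side classification, using the widget's `ClientErrorCode` values — the precedence shown is an assumption, not the widget's actual source:

```javascript
// Hedged sketch of client-side error classification — based on the
// error-state table, not the widget's implementation.
function classifyClientError({ offline = false, fetchFailed = false, status = null } = {}) {
  if (offline) return "offline"; // navigator.onLine === false
  if (fetchFailed) return "api_unreachable"; // fetch threw (server down, DNS failure)
  if (status === 401) return "auth_failure";
  if (status === 429) return "rate_limited_http";
  if (status !== null && status >= 500) return "server_error";
  return null; // not client-classified — parse the server's JSON error body instead
}

classifyClientError({ offline: true }); // → "offline"
classifyClientError({ status: 429 }); // → "rate_limited_http"
classifyClientError({ fetchFailed: true }); // → "api_unreachable"
```

Note that `offline` wins even when a fetch has also failed — an offline browser cannot distinguish an unreachable API from its own lost connection.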
The `ClientErrorCode` type covers five cases:

```typescript
type ClientErrorCode =
  | "api_unreachable" // fetch failed, ECONNREFUSED, ENOTFOUND
  | "auth_failure" // HTTP 401
  | "rate_limited_http" // HTTP 429
  | "server_error" // HTTP 5xx
  | "offline"; // navigator.onLine === false
```

Server-side errors are parsed from the JSON response body and mapped to a `ChatErrorCode`. See the [React Hooks reference](/reference/react#auth--error-types) for the full list.

Listening for Errors [#listening-for-errors]

Script Tag (Programmatic API) [#script-tag-programmatic-api]

```javascript
// Listen for errors via the Atlas.on() method
Atlas.on("error", (detail) => {
  // detail: { code?: string, message?: string }
  console.error("Widget error:", detail.code, detail.message);

  // Report to your monitoring service
  myErrorTracker.capture("atlas-widget-error", detail);
});
```

Script Tag (Data Attribute Callback) [#script-tag-data-attribute-callback]

```html
<script>
  // Global callback referenced by name in data-on-error
  function handleAtlasError(detail) {
    console.error("Widget error:", detail.code, detail.message);
  }
</script>
<script
  src="https://your-atlas-api.example.com/widget.js"
  data-api-url="https://your-atlas-api.example.com"
  data-on-error="handleAtlasError"
></script>
```

iframe postMessage [#iframe-postmessage]

The widget emits `atlas:error` messages to the parent window for every error. The payload includes the error code, title, detail, and retryability:

```javascript
window.addEventListener("message", (event) => {
  // Always check origin to prevent spoofed messages
  if (event.origin !== "https://your-atlas-api.example.com") return;

  if (event.data?.type === "atlas:error") {
    const { code, message, detail, retryable } = event.data.error;
    // code: ClientErrorCode or ChatErrorCode (e.g. "api_unreachable", "auth_error")
    // message: user-facing title (e.g. "Unable to connect to Atlas.")
    // detail: optional secondary context
    // retryable: true for transient errors, false for permanent ones

    if (!retryable) {
      // Permanent error — show your own fallback UI or redirect to login
      showFallbackUI(message);
    }
  }
});
```

React Component [#react-component]

When using the `AtlasChat` React component, errors are rendered inline automatically.
For programmatic access, use the `useAtlasChat` hook:

```tsx
import { useAtlasChat, parseChatError } from "@useatlas/react/hooks";

function ChatUI() {
  const { error, status } = useAtlasChat();

  if (error) {
    // parseChatError extracts structured info from the AI SDK error
    const info = parseChatError(error, "simple-key");
    console.log(info.title);      // "Unable to connect to Atlas."
    console.log(info.clientCode); // "api_unreachable"
    console.log(info.retryable);  // true
  }

  // ... render chat UI
}
```

Auth Token Refresh [#auth-token-refresh]

When a token expires mid-session, the widget shows an auth error. To refresh the token without reloading the page:

```javascript
// When your app refreshes a token, push it to the widget
function onTokenRefresh(newToken) {
  Atlas.setAuthToken(newToken);
}

// Example: refresh before expiry using a JWT decode
const payload = JSON.parse(atob(token.split(".")[1]));
const expiresIn = payload.exp * 1000 - Date.now();
setTimeout(async () => {
  const newToken = await refreshAuthToken();
  Atlas.setAuthToken(newToken);
}, expiresIn - 60_000); // Refresh 1 minute before expiry
```

For iframe embeds, post the new token to the iframe instead:

```javascript
const iframe = document.querySelector("iframe");
const ATLAS_ORIGIN = "https://your-atlas-api.example.com";

// Send a fresh token when the old one expires
function onTokenRefresh(newToken) {
  iframe.contentWindow.postMessage(
    { type: "auth", token: newToken },
    ATLAS_ORIGIN,
  );
}

// Proactive refresh: send a new token before expiry
setInterval(async () => {
  const newToken = await myApp.refreshToken();
  onTokenRefresh(newToken);
}, 15 * 60 * 1000); // Every 15 minutes
```

API Unreachable Behavior [#api-unreachable-behavior]

When the Atlas API is unreachable (server down, network error, DNS failure), the widget:

1. Shows an inline error banner with the `ServerCrash` icon and "Unable to connect to Atlas."
2. Displays the detail "Check your API URL configuration and ensure the server is running."
3. Shows a "Try again" button for manual retry
4. Emits an `atlas:error` postMessage to the parent window with `code: "api_unreachable"` and `retryable: true`

The widget does **not** auto-retry when the API is unreachable — the user must click "Try again", or you must programmatically retry by re-sending the query.

***

Custom Tool Renderers [#custom-tool-renderers]

Override how tool results render inside the widget using the `toolRenderers` prop on `<AtlasChat />` (React component) or via the headless hooks. This is useful for matching your product's design system or adding custom interactions like CSV export buttons.

How It Works [#how-it-works]

Every tool invocation in the agent's response is rendered by a component. Custom renderers take precedence over built-in defaults. If no custom renderer is provided, the widget falls back to its built-in cards (SQL result table, explore output, Python charts).

```tsx
import { AtlasChat } from "@useatlas/react";
import "@useatlas/react/styles.css";

function App() {
  return <AtlasChat toolRenderers={{ executeSQL: MySQLRenderer }} />;
}
```

Renderer Props [#renderer-props]

Every custom renderer receives `ToolRendererProps`:

```typescript
interface ToolRendererProps<T = unknown> {
  toolName: string;              // Name of the tool (e.g. "executeSQL")
  args: Record<string, unknown>; // Input arguments passed to the tool
  result: T | null;              // Tool output — null while the tool is running
  isLoading: boolean;            // Whether the tool invocation is still in progress
}
```

Tool Result Field Reference [#tool-result-field-reference]

`executeSQL` — SQL query results [#executesql--sql-query-results]

Type: `SQLToolResult | null` (null while loading)

**Success shape:**

| Field         | Type                        | Description                                        |
| ------------- | --------------------------- | -------------------------------------------------- |
| `success`     | `true`                      | Discriminant — always `true` on success            |
| `columns`     | `string[]`                  | Column names in query order                        |
| `rows`        | `Record<string, unknown>[]` | Array of row objects keyed by column name          |
| `truncated`   | `boolean \| undefined`      | Whether results were truncated by the row limit    |
| `explanation` | `string \| undefined`       | Agent's natural-language explanation of the query  |
| `row_count`   | `number \| undefined`       | Total row count before truncation                  |

**Error shape:**

| Field     | Type     | Description                                   |
| --------- | -------- | --------------------------------------------- |
| `success` | `false`  | Discriminant — always `false` on error        |
| `error`   | `string` | Error message from the database or validator  |

**Example custom renderer:**

```tsx
import type { ToolRendererProps, SQLToolResult } from "@useatlas/react";

function MySQLRenderer({ result, isLoading, args }: ToolRendererProps<SQLToolResult>) {
  if (isLoading || !result) {
    return <div>Running query...</div>;
  }
  if (!result.success) {
    return <div>Query failed: {result.error}</div>;
  }
  return (
    <div>
      {/* Show the SQL that was executed */}
      {args.sql && <pre>{String(args.sql)}</pre>}
      {/* Result count */}
      <p>
        {result.rows.length} rows{result.truncated ? " (truncated)" : ""}
      </p>
      {/* Data table */}
      <table>
        <thead>
          <tr>
            {result.columns.map((col) => (
              <th key={col}>{col}</th>
            ))}
          </tr>
        </thead>
        <tbody>
          {result.rows.map((row, i) => (
            <tr key={i}>
              {result.columns.map((col) => (
                <td key={col}>{String(row[col] ?? "")}</td>
              ))}
            </tr>
          ))}
        </tbody>
      </table>
      {/* Agent explanation */}
      {result.explanation && <p>{result.explanation}</p>}
    </div>
  );
}
```

`explore` — Semantic layer exploration [#explore--semantic-layer-exploration]

Type: `ExploreToolResult | null` (null while loading)

The explore tool returns a plain string — the output of the semantic layer file read or search command.

| Field           | Type     | Description                               |
| --------------- | -------- | ----------------------------------------- |
| (entire result) | `string` | Raw text output from the explore command  |

**Example custom renderer:**

```tsx
import type { ToolRendererProps, ExploreToolResult } from "@useatlas/react";

function MyExploreRenderer({ result, isLoading, args }: ToolRendererProps<ExploreToolResult>) {
  if (isLoading || result === null) {
    return <div>Exploring schema...</div>;
  }
  return (
    <div>
      {/* args.command contains the explore command that was run */}
      <p>Explore: {String(args.command ?? "semantic layer")}</p>
      <pre>{result}</pre>
    </div>
  );
}
```

`executePython` — Python code execution [#executepython--python-code-execution]

Type: `PythonToolResult | null` (null while loading)

**Success shape:**

| Field            | Type | Description |
| ---------------- | ---- | ----------- |
| `success`        | `true` | Discriminant — always `true` on success |
| `output`         | `string \| undefined` | stdout/stderr text output |
| `explanation`    | `string \| undefined` | Agent's natural-language explanation |
| `table`          | `{ columns: string[]; rows: unknown[][] } \| undefined` | Tabular output (column-ordered, not keyed) |
| `charts`         | `{ base64: string; mimeType: "image/png" }[] \| undefined` | Static chart images (matplotlib, etc.) |
| `rechartsCharts` | `Array<{ type: "line" \| "bar" \| "pie"; data: Record<string, unknown>[]; categoryKey: string; valueKeys: string[] }> \| undefined` | Interactive chart data for Recharts rendering |

**Error shape:**

| Field     | Type                  | Description                             |
| --------- | --------------------- | --------------------------------------- |
| `success` | `false`               | Discriminant — always `false` on error  |
| `error`   | `string`              | Error message from the Python runtime   |
| `output`  | `string \| undefined` | Any stdout captured before the error    |

**Example custom renderer:**

```tsx
import type { ToolRendererProps, PythonToolResult } from "@useatlas/react";

function MyPythonRenderer({ result, isLoading }: ToolRendererProps<PythonToolResult>) {
  if (isLoading || !result) {
    return <div>Running Python...</div>;
  }
  if (!result.success) {
    return (
      <div>
        <p>Execution failed</p>
        <p>{result.error}</p>
        {result.output && <pre>{result.output}</pre>}
      </div>
    );
  }
  return (
    <div>
      {/* Text output */}
      {result.output && <pre>{result.output}</pre>}
      {/* Static chart images */}
      {result.charts?.map((chart, i) => (
        <img
          key={i}
          src={`data:${chart.mimeType};base64,${chart.base64}`}
          alt={`Chart ${i + 1}`}
        />
      ))}
      {/* Interactive Recharts data — render with your preferred chart library */}
      {result.rechartsCharts?.map((chart, i) => (
        <div key={i}>
          <p>
            {chart.type} chart — {chart.categoryKey} vs {chart.valueKeys.join(", ")}
          </p>
          {/* Plug in your own chart component here */}
          <pre>{JSON.stringify(chart.data.slice(0, 3), null, 2)}</pre>
        </div>
      ))}
      {/* Tabular output */}
      {result.table && (
        <table>
          <thead>
            <tr>
              {result.table.columns.map((col) => (
                <th key={col}>{col}</th>
              ))}
            </tr>
          </thead>
          <tbody>
            {result.table.rows.map((row, i) => (
              <tr key={i}>
                {row.map((cell, j) => (
                  <td key={j}>{String(cell ?? "")}</td>
                ))}
              </tr>
            ))}
          </tbody>
        </table>
      )}
    </div>
  );
}
```

Custom / Plugin Tools [#custom--plugin-tools]

Any tool name can have a custom renderer. Plugin tools (e.g. from BigQuery, Salesforce, or custom plugins) are passed through with `ToolRendererProps`:

```tsx
import type { ToolRendererProps } from "@useatlas/react";

// Renderer for a custom "searchDocs" plugin tool
function DocsSearchRenderer({ result, isLoading, args }: ToolRendererProps) {
  if (isLoading || !result) return <div>Searching docs...</div>;
  // Cast to your expected shape
  const data = result as { results: { title: string; url: string }[] };
  return (
    <ul>
      {data.results.map((r) => (
        <li key={r.url}>
          <a href={r.url}>{r.title}</a>
        </li>
      ))}
    </ul>
  );
}

// Register in toolRenderers:
// <AtlasChat toolRenderers={{ searchDocs: DocsSearchRenderer }} />
```

Using Custom Renderers with Headless Hooks [#using-custom-renderers-with-headless-hooks]

The headless hooks expose tool invocations via the `parts` array on each message. Use `ToolRendererProps` types to build your own rendering logic:

```tsx
import { useAtlasChat } from "@useatlas/react/hooks";
import type { SQLToolResult } from "@useatlas/react/hooks";

function ChatUI() {
  const { messages } = useAtlasChat();
  return (
    <div>
      {messages.map((msg) =>
        msg.parts?.map((part, i) => {
          if (part.type === "text") return <p key={i}>{part.text}</p>;
          if (part.type === "tool-invocation") {
            // Access tool name, args, and result from the part
            const { toolName, args, result, state } = part.toolInvocation;
            const isLoading = state !== "result";
            if (toolName === "executeSQL" && result) {
              const sqlResult = result as SQLToolResult;
              // Render your custom SQL table here
            }
          }
          return null;
        }),
      )}
    </div>
  );
}
```

***

CSS Customization Guide [#css-customization-guide]

The Atlas widget uses CSS custom properties (variables) scoped to `.atlas-root` for all visual design tokens. These follow the [shadcn/ui](https://ui.shadcn.com/) neutral base with the [OKLCH color space](https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/oklch).

Quick Reference [#quick-reference]

All variables are listed in the [CSS Variables](#css-variables) section above. Here are the most common customization recipes.

Brand Color Override [#brand-color-override]

Change the primary brand color across the widget — affects buttons, links, and focus rings:

```html
<!-- Script-tag embeds: a style block after the widget styles load.
     This mirrors the CSS recipe below. -->
<style>
  .atlas-root {
    --atlas-brand: oklch(0.55 0.2 275);
  }
</style>
```

```tsx
import { useEffect } from "react";
import { AtlasChat } from "@useatlas/react";
import "@useatlas/react/styles.css";

// Override via CSS — the brand color drives --primary in the full app
// Place this in your app's CSS, after the Atlas styles import
// .atlas-root { --atlas-brand: oklch(0.55 0.2 275); }

// Or override at runtime via the useAtlasTheme hook
import { useAtlasTheme } from "@useatlas/react/hooks";

function BrandSetup() {
  const { applyBrandColor } = useAtlasTheme();
  // Call once on mount — sets --atlas-brand on :root
  useEffect(() => applyBrandColor("oklch(0.55 0.2 275)"), []);
  return null;
}
```

```css
/* Brand color — indigo */
.atlas-root {
  --atlas-brand: oklch(0.55 0.2 275);
  --primary: oklch(0.55 0.2 275);
  --primary-foreground: oklch(0.98 0 0);
}

.dark .atlas-root {
  --primary: oklch(0.7 0.15 275);
  --primary-foreground: oklch(0.15 0 0);
}
```

Dark Mode Override [#dark-mode-override]

Force dark mode regardless of user preference, or customize the dark theme colors:

```css
/* Force dark mode on the Atlas widget */
.atlas-root {
  --background: oklch(0.145 0 0);
  --foreground: oklch(0.985 0 0);
  --card: oklch(0.145 0 0);
  --card-foreground: oklch(0.985 0 0);
  --muted: oklch(0.269 0 0);
  --muted-foreground: oklch(0.708 0 0);
  --border: oklch(0.269 0 0);
  --input: oklch(0.269 0 0);
}
```

Or use the `data-theme` attribute
/ `Atlas.setTheme()` / `?theme=dark` query parameter to let the widget handle dark mode with its built-in tokens. Font Override [#font-override] The widget inherits `font-family` from its container. Override it by targeting `.atlas-root` or specific elements: ```css /* Set a custom font for the entire widget */ .atlas-root { font-family: "Inter", system-ui, sans-serif; } /* Or target just the input field */ [data-atlas-input] { font-family: "JetBrains Mono", monospace; font-size: 14px; } /* Override message text font */ [data-atlas-messages] { font-family: "Merriweather", serif; line-height: 1.7; } ``` Complete Brand Theming Example [#complete-brand-theming-example-1] Combine all customizations for a fully branded widget: ```css /* Acme Analytics — branded Atlas widget */ .atlas-root { /* Brand indigo */ --atlas-brand: oklch(0.55 0.2 275); --primary: oklch(0.55 0.2 275); --primary-foreground: oklch(0.98 0 0); /* Warmer background */ --background: oklch(0.99 0.005 275); --foreground: oklch(0.12 0.02 275); /* Rounded corners */ --radius: 0.75rem; /* Subtle brand-tinted borders */ --border: oklch(0.92 0.01 275); --input: oklch(0.92 0.01 275); /* Custom font */ font-family: "Inter", system-ui, sans-serif; } .dark .atlas-root { --primary: oklch(0.7 0.15 275); --primary-foreground: oklch(0.15 0 0); --background: oklch(0.13 0.02 275); --foreground: oklch(0.96 0.005 275); --border: oklch(0.25 0.01 275); --input: oklch(0.25 0.01 275); } ``` Data Attribute Selectors [#data-attribute-selectors] Key widget elements expose `data-*` attributes for targeted styling: | Selector | Element | | ----------------------- | ------------------------------------------------ | | `[data-atlas-input]` | The chat text input field | | `[data-atlas-form]` | The input form container | | `[data-atlas-messages]` | The scrollable messages container | | `[data-atlas-logo]` | The Atlas logo SVG | | `.atlas-root` | The root container (all design tokens live here) | *** Security Considerations 
[#security-considerations] **Allowed Origins:** The widget script and iframe routes set `Access-Control-Allow-Origin: *` and `Content-Security-Policy: frame-ancestors *` to allow embedding from any domain. If you need to restrict origins, configure your reverse proxy or CDN to override these headers. **Auth Tokens:** When using `data-api-key` or `Atlas.setAuthToken()`, the token is sent to the widget iframe via `postMessage`. Use HTTPS to prevent token interception. Prefer short-lived tokens over long-lived API keys for production deployments. **Logo URLs:** Custom logos must use HTTPS. Non-HTTPS URLs are silently rejected to prevent mixed content and `javascript:` / `data:` URI attacks. **Accent Colors:** The `accent` parameter is validated as a 3- or 6-digit hex string. Invalid values are silently ignored. *** Troubleshooting [#troubleshooting] Widget loads but shows "Unable to connect to Atlas" [#widget-loads-but-shows-unable-to-connect-to-atlas] **Cause:** The `data-api-url` attribute (script tag) or `apiUrl` query parameter (iframe) doesn't match your running Atlas API server, or CORS headers are being stripped by a reverse proxy. **Fix:** Verify the URL is correct and reachable from the browser. Check the browser console for CORS errors. See [CORS Errors](#cors-errors) above for proxy configuration. Widget appears but chat input is unresponsive [#widget-appears-but-chat-input-is-unresponsive] **Cause:** The widget JavaScript bundle failed to load or a Content Security Policy is blocking scripts from the Atlas API domain. **Fix:** Open browser DevTools, check the Console and Network tabs for blocked requests. Add the Atlas API domain to your CSP `script-src` and `connect-src` directives. See [Content Security Policy (CSP)](#content-security-policy-csp) above. Widget doesn't match your site's theme [#widget-doesnt-match-your-sites-theme] **Cause:** The widget defaults to `light` theme (script tag) or `system` theme (iframe). 
It doesn't automatically inherit your site's CSS.

**Fix:** Set `data-theme="dark"` on the script tag, or use `?theme=dark` on the iframe URL. For dynamic theming, use `Atlas.setTheme()` or the `theme` postMessage. See [Theming](#theming) for details.

For more, see [Troubleshooting](/guides/troubleshooting).

***

See Also [#see-also]

* [React Hooks Reference](/reference/react) — Headless React hooks for building custom chat UIs
* [SDK Reference](/reference/sdk) — TypeScript SDK for server-side and headless integrations
* [Choosing an Integration](/guides/choosing-an-integration) — Compare widget, hooks, SDK, and REST API
* [Rate Limiting & Retry](/guides/rate-limiting#widget--embed-behavior) — How the widget handles 429 responses
* [Authentication](/deployment/authentication) — Auth mode setup for widget deployments

***

Using the Widget [#using-the-widget]

This section is for **end users** interacting with the Atlas widget embedded in a website or application.

Asking questions [#asking-questions]

Click the chat bubble (or the embedded chat panel) to start a conversation. Type your question in natural language — for example, "What was last month's revenue?" or "Show me the top 10 customers by order count." The agent translates your question into a SQL query, runs it against your database, and returns the results with a narrative explanation.

Understanding responses [#understanding-responses]

Responses include:

* **Answer text** — A plain-language summary of the results
* **SQL query** — The exact query that was run (shown in a code block)
* **Data table** — A formatted table of results (truncated for readability)
* **Charts** — Visual representations when the agent generates them

Rate limits [#rate-limits]

If you send too many requests in a short period, you will see a **"Too many requests"** message with a countdown timer. Wait for the countdown to complete, then try again. This limit protects the database from excessive load.
Error messages [#error-messages] If something goes wrong, the widget shows a contextual error message with a recovery action (e.g., "Try again" button, or "Your session has expired" with a re-authentication prompt). You do not need to do anything technical — the widget handles errors and guides you toward resolution. --- # IP Allowlisting (/guides/ip-allowlisting) Atlas supports per-workspace IP allowlisting. When configured, only requests originating from allowed CIDR ranges can access the workspace — all other requests are rejected with a `403` response. This applies to both API and UI access. IP allowlisting is available on [app.useatlas.dev](https://app.useatlas.dev) Enterprise plans. Self-hosted deployments do not include enterprise features. * [Managed auth](/deployment/authentication#managed-auth) enabled * Internal database configured (`DATABASE_URL`) * Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) * Admin role required for all IP allowlist endpoints * `ATLAS_TRUST_PROXY` set to `"true"` if running behind a reverse proxy (required for accurate IP detection) *** How It Works [#how-it-works] IP allowlisting is **opt-in per workspace**. When no allowlist entries exist for an organization, all IPs are permitted. Once you add the first CIDR entry, only IPs matching at least one entry are allowed through. Enforcement Points [#enforcement-points] The IP allowlist is checked at two enforcement points in the request pipeline, both **after** authentication so that the user's organization context is available: 1. **Chat endpoint** (`POST /api/v1/chat`) — checked in the chat request preamble 2. **Admin API** (`/api/v1/admin/*`) — checked in the admin auth preamble When a request is denied: ```json { "error": "ip_not_allowed", "message": "Your IP address is not in the workspace's allowlist.", "requestId": "550e8400-e29b-41d4-a716-446655440000" } ``` The response status is `403 Forbidden`. 
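When calling Atlas from your own scripts or backend, you may want to distinguish an allowlist denial from other `403` responses. A minimal sketch against the denial shape shown above — the helper names are ours, not part of any Atlas SDK:

```typescript
interface AtlasErrorBody {
  error?: string;
  message?: string;
  requestId?: string;
}

// True only for the allowlist-denial shape documented above:
// HTTP 403 with error code "ip_not_allowed".
function isIpAllowlistDenial(status: number, body: AtlasErrorBody): boolean {
  return status === 403 && body.error === "ip_not_allowed";
}

// Surface the requestId so an admin can correlate the denial in server logs
function describeDenial(body: AtlasErrorBody): string {
  return `Blocked by IP allowlist (request ${body.requestId ?? "unknown"})`;
}
```

Logging the `requestId` from denied responses gives admins a correlation handle when auditing why a client was blocked.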
IP Detection [#ip-detection]

Atlas extracts the client IP from `X-Forwarded-For` or `X-Real-IP` headers, but **only when `ATLAS_TRUST_PROXY` is enabled**. Without it, these headers are ignored to prevent spoofing. If no IP can be determined and the allowlist is non-empty, the request is denied.

Caching [#caching]

Allowlist entries are cached in memory with a **30-second TTL** for performance. API mutations (`POST` and `DELETE`) immediately invalidate the cache for the affected organization, so changes made through the API take effect right away; changes made outside the API (for example, directly in the database) take effect within 30 seconds.

Fail-Closed Design [#fail-closed-design]

If the allowlist check encounters a database error, the request is **blocked** (fail-closed). This follows the security principle that a failing security check should deny access rather than silently allow it.

***

API Endpoints [#api-endpoints]

All endpoints are mounted at `/api/v1/admin/ip-allowlist` and require the `admin` role plus an active enterprise license.

| Method   | Path   | Description                                   |
| -------- | ------ | --------------------------------------------- |
| `GET`    | `/`    | List all allowlist entries for the workspace  |
| `POST`   | `/`    | Add a CIDR range to the allowlist             |
| `DELETE` | `/:id` | Remove an allowlist entry by ID               |

List Entries [#list-entries]

Returns all IP allowlist entries for the admin's active organization, plus the caller's current detected IP address.

```bash
curl https://your-atlas.example.com/api/v1/admin/ip-allowlist \
  -H "Authorization: Bearer <admin-token>"
```

**Response (200):**

```json
{
  "entries": [
    {
      "id": "550e8400-e29b-41d4-a716-446655440000",
      "orgId": "org_abc123",
      "cidr": "10.0.0.0/8",
      "description": "Office network",
      "createdAt": "2026-03-22T10:00:00.000Z",
      "createdBy": "user_xyz"
    }
  ],
  "total": 1,
  "callerIP": "10.0.1.42"
}
```

The `callerIP` field shows the IP address Atlas detected for the current request. Use this to verify your IP before adding allowlist rules.
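Before adding your first rule, it helps to confirm locally that the `callerIP` you got back actually falls inside the CIDR you are about to add. A self-contained IPv4-only sketch — Atlas performs its own matching server-side; this helper is purely illustrative:

```typescript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer
function ipv4ToInt(ip: string): number {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((p) => Number.isNaN(p) || p < 0 || p > 255)) {
    throw new Error(`Invalid IPv4 address: ${ip}`);
  }
  // >>> 0 keeps the result unsigned after the signed 32-bit shifts
  return ((parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8) | parts[3]) >>> 0;
}

// True when `ip` falls inside the IPv4 range described by `cidr`
function cidrContains(cidr: string, ip: string): boolean {
  const [base, prefixStr] = cidr.split("/");
  const prefix = Number(prefixStr);
  if (Number.isNaN(prefix) || prefix < 0 || prefix > 32) {
    throw new Error(`Invalid IPv4 CIDR: ${cidr}`);
  }
  // /0 matches everything; computed separately because a 32-bit shift by 32 is a no-op in JS
  const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0;
  return (ipv4ToInt(ip) & mask) === (ipv4ToInt(base) & mask);
}

cidrContains("10.0.0.0/8", "10.0.1.42");      // → true
cidrContains("192.168.1.0/24", "192.168.2.5"); // → false
```

For IPv6 or mixed ranges, prefer a tested library (or Node's `net.BlockList`) over hand-rolled bit math.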
Add Entry [#add-entry]

Adds a CIDR range to the workspace's IP allowlist. Supports both IPv4 and IPv6 notation.

```bash
curl -X POST https://your-atlas.example.com/api/v1/admin/ip-allowlist \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <admin-token>" \
  -d '{ "cidr": "10.0.0.0/8", "description": "Office network" }'
```

**Request body:**

| Field         | Type   | Required | Description                                                          |
| ------------- | ------ | -------- | -------------------------------------------------------------------- |
| `cidr`        | string | Yes      | CIDR notation — e.g., `10.0.0.0/8` (IPv4) or `2001:db8::/32` (IPv6)  |
| `description` | string | No       | Human-readable label for the IP range                                |

**Response (201):**

```json
{
  "entry": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "orgId": "org_abc123",
    "cidr": "10.0.0.0/8",
    "description": "Office network",
    "createdAt": "2026-03-22T10:00:00.000Z",
    "createdBy": "user_xyz"
  }
}
```

Remove Entry [#remove-entry]

Removes an IP allowlist entry by ID. Changes take effect immediately (cache is invalidated).

```bash
curl -X DELETE https://your-atlas.example.com/api/v1/admin/ip-allowlist/550e8400-e29b-41d4-a716-446655440000 \
  -H "Authorization: Bearer <admin-token>"
```

**Response (200):**

```json
{ "message": "IP allowlist entry removed." }
```

***

CIDR Format [#cidr-format]

CIDR (Classless Inter-Domain Routing) notation specifies an IP range as a base address and prefix length.

IPv4 Examples [#ipv4-examples]

| CIDR              | Range                       | Use Case                |
| ----------------- | --------------------------- | ----------------------- |
| `10.0.0.0/8`      | 10.0.0.0 – 10.255.255.255   | Large corporate network |
| `192.168.1.0/24`  | 192.168.1.0 – 192.168.1.255 | Single subnet           |
| `203.0.113.42/32` | 203.0.113.42 only           | Single IP address       |

IPv6 Examples [#ipv6-examples]

| CIDR            | Use Case             |
| --------------- | -------------------- |
| `2001:db8::/32` | IPv6 address block   |
| `::1/128`       | Localhost only       |
| `fe80::/10`     | Link-local addresses |

Atlas validates CIDR format on input using the Node.js `net` module.
Invalid notation is rejected with a `400` error that includes the expected format. The CIDR string is stored as provided; normalization is applied internally for IP matching.

***

Error Responses [#error-responses]

| Status | Code                  | When                                           |
| ------ | --------------------- | ---------------------------------------------- |
| 400    | `validation`          | Invalid CIDR format                            |
| 400    | `bad_request`         | Missing `cidr` field or no active organization |
| 403    | `enterprise_required` | Enterprise license not active                  |
| 404    | `not_found`           | Entry ID does not exist                        |
| 404    | `not_available`       | Internal database not configured               |
| 409    | `conflict`            | CIDR range already in the allowlist            |

***

Database Schema [#database-schema]

The `ip_allowlist` table in the internal database:

```sql
CREATE TABLE IF NOT EXISTS ip_allowlist (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  org_id TEXT NOT NULL,
  cidr TEXT NOT NULL,
  description TEXT,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  created_by TEXT,
  UNIQUE(org_id, cidr)
);
```

A unique constraint on `(org_id, cidr)` prevents duplicate entries within an organization.

***

Troubleshooting [#troubleshooting]

Locked Out [#locked-out]

If you accidentally lock yourself out by adding a restrictive allowlist:

1. Connect directly to the internal database (`DATABASE_URL`)
2. Remove the restrictive entry:

```sql
DELETE FROM ip_allowlist WHERE org_id = '<org-id>' AND cidr = '<cidr>';
```

3. Alternatively, remove all entries to disable the allowlist:

```sql
DELETE FROM ip_allowlist WHERE org_id = '<org-id>';
```

4. The cache expires within 30 seconds — retry after that

Before adding your first allowlist entry, use the `GET` endpoint to check the `callerIP` field. Make sure your current IP is covered by the CIDR range you are adding.
IP Not Detected [#ip-not-detected] If `callerIP` is `null` in the list response: * Set `ATLAS_TRUST_PROXY=true` if Atlas is behind a reverse proxy, load balancer, or CDN * Ensure your proxy forwards `X-Forwarded-For` or `X-Real-IP` headers * Without a detectable IP, requests are denied when the allowlist is non-empty IPv6 Support [#ipv6-support] Atlas fully supports IPv6 CIDR ranges, including: * Full notation: `2001:0db8:0000:0000:0000:0000:0000:0000/32` * Compressed notation: `2001:db8::/32` * IPv4-mapped IPv6: `::ffff:192.168.1.1` IPv4 and IPv6 ranges are matched independently — an IPv4 address will not match an IPv6 CIDR range and vice versa. *** See Also [#see-also] * [Enterprise SSO](/guides/enterprise-sso) — SAML/OIDC single sign-on and enforcement * [Authentication](/deployment/authentication) — Auth mode setup and configuration * [Admin Console](/guides/admin-console) — Manage users and sessions * [Environment Variables](/reference/environment-variables) — Full variable reference --- # White-Labeling (/guides/white-labeling) White-labeling lets customers remove Atlas branding and replace it with their own identity — custom logo, colors, favicon, and text — in the admin console. White-labeling is available on [app.useatlas.dev](https://app.useatlas.dev) Enterprise plans. Self-hosted deployments do not include enterprise features. 
* Atlas deployed with an [internal database](/deployment/deploy) (`DATABASE_URL`) * Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) * Admin role (`admin`, `owner`, or `platform_admin`) *** What gets branded [#what-gets-branded] When white-labeling is active in the admin console: | Element | Default | White-labeled | | ---------------- | ------------------ | ------------------ | | Sidebar logo | Atlas triangle SVG | Your logo URL | | Sidebar title | "Atlas" | Your logo text | | Sidebar subtitle | "Admin Console" | Hidden (or custom) | | Favicon | Atlas default | Your favicon URL | | Page title | "Atlas" | Your logo text | The public branding endpoint (`GET /api/v1/branding`) is available for custom integrations including embedded widgets. Configuring branding [#configuring-branding] Admin UI [#admin-ui] 1. Navigate to **Admin Console → Branding** 2. Fill in the fields: * **Logo URL** — URL to your logo image (PNG, SVG, or JPEG) * **Logo Text** — Text next to the logo (e.g. your company name) * **Primary Color** — 6-digit hex color (e.g. `#FF5500`) * **Favicon URL** — URL to your custom favicon * **Hide Atlas Branding** — Toggle to remove "Atlas" text 3. Preview changes in the live preview section 4. Click **Save** The PUT endpoint performs a full replacement. Any field not included in the request is reset to its default (null or false). To preserve existing values, send all fields. API [#api] ```bash # Get current branding curl -s "$ATLAS_API_URL/api/v1/admin/branding" \ -H "Cookie: $SESSION_COOKIE" | jq . # Set branding curl -s -X PUT "$ATLAS_API_URL/api/v1/admin/branding" \ -H "Cookie: $SESSION_COOKIE" \ -H "Content-Type: application/json" \ -d '{ "logoUrl": "https://example.com/logo.png", "logoText": "Acme Analytics", "primaryColor": "#FF5500", "faviconUrl": "https://example.com/favicon.ico", "hideAtlasBranding": true }' | jq . 
# Reset to Atlas defaults curl -s -X DELETE "$ATLAS_API_URL/api/v1/admin/branding" \ -H "Cookie: $SESSION_COOKIE" | jq . ``` Public endpoint [#public-endpoint] The frontend fetches branding from a public endpoint that does not require admin auth: ```bash curl -s "$ATLAS_API_URL/api/v1/branding" \ -H "Cookie: $SESSION_COOKIE" | jq . ``` This endpoint resolves the workspace from the session and returns only public-safe branding fields (no internal IDs or timestamps). Configuration reference [#configuration-reference] | Field | Type | Description | | ------------------- | ---------------- | ------------------------------------------ | | `logoUrl` | `string \| null` | URL to custom logo image | | `logoText` | `string \| null` | Text displayed next to or instead of logo | | `primaryColor` | `string \| null` | 6-digit hex color (e.g. `#FF5500`) | | `faviconUrl` | `string \| null` | URL to custom favicon | | `hideAtlasBranding` | `boolean` | Remove "Atlas" text from the admin sidebar | Branding is stored per-organization in the internal database. Each workspace can have its own branding configuration. Setting a field to `null` clears it and reverts to the Atlas default for that element. Resetting branding [#resetting-branding] To revert to Atlas defaults, either: * Click **Reset to defaults** in the admin branding page * Send `DELETE /api/v1/admin/branding` This removes all custom branding for the workspace. --- # Plugin Cookbook (/plugins/cookbook) Recipes for real-world plugin scenarios. The [Authoring Guide](/plugins/authoring-guide) covers how to build a plugin; this page covers how to handle the messy parts. For an overview of all official plugins, see the [Plugin Directory](/plugins/overview). Caching Strategies [#caching-strategies] In-Memory Cache with TTL [#in-memory-cache-with-ttl] Context plugins call `load()` on every agent invocation. Hitting an external API or database each time is wasteful. 
Cache the result with a TTL: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; export default definePlugin({ id: "cached-context", types: ["context"], version: "1.0.0", contextProvider: { _cache: null as string | null, _cacheExpiry: 0, _ttlMs: 5 * 60 * 1000, // 5 minutes async load() { if (this._cache && Date.now() < this._cacheExpiry) { return this._cache; } // Fetch from your source — API, database, filesystem, etc. const response = await fetch("https://internal.example.com/glossary"); const terms = (await response.json()) as { term: string; definition: string }[]; const markdown = terms.map((t) => `- **${t.term}**: ${t.definition}`).join("\n"); this._cache = `## Company Glossary\n\n${markdown}`; this._cacheExpiry = Date.now() + this._ttlMs; return this._cache; }, async refresh() { this._cache = null; this._cacheExpiry = 0; }, }, }); ``` Key points: * **`refresh()`** is called when the semantic layer reloads or via the admin UI. It clears the cache so the next `load()` fetches fresh data. * Keep the TTL short enough to pick up changes, long enough to avoid hammering your source. 5 minutes is a good default. * For size-bounded caches (e.g. many keys), use a `Map` with LRU eviction instead of a single string. LRU Cache for Multiple Keys [#lru-cache-for-multiple-keys] When a context plugin serves different content based on runtime conditions (e.g. 
per-datasource context), use a bounded map:

```typescript
import { definePlugin } from "@useatlas/plugin-sdk";

const MAX_CACHE_ENTRIES = 50;

function lruSet(cache: Map<string, string>, key: string, value: string) {
  if (cache.size >= MAX_CACHE_ENTRIES) {
    // Map iteration order is insertion order — delete the oldest
    const oldest = cache.keys().next().value!;
    cache.delete(oldest);
  }
  cache.set(key, value);
}

export default definePlugin({
  id: "multi-source-context",
  types: ["context"],
  version: "1.0.0",
  contextProvider: {
    _cache: new Map<string, string>(),
    async load() {
      // In practice, the key might come from a config value or environment
      const key = "default";
      const cached = this._cache.get(key);
      if (cached) return cached;
      const result = await fetchContextForSource(key);
      lruSet(this._cache, key, result);
      return result;
    },
    async refresh() {
      this._cache.clear();
    },
  },
});

async function fetchContextForSource(source: string): Promise<string> {
  // Your fetching logic here
  return `Context for ${source}`;
}
```

Error Handling [#error-handling]

Fatal vs Degraded: `initialize()` vs `healthCheck()` [#fatal-vs-degraded-initialize-vs-healthcheck]

The plugin lifecycle has two distinct error surfaces. Getting them right is the difference between a plugin that blocks startup when misconfigured and one that gracefully degrades when an external service is down.

**Throw from `initialize()`** when the plugin *cannot possibly work* — missing credentials, invalid config, unreachable required service. This blocks server startup, which is what you want: fail fast before serving traffic.
```typescript
import { definePlugin } from "@useatlas/plugin-sdk";

export default definePlugin({
  id: "strict-context",
  types: ["context"],
  version: "1.0.0",
  contextProvider: {
    async load() {
      return "";
    },
  },
  async initialize(ctx) {
    // Fatal: the API key is structurally required
    const apiKey = ctx.config["MY_API_KEY"] as string | undefined;
    if (!apiKey) {
      throw new Error("MY_API_KEY is required — set it in atlas.config.ts");
    }
    // Fatal: verify the service is reachable at boot
    const response = await fetch("https://api.example.com/health", {
      headers: { Authorization: `Bearer ${apiKey}` },
      signal: AbortSignal.timeout(10_000),
    });
    if (!response.ok) {
      throw new Error(`API health check failed: ${response.status}`);
    }
    ctx.logger.info("Plugin initialized, API reachable");
  },
});
```

**Return `{ healthy: false }` from `healthCheck()`** for runtime degradation — the plugin initialized fine but the external service went down, latency spiked, or a transient error occurred. Atlas keeps running; the health endpoint reports the degradation.

```typescript
import type { PluginHealthResult } from "@useatlas/plugin-sdk";

async healthCheck(): Promise<PluginHealthResult> {
  const start = performance.now();
  try {
    const response = await fetch("https://api.example.com/health", {
      signal: AbortSignal.timeout(5_000),
    });
    const latencyMs = Math.round(performance.now() - start);
    if (!response.ok) {
      return { healthy: false, message: `API returned ${response.status}`, latencyMs };
    }
    return { healthy: true, latencyMs };
  } catch (err) {
    return {
      healthy: false,
      message: err instanceof Error ? err.message : String(err),
      latencyMs: Math.round(performance.now() - start),
    };
  }
}
```

Never throw from `healthCheck()` or `teardown()`. These methods must always return a result. Throwing from health checks crashes the health probe; throwing from teardown prevents other plugins from cleaning up.
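The never-throw rule for teardown can be enforced mechanically with a small wrapper. A hedged sketch — the wrapper and its names are ours; only the never-throw contract comes from the lifecycle rules above:

```typescript
type Teardown = () => Promise<void>;

// Wraps a teardown so a thrown error is logged instead of propagated —
// a throwing teardown would prevent other plugins from cleaning up.
function safeTeardown(fn: Teardown, logError: (msg: string) => void): Teardown {
  return async () => {
    try {
      await fn();
    } catch (err) {
      logError(err instanceof Error ? err.message : String(err));
    }
  };
}
```

Applying the same try/catch discipline inside `healthCheck()` (returning `{ healthy: false }` from the catch, as in the example above) keeps both lifecycle methods throw-free.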
Retry with Backoff [#retry-with-backoff] For datasource plugins that connect to flaky services, add retry logic to the connection factory: ```typescript import type { PluginDBConnection, PluginQueryResult } from "@useatlas/plugin-sdk"; function withRetry( conn: PluginDBConnection, maxRetries = 3, baseDelayMs = 500, ): PluginDBConnection { return { async query(sql: string, timeoutMs?: number): Promise<PluginQueryResult> { let lastError: unknown; for (let attempt = 0; attempt <= maxRetries; attempt++) { try { return await conn.query(sql, timeoutMs); } catch (err) { lastError = err; if (attempt < maxRetries) { const delay = baseDelayMs * 2 ** attempt; await new Promise((resolve) => setTimeout(resolve, delay)); } } } throw lastError; }, close: () => conn.close(), }; } ``` Use it in your connection factory: ```typescript connection: { create: () => withRetry(createMyConnection(config), 3, 500), dbType: "postgres", }, ``` Credential Management [#credential-management] Config-Driven Credentials [#config-driven-credentials] Always pass credentials through plugin config, not by reading `process.env` inside the plugin. This makes dependencies explicit and testable: ```typescript // atlas.config.ts — credentials are explicit, visible in one place import { defineConfig } from "@atlas/api/lib/config"; import { myPlugin } from "./plugins/my-plugin"; export default defineConfig({ plugins: [ myPlugin({ apiKey: process.env.MY_API_KEY!, apiSecret: process.env.MY_API_SECRET!, }), ], }); ``` Never commit credentials to version control. Use environment variables in `atlas.config.ts` and add `.env` to `.gitignore`. Multi-Credential Plugins (OAuth Refresh) [#multi-credential-plugins-oauth-refresh] Some plugins need to manage rotating credentials — e.g. OAuth tokens with refresh flows. 
Keep the token state internal and refresh transparently: ```typescript import { z } from "zod"; import { createPlugin } from "@useatlas/plugin-sdk"; import type { AtlasContextPlugin, PluginHealthResult } from "@useatlas/plugin-sdk"; const configSchema = z.object({ clientId: z.string().min(1), clientSecret: z.string().min(1), refreshToken: z.string().min(1), tokenUrl: z.string().url(), }); type OAuthConfig = z.infer<typeof configSchema>; interface TokenState { accessToken: string; expiresAt: number; } async function refreshAccessToken(config: OAuthConfig): Promise<TokenState> { const response = await fetch(config.tokenUrl, { method: "POST", headers: { "Content-Type": "application/x-www-form-urlencoded" }, body: new URLSearchParams({ grant_type: "refresh_token", client_id: config.clientId, client_secret: config.clientSecret, refresh_token: config.refreshToken, }), }); if (!response.ok) { throw new Error(`Token refresh failed: ${response.status}`); } const data = (await response.json()) as { access_token: string; expires_in: number }; return { accessToken: data.access_token, expiresAt: Date.now() + data.expires_in * 1000 - 60_000, // 1 min buffer }; } export const oauthContextPlugin = createPlugin<OAuthConfig, AtlasContextPlugin>({ configSchema, create(config) { let tokenState: TokenState | null = null; async function getToken(): Promise<string> { if (!tokenState || Date.now() >= tokenState.expiresAt) { tokenState = await refreshAccessToken(config); } return tokenState.accessToken; } return { id: "oauth-context", types: ["context"] as const, version: "1.0.0", config, contextProvider: { async load() { const token = await getToken(); const response = await fetch("https://api.example.com/context", { headers: { Authorization: `Bearer ${token}` }, }); return response.text(); }, async refresh() { // Force token refresh on next load tokenState = null; }, }, async initialize(ctx) { // Verify credentials work at boot try { tokenState = await refreshAccessToken(config); ctx.logger.info("OAuth credentials verified"); } catch (err) { throw new Error( 
`OAuth initialization failed: ${err instanceof Error ? err.message : err}`, ); } }, async healthCheck(): Promise<PluginHealthResult> { try { await getToken(); return { healthy: true }; } catch (err) { return { healthy: false, message: err instanceof Error ? err.message : String(err), }; } }, }; }, }); ``` Credential Rotation Without Restart [#credential-rotation-without-restart] For credentials that rotate externally (e.g. vault-managed secrets), read the current value on each use rather than capturing it at startup: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; export default definePlugin({ id: "vault-context", types: ["context"], version: "1.0.0", contextProvider: { async load() { // Read the current secret value each time — supports external rotation const apiKey = process.env.VAULT_MANAGED_API_KEY; if (!apiKey) return ""; const response = await fetch("https://api.example.com/data", { headers: { Authorization: `Bearer ${apiKey}` }, }); return response.text(); }, }, async initialize(ctx) { // Verify the env var exists at boot, but don't capture the value if (!process.env.VAULT_MANAGED_API_KEY) { throw new Error("VAULT_MANAGED_API_KEY must be set"); } ctx.logger.info("Vault-managed credential detected"); }, }); ``` This is the one exception to the "config-driven credentials" rule. When an external secret manager rotates the value, reading `process.env` at call time ensures you always use the latest credential. Document this pattern clearly so operators know the env var must be set. Hook Recipes [#hook-recipes] Hooks intercept agent lifecycle events. Define them on any plugin type via the `hooks` property. Each hook entry has an optional `matcher` (return `true` to run) and a `handler`. 
Audit Logging via `afterQuery` [#audit-logging-via-afterquery] Log every query with its results and duration: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; import type { AtlasPluginContext } from "@useatlas/plugin-sdk"; export default definePlugin({ id: "audit-logger", types: ["context"], version: "1.0.0", contextProvider: { async load() { return ""; }, }, hooks: { afterQuery: [{ handler: (ctx) => { // ctx: { sql, connectionId?, result, durationMs } console.log(JSON.stringify({ event: "query_executed", sql: ctx.sql, connectionId: ctx.connectionId, rowCount: ctx.result.rows.length, durationMs: ctx.durationMs, timestamp: new Date().toISOString(), })); }, }], }, }); ``` For persistent audit logging, write to the internal database: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; import type { AtlasPluginContext } from "@useatlas/plugin-sdk"; let db: AtlasPluginContext["db"] = null; export default definePlugin({ id: "db-audit-logger", types: ["context"], version: "1.0.0", contextProvider: { async load() { return ""; }, }, schema: { plugin_query_audit: { fields: { sql: { type: "string", required: true }, connection_id: { type: "string" }, row_count: { type: "number", required: true }, duration_ms: { type: "number", required: true }, executed_at: { type: "date", required: true }, }, }, }, async initialize(ctx) { db = ctx.db; if (!db) { ctx.logger.warn("No internal DB — audit logs will be skipped"); } }, hooks: { afterQuery: [{ handler: async (ctx) => { if (!db) return; await db.execute( `INSERT INTO plugin_query_audit (sql, connection_id, row_count, duration_ms, executed_at) VALUES ($1, $2, $3, $4, NOW())`, [ctx.sql, ctx.connectionId ?? null, ctx.result.rows.length, ctx.durationMs], ); }, }], }, }); ``` Query Rewriting via `beforeQuery` (Tenant Isolation) [#query-rewriting-via-beforequery-tenant-isolation] Inject a `WHERE` clause to scope queries to the current tenant. 
`beforeQuery` handlers can return `{ sql }` to rewrite the query or throw to reject it: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; const TENANT_ID = process.env.ATLAS_TENANT_ID; export default definePlugin({ id: "tenant-filter", types: ["context"], version: "1.0.0", contextProvider: { async load() { return ""; }, }, async initialize(ctx) { if (!TENANT_ID) { throw new Error("ATLAS_TENANT_ID is required for tenant isolation"); } ctx.logger.info(`Tenant isolation active for tenant: ${TENANT_ID}`); }, hooks: { beforeQuery: [{ handler: (ctx) => { // Simple approach: wrap the original query in a CTE with a filter. // WARNING: This uses string interpolation for brevity. In production, // use a SQL AST rewriter or parameterized approach to avoid injection risks. const wrapped = `WITH _original AS (${ctx.sql}) SELECT * FROM _original WHERE tenant_id = '${TENANT_ID}'`; return { sql: wrapped }; }, }], }, }); ``` The CTE-wrapping approach shown above is a simplified example. For production tenant isolation, use Atlas's built-in RLS support (`ATLAS_RLS_ENABLED`, `ATLAS_RLS_COLUMN`, `ATLAS_RLS_CLAIM`) which injects `WHERE` clauses at the validation layer — after all plugin hooks, so plugins cannot strip the filter. Rate Limiting [#rate-limiting] Track query counts per time window and reject queries that exceed the limit: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; const WINDOW_MS = 60_000; // 1 minute const MAX_QUERIES = 100; const queryLog: number[] = []; export default definePlugin({ id: "rate-limiter", types: ["context"], version: "1.0.0", contextProvider: { async load() { return ""; }, }, hooks: { beforeQuery: [{ handler: () => { const now = Date.now(); // Remove entries outside the window while (queryLog.length > 0 && queryLog[0]! 
< now - WINDOW_MS) { queryLog.shift(); } if (queryLog.length >= MAX_QUERIES) { throw new Error(`Rate limit exceeded: ${MAX_QUERIES} queries per minute`); } queryLog.push(now); }, }], }, }); ``` Data Masking via `beforeQuery` [#data-masking-via-beforequery] Mask sensitive columns by rewriting the SQL to replace them with redacted values at the database level: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; const MASKED_COLUMNS = new Set(["email", "phone", "ssn", "credit_card"]); export default definePlugin({ id: "data-masker", types: ["context"], version: "1.0.0", contextProvider: { async load() { return [ "## Data Masking", "The following columns are redacted in query results: email, phone, ssn, credit_card.", "Do not SELECT these columns directly — they will appear as '***REDACTED***'.", ].join("\n"); }, }, hooks: { beforeQuery: [{ handler: (ctx) => { // Replace masked column references with redacted literals. // This is a simplified regex approach — for production use, // consider a SQL AST rewriter for reliable column detection. let rewritten = ctx.sql; for (const col of MASKED_COLUMNS) { const pattern = new RegExp(`\\b${col}\\b`, "gi"); rewritten = rewritten.replace(pattern, `'***REDACTED***' AS ${col}`); } if (rewritten !== ctx.sql) { return { sql: rewritten }; } }, }], }, }); ``` This uses `beforeQuery` (a mutable hook) to rewrite SQL before execution, rather than mutating results after the fact. The regex approach is simplified — for production, use a SQL AST parser to reliably detect column references vs string literals. Request Observation via `onRequest` [#request-observation-via-onrequest] Log or monitor incoming HTTP requests. 
Note that `onRequest` hooks are observation-only — they cannot block or modify requests: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; export default definePlugin({ id: "custom-auth-check", types: ["context"], version: "1.0.0", contextProvider: { async load() { return ""; }, }, hooks: { onRequest: [{ matcher: (ctx) => ctx.path.startsWith("/api/"), handler: (ctx) => { const apiKey = ctx.headers["x-custom-api-key"]; if (!apiKey) { // onRequest hooks are observation-only — log the event console.warn(`Missing X-Custom-API-Key on ${ctx.method} ${ctx.path}`); } }, }], }, }); ``` Compliance Gate via `beforeToolCall` [#compliance-gate-via-beforetoolcall] Block or modify tool calls based on business rules. `beforeToolCall` fires before every tool execution in the agent loop — it receives the tool name, args, and request context. Return `{ args }` to rewrite, throw to reject, or return void to pass through: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; const RESTRICTED_TABLES = new Set(["salary", "ssn_records", "credit_cards"]); export default definePlugin({ id: "compliance-gate", types: ["context"], version: "1.0.0", contextProvider: { async load() { return ""; }, }, hooks: { beforeToolCall: [{ // Only intercept SQL execution, not explore commands matcher: (ctx) => ctx.toolName === "executeSQL", handler: (ctx) => { const sql = (ctx.args.sql as string) ?? ""; for (const table of RESTRICTED_TABLES) { if (sql.toLowerCase().includes(table)) { throw new Error( `Access to ${table} is restricted by compliance policy`, ); } } }, }], }, }); ``` Cost Tracking via `afterToolCall` [#cost-tracking-via-aftertoolcall] Observe every tool call to log usage metrics. `afterToolCall` fires after execution with the result. 
Return `{ result }` to modify, or return void to observe: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; export default definePlugin({ id: "tool-usage-tracker", types: ["context"], version: "1.0.0", contextProvider: { async load() { return ""; }, }, hooks: { afterToolCall: [{ handler: (ctx) => { console.log(JSON.stringify({ event: "tool_call", tool: ctx.toolName, userId: ctx.context.userId, conversationId: ctx.context.conversationId, stepCount: ctx.context.toolCallCount, timestamp: new Date().toISOString(), })); }, }], }, }); ``` Tool Call Rate Limiting via `beforeToolCall` [#tool-call-rate-limiting-via-beforetoolcall] Limit the number of tool calls per agent run to control costs: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; const MAX_TOOL_CALLS = 15; export default definePlugin({ id: "tool-rate-limiter", types: ["context"], version: "1.0.0", contextProvider: { async load() { return ""; }, }, hooks: { beforeToolCall: [{ handler: (ctx) => { if (ctx.context.toolCallCount > MAX_TOOL_CALLS) { throw new Error( `Tool call limit exceeded (${MAX_TOOL_CALLS}). ` + "Please refine your question to require fewer queries.", ); } }, }], }, }); ``` Custom Query Validation [#custom-query-validation] Replacing the SQL Validation Pipeline [#replacing-the-sql-validation-pipeline] Non-SQL datasources (Salesforce SOQL, GraphQL, MongoDB MQL) need their own validation instead of the standard SQL validation pipeline. 
Use `connection.validate` to replace it entirely: ```typescript import { createPlugin } from "@useatlas/plugin-sdk"; import type { QueryValidationResult } from "@useatlas/plugin-sdk"; import { z } from "zod"; const SOQL_FORBIDDEN = [/\b(DELETE|INSERT|UPDATE|UPSERT|UNDELETE|MERGE)\b/i]; function validateSOQL(query: string): QueryValidationResult { const trimmed = query.trim(); if (!trimmed) return { valid: false, reason: "Empty query" }; // Must start with SELECT if (!/^\s*SELECT\b/i.test(trimmed)) { return { valid: false, reason: "Only SELECT queries are allowed in SOQL" }; } // Check for forbidden DML keywords for (const pattern of SOQL_FORBIDDEN) { if (pattern.test(trimmed)) { return { valid: false, reason: `Forbidden operation: ${pattern.source}` }; } } return { valid: true }; } export const soqlPlugin = createPlugin({ configSchema: z.object({ instanceUrl: z.string().url(), accessToken: z.string().min(1), }), create: (config) => ({ id: "soql-datasource", types: ["datasource"] as const, version: "1.0.0", config, connection: { create: () => createSOQLConnection(config), dbType: "salesforce", validate: validateSOQL, }, dialect: [ "This datasource uses SOQL (Salesforce Object Query Language).", "- Use relationship queries instead of JOINs.", "- No SELECT * — always list specific fields.", ].join("\n"), }), }); function createSOQLConnection(config: { instanceUrl: string; accessToken: string }) { // Your connection factory here throw new Error("Not implemented — replace with your driver"); } ``` Key points: * **`validate` completely replaces** the standard 4-layer SQL validation (empty check, regex guard, AST parse, table whitelist). It is your responsibility to enforce safety. * **`reason` is user-facing** — it appears in error responses shown to the agent and in audit logs. * **Auto-LIMIT is skipped** for custom-validated connections since non-SQL languages may not support `LIMIT`. 
* **RLS injection is skipped** for custom-validated connections since the SQL rewriter can't parse non-SQL queries. * **Hooks still fire** — `beforeQuery` can rewrite the query and the rewritten query is re-validated through your custom validator. * When a custom `validate` function is provided, `parserDialect` and `forbiddenPatterns` are ignored. Async Validation [#async-validation] Validators can be asynchronous — useful when validation requires an external call (e.g. checking a schema service or permission system): ```typescript connection: { create: () => myConn, dbType: "custom-api", validate: async (query) => { const response = await fetch("https://schema.internal/validate", { method: "POST", body: JSON.stringify({ query }), headers: { "Content-Type": "application/json" }, signal: AbortSignal.timeout(5000), }); if (!response.ok) { return { valid: false, reason: "Schema service unavailable" }; } const result = await response.json() as { allowed: boolean; message?: string }; return result.allowed ? { valid: true } : { valid: false, reason: result.message ?? "Query rejected by schema service" }; }, }, ``` Async validators add latency to every query. Prefer synchronous validation when possible. If you must call an external service, add a timeout and consider caching the schema locally. 
Complete Plugin with SOQL Length-Limit Validator [#complete-plugin-with-soql-length-limit-validator] A full datasource plugin that enforces Salesforce SOQL limits: query length cap, SELECT-only, and forbidden DML keywords: ```typescript import { z } from "zod"; import { createPlugin } from "@useatlas/plugin-sdk"; import type { AtlasDatasourcePlugin, PluginDBConnection, PluginQueryResult, QueryValidationResult, } from "@useatlas/plugin-sdk"; const SOQL_MAX_LENGTH = 20_000; // Salesforce SOQL character limit const SOQL_FORBIDDEN = [/\b(DELETE|INSERT|UPDATE|UPSERT|UNDELETE|MERGE)\b/i]; function validateSOQL(query: string): QueryValidationResult { const trimmed = query.trim(); if (!trimmed) return { valid: false, reason: "Empty query" }; if (trimmed.length > SOQL_MAX_LENGTH) { return { valid: false, reason: `SOQL query exceeds ${SOQL_MAX_LENGTH} character limit (${trimmed.length} chars)`, }; } if (!/^\s*SELECT\b/i.test(trimmed)) { return { valid: false, reason: "Only SELECT queries are allowed in SOQL" }; } for (const pattern of SOQL_FORBIDDEN) { if (pattern.test(trimmed)) { return { valid: false, reason: `Forbidden SOQL operation: ${pattern.source}` }; } } return { valid: true }; } const configSchema = z.object({ instanceUrl: z.string().url(), accessToken: z.string().min(1), }); type SalesforceConfig = z.infer<typeof configSchema>; function createSOQLConnection(config: SalesforceConfig): PluginDBConnection { return { async query(soql: string, timeoutMs?: number): Promise<PluginQueryResult> { const response = await fetch( `${config.instanceUrl}/services/data/v59.0/query?q=${encodeURIComponent(soql)}`, { headers: { Authorization: `Bearer ${config.accessToken}` }, signal: timeoutMs ? AbortSignal.timeout(timeoutMs) : undefined, }, ); if (!response.ok) throw new Error(`SOQL query failed: ${response.status}`); const data = (await response.json()) as { records: Record<string, unknown>[] }; const rows = data.records; const columns = rows.length > 0 ? 
Object.keys(rows[0]!).filter((k) => k !== "attributes") : []; return { columns, rows }; }, async close() {}, }; } export const salesforcePlugin = createPlugin< SalesforceConfig, AtlasDatasourcePlugin >({ configSchema, create: (config) => ({ id: "salesforce-soql", types: ["datasource"] as const, version: "1.0.0", config, connection: { create: () => createSOQLConnection(config), dbType: "salesforce", validate: validateSOQL, }, dialect: [ "This datasource uses SOQL (Salesforce Object Query Language).", "- Use relationship queries instead of JOINs (e.g. Account.Name).", "- No SELECT * — always list specific fields.", "- Maximum query length: 20,000 characters.", ].join("\n"), }), }); ``` Register in `atlas.config.ts`: ```typescript plugins: [ salesforcePlugin({ instanceUrl: process.env.SF_INSTANCE_URL!, accessToken: process.env.SF_ACCESS_TOKEN!, }), ], ``` Key points: * **`validate` completely replaces** the standard 4-layer SQL validation — it is your responsibility to enforce safety * **`reason` is user-facing** — it appears in error responses shown to the agent and in audit logs * **Auto-LIMIT and RLS are skipped** for custom-validated connections since non-SQL languages may not support them * **Hooks still fire** — queries rewritten by `beforeQuery` hooks are re-validated through `validateSOQL` Advanced Patterns [#advanced-patterns] Registering Custom Tools [#registering-custom-tools] Plugins can add tools to the agent via `ctx.tools.register()` in `initialize()`. 
The tool becomes available to the agent alongside the built-in `explore` and `executeSQL` tools: ```typescript import { z } from "zod"; import { definePlugin } from "@useatlas/plugin-sdk"; import { tool } from "@useatlas/plugin-sdk/ai"; export default definePlugin({ id: "inventory-lookup", types: ["context"], version: "1.0.0", contextProvider: { async load() { return "## Inventory Tool\nUse `lookupInventory` to check stock levels by SKU."; }, }, async initialize(ctx) { ctx.tools.register({ name: "lookupInventory", description: "Check current inventory levels for a product SKU", tool: tool({ description: "Look up current inventory by SKU", inputSchema: z.object({ sku: z.string().describe("Product SKU code"), }), execute: async ({ sku }) => { const response = await fetch( `https://inventory.internal/api/stock/${encodeURIComponent(sku)}`, ); if (!response.ok) return { error: `SKU not found: ${sku}` }; return response.json(); }, }), }); ctx.logger.info("Inventory lookup tool registered"); }, }); ``` The context plugin's `load()` returns prompt guidance so the agent knows when and how to use the tool. The tool itself is registered via the tool registry during initialization. Using the Internal Database [#using-the-internal-database] Plugins can read and write to the Atlas internal database (`DATABASE_URL`) via `ctx.db`. 
This is useful for persistent state — settings, caches, plugin-specific data: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; import type { AtlasPluginContext, PluginHealthResult } from "@useatlas/plugin-sdk"; let db: AtlasPluginContext["db"] = null; export default definePlugin({ id: "query-cache", types: ["context"], version: "1.0.0", schema: { plugin_query_cache: { fields: { query_hash: { type: "string", required: true, unique: true }, result_json: { type: "string", required: true }, cached_at: { type: "date", required: true }, }, }, }, contextProvider: { async load() { return ""; }, }, async initialize(ctx) { db = ctx.db; if (!db) { ctx.logger.warn("No internal DB — query cache disabled"); return; } ctx.logger.info("Query cache plugin initialized"); }, hooks: { beforeQuery: [{ handler: async (ctx) => { if (!db) return; // Simple hash — Bun-specific. For portability, use: // crypto.createHash("sha256").update(ctx.sql).digest("hex") const hash = Bun.hash(ctx.sql).toString(36); const result = await db.query( "SELECT result_json FROM plugin_query_cache WHERE query_hash = $1", [hash], ); if (result.rows.length > 0) { // Cache hit — could be used for analytics, but can't short-circuit the query // (beforeQuery can only rewrite or reject, not return cached results) } }, }], afterQuery: [{ handler: async (ctx) => { if (!db) return; const hash = Bun.hash(ctx.sql).toString(36); // Bun-specific (see note above) await db.execute( `INSERT INTO plugin_query_cache (query_hash, result_json, cached_at) VALUES ($1, $2, NOW()) ON CONFLICT (query_hash) DO UPDATE SET result_json = $2, cached_at = NOW()`, [hash, JSON.stringify(ctx.result)], ); }, }], }, }); ``` `ctx.db` is `null` when `DATABASE_URL` is not set. Always check for `null` before using it — Atlas works without an internal database. Dynamic Entity Factories [#dynamic-entity-factories] Datasource plugins can discover entities at boot time instead of hardcoding them. 
Use an async factory function for `entities`: ```typescript import { z } from "zod"; import { createPlugin } from "@useatlas/plugin-sdk"; import type { AtlasDatasourcePlugin, PluginEntity } from "@useatlas/plugin-sdk"; const configSchema = z.object({ url: z.string().url(), schema: z.string().default("public"), }); type DynamicConfig = z.infer<typeof configSchema>; export const dynamicPlugin = createPlugin<DynamicConfig, AtlasDatasourcePlugin>({ configSchema, create(config) { return { id: "dynamic-datasource", types: ["datasource"] as const, version: "1.0.0", config, connection: { create: () => createConnectionFromUrl(config.url), dbType: "postgres", }, // Async factory — called once at boot, entities merged into the whitelist entities: async (): Promise<PluginEntity[]> => { const conn = createConnectionFromUrl(config.url); try { // Escape the schema name to prevent injection. // PluginDBConnection.query() doesn't support parameterized queries, // so validate the input or use your driver's parameterized API directly. const safeSchema = config.schema.replace(/'/g, "''"); const result = await conn.query( `SELECT table_name, obj_description((quote_ident(table_schema) || '.' || quote_ident(table_name))::regclass) AS description FROM information_schema.tables WHERE table_schema = '${safeSchema}' AND table_type = 'BASE TABLE'`, ); return result.rows.map((row) => ({ name: row.table_name as string, yaml: [ `table: ${row.table_name}`, `description: ${(row.description as string) || "Auto-discovered table"}`, "dimensions: {}", ].join("\n"), })); } finally { await conn.close(); } }, }; }, }); function createConnectionFromUrl(url: string) { // Your connection factory here throw new Error("Not implemented — replace with your driver"); } ``` Multi-Type Plugins [#multi-type-plugins] A single plugin can implement multiple types. 
For example, a plugin that provides both a datasource connection and context guidance: ```typescript import { z } from "zod"; import { createPlugin } from "@useatlas/plugin-sdk"; import type { AtlasDatasourcePlugin, AtlasContextPlugin, PluginHealthResult, } from "@useatlas/plugin-sdk"; const configSchema = z.object({ url: z.string().url(), dialect: z.string().default("Use APPROX_COUNT_DISTINCT for cardinality estimates."), }); type Config = z.infer<typeof configSchema>; // Intersection type for multi-type plugins type DatasourceAndContext = AtlasDatasourcePlugin & AtlasContextPlugin; export const multiPlugin = createPlugin<Config, DatasourceAndContext>({ configSchema, create(config) { return { id: "multi-type-example", types: ["datasource", "context"] as const, version: "1.0.0", config, // Datasource facet connection: { create: () => createMyConnection(config.url), dbType: "postgres", }, dialect: config.dialect, // Context facet contextProvider: { async load() { return `## Dialect Notes\n\n${config.dialect}`; }, }, async healthCheck(): Promise<PluginHealthResult> { try { const conn = createMyConnection(config.url); await conn.query("SELECT 1", 5000); await conn.close(); return { healthy: true }; } catch (err) { return { healthy: false, message: err instanceof Error ? err.message : String(err), }; } }, }; }, }); function createMyConnection(url: string) { throw new Error("Not implemented — replace with your driver"); } ``` Multi-Tenant Plugins [#multi-tenant-plugins] Serve different configurations per tenant by keying on a runtime identifier (e.g. 
an environment variable or request header): ```typescript import { z } from "zod"; import { createPlugin } from "@useatlas/plugin-sdk"; import type { AtlasDatasourcePlugin, PluginDBConnection } from "@useatlas/plugin-sdk"; const configSchema = z.object({ tenants: z.record( z.string(), // tenant ID z.object({ url: z.string().url(), schema: z.string().default("public"), }), ), defaultTenant: z.string(), }); type MultiTenantConfig = z.infer<typeof configSchema>; export const multiTenantPlugin = createPlugin< MultiTenantConfig, AtlasDatasourcePlugin >({ configSchema, create(config) { const connectionPool = new Map<string, PluginDBConnection>(); function getConnectionForTenant(tenantId: string): PluginDBConnection { const tenantConfig = config.tenants[tenantId]; if (!tenantConfig) { throw new Error(`Unknown tenant: ${tenantId}`); } let conn = connectionPool.get(tenantId); if (!conn) { conn = createTenantConnection(tenantConfig.url, tenantConfig.schema); connectionPool.set(tenantId, conn); } return conn; } return { id: "multi-tenant-datasource", types: ["datasource"] as const, version: "1.0.0", config, connection: { create: () => getConnectionForTenant(config.defaultTenant), dbType: "postgres", }, async initialize(ctx) { const tenantIds = Object.keys(config.tenants); ctx.logger.info( `Multi-tenant plugin initialized with ${tenantIds.length} tenant(s): ${tenantIds.join(", ")}`, ); }, async teardown() { const closes = [...connectionPool.values()].map((c) => c.close()); await Promise.all(closes); connectionPool.clear(); }, }; }, }); function createTenantConnection(url: string, schema: string): PluginDBConnection { throw new Error("Not implemented — replace with your driver"); } ``` Register with per-tenant connection strings: ```typescript // atlas.config.ts plugins: [ multiTenantPlugin({ defaultTenant: "acme", tenants: { acme: { url: process.env.ACME_DB_URL!, schema: "acme" }, globex: { url: process.env.GLOBEX_DB_URL!, schema: "globex" }, }, }), ], ``` --- # Plugin Composition (/plugins/composition) When you register 
multiple plugins in `atlas.config.ts`, Atlas wires them into the runtime in a specific order with clear rules for priority, chaining, and conflict resolution. This page covers how plugins compose. Multiple Datasource Plugins [#multiple-datasource-plugins] Each datasource plugin gets a unique connection registered under its plugin `id`. The default connection from `ATLAS_DATASOURCE_URL` coexists alongside plugin connections -- they don't replace it. ```typescript // atlas.config.ts — multiple datasource plugins coexist with the default connection import { defineConfig } from "@atlas/api/lib/config"; import { clickhousePlugin } from "@useatlas/clickhouse"; import { snowflakePlugin } from "@useatlas/snowflake"; export default defineConfig({ datasources: { default: { url: process.env.ATLAS_DATASOURCE_URL! }, // Primary datasource }, plugins: [ // Each plugin registers a connection under its plugin ID clickhousePlugin({ url: process.env.CLICKHOUSE_URL! }), snowflakePlugin({ account: process.env.SNOWFLAKE_ACCOUNT!, username: process.env.SNOWFLAKE_USER!, password: process.env.SNOWFLAKE_PASSWORD!, }), ], }); ``` At boot, `wireDatasourcePlugins` iterates over all healthy datasource plugins and calls `connection.create()` on each. The returned connection is registered in the `ConnectionRegistry` under the plugin's `id`: | Connection ID | Source | | -------------- | ------------------------------ | | `"default"` | `ATLAS_DATASOURCE_URL` env var | | `"clickhouse"` | ClickHouse plugin (plugin id) | | `"snowflake"` | Snowflake plugin (plugin id) | The agent uses `connectionId` when calling `executeSQL` to route queries to the right database. Plugin-provided entities (via the `entities` property) are merged into the table whitelist scoped to their connection -- so a ClickHouse table can't be queried through the Snowflake connection. 
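The wiring behavior described above can be sketched as a loop that registers each plugin's connection under its `id` and isolates failures per plugin. This is an illustration of the documented behavior with simplified types, not the actual `wireDatasourcePlugins` source:

```typescript
// Simplified stand-ins for the SDK types (illustrative only).
interface Conn { query(sql: string): Promise<unknown>; close(): Promise<void>; }
interface DatasourcePlugin { id: string; connection: { create(): Conn }; }

// Register each plugin's connection under the plugin id
// ("clickhouse", "snowflake", ...); return the ids that failed to wire.
function wireDatasources(
  plugins: DatasourcePlugin[],
  registry: Map<string, Conn>,
): string[] {
  const failed: string[] = [];
  for (const plugin of plugins) {
    try {
      registry.set(plugin.id, plugin.connection.create());
    } catch {
      // One bad plugin doesn't take down the others
      failed.push(plugin.id);
    }
  }
  return failed;
}
```

The per-plugin try/catch is what produces the failure-isolation behavior shown in the log excerpt below: a refused connection marks only that plugin unhealthy while the rest keep serving queries.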
If a datasource plugin provides a `dialect` string, it's injected into the agent's system prompt as dialect-specific guidance (e.g., "Use `SAFE_DIVIDE` instead of `/` for BigQuery"). Failure isolation [#failure-isolation] If one datasource plugin fails to connect, only that plugin is marked unhealthy. The others continue working: ``` [INFO] plugins:wiring Datasource plugin wired pluginId="clickhouse" [ERROR] plugins:wiring Failed to wire datasource plugin pluginId="snowflake" err="Connection refused" ``` The agent can still query the default connection and ClickHouse. Snowflake queries will fail with a clear error. Sandbox Plugin Priority [#sandbox-plugin-priority] When multiple sandbox plugins are registered, Atlas selects the one with the highest `priority` value. Higher numbers win. Built-in priority scale [#built-in-priority-scale] | Backend | Priority | Selection | | ------------------ | -------- | ----------------------------------------------- | | Vercel Sandbox | 100 | `ATLAS_RUNTIME=vercel` | | nsjail | 75 | `ATLAS_SANDBOX=nsjail` or auto-detected on PATH | | **Plugin default** | **60** | `SANDBOX_DEFAULT_PRIORITY` from the SDK | | Sidecar | 50 | `ATLAS_SANDBOX_URL` set | | just-bash | 0 | Fallback (dev only) | The priority values for built-in backends are reference numbers from the SDK -- they establish where plugin priorities sit relative to the built-in chain. Built-in backends are selected via a fixed precedence order, not numeric comparison. Plugin backends use `priority` for sorting among themselves. Sandbox plugins default to priority 60 (between nsjail and sidecar). 
Override `priority` to control placement: ```typescript import { definePlugin, SANDBOX_DEFAULT_PRIORITY } from "@useatlas/plugin-sdk"; export default definePlugin({ id: "e2b-sandbox", types: ["sandbox"], version: "1.0.0", sandbox: { priority: 90, // Higher than nsjail (75), lower than Vercel (100) async create(semanticRoot) { // Return an ExploreBackend implementation return { async exec(command) { // Execute in E2B sandbox... return { stdout: "", stderr: "", exitCode: 0 }; }, }; }, }, }); ``` Selection logic [#selection-logic] The selection runs in `getExploreBackend()` in `packages/api/src/lib/tools/explore.ts`: 1. All healthy sandbox plugins are collected from the registry 2. Sorted by `priority` descending (highest first) 3. Each is tried in order -- `sandbox.create(semanticRoot)` is called 4. The first one that succeeds becomes the active backend (cached for the process lifetime, unless invalidated by an infrastructure error) 5. If all plugins fail, Atlas falls through to the built-in detection chain (Vercel > nsjail explicit > sidecar > nsjail auto-detect > just-bash) When `ATLAS_SANDBOX=nsjail` is explicitly set, sandbox plugins are **skipped entirely**. The operator is explicitly requesting nsjail -- plugin backends won't override that. Two sandbox plugins [#two-sandbox-plugins] ```typescript // atlas.config.ts import { defineConfig } from "@atlas/api/lib/config"; import { e2bSandboxPlugin } from "@useatlas/e2b"; import { daytonaSandboxPlugin } from "@useatlas/daytona"; export default defineConfig({ plugins: [ e2bSandboxPlugin({ apiKey: process.env.E2B_API_KEY!, priority: 90 }), daytonaSandboxPlugin({ endpoint: process.env.DAYTONA_URL!, priority: 80 }), ], }); ``` Atlas tries E2B first (priority 90). If `create()` throws, it tries Daytona (priority 80). If both fail, the built-in chain takes over. Hook Execution Order [#hook-execution-order] Hooks fire in **plugin registration order** -- the order of the `plugins: []` array in `atlas.config.ts`. 
This applies to all hook types: `beforeQuery`, `afterQuery`, `beforeExplore`, `afterExplore`, `onRequest`, `onResponse`. Mutable hooks chain [#mutable-hooks-chain] `beforeQuery` and `beforeExplore` are **mutable** hooks. Each handler receives the context with the latest mutated value, and can return a mutation to pass forward: ``` Plugin A beforeQuery → { sql: "SELECT * FROM orders" } ↓ returns { sql: "SELECT * FROM orders WHERE tenant_id = 42" } Plugin B beforeQuery → { sql: "SELECT * FROM orders WHERE tenant_id = 42" } ↓ returns void (no mutation) Plugin C beforeQuery → { sql: "SELECT * FROM orders WHERE tenant_id = 42" } ↓ returns { sql: "SELECT * FROM orders WHERE tenant_id = 42 LIMIT 100" } Final SQL: "SELECT * FROM orders WHERE tenant_id = 42 LIMIT 100" ``` The rules: * **Return a mutation object** (e.g., `{ sql: "..." }` or `{ command: "..." }`) to rewrite the value for downstream hooks * **Return `void`/`undefined`** to pass through without changes * **Throw an error** to reject the operation entirely -- the chain stops and the query/command is denied * **Type mismatch** in the mutation is caught and logged as an error -- the mutation is ignored and the previous value is preserved * Only **healthy** plugins participate -- unhealthy plugins are skipped ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; export default definePlugin({ id: "tenant-filter", types: ["context"], version: "1.0.0", contextProvider: { async load() { return ""; }, }, hooks: { beforeQuery: [ { // Optional matcher -- skip queries that already have a WHERE clause matcher: (ctx) => !ctx.sql.toLowerCase().includes("where"), handler: (ctx) => { return { sql: `${ctx.sql} WHERE tenant_id = 42` }; }, }, ], }, }); ``` Observation hooks continue on error [#observation-hooks-continue-on-error] `afterQuery`, `afterExplore`, `onRequest`, and `onResponse` are **observation-only** hooks. Return values are discarded. 
If one handler throws, the error is caught and logged -- execution continues to the next handler: ``` Plugin A afterQuery → logs to analytics ✓ Plugin B afterQuery → throws Error("logging service down") → caught, logged, continues Plugin C afterQuery → writes audit record ✓ ``` This means observation hooks are safe for monitoring, logging, and analytics. A single plugin failure won't block the others. Matcher filtering [#matcher-filtering] Every hook entry supports an optional `matcher` function. When present, the handler only fires if `matcher` returns `true`: ```typescript hooks: { beforeQuery: [ { matcher: (ctx) => ctx.connectionId === "warehouse", handler: (ctx) => { // Only runs for queries targeting the "warehouse" connection return { sql: ctx.sql.replace(/SELECT \*/, "SELECT TOP 1000 *") }; }, }, ], afterQuery: [ { matcher: (ctx) => ctx.durationMs > 5000, handler: (ctx) => { console.warn(`Slow query (${ctx.durationMs}ms): ${ctx.sql}`); }, }, ], }, ``` If a matcher itself throws, the error is caught and logged, and that hook entry is skipped (not treated as a rejection). Plugin Type Constraints [#plugin-type-constraints] Multiple plugins of the same type [#multiple-plugins-of-the-same-type] You can register multiple plugins of the same type. This is the normal case for datasource plugins (connect to multiple databases) and context plugins (inject multiple context fragments): ```typescript export default defineConfig({ plugins: [ clickhousePlugin({ url: process.env.CLICKHOUSE_URL! }), snowflakePlugin({ account: "...", username: "...", password: "..." }), companyGlossaryPlugin(), teamContextPlugin({ teamId: "engineering" }), ], }); ``` Plugin IDs must be unique [#plugin-ids-must-be-unique] Every plugin must have a unique `id`. Duplicate IDs throw at two levels: 1. 
**Config validation** -- `validatePlugins()` in `config.ts` checks for duplicates before the server starts: ``` Error: plugin "my-plugin" (index 2) has duplicate id "my-plugin" (first seen at index 0) ``` 2. **Registry registration** -- `PluginRegistry.register()` throws if the ID is already registered: ``` Error: Plugin "my-plugin" is already registered ``` If you need two instances of the same plugin (e.g., two ClickHouse clusters), use different IDs: ```typescript export default defineConfig({ plugins: [ clickhousePlugin({ id: "clickhouse-prod", url: process.env.CH_PROD_URL! }), clickhousePlugin({ id: "clickhouse-staging", url: process.env.CH_STAGING_URL! }), ], }); ``` The plugin's `id` becomes the `connectionId` for datasource plugins. When the agent calls `executeSQL`, it targets a specific connection by ID. Multi-type plugins [#multi-type-plugins] A single plugin can implement multiple types by listing them in the `types` array: ```typescript export default definePlugin({ id: "salesforce", types: ["datasource", "action"], version: "1.0.0", connection: { /* ... */ }, actions: [ /* ... */ ], }); ``` The plugin participates in wiring for each type it implements. A plugin with `types: ["datasource", "action"]` goes through both `wireDatasourcePlugins` and `wireActionPlugins`. 
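The duplicate-id rule above amounts to a single pre-flight pass over the `plugins` array. A simplified sketch (the real `validatePlugins()` in `config.ts` performs additional shape checks beyond this):

```typescript
// Simplified stand-in for the duplicate-id portion of validatePlugins().
interface PluginLike { id: string }

function validatePlugins(plugins: PluginLike[]): void {
  const seen = new Map<string, number>();
  plugins.forEach((plugin, index) => {
    const firstIndex = seen.get(plugin.id);
    if (firstIndex !== undefined) {
      // Mirrors the error message shown above
      throw new Error(
        `plugin "${plugin.id}" (index ${index}) has duplicate id "${plugin.id}" (first seen at index ${firstIndex})`,
      );
    }
    seen.set(plugin.id, index);
  });
}
```

Because the check runs before the server starts, a bad config never reaches the registry.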
Lifecycle Summary [#lifecycle-summary]

Understanding the full lifecycle helps when debugging composition issues:

| Phase | Order | Behavior |
| ------------------ | ------------------------ | ------------------------------------------------------------------------- |
| **Registration** | Array order | `plugins.register()` -- duplicate IDs throw |
| **Initialization** | Array order | `plugin.initialize(ctx)` -- failures set "unhealthy", don't crash |
| **Wiring** | By type | Datasources, actions, interactions, context -- each type wired separately |
| **Hook dispatch** | Array order | Healthy plugins only, matchers applied per-entry |
| **Teardown** | **Reverse** order (LIFO) | `plugin.teardown()` -- errors logged, teardown continues |

**Registration order matters for hooks.** If Plugin A must see the original SQL before Plugin B rewrites it, register A before B in the `plugins` array. Teardown runs in reverse -- the last plugin registered is torn down first.

---

# Plugin Directory (/plugins/overview)

Atlas plugins extend the agent with new capabilities. Each plugin is a factory function registered in `atlas.config.ts`. The Plugin SDK (`@useatlas/plugin-sdk`) provides type definitions and helpers for all five plugin types.

Official Plugins [#official-plugins]

Install a Plugin [#install-a-plugin]

1\. Install the package [#1-install-the-package]

```bash
bun add @useatlas/clickhouse
```

Some plugins have **optional peer dependencies** for their database driver or SDK. Install peer dependencies separately:

```bash
# Example: ClickHouse plugin needs @clickhouse/client
bun add @clickhouse/client
```

Peer dependencies are optional at install time — the plugin loads them lazily at runtime. If a required driver is missing, you'll get a clear error message telling you exactly what to install (e.g.
"ClickHouse support requires the @clickhouse/client package. Install it with: bun add @clickhouse/client"). 2\. Register in `atlas.config.ts` [#2-register-in-atlasconfigts] Import the plugin factory and add it to the `plugins` array: ```typescript // atlas.config.ts — register plugins in the plugins array import { defineConfig } from "@atlas/api/lib/config"; import { clickhousePlugin } from "@useatlas/clickhouse"; export default defineConfig({ plugins: [ // Each plugin is a factory function — config is validated at startup clickhousePlugin({ url: process.env.CLICKHOUSE_URL! }), ], }); ``` Plugin config is validated at startup — invalid options (wrong URL format, missing required fields) fail fast with a clear error before the server starts. 3\. Start the server [#3-start-the-server] ```bash bun run dev ``` Atlas logs each registered plugin at startup. Check the logs to confirm your plugin loaded successfully. Troubleshooting [#troubleshooting] | Symptom | Cause | Fix | | ---------------------------------------------------- | ------------------------------------------- | --------------------------------------------------------------------------------------- | | `Plugin config validation failed` | Invalid config passed to the plugin factory | Check the error details — they list which fields failed and why | | `requires the X package. Install it with: bun add X` | Missing optional peer dependency | Run the suggested `bun add` command | | `is already registered` or `duplicate id` | Two plugins share the same `id` | Remove the duplicate from `atlas.config.ts` | | Import error on plugin | Package not installed or wrong export name | Verify `bun add @useatlas/` completed, check the import name in the plugin README | See the [Plugin Authoring Guide](/plugins/authoring-guide) for a step-by-step tutorial on creating custom plugins. For real-world patterns and advanced recipes, see the [Plugin Cookbook](/plugins/cookbook). 
Community Plugins [#community-plugins] Community plugins are welcome. If you've built a plugin you'd like to share, [open a pull request](https://github.com/AtlasDevHQ/atlas/pulls) or [start a discussion](https://github.com/AtlasDevHQ/atlas/discussions) on GitHub. --- # Plugin Authoring Guide (/plugins/authoring-guide) Step-by-step guide to building an Atlas plugin. We'll build a complete datasource plugin, then cover how the other four types differ. Choosing a Plugin Type [#choosing-a-plugin-type] | Type | Use when you want to... | Example | | --------------- | ------------------------------------------------------ | ------------------------------------------------- | | **Datasource** | Connect a new database or API as a query target | ClickHouse, Snowflake, Salesforce | | **Context** | Inject additional context into the agent's prompt | Company glossary, user preferences, external docs | | **Interaction** | Add a new surface for users to interact with Atlas | Slack bot, Discord bot, email handler | | **Action** | Let the agent perform write operations (with approval) | Create JIRA ticket, send email, update CRM | | **Sandbox** | Provide a custom code execution environment | E2B, Daytona, custom Docker runner | Prerequisites [#prerequisites] * `@useatlas/plugin-sdk` -- type definitions and helpers * `zod` -- config schema validation * `bun` -- runtime and test runner * An Atlas project with `atlas.config.ts` 1\. Scaffold [#1-scaffold] **Standalone plugin** (publishable to npm): ```bash bun create @useatlas/plugin my-datasource --type datasource ``` This creates a standalone `my-datasource/` directory with `package.json`, `tsconfig.json`, `src/index.ts`, tests, README, and LICENSE -- ready to publish. **In-monorepo plugin** (inside an Atlas project): ```bash bun run atlas -- plugin create my-datasource --type datasource ``` This creates `plugins/my-datasource/` with workspace references. 
Both generate the same structure:

```
my-datasource/
├── src/
│   ├── index.ts        # Plugin entry point
│   └── index.test.ts   # Test scaffold
├── package.json
├── tsconfig.json
└── README.md
```

Or create the files manually -- the CLI is a convenience, not a requirement.

2\. Config Schema [#2-config-schema]

Define what your plugin accepts using Zod:

```typescript
// src/config.ts
import { z } from "zod";

export const ConfigSchema = z.object({
  url: z
    .string()
    .min(1, "URL must not be empty")
    .refine(
      (u) => u.startsWith("postgresql://") || u.startsWith("postgres://"),
      "URL must start with postgresql:// or postgres://",
    ),
  poolSize: z.number().int().positive().max(500).optional(),
});

export type PluginConfig = z.infer<typeof ConfigSchema>;
```

The schema is validated at factory call time -- before the server starts. Invalid config fails fast.

3\. Connection Factory [#3-connection-factory]

Implement `PluginDBConnection` -- the interface Atlas uses to query your database:

```typescript
// src/connection.ts
import type { PluginDBConnection, PluginQueryResult } from "@useatlas/plugin-sdk";
import type { PluginConfig } from "./config";

export function createConnection(config: PluginConfig): PluginDBConnection {
  let Pool: typeof import("pg").Pool;
  try {
    ({ Pool } = require("pg"));
  } catch (err) {
    const isNotFound =
      err != null &&
      typeof err === "object" &&
      "code" in err &&
      (err as NodeJS.ErrnoException).code === "MODULE_NOT_FOUND";
    if (isNotFound) {
      throw new Error("This plugin requires the pg package. Install it with: bun add pg");
    }
    throw err;
  }

  const pool = new Pool({
    connectionString: config.url,
    max: config.poolSize ??
      10,
  });

  return {
    async query(sql: string, timeoutMs?: number): Promise<PluginQueryResult> {
      const client = await pool.connect();
      try {
        if (timeoutMs) {
          await client.query(`SET statement_timeout = ${timeoutMs}`);
        }
        const result = await client.query(sql);
        return {
          columns: result.fields.map((f) => f.name),
          rows: result.rows,
        };
      } finally {
        client.release();
      }
    },
    async close(): Promise<void> {
      await pool.end();
    },
  };
}
```

Key points:

* `query()` returns `{ columns: string[], rows: Record<string, unknown>[] }`
* `close()` cleans up resources
* Lazy-load the driver with `require()` + `MODULE_NOT_FOUND` handling so it can be an optional peer dependency

4\. Plugin Object [#4-plugin-object]

Wire everything together with `createPlugin()`, which validates config and returns a factory function. The `configSchema` can be any object with a `parse()` method -- Zod is recommended but not required (e.g. a custom validator that throws on invalid input works too). For plugins that don't need runtime configuration, use `definePlugin()` instead -- see [`createPlugin` vs `definePlugin`](#createplugin-vs-defineplugin) below.

```typescript
// src/index.ts
import { createPlugin } from "@useatlas/plugin-sdk";
import type { AtlasDatasourcePlugin, PluginHealthResult } from "@useatlas/plugin-sdk";
import { ConfigSchema, type PluginConfig } from "./config";
import { createConnection } from "./connection";

export function buildPlugin(config: PluginConfig): AtlasDatasourcePlugin {
  let cachedConnection: ReturnType<typeof createConnection> | undefined;

  return {
    id: "my-datasource",
    types: ["datasource"] as const,
    version: "1.0.0",
    name: "My DataSource",
    config,
    connection: {
      create: () => {
        if (!cachedConnection) {
          cachedConnection = createConnection(config);
        }
        return cachedConnection;
      },
      dbType: "postgres",
    },
    entities: [],
    dialect: "This datasource uses PostgreSQL. Use DATE_TRUNC() for date truncation.",

    // Called once during server startup. Throw to block startup (for fatal configuration errors).
    async initialize(ctx) {
      ctx.logger.info("My datasource plugin initialized");
    },

    // Called by `atlas doctor` and the admin API. Always return a result — never throw.
    // Return `{ healthy: false, message: '...' }` for recoverable issues.
    async healthCheck(): Promise<PluginHealthResult> {
      const start = performance.now();
      try {
        const conn = createConnection(config);
        await conn.query("SELECT 1", 5000);
        await conn.close();
        return { healthy: true, latencyMs: Math.round(performance.now() - start) };
      } catch (err) {
        return {
          healthy: false,
          message: err instanceof Error ? err.message : String(err),
          latencyMs: Math.round(performance.now() - start),
        };
      }
    },
  };
}

export const myPlugin = createPlugin({
  configSchema: ConfigSchema,
  create: buildPlugin,
});
```

5\. Register [#5-register]

Add to `atlas.config.ts`:

```typescript
import { defineConfig } from "@atlas/api/lib/config";
import { myPlugin } from "./plugins/my-datasource/src/index";

export default defineConfig({
  plugins: [
    myPlugin({ url: process.env.MY_DB_URL! }),
  ],
});
```

Never commit credentials to version control. Use environment variables (`process.env.MY_DB_URL`) in `atlas.config.ts` and add `.env` to `.gitignore`.

6\. Test [#6-test]

```bash
bun test plugins/my-datasource/src/index.test.ts
```

See [Testing](#8-testing) below for a full test example and patterns.

7\. Publish [#7-publish]

For npm packages:

```json
{
  "name": "atlas-plugin-my-datasource",
  "peerDependencies": {
    "@useatlas/plugin-sdk": ">=0.0.1",
    "pg": ">=8.0.0"
  },
  "peerDependenciesMeta": {
    "pg": { "optional": true }
  },
  "devDependencies": {
    "@useatlas/plugin-sdk": "^0.0.2"
  }
}
```

Convention: `@useatlas/plugin-sdk` goes in both `peerDependencies` (so consumers provide it) and `devDependencies` (so you can build and test locally). Database drivers go as optional peer dependencies.

8\. Testing [#8-testing]

Test config validation, plugin shape, and health checks. Use `bun test` for a single file or `bun run test` for the full suite.
```typescript
import { describe, test, expect } from "bun:test";
import { myPlugin } from "./index";

describe("my-datasource plugin", () => {
  test("validates config schema", () => {
    // Test that invalid config is rejected
    expect(() => myPlugin({ url: "" })).toThrow();
  });

  test("creates plugin with valid config", () => {
    const plugin = myPlugin({ url: "postgresql://localhost/test" });
    expect(plugin.id).toBe("my-datasource");
    expect(plugin.types).toContain("datasource");
  });

  test("health check reports status", async () => {
    const plugin = myPlugin({ url: "postgresql://localhost/test" });
    const health = await plugin.healthCheck?.();
    expect(health).toHaveProperty("healthy");
  });
});
```

Key testing patterns:

* **Config validation** — Verify that invalid configs throw at factory call time, not at runtime
* **Plugin shape** — Check `id`, `types`, `version`, and variant-specific properties (`connection`, `contextProvider`, `actions`, etc.)
* **Health checks** — Ensure `healthCheck()` returns `{ healthy: boolean }` and never throws (even when the service is unreachable)
* **Connection factory** — For datasource plugins, test that `connection.create()` returns a valid `PluginDBConnection`

Other Plugin Types [#other-plugin-types]

Context Plugin [#context-plugin]

Context plugins inject additional knowledge into the agent's system prompt. Implement `contextProvider.load()` to return a string that gets appended to the prompt, and optionally `contextProvider.refresh()` to support cache invalidation.

* **`load()`** — Returns a string (typically Markdown) that is appended to the agent's system prompt. Called on each agent invocation. Cache the result internally for performance.
* **`refresh()`** — Called when the semantic layer is reloaded or on manual refresh via the admin UI. Use it to clear any internal cache so the next `load()` picks up changes.
Here is a minimal example that injects a company glossary: ```typescript import { definePlugin } from "@useatlas/plugin-sdk"; export default definePlugin({ id: "company-glossary", types: ["context"], version: "1.0.0", name: "Company Glossary", contextProvider: { // Cache the loaded context to avoid re-reading on every request _cache: null as string | null, async load() { if (this._cache) return this._cache; // Load from any source: filesystem, database, API, etc. const terms = [ { term: "ARR", definition: "Annual Recurring Revenue — sum of all active subscription values annualized" }, { term: "MRR", definition: "Monthly Recurring Revenue — ARR / 12" }, { term: "churn", definition: "Percentage of customers who cancel within a billing period" }, ]; const lines = terms.map((t) => `- **${t.term}**: ${t.definition}`); this._cache = `## Company Glossary\n\n${lines.join("\n")}`; return this._cache; }, async refresh() { // Clear cache so next load() re-reads from source this._cache = null; }, }, async initialize(ctx) { ctx.logger.info("Company glossary context plugin initialized"); }, }); ``` The returned string from `load()` becomes part of the agent's system prompt, so the agent can use your glossary terms, user preferences, or any domain knowledge when interpreting questions and writing SQL. Interaction Plugin [#interaction-plugin] Interaction plugins add communication surfaces. They may mount Hono routes (Slack, webhooks) or manage non-HTTP transports (MCP stdio): ```typescript export default definePlugin({ id: "my-webhook", types: ["interaction"], version: "1.0.0", routes(app) { app.post("/webhooks/my-service", async (c) => { return c.json({ ok: true }); }); }, }); ``` Action Plugin [#action-plugin] Action plugins give the agent side-effects with approval controls. Actions require user approval before execution: the agent proposes the action, the user sees a confirmation card in the chat UI, and only after approval does `execute()` run. 
This prevents unintended writes. The approval mode controls who can approve:

* **`"manual"`** — Any user in the conversation can approve or reject
* **`"admin-only"`** — Only users with the `admin` role can approve
* **`"auto"`** — Executes immediately without approval (use sparingly)

Here is a complete example that creates a support ticket:

```typescript
import { z } from "zod";
import { createPlugin } from "@useatlas/plugin-sdk";
import { tool } from "@useatlas/plugin-sdk/ai";
import type { AtlasActionPlugin, PluginAction } from "@useatlas/plugin-sdk";

const ticketConfigSchema = z.object({
  apiUrl: z.string().url(),
  apiKey: z.string().min(1, "apiKey must not be empty"),
  defaultPriority: z.enum(["low", "medium", "high"]).default("medium"),
});

type TicketConfig = z.infer<typeof ticketConfigSchema>;

export const ticketPlugin = createPlugin({
  configSchema: ticketConfigSchema,
  create(config) {
    const action: PluginAction = {
      name: "createSupportTicket",
      description: "Create a support ticket from analysis findings",
      tool: tool({
        description: "Create a support ticket. Requires user approval before execution.",
        inputSchema: z.object({
          title: z.string().max(200).describe("Short summary of the issue"),
          body: z.string().describe("Detailed description with relevant data"),
          priority: z
            .enum(["low", "medium", "high"])
            .optional()
            .describe(`Priority level. Defaults to "${config.defaultPriority}"`),
        }),
        execute: async ({ title, body, priority }) => {
          // This only runs AFTER the user approves in the chat UI
          const response = await fetch(`${config.apiUrl}/tickets`, {
            method: "POST",
            headers: {
              Authorization: `Bearer ${config.apiKey}`,
              "Content-Type": "application/json",
            },
            body: JSON.stringify({ title, body, priority: priority ??
config.defaultPriority, }), }); if (!response.ok) { throw new Error(`Ticket API returned ${response.status}`); } const ticket = (await response.json()) as { id: string; url: string }; return { ticketId: ticket.id, url: ticket.url }; }, }), actionType: "ticket:create", reversible: false, defaultApproval: "manual", requiredCredentials: ["apiKey"], // ^ Values must match environment variable names (e.g. process.env.apiKey). // At startup, Atlas checks these env vars exist and logs a warning for any // that are missing (see validateActionCredentials in the ToolRegistry). // Missing credentials do not block startup — they produce warnings only. }; return { id: "ticket-action", types: ["action"] as const, version: "1.0.0", name: "Support Ticket Action", config, actions: [action], }; }, }); ``` Register it in `atlas.config.ts`: ```typescript plugins: [ ticketPlugin({ apiUrl: process.env.TICKET_API_URL!, apiKey: process.env.TICKET_API_KEY!, }), ], ``` Sandbox Plugin [#sandbox-plugin] Sandbox plugins provide isolation backends for the explore tool: ```typescript sandbox: { create(semanticRoot: string): PluginExploreBackend { return { async exec(command: string) { // Execute command in isolation, return { stdout, stderr, exitCode } }, async close() { /* cleanup */ }, }; }, priority: 60, }, security: { networkIsolation: true, filesystemIsolation: true, unprivilegedExecution: true, description: "My isolation mechanism...", }, ``` The `priority` field determines selection order when multiple backends are available. Higher values are tried first. 
Built-in priority scale: | Backend | Priority | Notes | | ------------------ | -------- | ------------------------------------------------------------- | | Vercel sandbox | 100 | Firecracker microVM (Vercel deployments only) | | nsjail | 75 | Linux namespace sandbox (explicit via `ATLAS_SANDBOX=nsjail`) | | **Plugin default** | **60** | `SANDBOX_DEFAULT_PRIORITY` from `@useatlas/plugin-sdk` | | Sidecar | 50 | HTTP-isolated container (set via `ATLAS_SANDBOX_URL`) | | just-bash | 0 | OverlayFs read-only fallback (dev only) | Plugin sandbox backends default to priority 60 (between nsjail and sidecar). Set a higher value to take precedence over built-in backends, or a lower value to act as a fallback. `createPlugin` vs `definePlugin` [#createplugin-vs-defineplugin] The SDK exports two helpers for authoring plugins. Choose based on whether your plugin accepts runtime configuration. **`createPlugin()`** -- Use when the plugin accepts user-configurable options that should be validated at startup. It returns a factory function that validates config via a Zod schema before building the plugin object. This is the Better Auth-style `plugins: [myPlugin({ key: "value" })]` pattern. ```typescript import { createPlugin } from "@useatlas/plugin-sdk"; import { z } from "zod"; export const myPlugin = createPlugin({ configSchema: z.object({ url: z.string().url() }), create: (config) => ({ id: "my-plugin", types: ["datasource"] as const, version: "1.0.0", config, connection: { create: () => makeConnection(config.url), dbType: "postgres" }, }), }); // Usage in atlas.config.ts: plugins: [myPlugin({ url: process.env.MY_URL! })] ``` **`definePlugin()`** -- Use when no user-configurable options exist. It validates the plugin shape at module load time and returns the plugin object directly. 
```typescript
import { definePlugin } from "@useatlas/plugin-sdk";

export default definePlugin({
  id: "my-context",
  types: ["context"],
  version: "1.0.0",
  contextProvider: {
    async load() {
      return "Additional context for the agent";
    },
  },
});

// Usage in atlas.config.ts:
// import myContext from "./plugins/my-context";
// plugins: [myContext]
```

Type Inference with `$InferServerPlugin` [#type-inference-with-inferserverplugin]

The SDK exports a `$InferServerPlugin` utility type (following Better Auth's `$Infer` pattern) that lets client code extract plugin types without importing server modules. It works with both `createPlugin()` factory functions and `definePlugin()` direct objects:

```typescript
import type { $InferServerPlugin } from "@useatlas/plugin-sdk";
import type { clickhousePlugin } from "@useatlas/clickhouse";

type CH = $InferServerPlugin<typeof clickhousePlugin>;
// CH["Config"] → { url: string; database?: string }
// CH["Type"]   → "datasource"
// CH["Id"]     → string
// CH["DbType"] → "clickhouse"
```

Available inference keys: `Config`, `Type`, `Id`, `Name`, `Version`, `DbType` (datasource only), `Actions` (action only), `Security` (sandbox only).

Plugin Status Lifecycle [#plugin-status-lifecycle]

Plugins transition through a defined set of statuses during their lifetime:

| Status | Description |
| -------------- | ----------------------------------------------------------------------- |
| `registered` | Plugin object has been validated and added to the registry |
| `initializing` | `initialize()` is currently running |
| `healthy` | Plugin is initialized and operating normally |
| `unhealthy` | Plugin is initialized but `healthCheck()` returned `{ healthy: false }` |
| `teardown` | `teardown()` has been called during graceful shutdown |

The host manages these transitions automatically. Plugin authors do not need to set status directly -- implement `initialize()`, `healthCheck()`, and `teardown()` and the host handles the rest.
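As a rough mental model, the statuses in the table can be viewed as a small state machine. The transition map below is inferred from the table and surrounding prose, not taken from the Atlas source:

```typescript
type PluginStatus = "registered" | "initializing" | "healthy" | "unhealthy" | "teardown";

// Hypothetical transition map, inferred from the status table above.
const transitions: Record<PluginStatus, PluginStatus[]> = {
  registered: ["initializing"],
  initializing: ["healthy", "unhealthy"],
  healthy: ["unhealthy", "teardown"],
  unhealthy: ["healthy", "teardown"],
  teardown: [],
};

function canTransition(from: PluginStatus, to: PluginStatus): boolean {
  return transitions[from].includes(to);
}
```

Modeling it this way makes the one-way nature of `teardown` explicit: once a plugin is torn down, no transition leads back out.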
Hooks [#hooks] Plugins can intercept agent lifecycle events and HTTP requests using hooks. Each hook entry has an optional `matcher` function (return `true` to run the handler; omit to always run) and a `handler` function. Define hooks on any plugin type via the `hooks` property: ```typescript export default definePlugin({ id: "audit-logger", types: ["context"], version: "1.0.0", contextProvider: { async load() { return ""; } }, hooks: { beforeQuery: [{ matcher: (ctx) => ctx.sql.includes("sensitive_table"), handler: (ctx) => { console.log(`Query on sensitive table: ${ctx.sql}`); // Return { sql } to rewrite, throw to reject, or return void to pass through }, }], afterQuery: [{ handler: (ctx) => { console.log(`Query completed in ${ctx.durationMs}ms, ${ctx.result.rows.length} rows`); }, }], }, }); ``` Hook Types [#hook-types] | Hook | Context | Mutable | Description | | --------------- | -------------------------------------------- | ------------------------------------------------------- | -------------------------------------------- | | `beforeQuery` | `{ sql, connectionId? }` | Yes -- return `{ sql }` to rewrite, throw to reject | Fires before each SQL query is executed | | `afterQuery` | `{ sql, connectionId?, result, durationMs }` | No | Fires after each SQL query with results | | `beforeExplore` | `{ command }` | Yes -- return `{ command }` to rewrite, throw to reject | Fires before each explore command | | `afterExplore` | `{ command, output }` | No | Fires after each explore command with output | | `onRequest` | `{ path, method, headers }` | No | HTTP-level: fires before routing a request | | `onResponse` | `{ path, method, status }` | No | HTTP-level: fires after sending a response | `beforeQuery` and `beforeExplore` are mutable hooks -- handlers can return a mutation object (`{ sql }` or `{ command }`) to rewrite the operation, or throw an error to reject it entirely. All other hooks are observation-only (void return). 
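Taken together, the mutable-hook rules amount to a left-to-right reduction over handlers in registration order. A sketch of that dispatch loop for `beforeQuery` (the function and type names here are illustrative, not the actual Atlas dispatcher):

```typescript
interface QueryCtx { sql: string; connectionId?: string }

interface BeforeQueryEntry {
  matcher?: (ctx: QueryCtx) => boolean;
  handler: (ctx: QueryCtx) => { sql: string } | void;
}

function runBeforeQuery(entries: BeforeQueryEntry[], ctx: QueryCtx): QueryCtx {
  let current = ctx;
  for (const entry of entries) {
    try {
      // A throwing matcher skips the entry; it does not reject the query
      if (entry.matcher && !entry.matcher(current)) continue;
    } catch {
      continue;
    }
    // A throwing handler propagates: the chain stops and the query is denied
    const mutation = entry.handler(current);
    if (mutation && typeof mutation.sql === "string") {
      current = { ...current, sql: mutation.sql }; // feed the mutation forward
    }
    // void return: previous value passes through unchanged
  }
  return current;
}
```

Run against the three-plugin chain diagrammed earlier (tenant filter, no-op, LIMIT), this reproduces the documented final SQL.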
Schema Migrations [#schema-migrations]

Plugins can declare tables for the Atlas internal database via the `schema` property. Declared tables are auto-migrated at boot — no manual SQL needed:

```typescript
export default definePlugin({
  id: "my-plugin",
  types: ["context"],
  version: "1.0.0",
  schema: {
    my_plugin_cache: {
      fields: {
        key: { type: "string", required: true, unique: true },
        value: { type: "string", required: true },
        updated_at: { type: "date" },
      },
    },
  },
  // ...
});
```

The `schema` property is available on all plugin types. It requires `DATABASE_URL` to be set (the internal Postgres database). Use `ctx.db` in `initialize()` or hooks to query your plugin's tables.

How It Works [#how-it-works]

At server boot, before plugins are initialized, Atlas runs schema migrations automatically:

1. **CREATE TABLE** — New tables are created with an auto-generated `id` (UUID), `created_at`, and `updated_at` columns, plus your declared fields
2. **ALTER TABLE ADD COLUMN** — If you add new fields to a plugin schema in a later version, Atlas detects the missing columns and adds them automatically

All migrations are tracked in a `plugin_migrations` table for idempotency — re-running is always safe.

Supported Field Types [#supported-field-types]

| Field Type | PostgreSQL Type |
| ---------- | --------------- |
| `string` | `TEXT` |
| `number` | `INTEGER` |
| `boolean` | `BOOLEAN` |
| `date` | `TIMESTAMPTZ` |

Table Naming [#table-naming]

Tables are automatically prefixed with `plugin_{pluginId}_` to avoid collisions with Atlas internal tables and other plugins. For example, a plugin with `id: "jira"` declaring a table `tickets` gets `plugin_jira_tickets`.

Limitations [#limitations]

* **PostgreSQL only** — Schema migrations require the internal PostgreSQL database (`DATABASE_URL`)
* **Column additions only** — New fields added to a schema are handled automatically via `ALTER TABLE ADD COLUMN`
* **No column removal** — Removing a field from the schema does not drop the column.
Remove columns manually if needed * **No type changes** — Changing a field's type (e.g. `string` → `number`) is not handled. Migrate manually with a new column + data copy * **No renaming** — Renaming a field creates a new column; the old one remains. Clean up manually * **`required` + no `defaultValue` on new columns** — Adding `required: true` without a `defaultValue` to an existing table with rows will fail (`NOT NULL` constraint violation). Always provide a `defaultValue` when adding required fields to a schema that may already have data Datasource Plugin Properties [#datasource-plugin-properties] Beyond the basics shown in [step 4](#4-plugin-object), datasource plugins support several additional properties. `entities` [#entities] Provide semantic layer entity definitions programmatically. Entities are merged into the table whitelist at boot (in-memory only, no disk writes). Can be a static array or an async factory: ```typescript connection: { create: () => myConn, dbType: "postgres" }, entities: [ { name: "users", yaml: "table: users\ndimensions:\n id:\n type: number" }, ], // Or as an async factory: entities: async () => { const tables = await discoverTables(); return tables.map(t => ({ name: t.name, yaml: generateYaml(t) })); }, ``` `dialect` [#dialect] A string injected into the agent's system prompt with SQL dialect guidance: ```typescript dialect: "This datasource uses ClickHouse. Use toStartOfMonth() for date truncation, not DATE_TRUNC().", ``` `connection.validate` — Custom Query Validation [#connectionvalidate--custom-query-validation] Replace the standard SQL validation pipeline with a custom validator. Use this when your datasource speaks a non-SQL query language -- SOQL, GraphQL, MQL, or any custom query DSL where the standard 4-layer SQL validation (empty check, regex guard, AST parse, table whitelist) does not apply. 
**Signature:**

```typescript
validate?(query: string): QueryValidationResult | Promise<QueryValidationResult>

interface QueryValidationResult {
  valid: boolean;
  /** User-facing rejection reason — appears in error responses and audit logs. */
  reason?: string;
}
```

`validate` is defined on the datasource plugin's `connection` configuration object (`AtlasDatasourcePlugin.connection`), not on the `PluginDBConnection` runtime interface.

**Behavior when `validate` is present:**

* The entire standard `validateSQL` pipeline is bypassed for this connection
* Auto-LIMIT is skipped (non-SQL languages may not support `LIMIT`)
* RLS injection is skipped (the SQL rewriter cannot parse non-SQL queries)
* `parserDialect` and `forbiddenPatterns` are ignored
* Plugin hooks still fire -- queries rewritten by `beforeQuery` hooks are re-validated through this function before execution

**Sync example** (SOQL length-limit validator):

```typescript
connection: {
  create: () => mySalesforceConn,
  dbType: "salesforce",
  validate(query) {
    if (query.length > 20_000) {
      return { valid: false, reason: "SOQL query exceeds 20,000 character limit" };
    }
    if (/\b(DELETE|INSERT|UPDATE|UPSERT)\b/i.test(query)) {
      return { valid: false, reason: "Only SELECT queries are allowed" };
    }
    return { valid: true };
  },
},
```

**Async example** (external schema validation service):

```typescript
connection: {
  create: () => myConn,
  dbType: "custom-api",
  async validate(query) {
    const res = await fetch("https://schema.internal/validate", {
      method: "POST",
      body: JSON.stringify({ query }),
      headers: { "Content-Type": "application/json" },
      signal: AbortSignal.timeout(5000),
    });
    if (!res.ok) return { valid: false, reason: "Schema service unavailable" };
    const body = await res.json() as { allowed: boolean; message?: string };
    return body.allowed
      ? { valid: true }
      : { valid: false, reason: body.message ?? "Query rejected" };
  },
},
```

Async validators add latency to every query. Prefer synchronous validation when possible. If you must call an external service, add a timeout and consider caching the schema locally.

**Error propagation:** The `reason` string is user-facing -- it appears in the error response returned to the agent and is recorded in the audit log. Write clear, actionable messages (e.g., `"SOQL query exceeds 20,000 character limit"` rather than `"invalid"`). See the [Plugin Cookbook](/plugins/cookbook#custom-query-validation) for complete plugin examples with custom validators.

`connection.parserDialect` and `connection.forbiddenPatterns` [#connectionparserdialect-and-connectionforbiddenpatterns]

Customize the standard SQL validation pipeline without fully replacing it:

```typescript
connection: {
  create: () => myConn,
  dbType: "snowflake",
  // Override auto-detected parser dialect (case-sensitive, e.g. "Snowflake" not "snowflake")
  parserDialect: "Snowflake",
  // Additional regex patterns to block beyond the base DML/DDL guard
  forbiddenPatterns: [/\bCOPY\s+INTO\b/i, /\bPUT\b/i],
},
```

These are ignored when a custom `validate` function is provided. Both properties are consumed during SQL validation: `parserDialect` sets the parser mode used in the AST-parse layer, and `forbiddenPatterns` are checked as additional patterns in the regex-guard layer. See [SQL Validation Pipeline](/security/sql-validation) for the full layer breakdown.

Plugin Lifecycle [#plugin-lifecycle]

`teardown()` [#teardown]

Called during graceful shutdown in reverse registration order (LIFO). Use it to close connections, flush buffers, or clean up resources. Never throw from `teardown()`.

```typescript
async teardown() {
  await this.pool.end();
},
```

`AtlasPluginContext` [#atlasplugincontext]

The `ctx` object passed to `initialize()` and hook handlers provides:

| Property | Type | Description |
| ----------------- | -------------------------------- | ---------------------------------------------------------------------- |
| `ctx.db` | `{ query(), execute() } \| null` | Internal Postgres (auth/audit DB). Null when `DATABASE_URL` is not set |
| `ctx.connections` | `{ get(id), list() }` | Connection registry for analytics datasources |
| `ctx.tools` | `{ register(tool) }` | Tool registry -- plugins can register additional agent tools |
| `ctx.logger` | `PluginLogger` | Pino-compatible child logger scoped to the plugin ID |
| `ctx.config` | `Record<string, unknown>` | Resolved Atlas configuration (cast if you know the shape) |

Example -- registering a custom tool from `initialize()`:

```typescript
async initialize(ctx) {
  ctx.tools.register({
    name: "lookupInventory",
    description: "Check inventory levels for a product SKU",
    tool: tool({
      description: "Look up current inventory by SKU",
      inputSchema: z.object({ sku: z.string() }),
      execute: async ({ sku }) => fetchInventory(sku),
    }),
  });
},
```

Reference Plugins [#reference-plugins]

The Atlas monorepo includes 15 reference plugin implementations in the `plugins/` directory. These serve as working examples for every plugin type:

**Datasource:** `clickhouse`, `duckdb`, `mysql`, `salesforce`, `snowflake`
**Context:** `yaml-context`
**Interaction:** `mcp`, `slack`
**Action:** `email`, `jira`
**Sandbox:** `daytona`, `e2b`, `nsjail`, `sidecar`, `vercel-sandbox`

Browse the source at [`plugins/`](https://github.com/AtlasDevHQ/atlas/tree/main/plugins) for patterns on connection factories, config schemas, health checks, and testing.

Common Patterns [#common-patterns]

Health Check Contract [#health-check-contract]

When implementing `healthCheck()`, follow these five rules:

1. **Always return** `{ healthy: boolean, message?: string, latencyMs?: number }` — never throw
2. **Measure latency** — wrap the probe in `performance.now()` and include `latencyMs` in the result (both success and failure paths)
3. **Catch all errors** — return `{ healthy: false, message: err instanceof Error ? err.message : String(err) }` on failure; never let exceptions escape
4. **Minimal probe** — test connectivity only (e.g. `SELECT 1`, ping an endpoint), not full functionality
5. **Timeout** — probes must have a reasonable timeout (5s default for network calls, 30s for sandbox creation), never hang indefinitely. Use `AbortSignal.timeout()` for fetch-based probes, `Promise.race` for SDK calls that don't support AbortSignal, or the `timeoutMs` parameter on `conn.query()` for database plugins

Standard pattern for **database** plugins:

```typescript
// HealthCheckResult is the contract above: { healthy: boolean; message?: string; latencyMs?: number }
async healthCheck(): Promise<HealthCheckResult> {
  const start = performance.now();
  let conn: PluginDBConnection | undefined;
  try {
    conn = createConnection(config);
    await conn.query("SELECT 1", 5000); // timeout via query parameter
    return { healthy: true, latencyMs: Math.round(performance.now() - start) };
  } catch (err) {
    return {
      healthy: false,
      message: err instanceof Error ? err.message : String(err),
      latencyMs: Math.round(performance.now() - start),
    };
  } finally {
    if (conn) await conn.close();
  }
}
```

Standard pattern for **HTTP API** plugins (email, JIRA):

```typescript
async healthCheck(): Promise<HealthCheckResult> {
  const start = performance.now();
  try {
    const response = await fetch("https://api.example.com/health", {
      headers: { Authorization: `Bearer ${config.apiKey}` },
      signal: AbortSignal.timeout(5000), // 5s timeout
    });
    const latencyMs = Math.round(performance.now() - start);
    if (response.ok) return { healthy: true, latencyMs };
    return { healthy: false, message: `API returned ${response.status}`, latencyMs };
  } catch (err) {
    return {
      healthy: false,
      message: err instanceof Error ? err.message : String(err),
      latencyMs: Math.round(performance.now() - start),
    };
  }
}
```

Standard pattern for **sandbox** plugins (where probe operations are slow):

```typescript
async healthCheck(): Promise<HealthCheckResult> {
  const start = performance.now();
  const TIMEOUT = 30_000; // sandbox creation can be slow
  let sandbox: SandboxInstance | null = null; // hoist for cleanup on timeout
  let timer: ReturnType<typeof setTimeout>;
  try {
    const result = await Promise.race([
      (async () => {
        sandbox = await createSandbox(config); // your SDK's create method
        await sandbox.close(); // cleanup method varies by SDK (kill, stop, delete, etc.)
        sandbox = null;
        return "ok" as const;
      })(),
      new Promise<"timeout">((resolve) => {
        timer = setTimeout(() => resolve("timeout"), TIMEOUT);
      }),
    ]).finally(() => clearTimeout(timer!)); // always clean up the timer
    const latencyMs = Math.round(performance.now() - start);
    if (result === "timeout") {
      // Best-effort cleanup — sandbox may still be creating
      if (sandbox) { try { await sandbox.close(); } catch { /* best-effort */ } }
      return { healthy: false, message: `Timed out after ${TIMEOUT}ms`, latencyMs };
    }
    return { healthy: true, latencyMs };
  } catch (err) {
    if (sandbox) { try { await sandbox.close(); } catch { /* best-effort */ } }
    return {
      healthy: false,
      message: err instanceof Error ? err.message : String(err),
      latencyMs: Math.round(performance.now() - start),
    };
  }
}
```

Standard pattern for **local-only** plugins (filesystem, in-process):

```typescript
async healthCheck(): Promise<HealthCheckResult> {
  const start = performance.now();
  try {
    // Verify local resources exist
    const files = fs.readdirSync(dir).filter((f) => f.endsWith(".yml"));
    const latencyMs = Math.round(performance.now() - start);
    if (files.length === 0) {
      return { healthy: false, message: "No entity files found", latencyMs };
    }
    return { healthy: true, latencyMs };
  } catch (err) {
    return {
      healthy: false,
      message: err instanceof Error ?
err.message : String(err), latencyMs: Math.round(performance.now() - start), }; } } ``` Error Handling [#error-handling] * **Throw from `initialize()`** to block server startup (fatal misconfiguration) * **Return unhealthy from `healthCheck()`** for runtime degradation (transient errors) * **Never throw from `healthCheck()` or `teardown()`** Config-Driven Credentials [#config-driven-credentials] Pass credentials via plugin config, not environment variables: ```typescript // Good myPlugin({ apiKey: process.env.MY_API_KEY! }) // Bad -- hidden dependency on env var name // inside plugin: process.env.MY_API_KEY ``` *** See Also [#see-also] * [Plugin Directory](/plugins/overview) — Browse all official Atlas plugins * [Plugin Cookbook](/plugins/cookbook) — Real-world patterns for caching, hooks, credentials, and error handling * [Plugin Composition](/plugins/composition) — How multiple plugins interact (ordering, priority, constraints) * [Configuration](/reference/config#plugins) — Registering plugins in `atlas.config.ts` * [SQL Validation Pipeline](/security/sql-validation) — How plugin hooks and custom validators fit into validation --- # Atlas vs Metabase (/comparisons/metabase) [Metabase](https://www.metabase.com/) is a mature open-source business intelligence platform with dashboards, visual query builder, and (in recent versions) AI-assisted querying. Atlas and Metabase overlap on natural-language data querying but serve fundamentally different use cases. 
Quick Comparison [#quick-comparison] | | Atlas | Metabase | | ------------------------ | -------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | **License** | AGPL-3.0 core, MIT client libs | AGPL-3.0 (Pro/Enterprise is proprietary) | | **Category** | Embeddable text-to-SQL agent + hosted SaaS | Full BI platform | | **Embeddable** | Script tag, React component, SDK with streaming | Embedding SDK + public links (SDK requires Pro/Enterprise) | | **AI querying** | Core feature (multi-step agent with tool use) | Metabot AI (cloud-only add-on, $100/mo for 500 requests; open-source gets single-shot SQL only) | | **Visual query builder** | No (natural language + notebook) | Yes | | **Dashboards** | No (notebook interface for exploratory analysis) | Yes (core feature) | | **Semantic layer** | YAML files + web editor + dynamic learning | Data model UI + Data Studio (analyst workbench for glossary, measures, segments) | | **Databases** | Postgres, MySQL + plugins for BigQuery, ClickHouse, DuckDB, Salesforce, Snowflake | 20+ databases | | **Plugin system** | Plugin SDK + 21+ plugins + marketplace | Database drivers + community | | **Auth model** | Managed, BYOT, API key, SSO/SCIM | Managed, SSO, LDAP (Pro) | | **Chat integrations** | 8 platforms (Slack, Teams, Discord, Telegram, Google Chat, GitHub, Linear, WhatsApp) | Slack (paid) | | **Scheduled reports** | Yes (built-in) | Yes (built-in) | | **Admin console** | Yes (connections, users, plugins, semantic editor, analytics, billing) | Yes (comprehensive) | | **MCP server** | Yes (stdio + SSE) | No | | **Python tool** | Sandboxed execution with streaming + charts | No | | **Enterprise features** | SSO/SCIM, custom roles, IP allowlists, approval workflows, PII masking, data residency | SSO, LDAP, sandboxing (Pro/Enterprise) | | **Data residency** | 3-region deployment (US, EU, APAC) | Cloud 
regions |

Different Tools for Different Problems [#different-tools-for-different-problems]

**Metabase** is a BI platform. It replaces Excel, Tableau, and Looker for teams that need dashboards, scheduled reports, and visual exploration. AI querying via Metabot is a growing feature — it handles natural language queries, SQL generation, SQL debugging, and smart content reuse — but it's currently a cloud-only add-on ($100/mo for 500 requests). Self-hosted Metabot is on the roadmap.

**Atlas** is an AI agent you embed in other applications or use via [Atlas Cloud](https://app.useatlas.dev). It does one thing -- lets users query data in natural language -- and does it as a composable component, not a standalone product. Atlas includes a notebook interface for multi-step exploratory analysis, but it's not a dashboard builder. It also provides 8 chat platform integrations, a plugin marketplace, and enterprise features (SSO/SCIM, PII masking, data residency).

These aren't really competitors. If you need dashboards, use Metabase (or Looker, or Tableau). If you need to embed natural-language data querying inside your own product, use Atlas. If you need a multi-step AI agent that reasons through complex analytical questions, Atlas's agent loop (explore semantic layer → write SQL → validate → execute → explain results) is purpose-built for that — Metabot handles single-shot queries.

Embedding [#embedding]

**Metabase** offers multiple embedding options. Public links and iframe embeds are available in the free tier. The Modular Embedding SDK (React components for charts, dashboards, query builder, and AI chat) requires a Pro or Enterprise license.

**Atlas** is embeddable by design at every tier. Add a `<script>` tag for the chat widget, drop in the React component, or build a custom UI on the SDK with streaming -- no paid tier required.

---

# Nuxt (/frameworks/nuxt)

5\. Rendering tool calls [#5-rendering-tool-calls]

In AI SDK v6, tool parts use per-tool type names (`"tool-explore"`, `"tool-executeSQL"`) or `"dynamic-tool"` for dynamic tools. Use `isToolUIPart(part)` from `"ai"` to detect tool parts and access `state`, `input`, `output` directly on the part object. Here is a basic Vue component for rendering tool calls:

```vue
<script setup lang="ts">
// Sketch — the original example was lost; prop and output typings are assumptions.
import { computed } from "vue";
import { getToolName, type ToolUIPart } from "ai";

const props = defineProps<{ part: ToolUIPart }>();
const toolName = computed(() => getToolName(props.part));
const done = computed(() => props.part.state === "output-available");
</script>

<template>
  <div v-if="!done">
    {{ toolName === "executeSQL" ? "Executing query..." : "Running command..." }}
  </div>
  <div v-else-if="toolName === 'executeSQL'">
    <!-- output: { success, columns, rows, truncated } or { success: false, error } -->
    <pre>{{ props.part.output }}</pre>
  </div>
  <div v-else-if="toolName === 'explore'">
    <!-- output is the plain-string stdout of the command -->
    <pre>{{ props.part.output }}</pre>
  </div>
  <div v-else>Tool: {{ toolName }}</div>
</template>
```

The key data shapes:

* **`executeSQL` output** -- `{ success, columns, rows, truncated }` on success, or `{ success: false, error }` on failure. Use `rows.length` to get the row count.
* **`explore` output** -- a plain string (stdout of the command).
* **`state`** -- check for `"output-available"` to know when the result is ready. Other states: `"input-streaming"`, `"input-available"`, `"output-error"`, `"output-denied"`.
* **Tool detection** -- use `isToolUIPart(part)` from `"ai"` (not `part.type === "tool-invocation"`).

See the [Data Stream Protocol](/frameworks/overview#tool-call-parts-ai-sdk-v6) section for the full part structure and a concrete example.

6\. Synchronous queries (alternative) [#6-synchronous-queries-alternative]

If you don't need streaming, use the JSON query endpoint instead:

```typescript
// composables/useAtlasQuery.ts
export async function queryAtlas(question: string) {
  const config = useRuntimeConfig();
  const apiUrl = config.public.atlasApiUrl || "";
  const res = await $fetch(`${apiUrl}/api/v1/query`, {
    method: "POST",
    body: { question },
    headers: {
      "Content-Type": "application/json",
    },
  });
  return res;
}
```

See [Bring Your Own Frontend](/frameworks/overview) for the full architecture and what `@atlas/web` adds on top.

---

# SvelteKit (/frameworks/sveltekit)

Integrate Atlas into a SvelteKit app using `@ai-sdk/svelte`.

> **Prerequisites:** A running Atlas API server. See [Bring Your Own Frontend](/frameworks/overview) for architecture and common setup.

As of AI SDK v6, `@ai-sdk/svelte` (v4+) supports the transport-based API (`DefaultChatTransport`, `sendMessage`) shown below. Make sure you install `@ai-sdk/svelte@^4.0.0` and `ai@^6.0.0`.
If you prefer not to use streaming, the [sync query endpoint](/frameworks/overview#streaming-vs-synchronous) (`POST /api/v1/query`) works with any HTTP client. The [`@useatlas/sdk`](https://www.npmjs.com/package/@useatlas/sdk) package provides a typed client for that endpoint.

***

1\. Install dependencies [#1-install-dependencies]

```bash
bun add @ai-sdk/svelte ai
```

2\. Configure the API URL [#2-configure-the-api-url]

Option A: Same-origin proxy (recommended) [#option-a-same-origin-proxy-recommended]

In `vite.config.ts`, proxy `/api` to the Atlas API during development:

```typescript
// vite.config.ts
import { sveltekit } from "@sveltejs/kit/vite";
import { defineConfig } from "vite";

export default defineConfig({
  plugins: [sveltekit()],
  server: {
    proxy: {
      "/api": {
        target: "http://localhost:3001",
        changeOrigin: true,
      },
    },
  },
});
```

Option B: Cross-origin [#option-b-cross-origin]

```bash
# .env
PUBLIC_ATLAS_API_URL=http://localhost:3001
```

```bash
# Atlas API .env
ATLAS_CORS_ORIGIN=http://localhost:5173
```

3\. Chat store [#3-chat-store]

Create a reusable chat module using `@ai-sdk/svelte`'s `useChat`:

```typescript
// src/lib/atlas-chat.ts
import { useChat } from "@ai-sdk/svelte";
import { DefaultChatTransport } from "ai";
import { writable, get } from "svelte/store";
import { env } from "$env/dynamic/public";

const apiUrl = env.PUBLIC_ATLAS_API_URL ?? "";
const isCrossOrigin = !!apiUrl;

export function createAtlasChat(apiKey: () => string) {
  const conversationId = writable<string | null>(null);

  const transport = new DefaultChatTransport({
    api: `${apiUrl}/api/v1/chat`,
    get headers() {
      const key = apiKey();
      const h: Record<string, string> = {};
      if (key) h["Authorization"] = `Bearer ${key}`;
      return h;
    },
    credentials: isCrossOrigin ? "include" : undefined,
    // Pass conversationId to continue an existing conversation
    body: () => {
      const convId = get(conversationId);
      return convId ? { conversationId: convId } : {};
    },
    // Capture x-conversation-id from the response header
    fetch: (async (input: RequestInfo | URL, init?: RequestInit) => {
      const response = await globalThis.fetch(input, init);
      const convId = response.headers.get("x-conversation-id");
      if (convId && convId !== get(conversationId)) {
        conversationId.set(convId);
      }
      return response;
    }) as typeof fetch,
  });

  const { messages, sendMessage, status, error } = useChat({ transport });

  return { messages, sendMessage, status, error, conversationId };
}
```

4\. Chat page [#4-chat-page]

Here is a minimal Svelte component that uses the chat store:

```svelte

<script lang="ts">
  // Sketch — the original markup was lost; the ToolCall import, classes, and the
  // input form are reconstructed and illustrative.
  import { isToolUIPart } from "ai";
  import { createAtlasChat } from "$lib/atlas-chat";
  import ToolCall from "$lib/ToolCall.svelte";

  const { messages, sendMessage } = createAtlasChat(() => "");
  let input = "";

  function submit(e: SubmitEvent) {
    e.preventDefault();
    if (!input.trim()) return;
    sendMessage({ text: input });
    input = "";
  }
</script>

<h1>Atlas</h1>

{#each $messages as m (m.id)}
  {#if m.role === "user"}
    {#each m.parts ?? [] as part, i}
      {#if part.type === "text"}
        <div class="user">
          {part.text}
        </div>
      {/if}
    {/each}
  {:else}
    {#each m.parts ?? [] as part, i}
      {#if part.type === "text" && part.text.trim()}
        <div class="assistant">{part.text}</div>
      {:else if isToolUIPart(part)}
        <ToolCall {part} />
      {/if}
    {/each}
  {/if}
{/each}

<form on:submit={submit}>
  <input bind:value={input} placeholder="Ask a question..." />
  <button type="submit">Send</button>
</form>
```

5\. Rendering tool calls [#5-rendering-tool-calls]

In AI SDK v6, tool parts use per-tool type names (`"tool-explore"`, `"tool-executeSQL"`) or `"dynamic-tool"` for dynamic tools. Use `isToolUIPart(part)` from `"ai"` to detect tool parts and `getToolName(part)` to extract the name. Here is a basic Svelte component for rendering tool calls:

```svelte
<script lang="ts">
  // Sketch — the original script section was lost; the derived values below are
  // reconstructed from the documented data shapes.
  import { getToolName, type ToolUIPart } from "ai";

  export let part: ToolUIPart;

  $: toolName = getToolName(part);
  $: done = part.state === "output-available";
  $: input = (part.input ?? {}) as { explanation?: string; command?: string };
  $: output = part.output;
  $: sqlResult =
    done && toolName === "executeSQL" && (output as { success?: boolean } | undefined)?.success
      ? (output as { columns: string[]; rows: Record<string, unknown>[] })
      : null;
  $: columns = sqlResult?.columns ?? [];
  $: rows = sqlResult?.rows ?? [];
</script>

{#if !done}
  <div class="tool-pending">
    {toolName === "executeSQL" ? "Executing query..." : "Running command..."}
  </div>
{:else if toolName === "executeSQL" && sqlResult}
  <details open>
    <summary>
      SQL {input.explanation ?? "Query result"} {rows.length} row{rows.length !== 1 ? "s" : ""}
    </summary>
    {#if columns.length && rows.length}
      <table>
        <thead>
          <tr>
            {#each columns as col}<th>{col}</th>{/each}
          </tr>
        </thead>
        <tbody>
          {#each rows as row, i}
            <tr>
              {#each columns as col}<td>{row[col] == null ? "\u2014" : row[col]}</td>{/each}
            </tr>
          {/each}
        </tbody>
      </table>
    {/if}
  </details>
{:else if toolName === "explore"}
  <div class="explore">
    <code>$ {input.command}</code>
    <pre>{output}</pre>
  </div>
{:else}
  <div>Tool: {toolName}</div>
{/if}
```

The key data shapes:

* **`executeSQL` output** -- `{ success, columns, rows, truncated }` on success, or `{ success: false, error }` on failure. Use `rows.length` to get the row count.
* **`explore` output** -- a plain string (stdout of the command).
* **`state`** -- check for `"output-available"` to know when the result is ready. Other states: `"input-streaming"`, `"input-available"`, `"output-error"`, `"output-denied"`.
* **Tool detection** -- use `isToolUIPart(part)` from `"ai"` (not `part.type === "tool-invocation"`).

See the [Data Stream Protocol](/frameworks/overview#tool-call-parts-ai-sdk-v6) section for the full part structure and a concrete example.

6\. Conversation management [#6-conversation-management]

Atlas persists conversations server-side (requires `DATABASE_URL`). Here is a minimal Svelte module for listing, loading, and deleting conversations:

```typescript
// src/lib/atlas-conversations.ts
import { writable } from "svelte/store";
import { env } from "$env/dynamic/public";

const apiUrl = env.PUBLIC_ATLAS_API_URL ?? "";

export interface Conversation {
  id: string;
  userId: string | null;
  title: string | null;
  surface: string;
  connectionId: string | null;
  starred: boolean;
  createdAt: string;
  updatedAt: string;
}

export interface Message {
  id: string;
  conversationId: string;
  role: "user" | "assistant" | "system" | "tool";
  content: unknown;
  createdAt: string;
}

export function createConversationStore(apiKey: () => string) {
  const conversations = writable<Conversation[]>([]);
  const loading = writable(false);

  function headers(): Record<string, string> {
    const key = apiKey();
    const h: Record<string, string> = {};
    if (key) h["Authorization"] = `Bearer ${key}`;
    return h;
  }

  async function list() {
    loading.set(true);
    try {
      const res = await fetch(`${apiUrl}/api/v1/conversations`, {
        headers: headers(),
      });
      if (!res.ok) return;
      const data = await res.json();
      conversations.set(data.conversations ??
[]);
    } finally {
      loading.set(false);
    }
  }

  async function load(id: string): Promise<Message[]> {
    const res = await fetch(`${apiUrl}/api/v1/conversations/${id}`, {
      headers: headers(),
    });
    if (!res.ok) throw new Error("Failed to load conversation");
    const data = await res.json();
    return data.messages ?? [];
  }

  async function remove(id: string) {
    await fetch(`${apiUrl}/api/v1/conversations/${id}`, {
      method: "DELETE",
      headers: headers(),
    });
    conversations.update((prev) => prev.filter((c) => c.id !== id));
  }

  return { conversations, loading, list, load, remove };
}
```

To resume a conversation, pass the `conversationId` in the chat request body. The `useChat` composable from `@ai-sdk/svelte` does not manage this automatically, but the chat store in section 3 shows how to track the ID from the `x-conversation-id` response header and include it in subsequent requests via the transport's `body` option.

7\. Synchronous queries (alternative) [#7-synchronous-queries-alternative]

If streaming is not needed, use the JSON query endpoint:

```typescript
// src/lib/atlas-query.ts
import { env } from "$env/dynamic/public";

const apiUrl = env.PUBLIC_ATLAS_API_URL ?? "";

export async function queryAtlas(question: string, apiKey?: string) {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
  };
  if (apiKey) headers["Authorization"] = `Bearer ${apiKey}`;
  const res = await fetch(`${apiUrl}/api/v1/query`, {
    method: "POST",
    headers,
    body: JSON.stringify({ question }),
  });
  if (!res.ok) throw new Error(`Atlas query failed: ${res.status}`);
  return res.json();
}
```

See [Bring Your Own Frontend](/frameworks/overview) for the full architecture and what `@atlas/web` adds on top.

---

# Backups & Disaster Recovery (/platform-ops/backups)

Atlas provides automated backup and disaster recovery for the internal PostgreSQL database (`DATABASE_URL`). This database stores auth, audit logs, semantic layer metadata, billing, SLA metrics, learned patterns, and chat state — protecting it is critical.
Managed backups are available on [app.useatlas.dev](https://app.useatlas.dev) Enterprise plans. Self-hosted deployments should use their own backup strategy (e.g. managed database backups from your cloud provider). * Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) * Internal database configured (`DATABASE_URL`) * Platform admin role for dashboard access * `pg_dump` and `psql` available on the server `PATH` How It Works [#how-it-works] The backup system uses `pg_dump` to create SQL dumps of the internal database, compressed with gzip. Backups are stored in a configurable local directory (or S3-compatible path in the future). | Component | Description | | ----------------- | ----------------------------------------------------------------- | | **Backup engine** | `pg_dump` with gzip compression | | **Scheduler** | Cron-based, default daily at 03:00 UTC | | **Retention** | Auto-purge expired backups (default 30 days) | | **Verification** | Decompress and validate pg\_dump header | | **Restore** | `psql` with single-transaction mode and pre-restore safety backup | Configuration [#configuration] Environment Variables [#environment-variables] | Variable | Default | Description | | ----------------------------- | ----------- | ------------------------------------------- | | `ATLAS_BACKUP_SCHEDULE` | `0 3 * * *` | Cron expression (UTC) for automated backups | | `ATLAS_BACKUP_RETENTION_DAYS` | `30` | Days to keep backups before auto-purge | | `ATLAS_BACKUP_STORAGE_PATH` | `./backups` | Directory for backup files | Admin UI [#admin-ui] Navigate to **Admin → Backups** (platform admin only) to: * View all backups with status, size, and retention info * Trigger manual backups * Verify backup integrity * Restore from a backup * Configure schedule and retention API Configuration [#api-configuration] Update the schedule and retention via API: ```bash curl -X PUT http://localhost:3001/api/v1/platform/backups/config \ -H "Authorization: Bearer $TOKEN" \ -H 
"Content-Type: application/json" \ -d '{"schedule": "0 */6 * * *", "retentionDays": 60}' ``` Manual Backup [#manual-backup] Trigger an immediate backup via the admin UI or API: ```bash curl -X POST http://localhost:3001/api/v1/platform/backups \ -H "Authorization: Bearer $TOKEN" ``` The backup runs in the foreground — the response includes the backup ID, size, and status. Verification [#verification] Verify a backup's integrity by decompressing the archive and validating the pg\_dump header: ```bash curl -X POST http://localhost:3001/api/v1/platform/backups/$BACKUP_ID/verify \ -H "Authorization: Bearer $TOKEN" ``` A verified backup transitions from `completed` to `verified` status. Disaster Recovery [#disaster-recovery] Restore Process [#restore-process] Restoring from a backup is a two-step process with a confirmation token to prevent accidental restores: **Step 1 — Request restore:** ```bash curl -X POST http://localhost:3001/api/v1/platform/backups/$BACKUP_ID/restore \ -H "Authorization: Bearer $TOKEN" ``` This returns a `confirmationToken` valid for 5 minutes. **Step 2 — Confirm and execute:** ```bash curl -X POST http://localhost:3001/api/v1/platform/backups/$BACKUP_ID/restore/confirm \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d '{"confirmationToken": "the-token-from-step-1"}' ``` The restore process automatically: 1. Creates a pre-restore backup (safety net) 2. Restores the target backup using `psql --single-transaction` 3. Returns the pre-restore backup ID for rollback if needed Disaster Recovery Runbook [#disaster-recovery-runbook] If the internal database is corrupted or lost: 1. **Assess the situation** — Check if the database is still accessible. If PostgreSQL is running but data is corrupt, the backup system may still work. 2. **Identify the target backup** — List available backups via API or check the backup storage directory directly: ```bash ls -la ./backups/ ``` 3. 
**Verify the backup** (optional but recommended): ```bash gunzip -t ./backups/atlas-backup-TIMESTAMP.sql.gz ``` 4. **Restore via API** if the API server is running — follow the two-step restore process above. 5. **Manual restore** if the API server is down: ```bash gunzip -c ./backups/atlas-backup-TIMESTAMP.sql.gz | \ psql --single-transaction --set ON_ERROR_STOP=on \ -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME ``` 6. **Verify the restore** — Check that auth, conversations, and settings are intact by logging in and running a query. 7. **Restart services** — Restart the Atlas API and web servers to pick up the restored data. Backup files are stored locally by default. For production deployments, consider mounting a persistent volume or configuring an S3-compatible storage path. Keep at least one copy of recent backups off-host. API Reference [#api-reference] | Method | Path | Description | | ------ | ---------------------------------------------- | --------------------------- | | `GET` | `/api/v1/platform/backups` | List all backups | | `POST` | `/api/v1/platform/backups` | Create manual backup | | `POST` | `/api/v1/platform/backups/:id/verify` | Verify backup integrity | | `POST` | `/api/v1/platform/backups/:id/restore` | Request restore token | | `POST` | `/api/v1/platform/backups/:id/restore/confirm` | Execute restore | | `GET` | `/api/v1/platform/backups/config` | Get backup configuration | | `PUT` | `/api/v1/platform/backups/config` | Update backup configuration | All endpoints require `platform_admin` role and enterprise features to be enabled. Troubleshooting [#troubleshooting] `pg_dump` not found [#pg_dump-not-found] The backup engine requires `pg_dump` to be available on the server `PATH`. 
On Docker deployments, ensure the PostgreSQL client tools are installed: ```dockerfile RUN apt-get update && apt-get install -y postgresql-client ``` Backup fails with permission error [#backup-fails-with-permission-error] Ensure the backup storage directory exists and is writable by the Atlas process: ```bash mkdir -p ./backups && chmod 755 ./backups ``` Restore fails mid-way [#restore-fails-mid-way] The restore uses `--single-transaction`, so a failed restore will not leave the database in a partial state — it rolls back to the pre-restore state. The pre-restore backup is available for manual recovery if needed. --- # Abuse Prevention (/platform-ops/abuse-prevention) Atlas includes built-in abuse prevention that detects anomalous query patterns per workspace and applies a graduated response: **warn**, **throttle**, then **suspend**. This protects your analytics datasource from runaway queries, credential stuffing, and resource exhaustion. * Atlas server running with `DATABASE_URL` configured (internal DB required for persistence) * Admin access for the admin console abuse management page How It Works [#how-it-works] Atlas monitors three anomaly signals per workspace using sliding window counters: | Signal | Default Threshold | Description | | ----------------- | ------------------- | ------------------------------- | | **Query rate** | 200 queries / 5 min | Excessive request volume | | **Error rate** | 50% | High ratio of failed queries | | **Unique tables** | 50 tables / window | Unusual breadth of table access | When a threshold is exceeded, the workspace escalates through three levels: 1. **Warning** — Event logged, visible in admin console. No user impact. 2. **Throttled** — Configurable delay (default 2s) injected before each request. Users experience slower responses. 3. **Suspended** — All requests blocked with `403 workspace_suspended`. Requires admin reinstatement. Each additional threshold breach while already flagged escalates to the next level. 
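The graduated escalation above can be modeled as a small state machine. This is an illustrative sketch using the documented default thresholds, not the Atlas implementation:

```typescript
// Illustrative sketch of the graduated abuse response (not Atlas internals).
type AbuseLevel = "none" | "warning" | "throttled" | "suspended";

const ESCALATION: AbuseLevel[] = ["none", "warning", "throttled", "suspended"];

interface WorkspaceState {
  level: AbuseLevel;
  queries: number; // queries in the current sliding window
  errors: number; // failed queries in the window
  tables: Set<string>; // unique tables accessed in the window
}

// True when any of the three anomaly signals breaches its default threshold.
function breached(s: WorkspaceState): boolean {
  const errorRate = s.queries > 0 ? s.errors / s.queries : 0;
  return s.queries > 200 || errorRate > 0.5 || s.tables.size > 50;
}

// Each breach while already flagged escalates one level further, capping at "suspended".
function escalate(s: WorkspaceState): AbuseLevel {
  if (!breached(s)) return s.level;
  const next = Math.min(ESCALATION.indexOf(s.level) + 1, ESCALATION.length - 1);
  s.level = ESCALATION[next];
  return s.level;
}

const ws: WorkspaceState = { level: "none", queries: 250, errors: 0, tables: new Set() };
escalate(ws); // warning
escalate(ws); // throttled
escalate(ws); // suspended — further breaches stay at this level
```

Reinstatement in this model would simply reset `level` to `"none"` and zero the counters, matching the behavior described below.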
Configuration [#configuration] All thresholds are configurable via environment variables: | Variable | Default | Description | | ------------------------------- | ------- | -------------------------------------------- | | `ATLAS_ABUSE_QUERY_RATE` | `200` | Max queries per workspace per sliding window | | `ATLAS_ABUSE_WINDOW_SECONDS` | `300` | Sliding window duration in seconds | | `ATLAS_ABUSE_ERROR_RATE` | `0.5` | Max error rate (0–1) before escalation | | `ATLAS_ABUSE_UNIQUE_TABLES` | `50` | Max unique tables accessed per window | | `ATLAS_ABUSE_THROTTLE_DELAY_MS` | `2000` | Delay injected for throttled workspaces (ms) | Admin Console [#admin-console] The **Abuse Prevention** page in the admin console (`/admin/abuse`) shows: * **Detection Thresholds** — Current configuration values * **Flagged Workspaces** — Table of workspaces with active flags (warning, throttled, or suspended) * **Reinstate** — Button to clear abuse flags and restore normal access Reinstatement [#reinstatement] When you reinstate a workspace: * All abuse counters are reset to zero * The workspace immediately returns to normal operation * If the abusive pattern continues, the workspace will be flagged again Audit Trail [#audit-trail] All abuse events are recorded in the audit trail: * Level changes (warn, throttle, suspend) * Manual reinstatements (includes the admin who performed the action) * Metadata includes query count, error count, unique tables, and escalation count Events are persisted to the `abuse_events` table in the internal database and are accessible via the admin API. API Reference [#api-reference] List flagged workspaces [#list-flagged-workspaces] ``` GET /api/v1/admin/abuse ``` Returns all workspaces with active abuse flags, including recent events. Reinstate a workspace [#reinstate-a-workspace] ``` POST /api/v1/admin/abuse/:workspaceId/reinstate ``` Clears abuse flags and restores normal access for the specified workspace. 
Get threshold configuration [#get-threshold-configuration] ``` GET /api/v1/admin/abuse/config ``` Returns the current abuse detection threshold configuration. Relationship to Rate Limiting [#relationship-to-rate-limiting] Abuse prevention is separate from [rate limiting](/guides/rate-limiting): * **Rate limiting** (`ATLAS_RATE_LIMIT_RPM`) is per-user, per-minute, and returns `429` immediately * **Abuse prevention** is per-workspace, uses a longer sliding window, and applies a graduated response Both can be active simultaneously. Rate limiting catches individual users making too many requests; abuse prevention catches workspace-level patterns that may involve multiple users or automated access. --- # Observability (/platform-ops/observability) Atlas includes built-in support for OpenTelemetry distributed tracing and structured JSON logging via Pino. Both are zero-overhead when disabled. * Atlas server running (`bun run dev`) * For tracing: an OpenTelemetry-compatible collector (Jaeger, Grafana Tempo, Datadog, etc.) OpenTelemetry Tracing [#opentelemetry-tracing] Atlas uses the `@opentelemetry/api` package to create spans around key operations. When the OpenTelemetry SDK is not initialized, the API returns no-op tracers with zero runtime overhead. Enabling Tracing [#enabling-tracing] Set `OTEL_EXPORTER_OTLP_ENDPOINT` to your collector's OTLP HTTP endpoint: ```bash OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 ``` The API server initializes the OpenTelemetry Node.js SDK on startup, registering a trace exporter that sends spans to `{OTEL_EXPORTER_OTLP_ENDPOINT}/v1/traces`. The API is identified as `atlas-api`; the web frontend as `atlas`. Both use the package version from `package.json`. When the environment variable is absent, no SDK is initialized and all trace calls are no-ops — zero overhead. 
What Gets Traced [#what-gets-traced] Atlas creates spans at each layer of the request lifecycle, forming a proper parent-child hierarchy: ``` HTTP Request (http.request) └── Agent Loop (atlas.agent) ├── Step 1 │ ├── atlas.explore │ └── atlas.explore └── Step 2 └── atlas.sql.execute ``` | Span Name | Attributes | Description | | ---------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- | | `http.request` | `http.method`, `http.target`, `http.status_code` | Root span per API request (Hono middleware) | | `atlas.agent` | `atlas.provider`, `atlas.model`, `atlas.message_count`, `atlas.finish_reason`, `atlas.total_steps`, `atlas.total_input_tokens`, `atlas.total_output_tokens` | Full agent loop (one per `streamText` call) | | `atlas.sql.execute` | `db.system`, `atlas.connection_id`, `atlas.row_count`, `atlas.column_count` | SQL query execution. SQL content is **not** included for security | | `atlas.explore` | `atlas.command` (truncated), `atlas.backend` | Semantic layer file exploration (ls, cat, grep, find) | | `atlas.python.execute` | `code.length` | Python code execution in sandbox | Each span records success/failure status and captures exceptions on error, making it straightforward to trace agent step failures back to specific tool calls. Collector Setup [#collector-setup] Any OpenTelemetry-compatible collector works. Here are common setups: Jaeger [#jaeger] ```bash # Run Jaeger with OTLP ingestion (port 16686 = UI, 4318 = OTLP HTTP) docker run -d --name jaeger \ -p 16686:16686 \ -p 4318:4318 \ jaegertracing/jaeger:latest # Point Atlas at the Jaeger OTLP endpoint OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 ``` Open `http://localhost:16686` to view traces. Search for service `atlas-api` (API server) or `atlas` (web frontend). 
Grafana Tempo [#grafana-tempo] ```bash OTEL_EXPORTER_OTLP_ENDPOINT=http://tempo:4318 ``` Query traces in Grafana's Explore view using the Tempo datasource. Datadog [#datadog] Use the Datadog Agent's OTLP ingestion: ```bash OTEL_EXPORTER_OTLP_ENDPOINT=http://datadog-agent:4318 ``` See the [Datadog OTLP documentation](https://docs.datadoghq.com/opentelemetry/otlp_ingest_in_the_agent/) for agent configuration. Generic OTLP Collector [#generic-otlp-collector] Any service that accepts OTLP over HTTP (Honeycomb, Axiom, Signoz, etc.) works by setting the endpoint: ```bash # Set the OTLP endpoint and any required auth headers for your collector OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=your-api-key" ``` Atlas uses `@opentelemetry/exporter-trace-otlp-http` for trace export. Standard OpenTelemetry environment variables like `OTEL_EXPORTER_OTLP_HEADERS` and `OTEL_RESOURCE_ATTRIBUTES` are respected by the underlying SDK. Graceful Shutdown [#graceful-shutdown] The SDK registers a `SIGTERM` handler to flush pending spans before the process exits. This ensures traces from the final requests are not lost during container restarts or deployments. *** Structured Logging [#structured-logging] Atlas uses [Pino](https://getpino.io/) for structured JSON logging. Every log line includes a timestamp, level, component name, and request context (when available). Log Levels [#log-levels] Control verbosity with `ATLAS_LOG_LEVEL`: ```bash ATLAS_LOG_LEVEL=debug # trace | debug | info | warn | error | fatal ``` The default level is `info`. In development (`NODE_ENV !== "production"`), logs are formatted with `pino-pretty` for human readability. In production, logs are emitted as single-line JSON for machine parsing. 
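When post-processing production JSON logs without `pino-pretty`, the numeric Pino levels translate mechanically to names. A minimal sketch in plain TypeScript (no Pino dependency; `summarize` is a hypothetical helper, and the field layout follows the log structure documented on this page):

```typescript
// Standard Pino numeric levels.
const LEVEL_NAMES: Record<number, string> = {
  10: "trace", 20: "debug", 30: "info", 40: "warn", 50: "error", 60: "fatal",
};

interface AtlasLogLine {
  level: number;          // numeric Pino level
  time: number;           // Unix timestamp in milliseconds
  msg: string;            // human-readable message
  component?: string;     // emitting module, e.g. "sql"
  requestId?: string;     // present inside a request context
  [key: string]: unknown; // extra structured fields (durationMs, rowCount, ...)
}

// Convert a single-line production JSON log into a readable summary.
function summarize(line: string): string {
  const entry = JSON.parse(line) as AtlasLogLine;
  const ts = new Date(entry.time).toISOString();
  const level = LEVEL_NAMES[entry.level] ?? String(entry.level);
  return `[${ts}] ${level.toUpperCase()} (${entry.component ?? "-"}): ${entry.msg}`;
}
```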
Log Structure [#log-structure] Each log entry includes: | Field | Description | | ----------- | ----------------------------------------------------------------------------- | | `level` | Numeric Pino level (10=trace, 20=debug, 30=info, 40=warn, 50=error, 60=fatal) | | `time` | Unix timestamp in milliseconds | | `msg` | Human-readable message | | `component` | Module that emitted the log (e.g., `agent`, `sql`, `explore`, `auth`) | | `requestId` | UUID for the current request (when inside a request context) | | `userId` | Authenticated user ID (when inside a request context) | Example Output [#example-output] Production (JSON): ```json {"level":30,"time":1706000000000,"component":"sql","requestId":"abc-123","msg":"Query executed","durationMs":45,"rowCount":100} ``` Development (pretty-printed): ``` [10:30:00.000] INFO (sql): Query executed requestId: "abc-123" durationMs: 45 rowCount: 100 ``` Component Loggers [#component-loggers] Atlas creates child loggers per component using `createLogger("component-name")`. Key components: * `agent` -- Agent loop lifecycle and step transitions * `sql` -- SQL validation, execution, and audit * `explore` -- Semantic layer file access * `auth` -- Authentication and authorization decisions * `admin-routes` -- Admin API operations * `scheduler` -- Scheduled task execution * `conversations` -- Conversation persistence * `actions` -- Action approval and execution *** Troubleshooting [#troubleshooting] No traces appearing in the collector [#no-traces-appearing-in-the-collector] **Cause:** `OTEL_EXPORTER_OTLP_ENDPOINT` is not set, or the collector is unreachable from the Atlas server. **Fix:** Verify the environment variable is set and the endpoint is reachable: `curl http://localhost:4318/v1/traces`. Check that the collector is running and accepting OTLP HTTP connections on the configured port. Logs are JSON in development [#logs-are-json-in-development] **Cause:** `NODE_ENV` is set to `production` (or a non-development value). 
Pino uses JSON output in production and pretty-printed output in development. **Fix:** For development, ensure `NODE_ENV` is unset or set to `development`. For production where you want readable logs, pipe through pino-pretty: `bun run dev:api | bun x pino-pretty`. Missing `requestId` or `userId` in log entries [#missing-requestid-or-userid-in-log-entries] **Cause:** The log was emitted outside of a request context (e.g., during startup or in a background task like the scheduler). **Fix:** This is expected. Context fields (`requestId`, `userId`) are only present for logs emitted inside an HTTP request handler. Startup and scheduler logs include `component` but not request-scoped fields. For more, see [Troubleshooting](/guides/troubleshooting). *** Related [#related] * [Troubleshooting](/guides/troubleshooting#debug-logging) -- enable debug logging and interpret diagnostic output * [Environment Variables](/reference/environment-variables) -- `ATLAS_LOG_LEVEL`, `OTEL_EXPORTER_OTLP_ENDPOINT`, and related config --- # SLA Monitoring (/platform-ops/sla-monitoring) Atlas provides built-in SLA monitoring that tracks per-workspace query performance and reliability metrics. Platform operators can view latency percentiles (p50/p95/p99), error rates, and uptime — with configurable alerting when thresholds are breached. SLA monitoring is available on [app.useatlas.dev](https://app.useatlas.dev) Enterprise plans. Self-hosted deployments can use their own monitoring infrastructure. 
* Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) * Internal database configured (`DATABASE_URL`) * Platform admin role for dashboard access How It Works [#how-it-works] Every query execution automatically records two data points: * **Latency** — round-trip time in milliseconds * **Outcome** — success or error These are stored in the internal database and aggregated on-demand into: | Metric | Description | | --------------------------- | -------------------------------------------------------- | | **P50 / P95 / P99 latency** | Query latency percentiles over the time window | | **Error rate** | Percentage of queries that returned errors | | **Uptime** | Percentage of successful queries (inverse of error rate) | | **Total queries** | Query volume per workspace | Metrics are computed over a configurable time window (default: 24 hours). Pass `?hours=N` (1–720) to the API endpoints to adjust the window. Alerting [#alerting] SLA alerts fire when workspace metrics exceed configured thresholds. Two alert types are supported: | Alert Type | Default Threshold | Description | | --------------- | ----------------- | ----------------------------------- | | **P99 Latency** | 5000ms | P99 query latency exceeds threshold | | **Error Rate** | 5% | Error rate exceeds threshold | Alert Lifecycle [#alert-lifecycle] Alerts progress through three states: 1. **Firing** — Threshold breached. Notification sent via webhook (if configured). 2. **Acknowledged** — Operator has acknowledged the alert but it remains active. 3. **Resolved** — Metric returned below threshold. Auto-resolved on next evaluation. 
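The on-demand aggregation can be sketched with a nearest-rank percentile. This is a simplified illustration — the exact percentile method Atlas uses internally is not specified here, and `QuerySample`/`aggregate` are hypothetical names:

```typescript
interface QuerySample {
  latencyMs: number; // round-trip time
  ok: boolean;       // success or error outcome
}

// Nearest-rank percentile over a sorted copy of the samples.
function percentile(values: number[], p: number): number {
  if (values.length === 0) return 0;
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Aggregate one window of samples into dashboard metrics.
function aggregate(samples: QuerySample[]) {
  const latencies = samples.map((s) => s.latencyMs);
  const errors = samples.filter((s) => !s.ok).length;
  const errorRatePct = samples.length ? (errors / samples.length) * 100 : 0;
  return {
    p50: percentile(latencies, 50),
    p95: percentile(latencies, 95),
    p99: percentile(latencies, 99),
    errorRatePct,
    uptimePct: 100 - errorRatePct, // inverse of error rate
    totalQueries: samples.length,
  };
}
```

An alert evaluator would then compare `p99` and `errorRatePct` against the configured thresholds on each run.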
Webhook Notifications [#webhook-notifications] Set `ATLAS_SLA_WEBHOOK_URL` to receive alert notifications via HTTP POST: ```json { "type": "sla.alert.fired", "alert": { "id": "abc-123", "workspaceId": "ws-456", "workspaceName": "Acme Corp", "type": "latency_p99", "status": "firing", "currentValue": 6200, "threshold": 5000, "message": "Workspace \"Acme Corp\" p99 latency 6200ms exceeds threshold 5000ms" }, "timestamp": "2026-03-23T10:30:00.000Z" } ``` Configuration [#configuration] Environment Variables [#environment-variables] | Variable | Default | Description | | -------------------------- | ------- | ---------------------------------------- | | `ATLAS_SLA_LATENCY_P99_MS` | `5000` | Default P99 latency alert threshold (ms) | | `ATLAS_SLA_ERROR_RATE_PCT` | `5` | Default error rate alert threshold (%) | | `ATLAS_SLA_WEBHOOK_URL` | — | Webhook URL for alert delivery | Thresholds can also be configured through the admin UI, which takes precedence over env vars. Dashboard [#dashboard] The SLA monitoring dashboard is available in the admin console under **Platform Admin > SLA Monitoring**. It requires the `platform_admin` role. Overview Tab [#overview-tab] A table of all workspaces showing: * Latency percentiles (P50, P95, P99) with color-coded badges * Error rate and uptime percentage * Total query count * Click-through to per-workspace detail with hourly time-series charts Alerts Tab [#alerts-tab] * Active and recent alerts with status badges * One-click acknowledge for firing alerts * Manual "Evaluate Now" to trigger immediate alert evaluation * Threshold configuration dialog API Endpoints [#api-endpoints] All endpoints require `platform_admin` role and are mounted at `/api/v1/platform/sla`. 
| Method | Path | Description | | ------ | ------------------------------ | ---------------------------------------------------- | | `GET` | `/?hours=24` | All workspaces SLA summary (hours: 1–720) | | `GET` | `/:workspaceId?hours=24` | Per-workspace detail with time-series | | `GET` | `/alerts?status=&limit=100` | List alerts (status: firing, resolved, acknowledged) | | `GET` | `/thresholds` | Current alert thresholds | | `PUT` | `/thresholds` | Update alert thresholds | | `POST` | `/alerts/:alertId/acknowledge` | Acknowledge a firing alert | | `POST` | `/evaluate` | Trigger alert evaluation | --- # Data Residency (/platform-ops/data-residency) Atlas data residency controls let operators assign workspaces to geographic regions (e.g. `eu-west`, `us-east`). Each region maps to a dedicated database, ensuring tenant data stays in the correct jurisdiction. Data residency is available on [app.useatlas.dev](https://app.useatlas.dev) Enterprise plans. Self-hosted deployments manage database routing directly. * Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) * Internal database configured (`DATABASE_URL`) * Platform admin role for dashboard access * At least one region configured in `atlas.config.ts` How It Works [#how-it-works] When a workspace is created or assigned a region, Atlas routes its analytics datasource queries to the region-specific datasource URL configured for that region. Internal data (conversations, audit logs) is stored in the default internal database (`DATABASE_URL`) regardless of region. | Concept | Description | | ------------------ | --------------------------------------------------------------------------- | | **Region** | A geographic identifier (e.g. 
`us-east`, `eu-west`) mapped to database URLs | | **Assignment** | A workspace is assigned to a region at creation time | | **Migration** | Region can be changed via a cross-region data migration (see below) | | **Default region** | New workspaces get the configured default region if none is specified | Configuration [#configuration] atlas.config.ts [#atlasconfigts] Add a `residency` section to your Atlas configuration: ```typescript import { defineConfig } from "@atlas/api/lib/config"; export default defineConfig({ datasources: { default: { url: process.env.ATLAS_DATASOURCE_URL! }, }, enterprise: { enabled: true, licenseKey: process.env.ATLAS_ENTERPRISE_LICENSE_KEY, }, residency: { regions: { "us-east": { label: "US East (Virginia)", databaseUrl: "postgresql://us-east-db.example.com/atlas", datasourceUrl: "postgresql://us-east-analytics.example.com/warehouse", }, "eu-west": { label: "EU West (Ireland)", databaseUrl: "postgresql://eu-west-db.example.com/atlas", datasourceUrl: "postgresql://eu-west-analytics.example.com/warehouse", }, "ap-southeast": { label: "Asia Pacific (Singapore)", databaseUrl: "postgresql://ap-southeast-db.example.com/atlas", }, }, defaultRegion: "us-east", }, }); ``` Each region requires: | Field | Required | Description | | --------------- | -------- | ------------------------------------------------- | | `label` | Yes | Human-readable name shown in the admin console | | `databaseUrl` | Yes | PostgreSQL URL for the region's internal database | | `datasourceUrl` | No | Override analytics datasource URL for this region | The `defaultRegion` must be one of the configured region keys. Admin UI [#admin-ui] Navigate to **Admin → Data Residency** (platform admin only) to: * View all configured regions with workspace counts and health status * See which workspaces are assigned to which regions * Identify the default region for new workspaces Region assignment is also visible in the **Platform Admin → Workspaces** table as a region column. 
API [#api] List Configured Regions [#list-configured-regions] ```bash curl http://localhost:3001/api/v1/platform/residency/regions \ -H "Authorization: Bearer $TOKEN" ``` Response: ```json { "regions": [ { "region": "us-east", "label": "US East (Virginia)", "workspaceCount": 12, "healthy": true }, { "region": "eu-west", "label": "EU West (Ireland)", "workspaceCount": 5, "healthy": true } ], "defaultRegion": "us-east" } ``` Assign Region to Workspace [#assign-region-to-workspace] ```bash curl -X POST http://localhost:3001/api/v1/platform/residency/workspaces/$WORKSPACE_ID/region \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d '{"region": "eu-west"}' ``` Changing a workspace's region triggers a cross-region data migration. See [Region Migration](#region-migration) below. Get Workspace Region [#get-workspace-region] ```bash curl http://localhost:3001/api/v1/platform/residency/workspaces/$WORKSPACE_ID/region \ -H "Authorization: Bearer $TOKEN" ``` List All Assignments [#list-all-assignments] ```bash curl http://localhost:3001/api/v1/platform/residency/assignments \ -H "Authorization: Bearer $TOKEN" ``` Connection Routing [#connection-routing] When a workspace has a region assigned: 1. **Analytics queries** — If the region has a `datasourceUrl`, queries route to that region-specific datasource instead of the default 2. **Internal writes** — All internal data (conversations, audit logs, learned patterns) goes to the default internal database configured via `DATABASE_URL` 3. **No region** — Workspaces without a region use the default datasource and internal database as before The routing is transparent to the agent and end users — the same API endpoints work regardless of region. Misrouting Detection [#misrouting-detection] When running multiple regional API instances, each instance identifies itself via the `ATLAS_API_REGION` env var (or falls back to `residency.defaultRegion` from config). 
If a request arrives from a workspace assigned to a different region, Atlas logs a warning with the request ID, org ID, expected region, and actual region. The `/api/health` endpoint includes a `region` field showing the instance's identity, plus a `misroutedRequests` counter when any misrouted requests have been detected. Strict mode [#strict-mode] Set `ATLAS_STRICT_ROUTING=true` (or `residency.strictRouting: true` in config) to reject misrouted requests with `421 Misdirected Request`. The response includes a `correctApiUrl` hint (from the region's `apiUrl` config) so the client can redirect: ```json { "error": "misdirected_request", "correctApiUrl": "https://api-eu.useatlas.dev", "expectedRegion": "eu-west", "actualRegion": "us-west" } ``` Region Migration [#region-migration] When a workspace needs to move to a different region, Atlas performs a cross-region data migration. The migration moves conversations, semantic entities, learned patterns, and org-scoped settings from the source region to the target. How it works [#how-it-works-1] The migration runs in 4 phases: 1. **Export** — workspace data is extracted from the source region's internal database into an export bundle 2. **Transfer** — the bundle is sent to the target region's API via an internal service-to-service endpoint 3. **Cutover** — the workspace's region assignment is updated and caches are flushed 4. **Cleanup** — source data is marked for removal after a 7-day grace period During migration, the workspace is **read-only** — write operations (new conversations, settings changes) return `409 Workspace Migrating` until the migration completes. Request a migration [#request-a-migration] ```bash curl -X POST http://localhost:3001/api/v1/admin/residency/migrate \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d '{"targetRegion": "eu-west"}' ``` Rate limited to one migration per 30 days per workspace. 
The migration runs in the background — poll `GET /api/v1/admin/residency/migration` for status. Check migration status [#check-migration-status] ```bash curl http://localhost:3001/api/v1/admin/residency/migration \ -H "Authorization: Bearer $TOKEN" ``` What moves [#what-moves] | Data | Strategy | | ----------------------------- | -------------------------------- | | Conversations + messages | Exported and imported via bundle | | Semantic entities (DB-backed) | Exported and imported via bundle | | Learned patterns | Exported and imported via bundle | | Org-scoped settings | Exported and imported via bundle | | Analytics datasource | Customer-managed (not migrated) | | Auth data (users, sessions) | Global — stays in place | Configuration [#configuration-1] Cross-region transfer requires: * `ATLAS_INTERNAL_SECRET` — shared secret across all regional API instances for service-to-service auth * `apiUrl` on each region config — the target region's public API endpoint ```typescript residency: { regions: { "us-east": { label: "US East", databaseUrl: "postgresql://...", apiUrl: "https://api-us.useatlas.dev", // required for migration }, "eu-west": { label: "EU West", databaseUrl: "postgresql://...", apiUrl: "https://api-eu.useatlas.dev", }, }, defaultRegion: "us-east", } ``` Retry and cancel [#retry-and-cancel] Failed migrations can be retried. 
Pending migrations can be cancelled: ```bash # Retry a failed migration curl -X POST http://localhost:3001/api/v1/admin/residency/migrate/$MIGRATION_ID/retry \ -H "Authorization: Bearer $TOKEN" # Cancel a pending migration curl -X POST http://localhost:3001/api/v1/admin/residency/migrate/$MIGRATION_ID/cancel \ -H "Authorization: Bearer $TOKEN" ``` Limitations [#limitations] * Region health checks report all configured regions as healthy (no active probing) * Requires enterprise license * Region configuration changes require server restart * One migration per workspace per 30 days --- # Platform Admin Console (/platform-ops/platform-admin) The platform admin console provides operators with a unified view across all workspaces. Monitor resource usage, manage workspace lifecycles, detect noisy neighbors, and enforce plan changes — all from a single dashboard. * [Managed auth](/deployment/authentication#managed-auth) enabled * Internal database configured (`DATABASE_URL`) * `platform_admin` user role (assigned via `ATLAS_ADMIN_EMAIL` at first signup or directly in the database) Platform Admin Role [#platform-admin-role] The `platform_admin` role is a user-level role distinct from workspace admin. It uses Better Auth's admin plugin `user.role` field — the same mechanism as the existing `admin` role but with elevated privileges. | Role | Scope | Purpose | | ---------------- | --------- | --------------------------------------- | | `member` | Workspace | Query data, view conversations | | `admin` | Workspace | Manage connections, users, settings | | `owner` | Workspace | Full workspace control, delete org | | `platform_admin` | Platform | Cross-tenant management, all workspaces | Platform admins can also access workspace-level admin routes (they inherit admin capabilities). Assigning the Role [#assigning-the-role] The first user to sign up (when `ATLAS_ADMIN_EMAIL` matches or no admin exists) is automatically assigned `platform_admin`. 
To promote an existing user: ```sql UPDATE "user" SET role = 'platform_admin' WHERE email = 'operator@example.com'; ``` Dashboard [#dashboard] Navigate to **Admin > Platform Admin** in the sidebar. The dashboard tab shows aggregate statistics: * **Workspaces** — total count with active/suspended breakdown * **Active Users** — total registered users across all workspaces * **Queries (24h)** — platform-wide query volume * **MRR** — monthly recurring revenue estimate based on plan tiers A bar chart shows the top 10 workspaces by query volume. Workspace Management [#workspace-management] The workspaces tab lists all organizations with sortable columns and filters for status and plan tier. Actions [#actions] | Action | Endpoint | Description | | ----------- | ------------------------------------------------ | ----------------------------- | | View | `GET /api/v1/platform/workspaces/:id` | Resource breakdown, user list | | Suspend | `POST /api/v1/platform/workspaces/:id/suspend` | Block all access | | Unsuspend | `POST /api/v1/platform/workspaces/:id/unsuspend` | Restore access | | Delete | `DELETE /api/v1/platform/workspaces/:id` | Soft-delete + cascade cleanup | | Change plan | `PATCH /api/v1/platform/workspaces/:id/plan` | Update plan tier | All actions require confirmation dialogs in the UI. Delete cascades to conversations, semantic entities, learned patterns, suggestions, and scheduled tasks. Noisy Neighbor Detection [#noisy-neighbor-detection] The noisy neighbors tab compares each workspace's resource usage against the platform median. Workspaces consuming more than **3x the median** in queries, tokens, or storage are flagged. Each alert card shows: * The workspace name and plan tier * Which metric exceeded the threshold * The actual value vs. median * The ratio (e.g., `5.2x`) API Reference [#api-reference] All endpoints are gated on `user.role === "platform_admin"`. In no-auth mode (local dev), all requests pass. 
| Method | Path | Description | | -------- | ------------------------------------------- | ------------------------ | | `GET` | `/api/v1/platform/workspaces` | List all workspaces | | `GET` | `/api/v1/platform/workspaces/:id` | Workspace detail + users | | `POST` | `/api/v1/platform/workspaces/:id/suspend` | Suspend workspace | | `POST` | `/api/v1/platform/workspaces/:id/unsuspend` | Unsuspend workspace | | `DELETE` | `/api/v1/platform/workspaces/:id` | Delete workspace | | `PATCH` | `/api/v1/platform/workspaces/:id/plan` | Change plan tier | | `GET` | `/api/v1/platform/stats` | Platform-wide statistics | | `GET` | `/api/v1/platform/noisy-neighbors` | Noisy neighbor detection | --- # Plugin Catalog Management (/platform-ops/plugin-catalog) The plugin catalog is the platform-level registry of plugins available for installation. Platform operators add plugins to the catalog, set plan-tier requirements, and control visibility. Workspaces install plugins from this catalog via the [Plugin Marketplace](/guides/plugin-marketplace). * [Managed auth](/deployment/authentication#managed-auth) enabled * Internal database configured (`DATABASE_URL`) * [`platform_admin` user role](/platform-ops/platform-admin#platform-admin-role) *** Accessing the Catalog [#accessing-the-catalog] Navigate to **Admin > Platform Admin > Plugin Catalog** (`/admin/platform/plugins`). The page shows a table of all catalog entries with columns for plugin name, type, minimum plan, enabled status, and creation date. *** Adding a Plugin to the Catalog [#adding-a-plugin-to-the-catalog] 1. Click **Add Plugin** in the top-right corner 2. Fill in the form: * **Name** — Display name shown to workspace admins * **Slug** — Unique lowercase identifier (e.g., `my-datasource`). 
Cannot be changed after creation * **Description** — What the plugin does (shown in the marketplace) * **Type** — Plugin category: Datasource, Context, Interaction, Action, or Sandbox * **Minimum Plan** — The lowest plan tier that can install this plugin (Free, Trial, Team, or Enterprise) * **npm Package** — Optional package name (e.g., `@useatlas/plugin-bigquery`) * **Icon URL** — Optional icon for marketplace display * **Config Schema (JSON)** — Optional JSON Schema defining configuration fields. When set, workspace admins see typed form fields during installation and configuration * **Enabled** — Whether the plugin is visible to workspaces 3. Click **Add to Catalog** The plugin is immediately available for workspaces on matching plan tiers. *** Editing a Catalog Entry [#editing-a-catalog-entry] 1. Click the **pencil icon** next to any entry in the table 2. Modify fields as needed (the slug cannot be changed) 3. Click **Save Changes** Changes to visibility or plan requirements take effect immediately — workspaces see the updated catalog on their next page load. *** Enabling and Disabling Plugins [#enabling-and-disabling-plugins] Use the **toggle switch** in the Status column to enable or disable a plugin: * **Enabled** — Visible in the workspace marketplace. Workspaces can install it * **Disabled** — Hidden from the marketplace. Existing installations continue to work, but no new installations are allowed *** Removing a Plugin from the Catalog [#removing-a-plugin-from-the-catalog] 1. Click the **trash icon** next to the entry 2. Confirm the deletion Deleting a catalog entry **automatically uninstalls the plugin from all workspaces** that currently have it. This action cannot be undone. Consider disabling the plugin instead if you want to prevent new installations while preserving existing ones. *** Config Schema [#config-schema] The config schema uses JSON Schema format to define fields that workspace admins fill in when installing or configuring a plugin. 
Example: ```json { "properties": { "apiKey": { "type": "string", "title": "API Key", "description": "Your service API key" }, "region": { "type": "string", "title": "Region", "description": "Deployment region" }, "enableCache": { "type": "boolean", "title": "Enable Cache", "description": "Cache query results" } } } ``` When a config schema is set, the marketplace shows a typed form with proper input controls (text fields, toggles, etc.) instead of a raw JSON editor. *** API Endpoints [#api-endpoints] All platform catalog endpoints require the `platform_admin` role. | Method | Path | Description | | -------- | -------------------------------------- | -------------------------------- | | `GET` | `/api/v1/platform/plugins/catalog` | List all catalog entries | | `POST` | `/api/v1/platform/plugins/catalog` | Create a new catalog entry | | `PUT` | `/api/v1/platform/plugins/catalog/:id` | Update a catalog entry | | `DELETE` | `/api/v1/platform/plugins/catalog/:id` | Delete entry + cascade uninstall | *** See Also [#see-also] * [Plugin Marketplace](/guides/plugin-marketplace) — Workspace admin guide for installing plugins * [Platform Admin Console](/platform-ops/platform-admin) — Cross-tenant management overview * [Plugin Authoring Guide](/plugins/authoring-guide) — Build custom plugins --- # Custom Domains (/platform-ops/custom-domains) Custom domains let workspaces use their own URL (e.g. `data.acme.com`) instead of the default `app.useatlas.dev`. Atlas integrates with Railway's custom domain API for provisioning and automatic TLS certificate management via Let's Encrypt. Custom domains are available on [app.useatlas.dev](https://app.useatlas.dev) Enterprise plans. Self-hosted deployments configure domains directly in their hosting provider. Workspace admins can self-serve via **Admin Console → Configuration → Custom Domain**. Platform admins retain cross-workspace management at **Admin Console → Platform → Custom Domains**. 
* Active Enterprise plan on [app.useatlas.dev](https://app.useatlas.dev) (or self-hosted "free" tier) * Internal database configured (`DATABASE_URL`) * Admin role (workspace admin for self-serve, platform admin for cross-workspace management) * Railway deployment with API token configured How It Works [#how-it-works] | Step | What Happens | | ----------------- | ------------------------------------------------------------------------------------------- | | **Register** | Admin registers a domain in the Atlas admin console. Atlas calls Railway to provision it | | **Configure DNS** | Admin creates a CNAME record pointing their domain at the Railway target | | **Verify** | Admin clicks "Verify" — Atlas checks Railway for DNS propagation and TLS certificate status | | **Route** | Once verified, requests to the custom domain are routed to the correct workspace | Railway handles TLS certificate provisioning automatically. Atlas stores the domain-to-workspace mapping for host-based routing. Configuration [#configuration] Environment Variables [#environment-variables] Add these to your `.env` file (get values from your Railway dashboard): ```bash # Railway custom domain API RAILWAY_API_TOKEN=your-workspace-scoped-token RAILWAY_PROJECT_ID=your-project-id RAILWAY_ENVIRONMENT_ID=your-environment-id RAILWAY_WEB_SERVICE_ID=your-web-service-id ``` | Variable | Description | | ------------------------ | ------------------------------------------------------------------------ | | `RAILWAY_API_TOKEN` | Workspace-scoped API token from Railway dashboard → Settings → Tokens | | `RAILWAY_PROJECT_ID` | Project ID from your Railway project settings | | `RAILWAY_ENVIRONMENT_ID` | Environment ID (typically `production`) | | `RAILWAY_WEB_SERVICE_ID` | Service ID for the web service that should receive custom domain traffic | Workspace Admin (Self-Serve) [#workspace-admin-self-serve] Workspace admins on Enterprise plans can manage their own custom domain at **Admin Console → Configuration → 
Custom Domain**. One domain per workspace (MVP). Adding a Domain [#adding-a-domain] 1. Navigate to **Admin Console → Configuration → Custom Domain** 2. Enter your domain (e.g. `data.acme.com`) 3. Click **Add Domain** 4. Atlas registers the domain and shows DNS configuration instructions DNS Setup [#dns-setup] After adding, set up a CNAME record with your DNS provider: | Record Type | Name | Value | | ----------- | --------------- | --------------------------------------------------- | | CNAME | `data.acme.com` | `.up.railway.app` (shown in the admin page) | Click the copy button next to the CNAME target to copy it. Checking Status [#checking-status] 1. After creating the DNS record, wait a few minutes for propagation 2. Click **Check Status** on the domain page 3. Status updates to **Active** (green) once DNS is propagated and TLS certificate is issued Removing a Domain [#removing-a-domain] Click **Remove Domain** and confirm. This removes the domain from both Railway and Atlas. Plan Gating [#plan-gating] Non-enterprise workspaces see an upgrade prompt instead of the domain form. Self-hosted deployments (which default to the "free" tier) can use custom domains without plan restrictions. Platform Admin (Cross-Workspace) [#platform-admin-cross-workspace] Platform admins retain full domain management across all workspaces at **Admin Console → Platform → Custom Domains**. Registering a Domain [#registering-a-domain] 1. Click **Add Domain** 2. Enter the **Workspace ID** and **Domain** (e.g. `data.acme.com`) 3. Click **Register Domain** 4. Atlas creates the domain in Railway and shows the CNAME target Verification [#verification] 1. After creating the DNS record, wait a few minutes for propagation 2. Click the **refresh icon** next to the domain in the admin table 3. Atlas checks Railway for DNS propagation and certificate status 4. 
Status updates to **Verified** (green) once DNS is propagated and TLS certificate is issued Status Badges [#status-badges] | Status | Meaning | | --------------------------------- | ------------------------------------------------------------------------------------------------------------------------- | | **Pending** (yellow) | Domain registered, waiting for DNS setup or certificate provisioning | | **Verified** / **Active** (green) | DNS verified and TLS certificate issued — domain is live. API returns `"verified"` status; workspace UI displays "Active" | | **Failed** (red) | DNS verification or certificate provisioning failed | Deleting a Domain [#deleting-a-domain] Click the trash icon next to any domain. This removes the domain from both Railway (revoking the TLS certificate) and Atlas. Host-Based Routing [#host-based-routing] When a request arrives on a custom domain, Atlas: 1. Looks up the hostname in the `custom_domains` table (with 60-second cache) 2. If a verified match is found, sets the workspace context to the owning workspace 3. All subsequent operations (queries, conversations, etc.) are scoped to that workspace API Reference [#api-reference] Workspace Admin Endpoints [#workspace-admin-endpoints] Require `admin` role and active organization. Enterprise plan required for adding a domain (`POST /api/v1/admin/domain`). | Method | Path | Description | | -------- | ----------------------------- | ------------------------------------------ | | `GET` | `/api/v1/admin/domain` | Get workspace custom domain (null if none) | | `POST` | `/api/v1/admin/domain` | Add a custom domain (`{ domain }`) | | `POST` | `/api/v1/admin/domain/verify` | Check verification status | | `DELETE` | `/api/v1/admin/domain` | Remove workspace custom domain | Platform Admin Endpoints [#platform-admin-endpoints] Require `platform_admin` role and enterprise features. 
| Method | Path | Description | | -------- | ------------------------------------- | --------------------------------------------- | | `GET` | `/api/v1/platform/domains` | List all custom domains | | `POST` | `/api/v1/platform/domains` | Register a domain (`{ workspaceId, domain }`) | | `POST` | `/api/v1/platform/domains/:id/verify` | Check verification status | | `DELETE` | `/api/v1/platform/domains/:id` | Delete a domain | Troubleshooting [#troubleshooting] Domain stays in "Pending" status [#domain-stays-in-pending-status] * Verify your CNAME record is correctly configured with your DNS provider * DNS propagation can take up to 48 hours (though usually minutes) * Check that the CNAME target matches exactly what Atlas shows in the admin UI * Click the verify button again after waiting Certificate status shows "Failed" [#certificate-status-shows-failed] * Ensure no CAA records on your domain block Let's Encrypt (`letsencrypt.org`) * Verify no conflicting A or AAAA records exist for the same hostname * Delete and re-register the domain if the issue persists "Railway API is not configured" error [#railway-api-is-not-configured-error] * Ensure all four Railway environment variables are set: `RAILWAY_API_TOKEN`, `RAILWAY_PROJECT_ID`, `RAILWAY_ENVIRONMENT_ID`, `RAILWAY_WEB_SERVICE_ID` * Verify the API token has workspace-level permissions in Railway "Domain is not available" error [#domain-is-not-available-error] * The domain may already be registered in another Railway project * Check Railway dashboard for existing custom domain entries --- # Validate SQL without executing (/api-reference/validate-sql/postValidateSql) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Runs the full SQL validation pipeline (empty check, regex guard, AST parse, table whitelist) and returns structured results. Does NOT execute the query. 
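The staged, fail-fast ordering described above can be sketched as a chain of pure checks. This is a simplified illustration only, not Atlas's implementation: the AST-parse stage is omitted, and the regex guard and table extraction below are deliberately naive stand-ins.

```typescript
// Simplified sketch of a staged SQL validation pipeline: each stage can
// short-circuit with a structured result, and nothing is ever executed.
type ValidationResult = { valid: boolean; stage?: string; reason?: string };

// Illustrative write-keyword guard; a real pipeline's rules are broader.
const WRITE_KEYWORDS = /\b(insert|update|delete|drop|alter|truncate|create|grant)\b/i;

function extractTables(sql: string): string[] {
  // Naive FROM/JOIN scan; a real pipeline would walk a parsed AST instead.
  const matches = sql.matchAll(/\b(?:from|join)\s+([a-z_][a-z0-9_]*)/gi);
  return [...matches].map((m) => m[1].toLowerCase());
}

function validateSql(sql: string, whitelist: Set<string>): ValidationResult {
  if (!sql.trim()) {
    return { valid: false, stage: "empty-check", reason: "empty query" };
  }
  if (WRITE_KEYWORDS.test(sql)) {
    return { valid: false, stage: "regex-guard", reason: "write keyword detected" };
  }
  for (const table of extractTables(sql)) {
    if (!whitelist.has(table)) {
      return { valid: false, stage: "table-whitelist", reason: `table not allowed: ${table}` };
    }
  }
  return { valid: true };
}
```

A read-only `SELECT` against a whitelisted table passes every stage; anything else returns the first stage that rejected it, which is the shape of structured result the endpoint description implies.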
--- # List queryable tables (/api-reference/tables/getTables) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a simplified view of semantic layer entities with column details, enabling SDK consumers to discover queryable tables. --- # Disconnect Telegram (/api-reference/admin-integrations/deleteAdminIntegrationsTelegram) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Removes the Telegram installation for the current workspace. Any Telegram bot functionality will stop working until reconnected. --- # Connect Telegram (/api-reference/admin-integrations/postAdminIntegrationsTelegram) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Validates a Telegram bot token via the Telegram Bot API and saves the installation for the current workspace. --- # Connect GitHub via personal access token (/api-reference/admin-integrations/postAdminIntegrationsGithub) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Validates a GitHub personal access token via the GitHub API and saves the installation for the current workspace. --- # Connect email delivery provider (/api-reference/admin-integrations/postAdminIntegrationsEmail) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Saves email delivery configuration for the current workspace. Supports SMTP, SendGrid, Postmark, and SES providers. --- # Connect Discord via bot credentials (BYOT) (/api-reference/admin-integrations/postAdminIntegrationsDiscordByot) {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Validates a Discord bot token via the Discord API and saves the installation for the current workspace. --- # Disconnect Linear (/api-reference/admin-integrations/deleteAdminIntegrationsLinear) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Removes the Linear installation for the current workspace. Any Linear integration functionality will stop working until reconnected. --- # Connect WhatsApp via Cloud API credentials (/api-reference/admin-integrations/postAdminIntegrationsWhatsapp) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Validates WhatsApp Cloud API credentials via the Meta Graph API and saves the installation for the current workspace. --- # Disconnect GitHub (/api-reference/admin-integrations/deleteAdminIntegrationsGithub) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Removes the GitHub installation for the current workspace. Any GitHub integration functionality will stop working until reconnected. --- # Connect Slack via bot token (BYOT) (/api-reference/admin-integrations/postAdminIntegrationsSlackByot) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Validates a Slack bot token via auth.test and saves the installation for the current workspace. Use when platform OAuth is not configured. --- # Disconnect Teams (/api-reference/admin-integrations/deleteAdminIntegrationsTeams) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Removes the Teams installation for the current workspace. 
Any Teams bot functionality will stop working until reconnected. --- # Connect Google Chat via service account (/api-reference/admin-integrations/postAdminIntegrationsGchat) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Parses a Google Chat service account JSON key, validates required fields (client\_email, private\_key), and saves the installation for the current workspace. Structural validation only — does not call the Google API. --- # Connect Linear via API key (/api-reference/admin-integrations/postAdminIntegrationsLinear) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Validates a Linear personal API key via the Linear GraphQL API and saves the installation for the current workspace. --- # Disconnect Slack (/api-reference/admin-integrations/deleteAdminIntegrationsSlack) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Removes the Slack installation for the current workspace. Any Slack bot functionality will stop working until reconnected. --- # Get integration status (/api-reference/admin-integrations/getAdminIntegrationsStatus) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the status of all configured integrations for the current workspace: Slack, Teams, Discord, Telegram, Google Chat, GitHub, Linear, WhatsApp, Email, webhooks, available delivery channels, deploy mode, and internal database availability. --- # Send test email (/api-reference/admin-integrations/postAdminIntegrationsEmailTest) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. 
*/} Sends a test email using the saved email configuration for the current workspace. --- # Disconnect WhatsApp (/api-reference/admin-integrations/deleteAdminIntegrationsWhatsapp) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Removes the WhatsApp installation for the current workspace. Any WhatsApp messaging functionality will stop working until reconnected. --- # Disconnect email (/api-reference/admin-integrations/deleteAdminIntegrationsEmail) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Removes the email configuration for the current workspace. Email delivery will fall back to environment variables or be disabled until reconnected. --- # Disconnect Discord (/api-reference/admin-integrations/deleteAdminIntegrationsDiscord) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Removes the Discord installation for the current workspace. Any Discord bot functionality will stop working until reconnected. --- # Disconnect Google Chat (/api-reference/admin-integrations/deleteAdminIntegrationsGchat) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Removes the Google Chat installation for the current workspace. Any Google Chat bot functionality will stop working until reconnected. --- # Connect Teams via app credentials (BYOT) (/api-reference/admin-integrations/postAdminIntegrationsTeamsByot) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Validates Azure Bot app credentials via client credentials token acquisition and saves the installation for the current workspace. 
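The structural-only validation described for the Google Chat service account above (required fields checked, no external API call) can be sketched as follows. The function name and result shape are illustrative; only the required field names (`client_email`, `private_key`) come from the endpoint description.

```typescript
// Sketch: structural check of a service-account JSON key. Verifies the
// required fields exist and are non-empty strings without calling any API.
function validateServiceAccountKey(raw: string): { ok: boolean; missing: string[] } {
  let parsed: Record<string, unknown>;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { ok: false, missing: ["<invalid JSON>"] };
  }
  const required = ["client_email", "private_key"];
  const missing = required.filter((f) => typeof parsed[f] !== "string" || !parsed[f]);
  return { ok: missing.length === 0, missing };
}
```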
--- # Create or update org semantic entity (/api-reference/admin-semantic/putAdminSemanticOrgEntitiesByName) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Upserts a DB-backed semantic entity for the active organization. --- # List semantic metrics (/api-reference/admin-semantic/getAdminSemanticMetrics) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns all discovered semantic metrics from YAML files. --- # Get catalog (/api-reference/admin-semantic/getAdminSemanticCatalog) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the semantic layer catalog (catalog.yml) if it exists. --- # Get raw YAML (top-level) (/api-reference/admin-semantic/getAdminSemanticRawByFile) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Serves raw YAML content for a top-level file (catalog.yml, glossary.yml). --- # List org semantic entities (/api-reference/admin-semantic/getAdminSemanticOrgEntities) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists DB-backed semantic entities for the active organization. --- # Delete org semantic entity (/api-reference/admin-semantic/deleteAdminSemanticOrgEntitiesByName) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Deletes a DB-backed semantic entity for the active organization. --- # Schema diff (/api-reference/admin-semantic/getAdminSemanticDiff) {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Compares the live database schema against YAML entity definitions. Optionally specify a connection via ?connection=id. --- # Get entity detail (/api-reference/admin-semantic/getAdminSemanticEntitiesByName) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the full parsed YAML for a single semantic entity. --- # Bulk import org entities from disk (/api-reference/admin-semantic/postAdminSemanticOrgImport) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Imports semantic entities from the org's disk directory into the database. --- # Get raw YAML (subdirectory) (/api-reference/admin-semantic/getAdminSemanticRawByDirByFile) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Serves raw YAML content for a file in a subdirectory (e.g. entities/users.yml). --- # List semantic entities (/api-reference/admin-semantic/getAdminSemanticEntities) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns all discovered semantic layer entities from YAML files. --- # Get glossary (/api-reference/admin-semantic/getAdminSemanticGlossary) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns all glossary terms from semantic/glossary.yml and per-source glossaries. --- # Semantic layer stats (/api-reference/admin-semantic/getAdminSemanticStats) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. 
*/} Returns aggregate stats: entity count, column count, join count, measure count, coverage gaps. --- # Get org semantic entity (/api-reference/admin-semantic/getAdminSemanticOrgEntitiesByName) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a single DB-backed semantic entity for the active organization. --- # Query audit log (/api-reference/admin-audit/getAdminAudit) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns paginated audit log entries with optional filters for user, success, date range, connection, table, column, and search. --- # Export audit log as CSV (/api-reference/admin-audit/getAdminAuditExport) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Exports audit log entries as a CSV file (up to 10,000 rows). Respects current filters. --- # Audit filter facets (/api-reference/admin-audit/getAdminAuditFacets) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns distinct tables and columns from the audit log for filter dropdowns. --- # Audit statistics (/api-reference/admin-audit/getAdminAuditStats) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns aggregate audit stats: total queries, error count, error rate, and queries per day for the last 7 days. --- # Revoke invitation (/api-reference/admin-invitations/deleteAdminUsersInvitationsById) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Revokes a pending invitation. 
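The aggregate audit statistics described above (total queries, error count, error rate, queries per day) can be derived from raw audit rows roughly like this. The row shape and function name are assumptions for illustration, not Atlas's internal types.

```typescript
// Sketch: rolling raw audit rows up into the aggregate stats the
// endpoint returns. Row shape is an illustrative assumption.
type AuditRow = { success: boolean; day: string }; // day as "YYYY-MM-DD"

function auditStats(rows: AuditRow[]) {
  const total = rows.length;
  const errors = rows.filter((r) => !r.success).length;
  const perDay = new Map<string, number>();
  for (const r of rows) {
    perDay.set(r.day, (perDay.get(r.day) ?? 0) + 1);
  }
  return { total, errors, errorRate: total ? errors / total : 0, perDay };
}
```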
--- # List invitations (/api-reference/admin-invitations/getAdminUsersInvitations) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns invitations with optional status filter (pending, accepted, revoked, expired). --- # Create invitation (/api-reference/admin-invitations/postAdminUsersInvite) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Creates an invitation for a new user. Optionally sends an email via Resend. --- # Get billing status (/api-reference/billing/getBilling) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the billing status for the active workspace, including plan details, usage metrics, and subscription info. --- # Toggle BYOT mode (/api-reference/billing/postBillingByot) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Enables or disables Bring Your Own Token (BYOT) mode for the active workspace. Requires admin or owner role. --- # Create Stripe portal session (/api-reference/billing/postBillingPortal) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Creates a Stripe Customer Portal session for the active workspace. Returns a URL to redirect the user to. --- # Suspend a workspace (/api-reference/platform-admin/postPlatformWorkspacesByIdSuspend) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} SaaS only. Suspends a workspace, preventing all user access until reactivated. 
--- # Aggregate platform stats (/api-reference/platform-admin/getPlatformStats) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} SaaS only. Returns aggregate platform statistics: total workspaces, active users, total queries, MRR. --- # List all workspaces (/api-reference/platform-admin/getPlatformWorkspaces) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} SaaS only. Returns all workspaces across the platform with health metrics, usage data, plan info, and status. --- # Detect noisy neighbors (/api-reference/platform-admin/getPlatformNoisyNeighbors) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} SaaS only. Identifies workspaces consuming disproportionate resources (>3x median queries, tokens, or storage). --- # Change workspace plan tier (/api-reference/platform-admin/patchPlatformWorkspacesByIdPlan) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} SaaS only. Updates the plan tier for a workspace (free, trial, team, enterprise). --- # Get workspace details (/api-reference/platform-admin/getPlatformWorkspacesById) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} SaaS only. Returns detailed workspace information including resource breakdown and user list. --- # Unsuspend a workspace (/api-reference/platform-admin/postPlatformWorkspacesByIdUnsuspend) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} SaaS only. Reactivates a suspended workspace, restoring user access. 
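The noisy-neighbor heuristic described above (usage greater than 3x the median) can be sketched per metric as below. The `Usage` row shape is an assumption; only the 3x-median threshold comes from the endpoint description.

```typescript
// Sketch: flag workspaces whose usage on one metric exceeds a multiple
// of the median across all workspaces (default factor 3, per the docs).
type Usage = { workspaceId: string; queries: number };

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function noisyNeighbors(rows: Usage[], factor = 3): string[] {
  const m = median(rows.map((r) => r.queries));
  return rows.filter((r) => r.queries > factor * m).map((r) => r.workspaceId);
}
```

A real implementation would repeat this over tokens and storage as well and union the results.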
--- # Delete a workspace (/api-reference/platform-admin/deletePlatformWorkspacesById) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} SaaS only. Soft-deletes a workspace with cascading cleanup (conversations, semantic entities, learned patterns, suggestions, scheduled tasks). --- # Get pending approval count (/api-reference/admin-approval-workflows/getAdminApprovalPendingCount) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the count of pending (non-expired) approval requests for the organization. --- # Expire stale requests (/api-reference/admin-approval-workflows/postAdminApprovalExpire) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Manually expire all pending approval requests past their expiry time. --- # Update approval rule (/api-reference/admin-approval-workflows/putAdminApprovalRulesById) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Update an existing approval rule. --- # List approval requests (/api-reference/admin-approval-workflows/getAdminApprovalQueue) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns approval requests for the organization. Filterable by status via query parameter. --- # Review approval request (/api-reference/admin-approval-workflows/postAdminApprovalQueueById) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Approve or deny a pending approval request. 
--- # Get approval request (/api-reference/admin-approval-workflows/getAdminApprovalQueueById) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a single approval request by ID. --- # Create approval rule (/api-reference/admin-approval-workflows/postAdminApprovalRules) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Create a new approval rule for the organization. --- # Delete approval rule (/api-reference/admin-approval-workflows/deleteAdminApprovalRulesById) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Delete an approval rule. Pending requests referencing this rule are not affected. --- # List approval rules (/api-reference/admin-approval-workflows/getAdminApprovalRules) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns all approval rules for the current organization. --- # Drain org pools (/api-reference/admin-connections/postAdminConnectionsPoolOrgsByOrgIdDrain) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Drains all connection pools for a specific organization. --- # Flush cache (/api-reference/admin-connections/postAdminCacheFlush) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Flushes all cache entries. --- # List connections (/api-reference/admin-connections/getAdminConnections) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns all registered database connections. 
--- # Create connection (/api-reference/admin-connections/postAdminConnections) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Creates a new database connection. Tests it before saving. --- # Get connection detail (/api-reference/admin-connections/getAdminConnectionsById) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns connection detail including masked URL and schema. --- # Update connection (/api-reference/admin-connections/putAdminConnectionsById) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Updates an existing connection's URL, description, or schema. --- # Org-scoped pool metrics (/api-reference/admin-connections/getAdminConnectionsPoolOrgs) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns connection pool metrics scoped by organization. --- # Health check connection (/api-reference/admin-connections/postAdminConnectionsByIdTest) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Runs a health check on an existing connection. --- # Pool metrics (/api-reference/admin-connections/getAdminConnectionsPool) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns connection pool metrics for all connections. --- # Cache statistics (/api-reference/admin-connections/getAdminCacheStats) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns cache hit/miss statistics. 
--- # Drain connection pool (/api-reference/admin-connections/postAdminConnectionsByIdDrain) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Drains and recreates the pool for a specific connection. --- # Test connection URL (/api-reference/admin-connections/postAdminConnectionsTest) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Tests a database connection URL without persisting it. --- # Delete connection (/api-reference/admin-connections/deleteAdminConnectionsById) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Removes a connection from the registry and internal database. --- # Create a prompt item (/api-reference/admin-prompts/postAdminPromptsByIdItems) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Adds a new prompt item to a collection. The collection must not be built-in. Sort order defaults to MAX + 1 if not provided. --- # Update a prompt item (/api-reference/admin-prompts/patchAdminPromptsByCollectionIdItemsByItemId) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Updates a prompt item's question, description, and/or category. The parent collection must not be built-in. --- # Reorder prompt items (/api-reference/admin-prompts/putAdminPromptsByIdReorder) {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Reorders all items within a collection. The itemIds array must contain every item ID in the collection exactly once. 
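The exact-once constraint on reorder payloads above amounts to checking that `itemIds` is a permutation of the collection's current item IDs. A minimal sketch (function name and inputs are illustrative):

```typescript
// Sketch: a reorder payload is valid only if it is a permutation of the
// collection's current item IDs -- every ID exactly once, nothing extra.
function isValidReorder(existingIds: string[], itemIds: string[]): boolean {
  if (itemIds.length !== existingIds.length) return false;
  const seen = new Set(itemIds);
  // A Set collapses duplicates, so equal sizes rule them out.
  if (seen.size !== itemIds.length) return false;
  return existingIds.every((id) => seen.has(id));
}
```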
---

# List prompt collections (/api-reference/admin-prompts/getAdminPrompts)

Returns all prompt collections for the admin's active organization, including built-in collections. Ordered by sort\_order then created\_at.

---

# Delete a prompt collection (/api-reference/admin-prompts/deleteAdminPromptsById)

Permanently deletes a prompt collection and cascades to its items. Built-in collections cannot be deleted.

---

# Delete a prompt item (/api-reference/admin-prompts/deleteAdminPromptsByCollectionIdItemsByItemId)

Permanently removes a prompt item. The parent collection must not be built-in.

---

# Update a prompt collection (/api-reference/admin-prompts/patchAdminPromptsById)

Updates a prompt collection's name, industry, and/or description. Built-in collections cannot be modified.

---

# Create a prompt collection (/api-reference/admin-prompts/postAdminPrompts)

Creates a new prompt collection. The handler validates that name and industry are present. The collection is always created as non-built-in.

---

# Reset guided tour so it can be replayed (/api-reference/onboarding/postOnboardingTourReset)

Clears the tour completion timestamp for the authenticated user, allowing the guided tour to be triggered again.

---

# Test a database connection (/api-reference/onboarding/postOnboardingTestConnection)

Validates the URL scheme, creates a temporary connection, runs a health check, and returns the result. The connection is not persisted. Requires managed auth mode and an authenticated session.

---

# Set up workspace with demo data (/api-reference/onboarding/postOnboardingUseDemo)

Connects the workspace to the platform's default datasource (ATLAS\_DATASOURCE\_URL) and imports the semantic layer for the chosen demo dataset. Three datasets are available: 'demo' (SaaS CRM, 3 tables), 'cybersec' (Sentinel Security, 62 tables), 'ecommerce' (NovaMart, 52 tables). Defaults to 'demo' if not specified.

---

# List enabled social login providers (/api-reference/onboarding/getOnboardingSocialProviders)

Returns which OAuth providers (Google, GitHub, Microsoft) are configured so the signup page can render the correct buttons. Public endpoint — no authentication required.

---

# Complete workspace setup (/api-reference/onboarding/postOnboardingComplete)

Finalizes onboarding by testing the connection, encrypting the URL, and persisting it to the internal database scoped to the user's active organization. Resets the semantic layer whitelist cache so new tables become queryable immediately.
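The use-demo endpoint accepts one of three documented dataset keys and falls back to 'demo' when none is given. A client-side sketch of that choice (the `resolveDataset` helper and its typing are ours, not the SDK's):

```typescript
// The three demo datasets documented for the use-demo endpoint;
// 'demo' is the server-side default.
const DEMO_DATASETS = ["demo", "cybersec", "ecommerce"] as const;
type DemoDataset = (typeof DEMO_DATASETS)[number];

// Normalize an optional user choice to a valid dataset key, falling
// back to 'demo' just as the endpoint does when none is specified.
function resolveDataset(choice?: string): DemoDataset {
  const found = DEMO_DATASETS.find((d) => d === choice);
  return found ?? "demo";
}
```

The `as const` tuple keeps the allowed keys and the `DemoDataset` type in one place.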
---

# Mark guided tour as completed (/api-reference/onboarding/postOnboardingTourComplete)

Records that the authenticated user has completed (or dismissed) the guided tour. Idempotent — calling multiple times is safe.

---

# Get guided tour completion status (/api-reference/onboarding/getOnboardingTourStatus)

Returns whether the authenticated user has completed the guided tour. Used on app load to decide whether to auto-start the tour.

---

# Revoke all user sessions (/api-reference/admin-sessions/deleteAdminSessionsUserByUserId)

Revokes all sessions for a specific user.

---

# Session statistics (/api-reference/admin-sessions/getAdminSessionsStats)

Returns total, active, and unique user session counts.

---

# Revoke session (/api-reference/admin-sessions/deleteAdminSessionsById)

Revokes a single session by ID.

---

# List sessions (/api-reference/admin-sessions/getAdminSessions)

Returns paginated active sessions with user info. Supports search by email or IP.

---

# Discord OAuth install redirect (/api-reference/discord/getDiscordInstall)

Redirects to the Discord OAuth2 authorize page. Requires DISCORD\_CLIENT\_ID to be configured.

---

# Discord OAuth callback (/api-reference/discord/getDiscordCallback)

Handles the OAuth2 callback from Discord. Verifies the guild authorization, saves the installation, and returns HTML on success or failure.

---

# Historical usage aggregates (/api-reference/admin-usage/getAdminUsageHistory)

Returns historical usage summaries aggregated by period (daily or monthly). Supports date range filtering and limit.

---

# Current period usage (/api-reference/admin-usage/getAdminUsage)

Returns the current billing period usage summary (query count, token count, active users) for the admin's active workspace.

---

# Per-user usage breakdown (/api-reference/admin-usage/getAdminUsageBreakdown)

Returns per-user usage breakdown (query count, token count, login count) for the active workspace. Supports date range filtering and limit.

---

# Combined usage dashboard (/api-reference/admin-usage/getAdminUsageSummary)

Returns a combined dashboard payload: current period usage, plan limits, up to 31 daily history points (today + past 30 days), and per-user breakdown (top 50).
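The usage dashboard's "today + past 30 days" window implies up to 31 daily points. Building the matching date axis client-side might look like this (the `usageWindow` helper is illustrative, not part of the SDK):

```typescript
// Build the 31-day window the dashboard describes: today plus the
// past 30 days, as ISO date strings (YYYY-MM-DD) in ascending order.
function usageWindow(today: Date = new Date()): string[] {
  const days: string[] = [];
  for (let offset = 30; offset >= 0; offset--) {
    const d = new Date(today);
    d.setUTCDate(d.getUTCDate() - offset); // Date handles month rollover
    days.push(d.toISOString().slice(0, 10));
  }
  return days;
}
```

A precomputed axis like this lets a chart show gaps for days with no recorded usage.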
---

# Get workspace branding (public) (/api-reference/branding/getBranding)

Returns the workspace's custom branding for the current session. No admin role required. Returns null branding if no custom branding is set.

---

# Fork a conversation at a specific message (/api-reference/conversations/postConversationsByIdFork)

Creates a new conversation by forking an existing one at the specified message. Messages up to and including the fork point are copied to the new conversation. Branch metadata is saved to both the source and forked conversation's notebook state.

---

# Generate share link (/api-reference/conversations/postConversationsByIdShare)

Creates a shareable link for a conversation. Optionally specify expiry duration and share mode (public or org-only).

---

# Delete a conversation (/api-reference/conversations/deleteConversationsById)

Deletes a conversation and all its messages. Enforces ownership when auth is enabled.

---

# Star or unstar a conversation (/api-reference/conversations/patchConversationsByIdStar)

Sets the starred status of a conversation.

---

# Revoke share link (/api-reference/conversations/deleteConversationsByIdShare)

Revokes the share link for a conversation, making it private again.

---

# List conversations (/api-reference/conversations/getConversations)

Returns a paginated list of conversations for the authenticated user. Requires an internal database (DATABASE\_URL).

---

# Get conversation share status (/api-reference/conversations/getConversationsByIdShare)

Returns whether a conversation is currently shared and its share link details.

---

# Update notebook state (/api-reference/conversations/patchConversationsByIdNotebookState)

Updates the notebook state of a conversation, including cell order, cell properties, and branch metadata.

---

# View a shared conversation (/api-reference/conversations/getPublicConversationsByToken)

Returns the content of a shared conversation. No authentication required for public shares. Org-scoped shares require authentication. Rate limited per IP.

---

# Get conversation with messages (/api-reference/conversations/getConversationsById)

Returns a single conversation with all its messages. Enforces ownership when auth is enabled.

---

# Sign in with email (/api-reference/auth/signInEmail)

Authenticates a user with email and password. Returns a session token. Only available when auth mode is 'managed'.

---

# Sign out (/api-reference/auth/signOut)

Destroys the current session.

---

# Get current session (/api-reference/auth/getSession)

Returns the current session and user info. Requires a valid session cookie or Authorization header.

---

# Sign up with email (/api-reference/auth/signUpEmail)

Creates a new user account with email and password. Only available when auth mode is 'managed' (Better Auth).

---

# Approve a pending action (/api-reference/actions/postActionsByIdApprove)

Approves a pending action and triggers execution. Returns the updated action with results. In admin-only approval mode, the requester cannot approve their own action (separation of duties).

---

# Get action by ID (/api-reference/actions/getActionsById)

Returns a single action. Only returns actions requested by the authenticated user.

---

# List actions (/api-reference/actions/getActions)

Returns actions filtered by status.
Requires ATLAS\_ACTIONS\_ENABLED=true and an internal database.

---

# Deny a pending action (/api-reference/actions/postActionsByIdDeny)

Denies a pending action. Optionally provide a reason in the request body. In admin-only approval mode, the requester cannot deny their own action.

---

# Rollback an executed action (/api-reference/actions/postActionsByIdRollback)

Rolls back an executed action using stored rollback information. Requires the same approval permissions as the original action.

---

# Delete SSO provider (/api-reference/admin-sso/deleteAdminSsoProvidersById)

Permanently removes an SSO provider by ID.

---

# Create SSO provider (/api-reference/admin-sso/postAdminSsoProviders)

Creates a new SSO provider for the admin's active organization. Requires type, issuer, domain, and config.

---

# Get SSO enforcement status (/api-reference/admin-sso/getAdminSsoEnforcement)

Returns whether SSO enforcement is enabled for the admin's active organization.

---

# List SSO providers (/api-reference/admin-sso/getAdminSsoProviders)

Returns all SSO providers configured for the admin's active organization.
Each provider is returned as a summary (without full config).

---

# Get SSO provider (/api-reference/admin-sso/getAdminSsoProvidersById)

Returns a single SSO provider by ID, including the full (redacted) configuration.

---

# Set SSO enforcement (/api-reference/admin-sso/putAdminSsoEnforcement)

Enables or disables SSO enforcement for the admin's active organization. When enabled, password login is blocked for all members — they must sign in via the configured identity provider. Requires at least one active SSO provider to enable enforcement.

---

# Update SSO provider (/api-reference/admin-sso/patchAdminSsoProvidersById)

Updates an existing SSO provider. All fields are optional — only provided fields are updated.

---

# List semantic entities (/api-reference/semantic/getSemanticEntities)

Returns a summary of all entity definitions from the semantic layer YAML files. Each entity includes table name, description, column count, join count, and type.

---

# Get entity details (/api-reference/semantic/getSemanticEntitiesByName)

Returns the full parsed YAML content for a single semantic entity, including all dimensions, measures, joins, and query patterns.
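The SSO enforcement precondition above (at least one active provider before enforcement can be turned on) is a check a client can mirror before calling the endpoint. A sketch, with an illustrative provider shape that is narrower than the real summary:

```typescript
// Illustrative shape; the real provider summary carries more fields.
interface SsoProviderSummary {
  id: string;
  active: boolean;
}

// Mirror the documented precondition: enforcement may only be enabled
// when at least one active SSO provider exists for the organization.
function canEnableEnforcement(providers: SsoProviderSummary[]): boolean {
  return providers.some((p) => p.active);
}
```

Running this check client-side lets an admin UI disable the enforcement toggle instead of surfacing an API error.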
---

# Teams OAuth install redirect (/api-reference/teams/getTeamsInstall)

Redirects to the Azure AD admin consent page. Requires TEAMS\_APP\_ID to be configured.

---

# Teams OAuth callback (/api-reference/teams/getTeamsCallback)

Handles the admin consent callback from Azure AD. Saves the tenant authorization and returns HTML on success or failure.

---

# Delete setting override (/api-reference/admin-settings/deleteAdminSettingsByKey)

Removes a settings override, reverting to env var or default value.

---

# Update setting (/api-reference/admin-settings/putAdminSettingsByKey)

Sets or updates a settings override. Requires internal database.

---

# Get all settings (/api-reference/admin-settings/getAdminSettings)

Returns all known settings with current values and sources.

---

# Generate data access compliance report (/api-reference/admin-compliance/getAdminComplianceReportsDataAccess)

Returns a report of who queried what tables, when, and how often within the specified date range.
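Deleting an override "reverts to env var or default value", which implies a three-tier precedence: stored override, then environment variable, then built-in default. A sketch of that resolution order (function and field names are ours, not Atlas internals):

```typescript
type SettingSource = "override" | "env" | "default";

// Resolve a setting using the precedence the settings endpoints imply:
// a stored override wins, then the environment variable, then the default.
function resolveSetting(
  override: string | undefined,
  envValue: string | undefined,
  defaultValue: string,
): { value: string; source: SettingSource } {
  if (override !== undefined) return { value: override, source: "override" };
  if (envValue !== undefined) return { value: envValue, source: "env" };
  return { value: defaultValue, source: "default" };
}
```

Tracking the `source` alongside the value is what lets the get-all-settings endpoint report where each current value came from.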
---

# Delete a PII classification (/api-reference/admin-compliance/deleteAdminComplianceClassificationsById)

Permanently removes a PII column classification.

---

# Generate user activity compliance report (/api-reference/admin-compliance/getAdminComplianceReportsUserActivity)

Returns a report of user query activity, last login timestamp, and role information within the specified date range.

---

# List PII column classifications (/api-reference/admin-compliance/getAdminComplianceClassifications)

Returns the PII column classifications for the workspace.

---

# Update a PII classification (/api-reference/admin-compliance/putAdminComplianceClassificationsById)

Updates an existing PII column classification.

---

# Widget HTML host page (/api-reference/widget/widgetHost)

Serves a self-contained HTML page for iframe embedding. Renders the AtlasChat component with configurable theme, API URL, position, branding, and initial query.

---

# Widget TypeScript declarations (/api-reference/widget/widgetTypeDeclarations)

Returns ambient TypeScript declarations for window\.Atlas. Fallback for embedders who load only the script tag without installing @useatlas/react.

---

# Widget JavaScript bundle (/api-reference/widget/widgetJS)

Self-contained ESM bundle (React + AtlasChat). Cached for 24 hours.

---

# Widget script tag loader (/api-reference/widget/widgetLoader)

Returns a self-contained IIFE script that injects a floating chat bubble and iframe overlay into any host page. Reads data-\* attributes from its own `