SQL Validation Pipeline
How Atlas enforces read-only SQL execution through a 7-layer defense-in-depth pipeline — explained for developers, PMs, and security reviewers.
Atlas lets an AI agent write SQL against your production database. Without guardrails, this is dangerous — a single malicious or mistaken query could delete data, expose secrets, or lock up your database. The validation pipeline is designed to prevent this.
Every SQL query the agent writes passes through 7 layers of validation before it reaches your database. No query can bypass the pipeline — if any layer rejects, the query is blocked and the agent receives a structured error message explaining why.
Key terms
Before diving in, here are a few terms used throughout this page:
| Term | Meaning |
|---|---|
| DML (Data Manipulation Language) | SQL statements that change data: INSERT, UPDATE, DELETE |
| DDL (Data Definition Language) | SQL statements that change database structure: CREATE TABLE, DROP TABLE, ALTER TABLE |
| AST (Abstract Syntax Tree) | A structured representation of a SQL query, produced by a parser. Like a grammar diagram for code — it lets Atlas understand the structure of a query, not just its text |
| RLS (Row-Level Security) | A mechanism that restricts which rows a user can see by automatically adding filter conditions to every query |
| Table whitelist | The set of tables the agent is allowed to query — derived from your semantic layer entity YAML files |
| CTE (Common Table Expression) | A temporary named result set defined with WITH name AS (...). CTEs are part of the query, not real tables |
| Semantic layer | YAML files that describe your database tables, columns, and relationships — the agent reads these to understand your data. See Semantic Layer Concepts |
Pipeline overview
| Layer | Name | Runs | Purpose |
|---|---|---|---|
| 0 | Empty check | Before execution | Reject empty or whitespace-only input |
| 1 | Regex mutation guard | Before execution | Quick reject of DML/DDL keywords |
| 2 | AST parse | Before execution | Full SQL parse — verify single SELECT statement |
| 3 | Table whitelist | Before execution | Only allow tables defined in the semantic layer |
| 4 | RLS injection | During execution | Inject WHERE clauses for tenant isolation |
| 5 | Auto LIMIT | During execution | Cap result set size to prevent full-table scans |
| 6 | Statement timeout | During execution | Kill queries that run too long |
Layers 0–3 run in validateSQL() before the query touches the database. Layers 4–6 are applied during execution in executeSQL(). Between layers 3 and 4, per-source rate limiting checks concurrency and queries-per-minute limits.
The pipeline is identical across all supported databases. The only difference is the AST parser mode: node-sql-parser uses PostgresQL mode for PostgreSQL connections and MySQL mode for MySQL connections. Plugin-registered database types fall back to the PostgreSQL parser with a warning. Plugins can register a custom parserDialect via ConnectionPluginMeta to override the parser mode.
Layer 0: Empty check
What it does: Rejects empty strings, whitespace-only input, and bare semicolons. Trailing semicolons are stripped before processing.
Why it matters: An empty query sent to the database would return an error, but catching it here provides a clearer error message and avoids unnecessary database round-trips.
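The layer-0 normalization can be sketched in a few lines of TypeScript. This is an illustrative sketch, not Atlas's internal API — the function names are assumptions:

```typescript
// Sketch of a layer-0 check (illustrative names, not Atlas internals).
function normalizeQuery(sql: string): string {
  // Strip surrounding whitespace and trailing semicolons before validation.
  return sql.trim().replace(/;+\s*$/, "").trim();
}

function checkNotEmpty(sql: string): void {
  if (normalizeQuery(sql).length === 0) {
    throw new Error("Empty query: nothing to execute");
  }
}
```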
-- Blocked
""
" "
";"Layer 1: Regex mutation guard
What it does: A fast first-pass check that scans the query text for keywords associated with data modification or administrative commands. SQL comments are stripped before testing so they cannot be used to hide dangerous keywords, but string literals are preserved — this is why keywords inside string values (like WHERE status = 'DELETE') trigger false positives.
Why it matters: This is the first line of defense against queries that try to change or destroy data. It runs before the more expensive AST parsing, catching obvious attacks cheaply.
Blocked keywords (all databases):

```
INSERT, UPDATE, DELETE, DROP, CREATE, ALTER, TRUNCATE
GRANT, REVOKE, EXEC, EXECUTE, CALL, KILL
COPY, LOAD, VACUUM, REINDEX, OPTIMIZE
INTO OUTFILE
```

Additional blocked keywords (MySQL only):

```
HANDLER, SHOW, DESCRIBE, EXPLAIN, USE
```

Examples:
```sql
-- Blocked: DML
INSERT INTO users (name) VALUES ('alice')

-- Blocked: DDL
DROP TABLE users

-- Blocked: comment bypass attempt (comments stripped before check)
/* harmless */ DELETE FROM users

-- Blocked: keyword inside string literal (known false positive)
SELECT * FROM logs WHERE status = 'DELETE'
```

The regex guard is intentionally broad — it prefers false positives over false negatives. A query like WHERE status = 'DELETE' is blocked because the word "DELETE" appears in the text, even though it's inside a string literal. The agent can work around this by reformulating the query (e.g. filtering by a status code instead).
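A minimal sketch of how such a guard works, with an abridged keyword list — `stripComments` and `regexGuard` are illustrative names, not Atlas internals:

```typescript
// Abridged forbidden-keyword pattern; the real list also covers admin
// commands and MySQL-specific keywords.
const FORBIDDEN =
  /\b(INSERT|UPDATE|DELETE|DROP|CREATE|ALTER|TRUNCATE|GRANT|REVOKE|EXEC|EXECUTE|CALL|KILL|COPY|LOAD|VACUUM|REINDEX|OPTIMIZE)\b/i;

function stripComments(sql: string): string {
  // Remove block comments and line comments, but NOT string literals,
  // which is why 'DELETE' inside a string value still trips the guard.
  return sql.replace(/\/\*[\s\S]*?\*\//g, " ").replace(/--[^\n]*/g, " ");
}

function regexGuard(sql: string): boolean {
  // Returns true when the query passes (no forbidden keyword found).
  return !FORBIDDEN.test(stripComments(sql));
}
```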
Plugins can register additional forbidden patterns via ConnectionPluginMeta.forbiddenPatterns to add database-specific protection for connection types not covered by the built-in rules.
Layer 2: AST parse
What it does: The query is fully parsed into an Abstract Syntax Tree using node-sql-parser. The parser mode is auto-detected based on the connection type (PostgreSQL or MySQL). This layer enforces three rules:
- Single statement — No semicolon-separated batches. Exactly one statement allowed.
- SELECT only — The parsed statement type must be `select`. Any other type is rejected.
- Must parse — If the parser cannot understand the query, it is rejected. This is a critical security decision — an unparseable query could be a crafted bypass attempt.
Why it matters: The regex guard (layer 1) catches known bad keywords, but it can't understand SQL structure. A query like SELECT 1; DROP TABLE users contains SELECT, but it also contains a second statement that drops a table. Only a full parser can detect this — the AST reveals the query has two statements, and the second one isn't a SELECT.
```sql
-- Blocked: multiple statements (piggybacking attack)
SELECT 1; DROP TABLE users

-- Blocked: non-SELECT
CREATE TABLE evil (id INT)

-- Blocked: unparseable (rejected, not allowed through)
SELECT * FROM users UNION WEIRD SYNTAX

-- Allowed: CTEs (WITH clause)
WITH active AS (SELECT * FROM users WHERE active = true)
SELECT * FROM active
```

Reject by default: If the parser can't understand a query, Atlas blocks it. This is the opposite of how most tools work — they allow queries they don't understand. Atlas treats "I can't parse this" as a potential attack, not a syntax error to ignore.
CTE names (WITH x AS (...)) are collected during parsing and passed to layer 3 so they aren't mistaken for real table names.
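The three rules can be sketched as a check over an already-parsed statement list. Atlas uses node-sql-parser for the actual parse; the `Statement` shape and function name here are simplifications for illustration:

```typescript
// Minimal statement shape; a real AST node carries much more detail.
interface Statement {
  type: string;
}

function enforceSingleSelect(statements: Statement[] | null): void {
  if (statements === null) {
    // Reject-by-default: an unparseable query is treated as hostile.
    throw new Error("Query could not be parsed");
  }
  if (statements.length !== 1) {
    throw new Error("Exactly one statement is allowed");
  }
  if (statements[0].type !== "select") {
    throw new Error("Only SELECT statements are allowed");
  }
}
```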
Layer 3: Table whitelist
What it does: Every table referenced in the query is checked against a whitelist derived from your semantic layer. Only tables that have an entity YAML file (semantic/entities/*.yml or semantic/{source}/entities/*.yml) are allowed.
Why it matters: Even a valid SELECT can be dangerous if it reads from the wrong table. Your database might contain tables with sensitive data (credentials, PII, internal configuration) that the agent should never access. The whitelist ensures the agent can only query tables you've explicitly described in the semantic layer.
```sql
-- Given semantic layer defines: users, orders, products

-- Allowed: both tables are in the semantic layer
SELECT * FROM users JOIN orders ON users.id = orders.user_id

-- Blocked: "secrets" is not in the semantic layer
SELECT * FROM secrets

-- Blocked: schema-qualified reference requires qualified name in whitelist
SELECT * FROM internal.secrets
```

Schema-qualified queries: If a query uses schema.table syntax (e.g. analytics.orders), the qualified name must be in the whitelist. Unqualified names cannot bypass schema restrictions.
CTE exclusion: CTE names collected in layer 2 are excluded from the whitelist check — they are temporary result sets defined by the query, not real tables.
Disabling: Set ATLAS_TABLE_WHITELIST=false to disable (not recommended for production).
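The whitelist check, including the CTE exclusion, can be sketched as follows — table names are assumed to arrive already extracted from the AST, and the function name is illustrative:

```typescript
// Sketch of the layer-3 whitelist check (illustrative, not Atlas's API).
function checkWhitelist(
  referencedTables: string[],
  whitelist: Set<string>,
  cteNames: Set<string>,
): void {
  for (const table of referencedTables) {
    // CTE names defined by the query itself are not real tables.
    if (cteNames.has(table)) continue;
    // Schema-qualified names (e.g. "analytics.orders") must appear in the
    // whitelist under their qualified form.
    if (!whitelist.has(table)) {
      throw new Error(`Table "${table}" is not in the allowed list`);
    }
  }
}
```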
Layer 4: RLS injection
What it does: When Row-Level Security is enabled, Atlas automatically adds WHERE clauses to the query based on the authenticated user's identity claims. This is done via AST manipulation — not string concatenation — so the resulting SQL is always syntactically correct.
Why it matters: In multi-tenant applications, different users should only see their own data. Without RLS, the agent could accidentally (or intentionally) query another tenant's rows. RLS makes this impossible by injecting filter conditions that the agent cannot remove or override.
RLS can be configured via environment variables (single-policy shorthand) or declaratively in atlas.config.ts with the rls key (supports multiple policies targeting different tables and claims).
```sql
-- Original query
SELECT * FROM orders WHERE status = 'active'

-- After RLS injection (tenant_id from user's JWT org_id claim)
SELECT * FROM orders WHERE status = 'active' AND orders.tenant_id = 'acme-corp'
```

RLS is fail-closed. If the user's claims are missing or cannot be resolved, the query is blocked entirely. A blocked query is always safer than a data leak.
RLS injection runs after validation (layers 0–3) and after all plugin beforeQuery hooks. This ordering ensures plugins cannot strip RLS conditions. See Row-Level Security for full configuration details.
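The transformation above can be sketched as a tree operation on a simplified expression shape. This is an illustration of why AST manipulation is safer than string concatenation, not Atlas's actual implementation — the `Expr` shape and hardcoded `orders.tenant_id` target are assumptions:

```typescript
// Simplified expression node; real parser ASTs carry more fields.
interface Expr {
  type: string;
  operator?: string;
  left?: Expr;
  right?: Expr;
  table?: string;
  column?: string;
  value?: string;
}

function injectRlsCondition(where: Expr | null, tenantId: string | null): Expr {
  if (tenantId === null) {
    // Fail-closed: missing claims block the query rather than leak data.
    throw new Error("RLS: no tenant claim resolved; query blocked");
  }
  const condition: Expr = {
    type: "binary_expr",
    operator: "=",
    left: { type: "column_ref", table: "orders", column: "tenant_id" },
    right: { type: "string", value: tenantId },
  };
  // AND the tenant condition onto any existing WHERE clause. Because this
  // is a tree operation, the result is always syntactically valid SQL.
  if (where === null) return condition;
  return { type: "binary_expr", operator: "AND", left: where, right: condition };
}
```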
Layer 5: Auto LIMIT
What it does: A LIMIT clause is appended to every query that doesn't already have one. The check is text-based — if the word LIMIT appears anywhere in the query (including in a subquery), no additional LIMIT is appended.
Why it matters: Without a limit, a simple SELECT * FROM events on a table with millions of rows could return a massive result set, consuming memory and bandwidth. Auto LIMIT caps the result set to a safe default.
```sql
-- Input (no LIMIT)
SELECT * FROM users

-- Executed
SELECT * FROM users LIMIT 1000
```

Default: 1000 rows. Configure with ATLAS_ROW_LIMIT.
If the result set hits the limit, the response includes truncated: true so the agent knows data was cut off and can add filters or pagination.
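The text-based rewrite can be sketched in one function — `applyAutoLimit` is an illustrative name, and the sketch mirrors the documented behavior that any occurrence of LIMIT (even in a subquery) suppresses the append:

```typescript
// Sketch of the auto-LIMIT rewrite (illustrative, not Atlas's API).
function applyAutoLimit(sql: string, rowLimit = 1000): string {
  // Text-based check: if LIMIT appears anywhere, leave the query alone.
  return /\bLIMIT\b/i.test(sql) ? sql : `${sql} LIMIT ${rowLimit}`;
}
```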
Layer 6: Statement timeout
What it does: A per-query execution deadline is set on the database connection. Queries that exceed the deadline are terminated by the database engine itself.
Why it matters: Even a valid, read-only SELECT can consume significant resources — a complex join across large tables, or a query that triggers a sequential scan, could run for minutes and degrade performance for other users. The statement timeout prevents any single query from monopolizing the database.
- PostgreSQL: Session-level `statement_timeout` set on the connection
- MySQL: Session-level `MAX_EXECUTION_TIME` set per query
Default: 30 seconds (30000ms). Configure with ATLAS_QUERY_TIMEOUT.
Queries that exceed the timeout are terminated by the database and return an error to the agent.
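The per-dialect setup can be sketched as the SQL each mechanism emits — the function name is an assumption, but the statement text matches the documented mechanisms (PostgreSQL's `statement_timeout` setting and MySQL's `MAX_EXECUTION_TIME` optimizer hint):

```typescript
// Sketch of per-dialect timeout setup (illustrative function name).
function timeoutSetup(dialect: "postgresql" | "mysql", timeoutMs: number): string {
  if (dialect === "postgresql") {
    // Applied once per connection; governs every subsequent statement.
    return `SET statement_timeout = ${timeoutMs}`;
  }
  // MySQL: optimizer hint prefixed into each SELECT.
  return `/*+ MAX_EXECUTION_TIME(${timeoutMs}) */`;
}
```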
What gets rejected: concrete examples
Here are real-world attack patterns and which layer catches them:
1. SQL injection via piggybacked statement
An attacker tries to append a destructive statement after a legitimate SELECT:
```sql
SELECT * FROM users; DROP TABLE users
```

Caught at: Layer 1 (regex guard blocks DROP) and layer 2 (AST parser detects two statements). Defense in depth — even if one layer had a bug, the other would catch it.
2. Data exfiltration from an undocumented table
The agent tries to read from a table that exists in the database but isn't described in the semantic layer:
```sql
SELECT api_key, secret FROM internal_credentials
```

Caught at: Layer 3 (table whitelist). internal_credentials has no entity YAML file, so it's not in the allowed set. The agent only knows about tables you've explicitly documented.
3. Comment-wrapped mutation attempt
An attacker hides a DELETE statement inside what looks like a commented-out section, hoping the validator only sees the harmless-looking parts:
```sql
/* just a select */ DELETE FROM users WHERE 1=1
```

Caught at: Layer 1 (regex guard). SQL comments are stripped before pattern matching, so DELETE is detected regardless of surrounding comments.
4. Resource exhaustion via unlimited query
The agent writes a query that would return millions of rows, consuming memory and network bandwidth:
```sql
SELECT * FROM events
```

Caught at: Layer 5 (auto LIMIT). The query is rewritten to SELECT * FROM events LIMIT 1000 before execution. If it still runs too long, layer 6 (statement timeout) terminates it after 30 seconds.
How Atlas compares to other tools
Most text-to-SQL tools validate queries minimally or not at all. Here's how Atlas's pipeline compares:
| Validation layer | Atlas | WrenAI | Vanna | Metabase |
|---|---|---|---|---|
| Empty check | Yes | — | — | — |
| Regex mutation guard | Yes (DML + DDL + admin commands) | — | — | — |
| AST parse (single SELECT) | Yes (reject if unparseable) | — | — | Partial (native driver) |
| Table whitelist | Yes (semantic layer) | Modeling layer | — | Sandboxed permissions |
| RLS / row-level filtering | Yes (AST injection, fail-closed) | — | — | Yes (sandboxing) |
| Auto LIMIT | Yes (configurable) | — | — | Yes (hardcoded) |
| Statement timeout | Yes (configurable) | — | — | Yes |
| Total layers | 7 | ~1 | ~0 | ~3 |
This comparison reflects publicly documented validation behavior as of March 2026. WrenAI and Vanna focus on query generation accuracy rather than execution-time validation. Metabase has a mature permissions model but serves a different use case (BI dashboards vs. AI agent SQL execution).
The key difference isn't just the number of layers — it's the fail-closed philosophy. Atlas rejects anything it doesn't understand (unparseable SQL, unknown tables, missing claims). Most tools default to allowing queries they can't validate.
Plugin hooks
When plugins are installed, two hook points extend the pipeline:
- `beforeQuery` — Plugins can inspect or rewrite SQL before execution. If a plugin rewrites the query, the rewritten SQL goes through layers 0–3 again for re-validation. This prevents plugins from introducing DML or bypassing the whitelist.
- `afterQuery` — Plugins receive the query results for logging, transformation, or side effects.
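The re-validation flow for rewriting plugins can be sketched as follows — the `Plugin` interface and `runBeforeQueryHooks` name are illustrative, not the Atlas SDK surface:

```typescript
// Sketch of beforeQuery hook handling with re-validation (illustrative).
interface Plugin {
  beforeQuery?: (sql: string) => string;
}

function runBeforeQueryHooks(
  sql: string,
  plugins: Plugin[],
  validate: (sql: string) => void, // stands in for layers 0-3
): string {
  validate(sql);
  for (const plugin of plugins) {
    if (!plugin.beforeQuery) continue;
    const rewritten = plugin.beforeQuery(sql);
    if (rewritten !== sql) {
      // Re-run validation so a plugin cannot smuggle in DML or new tables.
      validate(rewritten);
      sql = rewritten;
    }
  }
  return sql;
}
```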
Custom validators: For non-SQL datasources (SOQL, GraphQL, MQL), a plugin can provide a customValidator via ConnectionPluginMeta.validate. When a connection has a custom validator registered, it completely replaces the built-in validation layers 0–3 — validateSQL is not called. RLS injection (layer 4) and auto LIMIT (layer 5) are also skipped, since non-SQL languages may not support standard WHERE/LIMIT syntax. The custom validator is responsible for providing equivalent safety guarantees, including any row-level isolation.
Error handling
When a query fails at any validation layer, the response includes a structured error:
```json
{
  "success": false,
  "error": "Table \"secrets\" is not in the allowed list. Check catalog.yml for available tables."
}
```

Database errors that might expose connection details or internal state (passwords, connection strings, SSL certificates) are automatically scrubbed before being returned to the agent. The full error is logged server-side.
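A sketch of what such scrubbing looks like — the patterns and function name are illustrative assumptions, not Atlas's actual redaction rules:

```typescript
// Sketch of error scrubbing before a message is returned to the agent.
function scrubError(message: string): string {
  return message
    // Redact password key-value pairs in driver error text.
    .replace(/password=\S+/gi, "password=[redacted]")
    // Redact full connection strings (user, host, port, database).
    .replace(/postgres(ql)?:\/\/\S+/gi, "[connection string redacted]");
}
```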
All queries — successful and failed — are recorded in the audit log with user attribution, timing, and error details.
What this means for you
This page covers content for end users (querying data), operators (configuring and auditing the pipeline), and developers (building on the SDK or plugins).
For end users
Your queries are safe. Every question you ask goes through 7 layers of validation before it reaches the database. You cannot accidentally delete, modify, or corrupt data — the pipeline only allows read-only SELECT queries.
If a query is blocked, the agent will explain why and suggest an alternative approach. Common reasons include referencing a table that hasn't been added to the semantic layer, or using a keyword that looks like a data modification command (even inside a string literal).
For operators
Configuration
Three environment variables control the execution-time layers:
| Variable | Default | What it controls |
|---|---|---|
| `ATLAS_ROW_LIMIT` | 1000 | Maximum rows returned per query (layer 5) |
| `ATLAS_QUERY_TIMEOUT` | 30000 (30s) | Query execution deadline in milliseconds (layer 6) |
| `ATLAS_TABLE_WHITELIST` | true | Whether the table whitelist is enforced (layer 3) |
The first four layers (0-3) are always active and cannot be disabled. This is intentional — they form the core security boundary.
Testing the pipeline
Validate a query without executing it using the SDK or REST API:
```shell
curl -X POST https://your-api.example.com/api/v1/validate-sql \
  -H "Content-Type: application/json" \
  -d '{"sql": "SELECT * FROM users"}'
```

This runs layers 0-3 and returns the validation result. Use this to test edge cases or verify that your semantic layer whitelist is correct.
Security audit
The pipeline follows a fail-closed threat model:
| Threat | Mitigation | Layer |
|---|---|---|
| SQL injection (piggybacked statements) | AST parse rejects multi-statement input | 2 |
| Data exfiltration from undocumented tables | Table whitelist restricts to semantic layer | 3 |
| Mutation via DML/DDL keywords | Regex guard + AST type check (SELECT only) | 1, 2 |
| Comment-wrapped bypass attempts | Comments stripped before regex check | 1 |
| Unparseable SQL used to evade validation | Reject-by-default on parse failure | 2 |
| Resource exhaustion (full-table scans) | Auto LIMIT + statement timeout | 5, 6 |
| Cross-tenant data access | RLS WHERE injection (fail-closed on missing claims) | 4 |
Review query patterns in Admin > Audit Log — see Error handling for what gets recorded.
For developers
If you're building plugins or custom tools, see Plugin hooks above for how beforeQuery/afterQuery hooks and custom validators interact with the pipeline. Custom validators completely replace layers 0-3, so your validator must provide equivalent safety guarantees.
Test coverage
The validation pipeline has ~260 unit tests across 12 test files covering edge cases including:
- Comment-based bypass attempts
- CTE name exclusion
- Schema-qualified table references
- MySQL-specific forbidden patterns
- Semicolon injection
- Unparseable SQL rejection
- String literal false positives
See packages/api/src/lib/tools/__tests__/sql.test.ts for the full test suite.
Run the tests: bun run test (isolated per-file runner).
See Also
- Row-Level Security — Automatic WHERE clause injection based on user claims (layer 4)
- Connect Your Data — Configuring row limits, query timeouts, and table whitelists
- Semantic Layer — How entity YAML files define the table whitelist (layer 3)
- Schema Evolution — Keeping the semantic layer in sync with database changes
- Environment Variables — `ATLAS_ROW_LIMIT`, `ATLAS_QUERY_TIMEOUT`, and `ATLAS_TABLE_WHITELIST`
- Plugin Authoring Guide — Extending validation with custom forbidden patterns and validators
- Atlas vs Raw MCP — Why connecting AI directly to your database via MCP skips all of this