# Plugin Authoring Guide

Step-by-step guide to building Atlas plugins for datasources, context, interactions, actions, and sandboxes. We'll build a complete datasource plugin, then cover how the other four types differ.
## Choosing a Plugin Type
| Type | Use when you want to... | Example |
|---|---|---|
| Datasource | Connect a new database or API as a query target | ClickHouse, Snowflake, Salesforce |
| Context | Inject additional context into the agent's prompt | Company glossary, user preferences, external docs |
| Interaction | Add a new surface for users to interact with Atlas | Slack bot, Discord bot, email handler |
| Action | Let the agent perform write operations (with approval) | Create JIRA ticket, send email, update CRM |
| Sandbox | Provide a custom code execution environment | E2B, Daytona, custom Docker runner |
## Prerequisites

- `@useatlas/plugin-sdk` -- type definitions and helpers
- `zod` -- config schema validation
- `bun` -- runtime and test runner
- An Atlas project with `atlas.config.ts`
## 1. Scaffold
Use the CLI to generate a plugin skeleton:
```bash
bun run atlas -- plugin create my-datasource --type datasource
```

This creates `plugins/my-datasource/` with:
```text
plugins/my-datasource/
├── src/
│   ├── index.ts        # Plugin entry point
│   └── index.test.ts   # Test scaffold
├── package.json
└── tsconfig.json
```

Or create the files manually -- the CLI is a convenience, not a requirement.
## 2. Config Schema
Define what your plugin accepts using Zod:
```ts
// src/config.ts
import { z } from "zod";

export const ConfigSchema = z.object({
  url: z
    .string()
    .min(1, "URL must not be empty")
    .refine(
      (u) => u.startsWith("postgresql://") || u.startsWith("postgres://"),
      "URL must start with postgresql:// or postgres://",
    ),
  poolSize: z.number().int().positive().max(500).optional(),
});

export type PluginConfig = z.infer<typeof ConfigSchema>;
```

The schema is validated at factory call time -- before the server starts. Invalid config fails fast.
## 3. Connection Factory

Implement `PluginDBConnection` -- the interface Atlas uses to query your database:
```ts
// src/connection.ts
import type { PluginDBConnection, PluginQueryResult } from "@useatlas/plugin-sdk";
import type { PluginConfig } from "./config";

export function createConnection(config: PluginConfig): PluginDBConnection {
  // Lazy-load the driver so `pg` can remain an optional peer dependency
  let Pool: typeof import("pg").Pool;
  try {
    ({ Pool } = require("pg"));
  } catch (err) {
    const isNotFound =
      err instanceof Error &&
      "code" in err &&
      (err as NodeJS.ErrnoException).code === "MODULE_NOT_FOUND";
    if (isNotFound) {
      throw new Error("This plugin requires the pg package. Install it with: bun add pg");
    }
    throw err;
  }

  const pool = new Pool({
    connectionString: config.url,
    max: config.poolSize ?? 10,
  });

  return {
    async query(sql: string, timeoutMs?: number): Promise<PluginQueryResult> {
      const client = await pool.connect();
      try {
        if (timeoutMs) {
          await client.query(`SET statement_timeout = ${timeoutMs}`);
        }
        const result = await client.query(sql);
        return {
          columns: result.fields.map((f) => f.name),
          rows: result.rows,
        };
      } finally {
        client.release();
      }
    },
    async close(): Promise<void> {
      await pool.end();
    },
  };
}
```

Key points:

- `query()` returns `{ columns: string[], rows: Record<string, unknown>[] }`
- `close()` cleans up resources
- Lazy-load the driver with `require()` + `MODULE_NOT_FOUND` handling so it can be an optional peer dependency
## 4. Plugin Object

Wire everything together with `createPlugin()`, which validates config and returns a factory function. The `configSchema` can be any object with a `parse()` method -- Zod is recommended but not required (e.g. a custom validator that throws on invalid input works too). For plugins that don't need runtime configuration, use `definePlugin()` instead -- see createPlugin vs definePlugin below.
```ts
// src/index.ts
import { createPlugin } from "@useatlas/plugin-sdk";
import type { AtlasDatasourcePlugin, PluginHealthResult } from "@useatlas/plugin-sdk";
import { ConfigSchema, type PluginConfig } from "./config";
import { createConnection } from "./connection";

export function buildPlugin(config: PluginConfig): AtlasDatasourcePlugin<PluginConfig> {
  let cachedConnection: ReturnType<typeof createConnection> | undefined;
  return {
    id: "my-datasource",
    type: "datasource" as const,
    version: "1.0.0",
    name: "My DataSource",
    config,
    connection: {
      create: () => {
        if (!cachedConnection) {
          cachedConnection = createConnection(config);
        }
        return cachedConnection;
      },
      dbType: "postgres",
    },
    entities: [],
    dialect: "This datasource uses PostgreSQL. Use DATE_TRUNC() for date truncation.",

    // Called once during server startup. Throw to block startup (for fatal configuration errors).
    async initialize(ctx) {
      ctx.logger.info("My datasource plugin initialized");
    },

    // Called by `atlas doctor` and the admin API. Always return a result — never throw.
    // Return `{ healthy: false, message: '...' }` for recoverable issues.
    async healthCheck(): Promise<PluginHealthResult> {
      const start = performance.now();
      try {
        const conn = createConnection(config);
        await conn.query("SELECT 1", 5000);
        await conn.close();
        return { healthy: true, latencyMs: Math.round(performance.now() - start) };
      } catch (err) {
        return {
          healthy: false,
          message: err instanceof Error ? err.message : String(err),
          latencyMs: Math.round(performance.now() - start),
        };
      }
    },
  };
}

export const myPlugin = createPlugin({
  configSchema: ConfigSchema,
  create: buildPlugin,
});
```

## 5. Register
Add to `atlas.config.ts`:

```ts
import { defineConfig } from "@atlas/api/lib/config";
import { myPlugin } from "./plugins/my-datasource/src/index";

export default defineConfig({
  plugins: [
    myPlugin({ url: process.env.MY_DB_URL! }),
  ],
});
```

Never commit credentials to version control. Use environment variables (`process.env.MY_DB_URL`) in `atlas.config.ts` and add `.env` to `.gitignore`.
## 6. Test

```bash
bun test plugins/my-datasource/src/index.test.ts
```

See Testing below for a full test example and patterns.
## 7. Publish
For npm packages:
```json
{
  "name": "atlas-plugin-my-datasource",
  "peerDependencies": {
    "@useatlas/plugin-sdk": ">=0.0.1",
    "pg": ">=8.0.0"
  },
  "peerDependenciesMeta": {
    "pg": { "optional": true }
  },
  "devDependencies": {
    "@useatlas/plugin-sdk": "^0.0.2"
  }
}
```

Convention: `@useatlas/plugin-sdk` goes in both `peerDependencies` (so consumers provide it) and `devDependencies` (so you can build and test locally). Database drivers go as optional peer dependencies.
## 8. Testing

Test config validation, plugin shape, and health checks. Use `bun test` for a single file or `bun run test` for the full suite.
```ts
import { describe, test, expect } from "bun:test";
import { myPlugin } from "./index";

describe("my-datasource plugin", () => {
  test("validates config schema", () => {
    // Test that invalid config is rejected
    expect(() => myPlugin({ url: "" })).toThrow();
  });

  test("creates plugin with valid config", () => {
    const plugin = myPlugin({ url: "postgresql://localhost/test" });
    expect(plugin.id).toBe("my-datasource");
    expect(plugin.type).toBe("datasource");
  });

  test("health check reports status", async () => {
    const plugin = myPlugin({ url: "postgresql://localhost/test" });
    const health = await plugin.healthCheck?.();
    expect(health).toHaveProperty("healthy");
  });
});
```

Key testing patterns:

- **Config validation** — Verify that invalid configs throw at factory call time, not at runtime
- **Plugin shape** — Check `id`, `type`, `version`, and variant-specific properties (`connection`, `contextProvider`, `actions`, etc.)
- **Health checks** — Ensure `healthCheck()` returns `{ healthy: boolean }` and never throws (even when the service is unreachable)
- **Connection factory** — For datasource plugins, test that `connection.create()` returns a valid `PluginDBConnection`
## Other Plugin Types

### Context Plugin

Context plugins inject additional knowledge into the agent's system prompt. Implement `contextProvider.load()` to return a string that gets appended to the prompt, and optionally `contextProvider.refresh()` to support cache invalidation.

- `load()` — Returns a string (typically Markdown) that is appended to the agent's system prompt. Called on each agent invocation. Cache the result internally for performance.
- `refresh()` — Called when the semantic layer is reloaded or on manual refresh via the admin UI. Use it to clear any internal cache so the next `load()` picks up changes.
Here is a minimal example that injects a company glossary:
```ts
import { definePlugin } from "@useatlas/plugin-sdk";

export default definePlugin({
  id: "company-glossary",
  type: "context",
  version: "1.0.0",
  name: "Company Glossary",
  contextProvider: {
    // Cache the loaded context to avoid re-reading on every request
    _cache: null as string | null,
    async load() {
      if (this._cache) return this._cache;
      // Load from any source: filesystem, database, API, etc.
      const terms = [
        { term: "ARR", definition: "Annual Recurring Revenue — sum of all active subscription values annualized" },
        { term: "MRR", definition: "Monthly Recurring Revenue — ARR / 12" },
        { term: "churn", definition: "Percentage of customers who cancel within a billing period" },
      ];
      const lines = terms.map((t) => `- **${t.term}**: ${t.definition}`);
      this._cache = `## Company Glossary\n\n${lines.join("\n")}`;
      return this._cache;
    },
    async refresh() {
      // Clear cache so next load() re-reads from source
      this._cache = null;
    },
  },
  async initialize(ctx) {
    ctx.logger.info("Company glossary context plugin initialized");
  },
});
```

The returned string from `load()` becomes part of the agent's system prompt, so the agent can use your glossary terms, user preferences, or any domain knowledge when interpreting questions and writing SQL.
### Interaction Plugin
Interaction plugins add communication surfaces. They may mount Hono routes (Slack, webhooks) or manage non-HTTP transports (MCP stdio):
```ts
export default definePlugin({
  id: "my-webhook",
  type: "interaction",
  version: "1.0.0",
  routes(app) {
    app.post("/webhooks/my-service", async (c) => {
      return c.json({ ok: true });
    });
  },
});
```

### Action Plugin
Action plugins give the agent side-effects with approval controls. Actions require user approval before execution: the agent proposes the action, the user sees a confirmation card in the chat UI, and only after approval does `execute()` run. This prevents unintended writes.

The approval mode controls who can approve:

- `"manual"` — Any user in the conversation can approve or reject
- `"admin-only"` — Only users with the `admin` role can approve
- `"auto"` — Executes immediately without approval (use sparingly)
Here is a complete example that creates a support ticket:
```ts
import { z } from "zod";
import { tool } from "ai";
import { createPlugin } from "@useatlas/plugin-sdk";
import type { AtlasActionPlugin, PluginAction } from "@useatlas/plugin-sdk";

const ticketConfigSchema = z.object({
  apiUrl: z.string().url(),
  apiKey: z.string().min(1, "apiKey must not be empty"),
  defaultPriority: z.enum(["low", "medium", "high"]).default("medium"),
});

type TicketConfig = z.infer<typeof ticketConfigSchema>;

export const ticketPlugin = createPlugin<TicketConfig, AtlasActionPlugin<TicketConfig>>({
  configSchema: ticketConfigSchema,
  create(config) {
    const action: PluginAction = {
      name: "createSupportTicket",
      description: "Create a support ticket from analysis findings",
      tool: tool({
        description: "Create a support ticket. Requires user approval before execution.",
        inputSchema: z.object({
          title: z.string().max(200).describe("Short summary of the issue"),
          body: z.string().describe("Detailed description with relevant data"),
          priority: z
            .enum(["low", "medium", "high"])
            .optional()
            .describe(`Priority level. Defaults to "${config.defaultPriority}"`),
        }),
        execute: async ({ title, body, priority }) => {
          // This only runs AFTER the user approves in the chat UI
          const response = await fetch(`${config.apiUrl}/tickets`, {
            method: "POST",
            headers: {
              Authorization: `Bearer ${config.apiKey}`,
              "Content-Type": "application/json",
            },
            body: JSON.stringify({
              title,
              body,
              priority: priority ?? config.defaultPriority,
            }),
          });
          if (!response.ok) {
            throw new Error(`Ticket API returned ${response.status}`);
          }
          const ticket = (await response.json()) as { id: string; url: string };
          return { ticketId: ticket.id, url: ticket.url };
        },
      }),
      actionType: "ticket:create",
      reversible: false,
      defaultApproval: "manual",
      requiredCredentials: ["apiKey"],
      // ^ Values must match environment variable names (e.g. process.env.apiKey).
      // At startup, Atlas checks these env vars exist and logs a warning for any
      // that are missing (see validateActionCredentials in the ToolRegistry).
      // Missing credentials do not block startup — they produce warnings only.
    };
    return {
      id: "ticket-action",
      type: "action" as const,
      version: "1.0.0",
      name: "Support Ticket Action",
      config,
      actions: [action],
    };
  },
});
```

Register it in `atlas.config.ts`:
```ts
plugins: [
  ticketPlugin({
    apiUrl: process.env.TICKET_API_URL!,
    apiKey: process.env.TICKET_API_KEY!,
  }),
],
```

### Sandbox Plugin
Sandbox plugins provide isolation backends for the `explore` tool:

```ts
sandbox: {
  create(semanticRoot: string): PluginExploreBackend {
    return {
      async exec(command: string) {
        // Execute command in isolation, return { stdout, stderr, exitCode }
      },
      async close() { /* cleanup */ },
    };
  },
  priority: 60,
},
security: {
  networkIsolation: true,
  filesystemIsolation: true,
  unprivilegedExecution: true,
  description: "My isolation mechanism...",
},
```

The `priority` field determines selection order when multiple backends are available. Higher values are tried first. Built-in priority scale:
| Backend | Priority | Notes |
|---|---|---|
| Vercel sandbox | 100 | Firecracker microVM (Vercel deployments only) |
| nsjail | 75 | Linux namespace sandbox (explicit via `ATLAS_SANDBOX=nsjail`) |
| Plugin default | 60 | `SANDBOX_DEFAULT_PRIORITY` from `@useatlas/plugin-sdk` |
| Sidecar | 50 | HTTP-isolated container (set via `ATLAS_SANDBOX_URL`) |
| just-bash | 0 | OverlayFS read-only fallback (dev only) |
Plugin sandbox backends default to priority 60 (between nsjail and sidecar). Set a higher value to take precedence over built-in backends, or a lower value to act as a fallback.
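To make the `exec`/`close` shape concrete, here is a minimal sketch of a backend built on `node:child_process`. The helper name `createLocalBackend` is an assumption, not part of the SDK, and a plain child-process runner provides **no isolation** -- treat it as a dev-only stand-in (comparable to `just-bash`) for whatever real isolation mechanism your plugin wraps:

```typescript
import { exec } from "node:child_process";

// Hypothetical helper (name assumed): returns an object matching the exec/close
// shape shown in the sandbox snippet above. NOTE: child_process gives NO
// isolation -- a real plugin should wrap Docker, a microVM, or similar.
export function createLocalBackend(semanticRoot: string) {
  return {
    async exec(command: string): Promise<{ stdout: string; stderr: string; exitCode: number }> {
      return new Promise((resolve) => {
        exec(command, { cwd: semanticRoot, timeout: 30_000 }, (err, stdout, stderr) => {
          // exec passes a non-null error for nonzero exits; err.code holds the exit code
          const exitCode = err ? (typeof err.code === "number" ? err.code : 1) : 0;
          resolve({ stdout: String(stdout), stderr: String(stderr), exitCode });
        });
      });
    },
    async close(): Promise<void> {
      // Nothing to tear down for a plain child_process runner
    },
  };
}
```

Returning this object from `sandbox.create()` (with an honest `security` block -- all three isolation flags `false`) would slot it into the selection order at whatever `priority` you declare.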
## createPlugin vs definePlugin

The SDK exports two helpers for authoring plugins. Choose based on whether your plugin accepts runtime configuration.

`createPlugin()` -- Use when the plugin accepts user-configurable options that should be validated at startup. It returns a factory function that validates config via a Zod schema before building the plugin object. This is the Better Auth-style `plugins: [myPlugin({ key: "value" })]` pattern.
```ts
import { createPlugin } from "@useatlas/plugin-sdk";
import { z } from "zod";

export const myPlugin = createPlugin({
  configSchema: z.object({ url: z.string().url() }),
  create: (config) => ({
    id: "my-plugin",
    type: "datasource" as const,
    version: "1.0.0",
    config,
    connection: { create: () => makeConnection(config.url), dbType: "postgres" },
  }),
});

// Usage in atlas.config.ts:
plugins: [myPlugin({ url: process.env.MY_URL! })]
```

`definePlugin()` -- Use when no user-configurable options exist. It validates the plugin shape at module load time and returns the plugin object directly.
```ts
import { definePlugin } from "@useatlas/plugin-sdk";

export default definePlugin({
  id: "my-context",
  type: "context",
  version: "1.0.0",
  contextProvider: {
    async load() { return "Additional context for the agent"; },
  },
});

// Usage in atlas.config.ts:
import myContext from "./plugins/my-context";
plugins: [myContext]
```

## Type Inference with `$InferServerPlugin`
The SDK exports a `$InferServerPlugin` utility type (following Better Auth's `$Infer` pattern) that lets client code extract plugin types without importing server modules. It works with both `createPlugin()` factory functions and `definePlugin()` direct objects:
```ts
import type { $InferServerPlugin } from "@useatlas/plugin-sdk";
import type { clickhousePlugin } from "@atlas/plugin-clickhouse-datasource";

type CH = $InferServerPlugin<typeof clickhousePlugin>;
// CH["Config"] → { url: string; database?: string }
// CH["Type"]   → "datasource"
// CH["Id"]     → string
// CH["DbType"] → "clickhouse"
```

Available inference keys: `Config`, `Type`, `Id`, `Name`, `Version`, `DbType` (datasource only), `Actions` (action only), `Security` (sandbox only).
## Plugin Status Lifecycle
Plugins transition through a defined set of statuses during their lifetime:
| Status | Description |
|---|---|
| `registered` | Plugin object has been validated and added to the registry |
| `initializing` | `initialize()` is currently running |
| `healthy` | Plugin is initialized and operating normally |
| `unhealthy` | Plugin is initialized but `healthCheck()` returned `{ healthy: false }` |
| `teardown` | `teardown()` has been called during graceful shutdown |
The host manages these transitions automatically. Plugin authors do not need to set status directly -- implement `initialize()`, `healthCheck()`, and `teardown()` and the host handles the rest.
## Hooks

Plugins can intercept agent lifecycle events and HTTP requests using hooks. Each hook entry has an optional `matcher` function (return `true` to run the handler; omit to always run) and a `handler` function.

Define hooks on any plugin type via the `hooks` property:
```ts
export default definePlugin({
  id: "audit-logger",
  type: "context",
  version: "1.0.0",
  contextProvider: { async load() { return ""; } },
  hooks: {
    beforeQuery: [{
      matcher: (ctx) => ctx.sql.includes("sensitive_table"),
      handler: (ctx) => {
        console.log(`Query on sensitive table: ${ctx.sql}`);
        // Return { sql } to rewrite, throw to reject, or return void to pass through
      },
    }],
    afterQuery: [{
      handler: (ctx) => {
        console.log(`Query completed in ${ctx.durationMs}ms, ${ctx.result.rows.length} rows`);
      },
    }],
  },
});
```

### Hook Types
| Hook | Context | Mutable | Description |
|---|---|---|---|
| `beforeQuery` | `{ sql, connectionId? }` | Yes -- return `{ sql }` to rewrite, throw to reject | Fires before each SQL query is executed |
| `afterQuery` | `{ sql, connectionId?, result, durationMs }` | No | Fires after each SQL query with results |
| `beforeExplore` | `{ command }` | Yes -- return `{ command }` to rewrite, throw to reject | Fires before each explore command |
| `afterExplore` | `{ command, output }` | No | Fires after each explore command with output |
| `onRequest` | `{ path, method, headers }` | No | HTTP-level: fires before routing a request |
| `onResponse` | `{ path, method, status }` | No | HTTP-level: fires after sending a response |
`beforeQuery` and `beforeExplore` are mutable hooks -- handlers can return a mutation object (`{ sql }` or `{ command }`) to rewrite the operation, or throw an error to reject it entirely. All other hooks are observation-only (void return).
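As a sketch of the mutation contract, here is a `beforeQuery` entry that caps unbounded queries by appending a `LIMIT`. The capping policy itself is an illustrative assumption -- the part that matters is the `matcher` gate and the `{ sql }` return:

```typescript
// Sketch of a mutable beforeQuery hook entry. The handler returns { sql } to
// rewrite the query; returning void would pass it through unchanged.
const capUnboundedQueries = {
  // Only run for queries that don't already specify a LIMIT
  matcher: (ctx: { sql: string }) => !/\blimit\b/i.test(ctx.sql),
  handler: (ctx: { sql: string }) => {
    // Strip a trailing semicolon, then append an example cap
    const base = ctx.sql.replace(/;\s*$/, "");
    return { sql: `${base} LIMIT 1000` };
  },
};
```

Registered under `hooks.beforeQuery: [capUnboundedQueries]`, the rewritten SQL is what actually executes; throwing from the handler would reject the query instead.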
## Schema Migrations

Plugins can declare tables for the Atlas internal database via the `schema` property. Declared tables are auto-migrated at boot — no manual SQL needed:
```ts
export default definePlugin({
  id: "my-plugin",
  type: "context",
  version: "1.0.0",
  schema: {
    my_plugin_cache: {
      fields: {
        key: { type: "string", required: true, unique: true },
        value: { type: "string", required: true },
        updated_at: { type: "date" },
      },
    },
  },
  // ...
});
```

The `schema` property is available on all plugin types. It requires `DATABASE_URL` to be set (the internal Postgres database). Use `ctx.db` in `initialize()` or hooks to query your plugin's tables.
## Datasource Plugin Properties

Beyond the basics shown in step 4, datasource plugins support several additional properties.

### entities

Provide semantic layer entity definitions programmatically. Entities are merged into the table whitelist at boot (in-memory only, no disk writes). Can be a static array or an async factory:
```ts
connection: { create: () => myConn, dbType: "postgres" },
entities: [
  { name: "users", yaml: "table: users\ndimensions:\n  id:\n    type: number" },
],

// Or as an async factory:
entities: async () => {
  const tables = await discoverTables();
  return tables.map(t => ({ name: t.name, yaml: generateYaml(t) }));
},
```

### dialect
A string injected into the agent's system prompt with SQL dialect guidance:

```ts
dialect: "This datasource uses ClickHouse. Use toStartOfMonth() for date truncation, not DATE_TRUNC().",
```

### connection.validate
Replace the standard SQL validation pipeline with a custom validator. Use for non-SQL query languages (SOQL, GraphQL, MQL):
```ts
connection: {
  create: () => myConn,
  dbType: "salesforce",
  validate: (query) => {
    if (query.includes("DELETE")) return { valid: false, reason: "DELETE not allowed" };
    return { valid: true };
  },
},
```

### connection.parserDialect and connection.forbiddenPatterns
Customize the standard SQL validation pipeline without fully replacing it:
```ts
connection: {
  create: () => myConn,
  dbType: "snowflake",
  // Override auto-detected parser dialect (case-sensitive, e.g. "Snowflake" not "snowflake")
  parserDialect: "Snowflake",
  // Additional regex patterns to block beyond the base DML/DDL guard
  forbiddenPatterns: [/\bCOPY\s+INTO\b/i, /\bPUT\b/i],
},
```

These are ignored when a custom `validate` function is provided.
Both properties are consumed during SQL validation: `parserDialect` sets the AST parser mode used in layer 2, and `forbiddenPatterns` are checked as additional regex guards in layer 1. See SQL Validation Pipeline for the full layer breakdown.
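The layer-1 behavior can be pictured with a small sketch. The `checkForbidden` helper and the base pattern list are illustrative, not the actual pipeline code -- only the idea that plugin patterns are checked alongside the built-in DML/DDL guard comes from the text above:

```typescript
// Illustrative only: plugin forbiddenPatterns act as extra regex guards next to
// a base DML/DDL list (the real layer-1 implementation lives in Atlas core).
const basePatterns = [/\bINSERT\b/i, /\bUPDATE\b/i, /\bDELETE\b/i, /\bDROP\b/i];
const forbiddenPatterns = [/\bCOPY\s+INTO\b/i, /\bPUT\b/i]; // from the plugin

function checkForbidden(sql: string): { valid: boolean; reason?: string } {
  for (const pattern of [...basePatterns, ...forbiddenPatterns]) {
    if (pattern.test(sql)) {
      return { valid: false, reason: `Query matches forbidden pattern ${pattern}` };
    }
  }
  return { valid: true };
}
```

Because the plugin's patterns are additive, a plain `SELECT` still passes while dialect-specific write statements like Snowflake's `COPY INTO` are rejected before the query ever reaches the parser.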
## Plugin Lifecycle

### teardown()

Called during graceful shutdown in reverse registration order (LIFO). Use it to close connections, flush buffers, or clean up resources. Never throw from `teardown()`.
```ts
async teardown() {
  await this.pool.end();
},
```

### AtlasPluginContext
The `ctx` object passed to `initialize()` and hook handlers provides:
| Property | Type | Description |
|---|---|---|
| `ctx.db` | `{ query(), execute() } \| null` | Internal Postgres (auth/audit DB). Null when `DATABASE_URL` is not set |
| `ctx.connections` | `{ get(id), list() }` | Connection registry for analytics datasources |
| `ctx.tools` | `{ register(tool) }` | Tool registry -- plugins can register additional agent tools |
| `ctx.logger` | `PluginLogger` | Pino-compatible child logger scoped to the plugin ID |
| `ctx.config` | `Record<string, unknown>` | Resolved Atlas configuration (cast if you know the shape) |
Example -- registering a custom tool from `initialize()`:

```ts
async initialize(ctx) {
  ctx.tools.register({
    name: "lookupInventory",
    description: "Check inventory levels for a product SKU",
    tool: tool({
      description: "Look up current inventory by SKU",
      inputSchema: z.object({ sku: z.string() }),
      execute: async ({ sku }) => fetchInventory(sku),
    }),
  });
},
```

## Reference Plugins
The Atlas monorepo includes 15 reference plugin implementations in the `plugins/` directory. These serve as working examples for every plugin type:

- **Datasource**: `clickhouse-datasource`, `duckdb-datasource`, `mysql-datasource`, `salesforce-datasource`, `snowflake-datasource`
- **Context**: `yaml-context`
- **Interaction**: `mcp-interaction`, `slack-interaction`
- **Action**: `email-action`, `jira-action`
- **Sandbox**: `daytona-sandbox`, `e2b-sandbox`, `nsjail-sandbox`, `sidecar-sandbox`, `vercel-sandbox`

Browse the source at `plugins/` for patterns on connection factories, config schemas, health checks, and testing.
## Common Patterns

### Health Checks

Always return `{ healthy, message?, latencyMs? }`, never throw:
```ts
async healthCheck(): Promise<PluginHealthResult> {
  const start = performance.now();
  try {
    await ping();
    return { healthy: true, latencyMs: Math.round(performance.now() - start) };
  } catch (err) {
    return { healthy: false, message: err instanceof Error ? err.message : String(err) };
  }
}
```

### Error Handling
- Throw from `initialize()` to block server startup (fatal misconfiguration)
- Return unhealthy from `healthCheck()` for runtime degradation (transient errors)
- Never throw from `healthCheck()` or `teardown()`
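The first rule can be sketched as an `initialize()` that throws on a condition the plugin can never recover from, and merely logs otherwise. The factory shape, env-free checks, and log messages here are illustrative assumptions:

```typescript
// Sketch: fatal vs recoverable conditions in initialize().
type InitCtx = { logger: { info(msg: string): void; warn(msg: string): void } };

function makeInitialize(config: { url: string }) {
  return async function initialize(ctx: InitCtx): Promise<void> {
    // Fatal: the plugin can never work without a valid URL -- throw to block startup
    if (!config.url.startsWith("postgresql://") && !config.url.startsWith("postgres://")) {
      throw new Error(`my-datasource: invalid url "${config.url}"`);
    }
    // Anything transient (slow network, service briefly down) is NOT checked here;
    // healthCheck() reports those as { healthy: false } at runtime instead.
    ctx.logger.info("my-datasource initialized");
  };
}
```

The dividing line: if restarting the server with the same config could never fix it, throw; if it might, defer to `healthCheck()`.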
### Config-Driven Credentials

Pass credentials via plugin config, not environment variables read inside the plugin:

```ts
// Good -- the consumer chooses the env var name in atlas.config.ts
myPlugin({ apiKey: process.env.MY_API_KEY! })

// Bad -- hidden dependency on a specific env var name
// inside plugin: process.env.MY_API_KEY
```