Plugin Authoring Guide
Step-by-step guide to building Atlas plugins for datasources, context, interactions, actions, and sandboxes. We'll build a complete datasource plugin first, then cover how the other four types differ.
Choosing a Plugin Type
| Type | Use when you want to... | Example |
|---|---|---|
| Datasource | Connect a new database or API as a query target | ClickHouse, Snowflake, Salesforce |
| Context | Inject additional context into the agent's prompt | Company glossary, user preferences, external docs |
| Interaction | Add a new surface for users to interact with Atlas | Slack bot, Discord bot, email handler |
| Action | Let the agent perform write operations (with approval) | Create JIRA ticket, send email, update CRM |
| Sandbox | Provide a custom code execution environment | E2B, Daytona, custom Docker runner |
Prerequisites
- @useatlas/plugin-sdk -- type definitions and helpers
- zod -- config schema validation
- bun -- runtime and test runner
- An Atlas project with atlas.config.ts
1. Scaffold
Standalone plugin (publishable to npm):
bun create @useatlas/plugin my-datasource --type datasource
This creates a standalone my-datasource/ directory with package.json, tsconfig.json, src/index.ts, tests, README, and LICENSE -- ready to publish.
In-monorepo plugin (inside an Atlas project):
bun run atlas -- plugin create my-datasource --type datasource
This creates plugins/my-datasource/ with workspace references.
Both generate the same structure:
my-datasource/
├── src/
│ ├── index.ts # Plugin entry point
│ └── index.test.ts # Test scaffold
├── package.json
├── tsconfig.json
└── README.md
Or create the files manually -- the CLI is a convenience, not a requirement.
2. Config Schema
Define what your plugin accepts using Zod:
// src/config.ts
import { z } from "zod";
export const ConfigSchema = z.object({
url: z
.string()
.min(1, "URL must not be empty")
.refine(
(u) => u.startsWith("postgresql://") || u.startsWith("postgres://"),
"URL must start with postgresql:// or postgres://",
),
poolSize: z.number().int().positive().max(500).optional(),
});
export type PluginConfig = z.infer<typeof ConfigSchema>;
The schema is validated at factory call time -- before the server starts. Invalid config fails fast.
3. Connection Factory
Implement PluginDBConnection -- the interface Atlas uses to query your database:
// src/connection.ts
import type { PluginDBConnection, PluginQueryResult } from "@useatlas/plugin-sdk";
import type { PluginConfig } from "./config";
export function createConnection(config: PluginConfig): PluginDBConnection {
let Pool: typeof import("pg").Pool;
try {
({ Pool } = require("pg"));
} catch (err) {
const isNotFound =
err != null &&
typeof err === "object" &&
"code" in err &&
(err as NodeJS.ErrnoException).code === "MODULE_NOT_FOUND";
if (isNotFound) {
throw new Error("This plugin requires the pg package. Install it with: bun add pg");
}
throw err;
}
const pool = new Pool({
connectionString: config.url,
max: config.poolSize ?? 10,
});
return {
async query(sql: string, timeoutMs?: number): Promise<PluginQueryResult> {
const client = await pool.connect();
try {
if (timeoutMs) {
await client.query(`SET statement_timeout = ${timeoutMs}`);
}
const result = await client.query(sql);
return {
columns: result.fields.map((f) => f.name),
rows: result.rows,
};
} finally {
client.release();
}
},
async close(): Promise<void> {
await pool.end();
},
};
}
Key points:
- query() returns { columns: string[], rows: Record<string, unknown>[] }
- close() cleans up resources
- Lazy-load the driver with require() + MODULE_NOT_FOUND handling so it can be an optional peer dependency
4. Plugin Object
Wire everything together with createPlugin(), which validates config and returns a factory function. The configSchema can be any object with a parse() method -- Zod is recommended but not required (e.g. a custom validator that throws on invalid input works too). For plugins that don't need runtime configuration, use definePlugin() instead -- see createPlugin vs definePlugin below.
// src/index.ts
import { createPlugin } from "@useatlas/plugin-sdk";
import type { AtlasDatasourcePlugin, PluginHealthResult } from "@useatlas/plugin-sdk";
import { ConfigSchema, type PluginConfig } from "./config";
import { createConnection } from "./connection";
export function buildPlugin(config: PluginConfig): AtlasDatasourcePlugin<PluginConfig> {
let cachedConnection: ReturnType<typeof createConnection> | undefined;
return {
id: "my-datasource",
types: ["datasource"] as const,
version: "1.0.0",
name: "My DataSource",
config,
connection: {
create: () => {
if (!cachedConnection) {
cachedConnection = createConnection(config);
}
return cachedConnection;
},
dbType: "postgres",
},
entities: [],
dialect: "This datasource uses PostgreSQL. Use DATE_TRUNC() for date truncation.",
// Called once during server startup. Throw to block startup (for fatal configuration errors).
async initialize(ctx) {
ctx.logger.info("My datasource plugin initialized");
},
// Called by `atlas doctor` and the admin API. Always return a result — never throw.
// Return `{ healthy: false, message: '...' }` for recoverable issues.
async healthCheck(): Promise<PluginHealthResult> {
const start = performance.now();
try {
const conn = createConnection(config);
await conn.query("SELECT 1", 5000);
await conn.close();
return { healthy: true, latencyMs: Math.round(performance.now() - start) };
} catch (err) {
return {
healthy: false,
message: err instanceof Error ? err.message : String(err),
latencyMs: Math.round(performance.now() - start),
};
}
},
};
}
export const myPlugin = createPlugin({
configSchema: ConfigSchema,
create: buildPlugin,
});
5. Register
Add to atlas.config.ts:
import { defineConfig } from "@atlas/api/lib/config";
import { myPlugin } from "./plugins/my-datasource/src/index";
export default defineConfig({
plugins: [
myPlugin({ url: process.env.MY_DB_URL! }),
],
});
Never commit credentials to version control. Use environment variables (process.env.MY_DB_URL) in atlas.config.ts and add .env to .gitignore.
6. Test
bun test plugins/my-datasource/src/index.test.ts
See Testing below for a full test example and patterns.
7. Publish
For npm packages:
{
"name": "atlas-plugin-my-datasource",
"peerDependencies": {
"@useatlas/plugin-sdk": ">=0.0.1",
"pg": ">=8.0.0"
},
"peerDependenciesMeta": {
"pg": { "optional": true }
},
"devDependencies": {
"@useatlas/plugin-sdk": "^0.0.2"
}
}
Convention: @useatlas/plugin-sdk goes in both peerDependencies (so consumers provide it) and devDependencies (so you can build and test locally). Database drivers go as optional peer dependencies.
8. Testing
Test config validation, plugin shape, and health checks. Use bun test for a single file or bun run test for the full suite.
import { describe, test, expect } from "bun:test";
import { myPlugin } from "./index";
describe("my-datasource plugin", () => {
test("validates config schema", () => {
// Test that invalid config is rejected
expect(() => myPlugin({ url: "" })).toThrow();
});
test("creates plugin with valid config", () => {
const plugin = myPlugin({ url: "postgresql://localhost/test" });
expect(plugin.id).toBe("my-datasource");
expect(plugin.types).toContain("datasource");
});
test("health check reports status", async () => {
const plugin = myPlugin({ url: "postgresql://localhost/test" });
const health = await plugin.healthCheck?.();
expect(health).toHaveProperty("healthy");
});
});
Key testing patterns:
- Config validation — Verify that invalid configs throw at factory call time, not at runtime
- Plugin shape — Check id, types, version, and variant-specific properties (connection, contextProvider, actions, etc.)
- Health checks — Ensure healthCheck() returns { healthy: boolean } and never throws (even when the service is unreachable)
- Connection factory — For datasource plugins, test that connection.create() returns a valid PluginDBConnection
Other Plugin Types
Context Plugin
Context plugins inject additional knowledge into the agent's system prompt. Implement contextProvider.load() to return a string that gets appended to the prompt, and optionally contextProvider.refresh() to support cache invalidation.
- load() — Returns a string (typically Markdown) that is appended to the agent's system prompt. Called on each agent invocation. Cache the result internally for performance.
- refresh() — Called when the semantic layer is reloaded or on manual refresh via the admin UI. Use it to clear any internal cache so the next load() picks up changes.
Here is a minimal example that injects a company glossary:
import { definePlugin } from "@useatlas/plugin-sdk";
export default definePlugin({
id: "company-glossary",
types: ["context"],
version: "1.0.0",
name: "Company Glossary",
contextProvider: {
// Cache the loaded context to avoid re-reading on every request
_cache: null as string | null,
async load() {
if (this._cache) return this._cache;
// Load from any source: filesystem, database, API, etc.
const terms = [
{ term: "ARR", definition: "Annual Recurring Revenue — sum of all active subscription values annualized" },
{ term: "MRR", definition: "Monthly Recurring Revenue — ARR / 12" },
{ term: "churn", definition: "Percentage of customers who cancel within a billing period" },
];
const lines = terms.map((t) => `- **${t.term}**: ${t.definition}`);
this._cache = `## Company Glossary\n\n${lines.join("\n")}`;
return this._cache;
},
async refresh() {
// Clear cache so next load() re-reads from source
this._cache = null;
},
},
async initialize(ctx) {
ctx.logger.info("Company glossary context plugin initialized");
},
});
The returned string from load() becomes part of the agent's system prompt, so the agent can use your glossary terms, user preferences, or any domain knowledge when interpreting questions and writing SQL.
Interaction Plugin
Interaction plugins add communication surfaces. They may mount Hono routes (Slack, webhooks) or manage non-HTTP transports (MCP stdio):
export default definePlugin({
id: "my-webhook",
types: ["interaction"],
version: "1.0.0",
routes(app) {
app.post("/webhooks/my-service", async (c) => {
return c.json({ ok: true });
});
},
});
Action Plugin
Action plugins give the agent side-effects with approval controls. Actions require user approval before execution: the agent proposes the action, the user sees a confirmation card in the chat UI, and only after approval does execute() run. This prevents unintended writes.
The approval mode controls who can approve:
- "manual" — Any user in the conversation can approve or reject
- "admin-only" — Only users with the admin role can approve
- "auto" — Executes immediately without approval (use sparingly)
Here is a complete example that creates a support ticket:
import { z } from "zod";
import { createPlugin } from "@useatlas/plugin-sdk";
import { tool } from "@useatlas/plugin-sdk/ai";
import type { AtlasActionPlugin, PluginAction } from "@useatlas/plugin-sdk";
const ticketConfigSchema = z.object({
apiUrl: z.string().url(),
apiKey: z.string().min(1, "apiKey must not be empty"),
defaultPriority: z.enum(["low", "medium", "high"]).default("medium"),
});
type TicketConfig = z.infer<typeof ticketConfigSchema>;
export const ticketPlugin = createPlugin<TicketConfig, AtlasActionPlugin<TicketConfig>>({
configSchema: ticketConfigSchema,
create(config) {
const action: PluginAction = {
name: "createSupportTicket",
description: "Create a support ticket from analysis findings",
tool: tool({
description: "Create a support ticket. Requires user approval before execution.",
inputSchema: z.object({
title: z.string().max(200).describe("Short summary of the issue"),
body: z.string().describe("Detailed description with relevant data"),
priority: z
.enum(["low", "medium", "high"])
.optional()
.describe(`Priority level. Defaults to "${config.defaultPriority}"`),
}),
execute: async ({ title, body, priority }) => {
// This only runs AFTER the user approves in the chat UI
const response = await fetch(`${config.apiUrl}/tickets`, {
method: "POST",
headers: {
Authorization: `Bearer ${config.apiKey}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
title,
body,
priority: priority ?? config.defaultPriority,
}),
});
if (!response.ok) {
throw new Error(`Ticket API returned ${response.status}`);
}
const ticket = (await response.json()) as { id: string; url: string };
return { ticketId: ticket.id, url: ticket.url };
},
}),
actionType: "ticket:create",
reversible: false,
defaultApproval: "manual",
requiredCredentials: ["apiKey"],
// ^ Values must match environment variable names (e.g. process.env.apiKey).
// At startup, Atlas checks these env vars exist and logs a warning for any
// that are missing (see validateActionCredentials in the ToolRegistry).
// Missing credentials do not block startup — they produce warnings only.
};
return {
id: "ticket-action",
types: ["action"] as const,
version: "1.0.0",
name: "Support Ticket Action",
config,
actions: [action],
};
},
});
Register it in atlas.config.ts:
plugins: [
ticketPlugin({
apiUrl: process.env.TICKET_API_URL!,
apiKey: process.env.TICKET_API_KEY!,
}),
],
Sandbox Plugin
Sandbox plugins provide isolation backends for the explore tool:
sandbox: {
create(semanticRoot: string): PluginExploreBackend {
return {
async exec(command: string) {
// Execute command in isolation, return { stdout, stderr, exitCode }
},
async close() { /* cleanup */ },
};
},
priority: 60,
},
security: {
networkIsolation: true,
filesystemIsolation: true,
unprivilegedExecution: true,
description: "My isolation mechanism...",
},
The priority field determines selection order when multiple backends are available. Higher values are tried first. Built-in priority scale:
| Backend | Priority | Notes |
|---|---|---|
| Vercel sandbox | 100 | Firecracker microVM (Vercel deployments only) |
| nsjail | 75 | Linux namespace sandbox (explicit via ATLAS_SANDBOX=nsjail) |
| Plugin default | 60 | SANDBOX_DEFAULT_PRIORITY from @useatlas/plugin-sdk |
| Sidecar | 50 | HTTP-isolated container (set via ATLAS_SANDBOX_URL) |
| just-bash | 0 | OverlayFS read-only fallback (dev only) |
Plugin sandbox backends default to priority 60 (between nsjail and sidecar). Set a higher value to take precedence over built-in backends, or a lower value to act as a fallback.
createPlugin vs definePlugin
The SDK exports two helpers for authoring plugins. Choose based on whether your plugin accepts runtime configuration.
createPlugin() -- Use when the plugin accepts user-configurable options that should be validated at startup. It returns a factory function that validates config via a Zod schema before building the plugin object. This is the Better Auth-style pattern: plugins: [myPlugin({ key: "value" })].
import { createPlugin } from "@useatlas/plugin-sdk";
import { z } from "zod";
export const myPlugin = createPlugin({
configSchema: z.object({ url: z.string().url() }),
create: (config) => ({
id: "my-plugin",
types: ["datasource"] as const,
version: "1.0.0",
config,
connection: { create: () => makeConnection(config.url), dbType: "postgres" },
}),
});
// Usage in atlas.config.ts:
plugins: [myPlugin({ url: process.env.MY_URL! })]
definePlugin() -- Use when no user-configurable options exist. It validates the plugin shape at module load time and returns the plugin object directly.
import { definePlugin } from "@useatlas/plugin-sdk";
export default definePlugin({
id: "my-context",
types: ["context"],
version: "1.0.0",
contextProvider: {
async load() { return "Additional context for the agent"; },
},
});
// Usage in atlas.config.ts:
import myContext from "./plugins/my-context";
plugins: [myContext]
Type Inference with $InferServerPlugin
The SDK exports a $InferServerPlugin utility type (following Better Auth's $Infer pattern) that lets client code extract plugin types without importing server modules. It works with both createPlugin() factory functions and definePlugin() direct objects:
import type { $InferServerPlugin } from "@useatlas/plugin-sdk";
import type { clickhousePlugin } from "@useatlas/clickhouse";
type CH = $InferServerPlugin<typeof clickhousePlugin>;
// CH["Config"] → { url: string; database?: string }
// CH["Type"] → "datasource"
// CH["Id"] → string
// CH["DbType"] → "clickhouse"
Available inference keys: Config, Type, Id, Name, Version, DbType (datasource only), Actions (action only), Security (sandbox only).
Plugin Status Lifecycle
Plugins transition through a defined set of statuses during their lifetime:
| Status | Description |
|---|---|
registered | Plugin object has been validated and added to the registry |
initializing | initialize() is currently running |
healthy | Plugin is initialized and operating normally |
unhealthy | Plugin is initialized but healthCheck() returned { healthy: false } |
teardown | teardown() has been called during graceful shutdown |
The host manages these transitions automatically. Plugin authors do not need to set status directly -- implement initialize(), healthCheck(), and teardown() and the host handles the rest.
Hooks
Plugins can intercept agent lifecycle events and HTTP requests using hooks. Each hook entry has an optional matcher function (return true to run the handler; omit to always run) and a handler function.
Define hooks on any plugin type via the hooks property:
export default definePlugin({
id: "audit-logger",
types: ["context"],
version: "1.0.0",
contextProvider: { async load() { return ""; } },
hooks: {
beforeQuery: [{
matcher: (ctx) => ctx.sql.includes("sensitive_table"),
handler: (ctx) => {
console.log(`Query on sensitive table: ${ctx.sql}`);
// Return { sql } to rewrite, throw to reject, or return void to pass through
},
}],
afterQuery: [{
handler: (ctx) => {
console.log(`Query completed in ${ctx.durationMs}ms, ${ctx.result.rows.length} rows`);
},
}],
},
});Hook Types
| Hook | Context | Mutable | Description |
|---|---|---|---|
beforeQuery | { sql, connectionId? } | Yes -- return { sql } to rewrite, throw to reject | Fires before each SQL query is executed |
afterQuery | { sql, connectionId?, result, durationMs } | No | Fires after each SQL query with results |
beforeExplore | { command } | Yes -- return { command } to rewrite, throw to reject | Fires before each explore command |
afterExplore | { command, output } | No | Fires after each explore command with output |
onRequest | { path, method, headers } | No | HTTP-level: fires before routing a request |
onResponse | { path, method, status } | No | HTTP-level: fires after sending a response |
beforeQuery and beforeExplore are mutable hooks -- handlers can return a mutation object ({ sql } or { command }) to rewrite the operation, or throw an error to reject it entirely. All other hooks are observation-only (void return).
Schema Migrations
Plugins can declare tables for the Atlas internal database via the schema property. Declared tables are auto-migrated at boot — no manual SQL needed:
export default definePlugin({
id: "my-plugin",
types: ["context"],
version: "1.0.0",
schema: {
my_plugin_cache: {
fields: {
key: { type: "string", required: true, unique: true },
value: { type: "string", required: true },
updated_at: { type: "date" },
},
},
},
// ...
});
The schema property is available on all plugin types. It requires DATABASE_URL to be set (the internal Postgres database). Use ctx.db in initialize() or hooks to query your plugin's tables.
How It Works
At server boot, before plugins are initialized, Atlas runs schema migrations automatically:
- CREATE TABLE — New tables are created with an auto-generated id (UUID), created_at, and updated_at columns, plus your declared fields
- ALTER TABLE ADD COLUMN — If you add new fields to a plugin schema in a later version, Atlas detects the missing columns and adds them automatically
All migrations are tracked in a plugin_migrations table for idempotency — re-running is always safe.
Supported Field Types
| Field Type | PostgreSQL Type |
|---|---|
string | TEXT |
number | INTEGER |
boolean | BOOLEAN |
date | TIMESTAMPTZ |
Table Naming
Tables are automatically prefixed with plugin_{pluginId}_ to avoid collisions with Atlas internal tables and other plugins. For example, a plugin with id: "jira" declaring a table tickets gets plugin_jira_tickets.
Limitations
- PostgreSQL only — Schema migrations require the internal PostgreSQL database (DATABASE_URL)
- Column additions only — New fields added to a schema are handled automatically via ALTER TABLE ADD COLUMN
- No column removal — Removing a field from the schema does not drop the column. Remove columns manually if needed
- No type changes — Changing a field's type (e.g. string → number) is not handled. Migrate manually with a new column + data copy
- No renaming — Renaming a field creates a new column; the old one remains. Clean up manually
- required + no defaultValue on new columns — Adding required: true without a defaultValue to an existing table with rows will fail (NOT NULL constraint violation). Always provide a defaultValue when adding required fields to a schema that may already have data
Datasource Plugin Properties
Beyond the basics shown in step 4, datasource plugins support several additional properties.
entities
Provide semantic layer entity definitions programmatically. Entities are merged into the table whitelist at boot (in-memory only, no disk writes). Can be a static array or an async factory:
connection: { create: () => myConn, dbType: "postgres" },
entities: [
{ name: "users", yaml: "table: users\ndimensions:\n id:\n type: number" },
],
// Or as an async factory:
entities: async () => {
const tables = await discoverTables();
return tables.map(t => ({ name: t.name, yaml: generateYaml(t) }));
},
dialect
A string injected into the agent's system prompt with SQL dialect guidance:
dialect: "This datasource uses ClickHouse. Use toStartOfMonth() for date truncation, not DATE_TRUNC().",
connection.validate — Custom Query Validation
Replace the standard SQL validation pipeline with a custom validator. Use this when your datasource speaks a non-SQL query language -- SOQL, GraphQL, MQL, or any custom query DSL where the standard 4-layer SQL validation (empty check, regex guard, AST parse, table whitelist) does not apply.
Signature:
validate?(query: string): QueryValidationResult | Promise<QueryValidationResult>
interface QueryValidationResult {
valid: boolean;
/** User-facing rejection reason — appears in error responses and audit logs. */
reason?: string;
}
validate is defined on the datasource plugin's connection configuration object (AtlasDatasourcePlugin.connection), not on the PluginDBConnection runtime interface.
Behavior when validate is present:
- The entire standard validateSQL pipeline is bypassed for this connection
- Auto-LIMIT is skipped (non-SQL languages may not support LIMIT)
- RLS injection is skipped (the SQL rewriter cannot parse non-SQL queries)
- parserDialect and forbiddenPatterns are ignored
- Plugin hooks still fire -- queries rewritten by beforeQuery hooks are re-validated through this function before execution
Sync example (SOQL length-limit validator):
connection: {
create: () => mySalesforceConn,
dbType: "salesforce",
validate(query) {
if (query.length > 20_000) {
return { valid: false, reason: "SOQL query exceeds 20,000 character limit" };
}
if (/\b(DELETE|INSERT|UPDATE|UPSERT)\b/i.test(query)) {
return { valid: false, reason: "Only SELECT queries are allowed" };
}
return { valid: true };
},
},
Async example (external schema validation service):
connection: {
create: () => myConn,
dbType: "custom-api",
async validate(query) {
const res = await fetch("https://schema.internal/validate", {
method: "POST",
body: JSON.stringify({ query }),
headers: { "Content-Type": "application/json" },
signal: AbortSignal.timeout(5000),
});
if (!res.ok) return { valid: false, reason: "Schema service unavailable" };
const body = await res.json() as { allowed: boolean; message?: string };
return body.allowed
? { valid: true }
: { valid: false, reason: body.message ?? "Query rejected" };
},
},
Async validators add latency to every query. Prefer synchronous validation when possible. If you must call an external service, add a timeout and consider caching the schema locally.
Error propagation: The reason string is user-facing -- it appears in the error response returned to the agent and is recorded in the audit log. Write clear, actionable messages (e.g., "SOQL query exceeds 20,000 character limit" rather than "invalid").
See the Plugin Cookbook for complete plugin examples with custom validators.
connection.parserDialect and connection.forbiddenPatterns
Customize the standard SQL validation pipeline without fully replacing it:
connection: {
create: () => myConn,
dbType: "snowflake",
// Override auto-detected parser dialect (case-sensitive, e.g. "Snowflake" not "snowflake")
parserDialect: "Snowflake",
// Additional regex patterns to block beyond the base DML/DDL guard
forbiddenPatterns: [/\bCOPY\s+INTO\b/i, /\bPUT\b/i],
},
These are ignored when a custom validate function is provided.
Both properties are consumed during SQL validation: parserDialect sets the parser mode used in the AST-parse layer, and forbiddenPatterns are checked as additional patterns in the regex-guard layer. See SQL Validation Pipeline for the full layer breakdown.
Plugin Lifecycle
teardown()
Called during graceful shutdown in reverse registration order (LIFO). Use it to close connections, flush buffers, or clean up resources. Never throw from teardown().
async teardown() {
await this.pool.end();
},
AtlasPluginContext
The ctx object passed to initialize() and hook handlers provides:
| Property | Type | Description |
|---|---|---|
ctx.db | { query(), execute() } | null | Internal Postgres (auth/audit DB). Null when DATABASE_URL is not set |
ctx.connections | { get(id), list() } | Connection registry for analytics datasources |
ctx.tools | { register(tool) } | Tool registry -- plugins can register additional agent tools |
ctx.logger | PluginLogger | Pino-compatible child logger scoped to the plugin ID |
ctx.config | Record<string, unknown> | Resolved Atlas configuration (cast if you know the shape) |
Example -- registering a custom tool from initialize():
async initialize(ctx) {
ctx.tools.register({
name: "lookupInventory",
description: "Check inventory levels for a product SKU",
tool: tool({
description: "Look up current inventory by SKU",
inputSchema: z.object({ sku: z.string() }),
execute: async ({ sku }) => fetchInventory(sku),
}),
});
},
Reference Plugins
The Atlas monorepo includes 15 reference plugin implementations in the plugins/ directory. These serve as working examples for every plugin type:
Datasource: clickhouse, duckdb, mysql, salesforce, snowflake
Context: yaml-context
Interaction: mcp, slack
Action: email, jira
Sandbox: daytona, e2b, nsjail, sidecar, vercel-sandbox
Browse the source at plugins/ for patterns on connection factories, config schemas, health checks, and testing.
Common Patterns
Health Check Contract
When implementing healthCheck(), follow these five rules:
- Always return { healthy: boolean, message?: string, latencyMs?: number } — never throw
- Measure latency — wrap the probe in performance.now() and include latencyMs in the result (both success and failure paths)
- Catch all errors — return { healthy: false, message: err instanceof Error ? err.message : String(err) } on failure; never let exceptions escape
- Minimal probe — test connectivity only (e.g. SELECT 1, ping an endpoint), not full functionality
- Timeout — probes must have a reasonable timeout (5s default for network calls, 30s for sandbox creation), never hang indefinitely. Use AbortSignal.timeout() for fetch-based probes, Promise.race for SDK calls that don't support AbortSignal, or the timeoutMs parameter on conn.query() for database plugins
Standard pattern for database plugins:
async healthCheck(): Promise<PluginHealthResult> {
const start = performance.now();
let conn: PluginDBConnection | undefined;
try {
conn = createConnection(config);
await conn.query("SELECT 1", 5000); // timeout via query parameter
return { healthy: true, latencyMs: Math.round(performance.now() - start) };
} catch (err) {
return {
healthy: false,
message: err instanceof Error ? err.message : String(err),
latencyMs: Math.round(performance.now() - start),
};
} finally {
if (conn) await conn.close();
}
}
Standard pattern for HTTP API plugins (email, JIRA):
async healthCheck(): Promise<PluginHealthResult> {
const start = performance.now();
try {
const response = await fetch("https://api.example.com/health", {
headers: { Authorization: `Bearer ${config.apiKey}` },
signal: AbortSignal.timeout(5000), // 5s timeout
});
const latencyMs = Math.round(performance.now() - start);
if (response.ok) return { healthy: true, latencyMs };
return { healthy: false, message: `API returned ${response.status}`, latencyMs };
} catch (err) {
return {
healthy: false,
message: err instanceof Error ? err.message : String(err),
latencyMs: Math.round(performance.now() - start),
};
}
}
Standard pattern for sandbox plugins (where probe operations are slow):
async healthCheck(): Promise<PluginHealthResult> {
const start = performance.now();
const TIMEOUT = 30_000; // sandbox creation can be slow
let sandbox: SandboxInstance | null = null; // hoist for cleanup on timeout
let timer: ReturnType<typeof setTimeout>;
try {
const result = await Promise.race([
(async () => {
sandbox = await createSandbox(config); // your SDK's create method
await sandbox.close(); // cleanup method varies by SDK (kill, stop, delete, etc.)
sandbox = null;
return "ok" as const;
})(),
new Promise<"timeout">((resolve) => {
timer = setTimeout(() => resolve("timeout"), TIMEOUT);
}),
]).finally(() => clearTimeout(timer!)); // always clean up the timer
const latencyMs = Math.round(performance.now() - start);
if (result === "timeout") {
// Best-effort cleanup — sandbox may still be creating
if (sandbox) {
try { await sandbox.close(); } catch { /* best-effort */ }
}
return { healthy: false, message: `Timed out after ${TIMEOUT}ms`, latencyMs };
}
return { healthy: true, latencyMs };
} catch (err) {
if (sandbox) {
try { await sandbox.close(); } catch { /* best-effort */ }
}
return {
healthy: false,
message: err instanceof Error ? err.message : String(err),
latencyMs: Math.round(performance.now() - start),
};
}
}
Standard pattern for local-only plugins (filesystem, in-process):
async healthCheck(): Promise<PluginHealthResult> {
const start = performance.now();
try {
// Verify local resources exist
const files = fs.readdirSync(dir).filter((f) => f.endsWith(".yml"));
const latencyMs = Math.round(performance.now() - start);
if (files.length === 0) {
return { healthy: false, message: "No entity files found", latencyMs };
}
return { healthy: true, latencyMs };
} catch (err) {
return {
healthy: false,
message: err instanceof Error ? err.message : String(err),
latencyMs: Math.round(performance.now() - start),
};
}
}
Error Handling
- Throw from initialize() to block server startup (fatal misconfiguration)
- Return unhealthy from healthCheck() for runtime degradation (transient errors)
- Never throw from healthCheck() or teardown()
Config-Driven Credentials
Pass credentials into the plugin via config (reading environment variables at the registration site), rather than reading environment variables inside the plugin:
// Good
myPlugin({ apiKey: process.env.MY_API_KEY! })
// Bad -- hidden dependency on env var name
// inside plugin: process.env.MY_API_KEY
See Also
- Plugin Directory — Browse all official Atlas plugins
- Plugin Cookbook — Real-world patterns for caching, hooks, credentials, and error handling
- Plugin Composition — How multiple plugins interact (ordering, priority, constraints)
- Configuration — Registering plugins in atlas.config.ts
- SQL Validation Pipeline — How plugin hooks and custom validators fit into validation