Captar is a runtime control layer for AI applications. It helps teams enforce spend limits, tool usage rules, and execution policy inside the application runtime, without requiring a proxy gateway or handing over provider keys by default.
Modern AI apps need more than observability. They need guardrails that act before a request overruns budget, triggers unsafe tooling, or leaves the intended execution path.
Captar is designed to provide:
- Local-first policy enforcement for AI calls and tool execution
- Budget reservation and reconciliation before and after model usage
- Optional export of traces, spend events, and violations to a platform layer
- Project-scoped datasets built from retained trace payloads
- Offline/manual eval workflows built from project datasets
- Minimal integration changes for teams already using OpenAI-based workflows
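As a sketch of what local-first policy enforcement could look like, the snippet below checks a request against a model and tool allowlist. The rule shape, request shape, and function name are illustrative assumptions, not the actual Captar SDK API:

```typescript
// Hypothetical policy rule: allowlists for models and tools.
type PolicyRule = {
  allowedModels: string[];
  allowedTools: string[];
};

// Hypothetical normalized call request.
type CallRequest = { model: string; tools: string[] };

// Returns a list of violations; an empty list means the call may proceed.
function evaluatePolicy(rule: PolicyRule, req: CallRequest): string[] {
  const violations: string[] = [];
  if (!rule.allowedModels.includes(req.model)) {
    violations.push(`model not allowed: ${req.model}`);
  }
  for (const tool of req.tools) {
    if (!rule.allowedTools.includes(tool)) {
      violations.push(`tool not allowed: ${tool}`);
    }
  }
  return violations;
}
```

Because the check runs in-process, a disallowed call can be blocked before any request leaves the application.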
This repository is a pnpm monorepo managed with Turborepo.
- `apps/platform` - Next.js platform app for local inspection, ingest, and operational flows
- `apps/docs` - Next.js + MDX documentation app
- `apps/site` - reserved marketing site for a later release stage
- `packages/ts/sdk` - core TypeScript runtime SDK
- `packages/ts/config` - shared pricing, defaults, and environment config
- `packages/ts/types` - shared public types and contracts
- `packages/ts/utils` - utility helpers
- `packages/ts/ui` - shared UI helpers
- `db` - Prisma schema, migrations, and seed logic
- `infra` - infrastructure-related assets and notes
- `demo` - demo flows and example runtime usage
- `scripts` - project scripts and automation helpers
At a high level, the TypeScript runtime follows this sequence:
- Normalize the provider request
- Evaluate local policy for model and tool usage
- Estimate worst-case cost
- Reserve budget before execution
- Execute the provider or tool call
- Reconcile actual usage and release unused reserve
- Emit traces, spend records, and policy events
- Export selected traces into project datasets for later review or eval prep
- Node.js 20+
- pnpm 10+
- PostgreSQL
```bash
pnpm install
cp .env.example .env
```

Update `.env` with your local database and auth values before starting the apps.

```bash
pnpm db:generate
pnpm db:push
pnpm db:seed
pnpm dev
```

Useful app-specific commands:

```bash
pnpm --filter @captar/platform dev
pnpm --filter @captar/docs dev
pnpm demo:live
```

Common workspace commands:

```bash
pnpm dev
pnpm build
pnpm lint
pnpm test
pnpm demo:live
```

Database commands:

```bash
pnpm db:generate
pnpm db:push
pnpm db:migrate
pnpm db:seed
```

The example environment file includes the core variables needed for local development:
- `DATABASE_URL`
- `AUTH_SECRET`
- `AUTH_URL`
- `AUTH_TRUST_HOST`
- `CAPTAR_PLATFORM_URL`
- `CAPTAR_DEMO_HOOK_ID`
- `CAPTAR_DEMO_USER_EMAIL`
- `CAPTAR_DEMO_USER_PASSWORD`

See `.env.example` for the current template.
The current repository is focused on the first operational slice of Captar:
- TypeScript / Node runtime SDK
- OpenAI request control flows
- Spend-aware execution and reconciliation
- Tool tracking and approval hooks
- Optional export and platform ingestion paths
- Platform trace debugging, dataset workflows, and manual eval review runs
- Documentation, demo flows, and platform groundwork
The current v1 platform flow is:
- Capture retained prompts and responses from traces.
- Inspect them in the platform trace debugger.
- Export useful traces into append-only project datasets.
- Import or export datasets as `json`, `jsonl`, or `csv`.
- Create a manual eval from a dataset and launch reviewer runs.
- Score rows with pass/fail plus weighted rubric criteria.
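A minimal sketch of the weighted rubric scoring described above; the criterion shape is an assumption here, not the platform's actual schema:

```typescript
// Hypothetical rubric criterion: a named check with a weight and a
// pass/fail verdict from the reviewer.
type Criterion = { name: string; weight: number; passed: boolean };

// Weighted share of passing criteria, in [0, 1].
function rubricScore(criteria: Criterion[]): number {
  const total = criteria.reduce((sum, c) => sum + c.weight, 0);
  if (total === 0) return 0;
  const earned = criteria
    .filter((c) => c.passed)
    .reduce((sum, c) => sum + c.weight, 0);
  return earned / total;
}
```

For example, a row passing a weight-2 "correctness" criterion but failing a weight-1 "style" criterion would score 2/3 under this scheme.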
Online evaluators are the next milestone after the manual eval flow. They are not shipped in this repository yet.