# @geekist/llm-core

Build real AI products with recipes, interactions, and adapters.
Runtime-agnostic core for deterministic workflows and UI-ready interactions.
- Docs: https://llm-core.geekist.co
- Guide (hero path): https://llm-core.geekist.co/guide/interaction-single-turn
- API: https://llm-core.geekist.co/reference/recipes-api
## Install

```sh
bun add @geekist/llm-core
pnpm add @geekist/llm-core
npm install @geekist/llm-core
yarn add @geekist/llm-core
deno add npm:@geekist/llm-core
```

## Quick start: single-turn interaction

```ts
import { fromAiSdkModel } from "@geekist/llm-core/adapters";
import { openai } from "@ai-sdk/openai";
import {
  createInteractionPipelineWithDefaults,
  runInteractionPipeline,
} from "@geekist/llm-core/interaction";

const model = fromAiSdkModel(openai("gpt-4o-mini"));
const pipeline = createInteractionPipelineWithDefaults();

const result = await runInteractionPipeline(pipeline, {
  input: { message: { role: "user", content: "Hello!" } },
  adapters: { model },
});

if ("__paused" in result && result.__paused) {
  throw new Error("Interaction paused.");
}

console.log(result.artefact.messages[1]?.content);
```

## Quick start: agent recipe

```ts
import { recipes } from "@geekist/llm-core/recipes";
import { fromAiSdkModel } from "@geekist/llm-core/adapters";
import { openai } from "@ai-sdk/openai";

const model = fromAiSdkModel(openai("gpt-4o-mini"));
const workflow = recipes.agent().defaults({ adapters: { model } }).build();

const result = await workflow.run({ input: "Draft a short README for a new SDK." });
if (result.status === "ok") {
  console.log(result.artefact);
}
```

## Guides

- Production chat UI: https://llm-core.geekist.co/guide/interaction-single-turn
- Sessions + transport: https://llm-core.geekist.co/guide/interaction-sessions
- End-to-end UI: https://llm-core.geekist.co/guide/end-to-end-ui
- Workflow orchestration: https://llm-core.geekist.co/guide/hello-world
- Adapters: https://llm-core.geekist.co/adapters/
## Features

- **State validation**
  - Optional recipe-level state validator: `Recipe.flow("rag").use(pack).state(validate).build()`
  - On `ok` outcomes, the validator can annotate diagnostics and emit a `recipe.state.invalid` trace event if something looks off.
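A state validator is conceptually just a function over recipe state. The sketch below is an illustration only: the `RagState` and `Diagnostic` shapes are assumptions, not the library's actual types — check the real validator contract before relying on it.

```ts
// Hypothetical shapes -- the real library defines its own state and
// diagnostic types. This only illustrates a recipe-level validator
// that flags suspicious "ok" outcomes.
type Diagnostic = { code: string; message: string };

interface RagState {
  documents: unknown[];
  answer?: string;
}

// Returns diagnostics for anything that looks off; an empty array
// means the final state looks healthy.
function validate(state: RagState): Diagnostic[] {
  const diagnostics: Diagnostic[] = [];
  if (state.documents.length === 0) {
    diagnostics.push({
      code: "recipe.state.invalid",
      message: "RAG flow completed without retrieving any documents.",
    });
  }
  if (!state.answer || state.answer.trim() === "") {
    diagnostics.push({
      code: "recipe.state.invalid",
      message: "No answer was produced.",
    });
  }
  return diagnostics;
}
```

A function like this would then be passed via `.state(validate)` when building the flow.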
- **Event conventions**
  - Helper to emit recipe events into both:
    - The adapter event stream (`eventStream.emit` / `emitMany`).
    - A `state.events` array for later inspection in tests or tools.
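The dual-emit convention can be sketched in a few lines. The helper name and the `RecipeEvent` shape here are illustrative, not the library's API — only the `eventStream.emit` / `state.events` pairing comes from the feature description above.

```ts
// Illustrative event shape; the real library defines its own.
type RecipeEvent = { type: string; payload?: unknown };

interface EventStream {
  emit(event: RecipeEvent): void;
}

interface StateWithEvents {
  events: RecipeEvent[];
}

// Emit into the adapter event stream for live consumers, and also
// append to state.events so tests and tools can replay the history.
function emitRecipeEvent(
  eventStream: EventStream,
  state: StateWithEvents,
  event: RecipeEvent,
): void {
  eventStream.emit(event);
  state.events.push(event);
}
```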
- **Human-in-the-loop (HITL) gate**
  - Built-in `hitl-gate` recipe and pack:
    - Emits pause tokens and `pauseKind: "human"`.
    - Lets you pause a flow, wait for a human decision, and resume.
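In application code the gate surfaces as a paused result. A sketch of how a caller might branch on it — note that beyond the `__paused` flag and `pauseKind: "human"` named above, the result shape (the `token` field in particular) is an assumption for illustration:

```ts
// Assumed result shapes: the README shows `__paused` and
// `pauseKind: "human"`; the token/queueing details are hypothetical.
type RunResult =
  | { status: "ok"; artefact: unknown }
  | { __paused: true; pauseKind: "human"; token: string };

function isPausedForHuman(
  result: RunResult,
): result is Extract<RunResult, { __paused: true }> {
  return "__paused" in result && result.pauseKind === "human";
}

// Route paused runs to a human review queue; completed runs proceed.
function handleResult(
  result: RunResult,
  enqueueForReview: (token: string) => void,
): "queued" | "done" {
  if (isPausedForHuman(result)) {
    enqueueForReview(result.token);
    return "queued";
  }
  return "done";
}
```

Once the human decides, the stored token would be used to resume the flow.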
- **Testability**
  - Runtime and helpers are designed to be extremely test-friendly.
  - The repo ships with high coverage (see Codecov badge) and static analysis (SonarCloud).
## Adapters

You can use llm-core with:
- **LangChain**
  - Models, embeddings, text splitters, memory, vector stores.
  - Trace integration for LangChain runs.
- **LlamaIndex**
  - Document stores, vector stores, embeddings, memory.
- **AI SDK**
  - Models and embeddings, plugged in as adapters.
- **Core primitives**
  - KV store
  - Cache
  - Event stream
  - Text splitter
  - Loader
  - Vector store
  - Memory
Adapters are pluggable; you can write your own functions that match the adapter types and wire in any provider you like.
See the docs site for up-to-date adapter details and examples.
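For instance, a hand-rolled model adapter could be little more than an async function. The adapter shape below is an assumption for illustration — the real contract is defined by the types exported from `@geekist/llm-core`, so check those before wiring this into a workflow.

```ts
// Assumed adapter shape: a function from chat messages to a reply.
// The library's actual adapter types may differ.
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

type ModelAdapter = (messages: ChatMessage[]) => Promise<ChatMessage>;

// A toy "provider" that echoes the last user message -- useful as a
// deterministic stand-in for a real model in tests.
const echoModel: ModelAdapter = async (messages) => {
  const last = [...messages].reverse().find((m) => m.role === "user");
  return { role: "assistant", content: `Echo: ${last?.content ?? ""}` };
};
```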
## Documentation

- Docs site: https://llm-core.geekist.co/
- Workflow & recipes: `docs/workflow-api.md`, `docs/reference/packs-and-recipes.md`
- Adapters: `docs/adapters-api.md`
- Examples:
  - ETL: `docs/examples/etl-pipeline.ts`
  - Agent / RAG examples (and more) on the docs site
## Development

```sh
bun install

# Static checks
bun run lint
bun run typecheck

# Tests
bun test
```

The CI pipeline also runs coverage and static analysis (Codecov + SonarCloud).
## Status

Active development. APIs are reasonably stable but may still evolve as more adapters and recipes land. Check the docs site and CHANGELOG for breaking changes.
## License

Licensed under the Apache License, Version 2.0. See the LICENSE file for details.