
Composable, dependency-free TypeScript core for LLM workflows and adapters. Clean runtime, high coverage, no heavy framework deps.


theGeekist/llm-core


@geekist/llm-core

Build real AI products with recipes, interactions, and adapters.


Runtime-agnostic core for deterministic workflows and UI-ready interactions.

Install

bun add @geekist/llm-core
pnpm add @geekist/llm-core
npm install @geekist/llm-core
yarn add @geekist/llm-core
deno add npm:@geekist/llm-core

Quick start (interaction, single turn)

import { fromAiSdkModel } from "@geekist/llm-core/adapters";
import { openai } from "@ai-sdk/openai";
import {
  createInteractionPipelineWithDefaults,
  runInteractionPipeline,
} from "@geekist/llm-core/interaction";

const model = fromAiSdkModel(openai("gpt-4o-mini"));
const pipeline = createInteractionPipelineWithDefaults();

const result = await runInteractionPipeline(pipeline, {
  input: { message: { role: "user", content: "Hello!" } },
  adapters: { model },
});

if ("__paused" in result && result.__paused) {
  throw new Error("Interaction paused.");
}

console.log(result.artefact.messages[1]?.content);

Quick start (workflow recipe)

import { recipes } from "@geekist/llm-core/recipes";
import { fromAiSdkModel } from "@geekist/llm-core/adapters";
import { openai } from "@ai-sdk/openai";

const model = fromAiSdkModel(openai("gpt-4o-mini"));
const workflow = recipes.agent().defaults({ adapters: { model } }).build();

const result = await workflow.run({ input: "Draft a short README for a new SDK." });

if (result.status === "ok") {
  console.log(result.artefact);
}

Adapters today

You can use llm-core with:

  • LangChain
    • Models, embeddings, text splitters, memory, vector stores.
    • Trace integration for LangChain runs.
  • LlamaIndex
    • Document stores, vector stores, embeddings, memory.
  • AI SDK
    • Models and embeddings, plugged in as adapters.
  • Core primitives
    • KV store, cache, event stream, text splitter, loader, vector store, memory.

Adapters are pluggable; you can write your own functions that match the adapter types and wire in any provider you like.
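For example, a custom KV store adapter can be as small as an in-memory map. A minimal sketch, assuming the adapter only needs async get/set/delete; the `KvStore` interface below is illustrative and stands in for the package's actual adapter type:

```typescript
// Illustrative shape only: the real KV store adapter type is exported from
// "@geekist/llm-core/adapters"; check the docs site for the exact contract.
interface KvStore {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
  delete(key: string): Promise<void>;
}

// A minimal in-memory implementation, handy for tests and prototyping.
function createMemoryKvStore(): KvStore {
  const store = new Map<string, string>();
  return {
    async get(key) {
      return store.get(key);
    },
    async set(key, value) {
      store.set(key, value);
    },
    async delete(key) {
      store.delete(key);
    },
  };
}
```

The same pattern applies to the other primitives: implement the functions the adapter type asks for, back them with whatever provider you like, and pass the object in via `adapters`.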

See the docs site for up-to-date adapter details and examples.


Docs

  • Docs site: https://llm-core.geekist.co/

  • Workflow & recipes: docs/workflow-api.md, docs/reference/packs-and-recipes.md

  • Adapters: docs/adapters-api.md

  • Examples:

    • ETL: docs/examples/etl-pipeline.ts
    • Agent / RAG examples (and more) on the docs site

Development

bun install

# Static checks
bun run lint
bun run typecheck

# Tests
bun test

The CI pipeline also runs coverage and static analysis (Codecov + SonarCloud).


Status

Active development. APIs are reasonably stable but may still evolve as more adapters and recipes land. Check the docs site and CHANGELOG for breaking changes.


Licence

Licensed under the Apache License, Version 2.0.

See the LICENSE file for details.
