A minimal, embeddable Go agent orchestration library. Single binary. Full context control.
You own the `[]Message` — the library provides the loop and the plumbing.
- One OpenAI-compatible provider — talks to OpenAI, Azure OpenAI, Ollama, GitHub Copilot, and Gemini through a single implementation
- Unified provider manager — one entry point for login, logout, status, model discovery, and default persistence across every supported provider
- Full context control — the caller owns the message slice and decides what goes in it
- Block-shaped messages — each message holds an ordered slice of typed blocks (text, thinking, tool_use, tool_result, image, redacted_thinking), so interleaved assistant output replays faithfully
- Durable session store — pure-Go SQLite (modernc.org/sqlite) backing for branching message trees, artifact offload for large tool outputs and images, and FTS5 search across block types; shares the same file with parent-app tables
- Built-in auth — static API keys, GitHub device flow for Copilot, "Sign in with ChatGPT" for OpenAI (Codex OAuth — device or browser, no API key, no app registration), plus device/PKCE OAuth against the standard OpenAI API for embedders with their own registered app. File-backed storage with expiry handling.
- Tool system — register Go functions as tools with auto-generated JSON Schema
- ReAct agent loop — streaming `Step`/`Run` with per-block hooks, plus runtime provider/model swap
- Model discovery — list and switch between every chat-capable model on every authenticated provider
- Optional adapters — Bubbletea TUI (with slash commands) and HTTP/SSE server
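The block-shaped message design above can be pictured with a small self-contained sketch. The `Block`/`Message` types and the `Render` helper here are simplified stand-ins for illustration, not the library's actual `conversation` API:

```go
package main

import "fmt"

// Simplified stand-in: each message holds an ordered slice of typed blocks.
type Block struct {
	Type string // "text", "thinking", "tool_use", "tool_result", "image", ...
	Text string
}

type Message struct {
	Role   string
	Blocks []Block
}

// Render concatenates only the visible text blocks, preserving block order.
func Render(m Message) string {
	out := ""
	for _, b := range m.Blocks {
		if b.Type == "text" {
			out += b.Text
		}
	}
	return out
}

func main() {
	// An assistant turn that interleaves thinking, text, and a tool call
	// replays faithfully because the block order is preserved as emitted.
	m := Message{Role: "assistant", Blocks: []Block{
		{Type: "thinking", Text: "user wants a greeting"},
		{Type: "text", Text: "Let me greet Alice. "},
		{Type: "tool_use", Text: `greet({"name":"Alice"})`},
		{Type: "text", Text: "Done."},
	}}
	fmt.Println(Render(m)) // prints "Let me greet Alice. Done."
}
```

Because ordering lives in the block slice rather than in separate per-type fields, replaying the slice reproduces exactly the interleaving the model produced.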
```bash
go get github.com/stack-bound/stackllm
```

The fastest path uses the profile manager to handle auth and model selection for you. On the first call it walks the user through login; after that, `LoadDefault` returns a ready-to-use provider.

```go
package main

import (
	"context"
	"fmt"

	"github.com/stack-bound/stackllm/agent"
	"github.com/stack-bound/stackllm/conversation"
	"github.com/stack-bound/stackllm/profile"
	"github.com/stack-bound/stackllm/tools"
)

type GreetArgs struct {
	Name string `json:"name" jsonschema:"description=Name to greet,required"`
}

func main() {
	ctx := context.Background()

	// 1. Load the persisted default provider. Run `go run ./examples/login`
	//    once beforehand to pick a provider and model interactively.
	mgr := profile.New()
	p, err := mgr.LoadDefault(ctx)
	if err != nil {
		panic(err)
	}

	// 2. Register tools.
	registry := tools.NewRegistry()
	registry.Register("greet", "Greet someone", func(ctx context.Context, args GreetArgs) (string, error) {
		return fmt.Sprintf("Hello, %s!", args.Name), nil
	})

	// 3. Create the agent.
	a := agent.New(p,
		agent.WithTools(registry),
		agent.WithMaxSteps(10),
		agent.WithHooks(agent.Hooks{
			OnToken: func(ctx context.Context, delta string) { fmt.Print(delta) },
		}),
	)

	// 4. Run.
	msgs := conversation.NewBuilder().
		System("You are a helpful assistant. Use the greet tool when asked.").
		User("Say hello to Alice").
		Build()

	events, _ := a.Run(ctx, msgs)
	for range events {
	}
	fmt.Println()
}
```

If you would rather wire a provider up directly without the manager, skip to Providers.
`profile.Manager` composes auth, config, and provider into a single object that knows how to log into any supported backend, discover its models, and persist the user's choice.

```go
mgr := profile.New(profile.WithCallbacks(profile.Callbacks{
	OnDeviceCode: func(userCode, verifyURL string) { /* show code */ },
	OnOpenURL:    func(authURL string) { /* open browser for PKCE sign-in */ },
	OnPromptKey:  func(providerName string) (string, error) { /* read API key */ },
	OnPromptURL:  func(providerName, defaultURL string) (string, error) { /* read URL */ },
}))

// Authenticate — GitHub device flow for Copilot, API key for Gemini,
// base URL prompt for Ollama.
mgr.Login(ctx, profile.ProviderCopilot)

// OpenAI has three paths; pick whichever fits the UX:
mgr.LoginOpenAICodexDevice(ctx)        // "Sign in with ChatGPT" — headless device code
mgr.LoginOpenAICodexWeb(ctx)           // "Sign in with ChatGPT" — browser + local callback
mgr.Login(ctx, profile.ProviderOpenAI) // paste an API key (uses OnPromptKey)

// See which providers are authenticated and which is the default.
statuses, _ := mgr.Status(ctx)

// List chat-capable models across every authenticated provider, sorted.
models, _ := mgr.ListAllModels(ctx)

// Persist the user's choice, preserving any routing metadata.
mgr.SetDefaultModel(models[0])

// Later, in your app:
p, _ := mgr.LoadDefault(ctx)
```

Recently selected models are tracked via `RecentModels` / `TrackRecentModel` so interactive pickers can surface them at the top of the list. Credentials are stored in `~/.config/stackllm/auth.json` and preferences in `~/.config/stackllm/config.json` (or the equivalent `XDG_CONFIG_HOME` path).
Wire any supported provider directly if you don't want the manager:
```go
provider.OpenAIConfig("gpt-4o", auth.NewStatic(key))
provider.AzureConfig(endpoint, deployment, apiVersion, tokenSource)
provider.OllamaConfig("http://localhost:11434", "llama3")
provider.CopilotConfig("gpt-4o", auth.NewCopilotSource(cfg))
provider.GeminiConfig("gemini-2.5-pro", auth.NewStatic(key))
```

All five share the same `Complete(ctx, Request)` surface and return a streaming channel of block events (`BlockStart`, `BlockDelta`, `BlockEnd`, `ToolCall`, `Done`, `Error`). Each `BlockEnd` carries the fully accumulated `conversation.Block`; the agent concatenates them in order to build the assistant message, preserving any interleaving of thinking, text, and tool_use the model produced.
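That accumulation loop can be sketched with simplified stand-in types. The event and block structs below are illustrative only, not the provider package's actual definitions (the real events carry more fields, such as tool-call payloads):

```go
package main

import "fmt"

// Illustrative event model: a stream of start/delta/end markers per block.
type EventKind int

const (
	BlockStart EventKind = iota
	BlockDelta
	BlockEnd
	Done
)

type Event struct {
	Kind      EventKind
	BlockType string // set on BlockStart
	Delta     string // set on BlockDelta
}

type Block struct {
	Type string
	Text string
}

// Accumulate drains the event channel, building each block from its deltas
// and appending it on BlockEnd, so interleaving order is preserved.
func Accumulate(events <-chan Event) []Block {
	var blocks []Block
	var cur Block
	for ev := range events {
		switch ev.Kind {
		case BlockStart:
			cur = Block{Type: ev.BlockType}
		case BlockDelta:
			cur.Text += ev.Delta
		case BlockEnd:
			blocks = append(blocks, cur)
		case Done:
			return blocks
		}
	}
	return blocks
}

func main() {
	ch := make(chan Event, 8)
	ch <- Event{Kind: BlockStart, BlockType: "thinking"}
	ch <- Event{Kind: BlockDelta, Delta: "plan the answer"}
	ch <- Event{Kind: BlockEnd}
	ch <- Event{Kind: BlockStart, BlockType: "text"}
	ch <- Event{Kind: BlockDelta, Delta: "Hello, "}
	ch <- Event{Kind: BlockDelta, Delta: "Alice!"}
	ch <- Event{Kind: BlockEnd}
	ch <- Event{Kind: Done}
	close(ch)
	for _, b := range Accumulate(ch) {
		fmt.Printf("%s: %s\n", b.Type, b.Text)
	}
}
```

The key property is that the assistant message is rebuilt purely from stream order — no per-type buffers that would lose the interleaving.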
`session.SessionStore` is the persistence interface. `InMemoryStore` and `SQLiteStore` both ship in the box; embedders can implement their own (Redis, Postgres, etc.) when they need to.

```go
store, _ := session.OpenSQLiteStore(session.SQLiteConfig{AppName: "myapp"})
defer store.Close()

sess := session.New()
sess.Name = "Refactor auth middleware" // optional, surfaced as a column
store.Save(ctx, sess)

loaded, _ := store.Load(ctx, sess.ID)
all, _ := store.List(ctx) // every session, ordered by Updated desc
```

`List(ctx)` returns every row in one slice — fine for small UIs, but unworkable once a long-running embedder accumulates thousands of conversations. Stores that can paginate opt into the optional `SessionPaginator` capability:
```go
type SessionPaginator interface {
	ListPage(ctx context.Context, opts ListOptions) (ListResult, error)
}
```

Both `InMemoryStore` and `SQLiteStore` implement it. Feature-detect via type assertion so custom stores stay free to omit it:
```go
if p, ok := store.(session.SessionPaginator); ok {
	page, _ := p.ListPage(ctx, session.ListOptions{Limit: 25, Offset: 50})
	fmt.Printf("page 3 of %d (%d rows)\n",
		(page.Total+24)/25, page.Total)
	for _, s := range page.Sessions {
		fmt.Println(s.ID, s.Name)
	}
}
```

- `ListOptions{Limit, Offset}` — `Limit == 0` falls back to `session.DefaultListLimit` (50); a negative `Limit` returns every matching row (handy for "export everything"). A negative `Offset` is treated as 0.
- `ListResult{Sessions, Total}` — `Total` is the row count ignoring `Limit`/`Offset`, so you can render "page X of Y" without a second query.
- Sort order matches `List`: most-recently-updated first.
- Like `List`, `ListPage` returns sessions with metadata populated and `Messages` empty — call `Load` for the rows you actually want to read in full.
Filters beyond pagination (name search, date range, project filter, etc.) are deliberately left to the embedder for now: the `stackllm_sessions` schema is documented and stable, so apps that need them can drop down to the shared `*sql.DB` returned by `store.DB()` and write the join their domain needs.
| Package | Purpose |
|---|---|
| `conversation/` | Block-shaped Message types, builder, context compaction |
| `auth/` | Token sources, storage, OAuth flows |
| `config/` | User preferences (default provider/model, provider settings) |
| `profile/` | Provider manager: login, status, model discovery, defaults |
| `tools/` | Tool interface, JSON Schema generation, registry |
| `provider/` | OpenAI-compatible LLM provider |
| `agent/` | ReAct agent loop with hooks |
| `session/` | Session state and persistence |
| `tui/` | Bubbletea terminal UI adapter |
| `web/` | HTTP/SSE server adapter |
Runnable examples live in `examples/`.

| Example | Description |
|---|---|
| `examples/login` | Interactive CLI for provider management — login, logout, status, browse models, and set the default. For OpenAI the menu prompts between device-code ChatGPT sign-in (default), browser ChatGPT sign-in, or an API key; subcommand shortcuts (`login openai device`, `login openai web`, `login openai key`, `login copilot`, `status`, `models`, `default copilot/gpt-4o`) are available for scripting. |
| `examples/simple` | Minimal agent with greet and add tools. Walks the user through provider login and model selection on first run, then uses the persisted default on subsequent runs. |
| `examples/copilot` | Direct Copilot wiring without the manager — shows the two-phase GitHub device flow and caching token source. |
| `examples/tui` | Full Bubbletea TUI agent. Streams tokens as they arrive, renders tool calls and results inline, supports Ctrl+V image paste (inserts a `[Image #N]` placeholder and attaches a `BlockImage` on send), and supports slash commands: `/models` to switch provider/model at runtime (with recently-used models surfaced first), `/new` to start a fresh session. Uses the persisted default or falls back to `OPENAI_API_KEY`. |
| `examples/sqlite` | Shared-DB demo for `session.SQLiteStore`: opens a single SQLite file, runs a parent-app migration (`memories` table), hands the same `*sql.DB` to `session.NewSQLiteStore`, saves a conversation, and queries both namespaces to prove coexistence. No network calls — usable as a CI smoke test. |
| `examples/web` | Browser-only embedding. Serves `web.ManagedHandler` under `/api/*` and a minimal single-page UI at `/` that drives provider login (API keys, Ollama URL, Copilot device flow, "Sign in with ChatGPT" for OpenAI), model selection, default setting, and streaming chat — all over HTTP with no TUI. The ChatGPT browser/PKCE flow is deliberately CLI-only because its OAuth callback lands on the user's localhost, not the server's — remote-hosted web UIs should use the device-code flow (which works regardless of where the server lives). |

Run any of them with `go run ./examples/<name>`.
MIT