 
  ██╗     ███████╗ █████╗ ███╗   ██╗     ██████╗████████╗██╗  ██╗
  ██║     ██╔════╝██╔══██╗████╗  ██║    ██╔════╝╚══██╔══╝╚██╗██╔╝
  ██║     █████╗  ███████║██╔██╗ ██║    ██║        ██║    ╚███╔╝ 
  ██║     ██╔══╝  ██╔══██║██║╚██╗██║    ██║        ██║    ██╔██╗ 
  ███████╗███████╗██║  ██║██║ ╚████║    ╚██████╗   ██║   ██╔╝ ██╗
  ╚══════╝╚══════╝╚═╝  ╚═╝╚═╝  ╚═══╝     ╚═════╝   ╚═╝   ╚═╝  ╚═╝
             Context Runtime for AI Agents

The context layer for AI coding agents

Reduce token waste in Cursor, Claude Code, Copilot, Windsurf, Codex, Gemini & more by 60–95% (up to 99% on cached reads)
Shell Hook + MCP Server · 49 tools · 10 read modes · 90+ patterns · Single Rust binary


Website · Docs · Install · Demo · Benchmarks · Cookbook · Security · Changelog · Discord


lean-ctx is a local-first context runtime that compresses file reads + shell output before they reach the LLM. Cached re-reads drop to ~13 tokens.
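To make the headline numbers concrete, here is a back-of-envelope calculation using the ~13-token cached-read figure above. The 2,000-token file size and 10 reads per session are illustrative assumptions, not measured values:

```shell
# Savings from re-reading one 2,000-token file 10x in a session:
# first read is full, the 9 re-reads hit the cache at ~13 tokens each.
python3 - <<'EOF'
full = 2000 * 10              # tokens without caching
cached = 2000 + 13 * 9        # 1 full read + 9 cached re-reads
print(f"{full} -> {cached} tokens ({100 * (1 - cached / full):.0f}% saved)")
EOF
```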

See it in action:

  • Read + Shell: map-mode reads + compressed CLI output
  • Gain (live): tokens + USD savings in real time
  • Benchmark proof: measure compression by language + mode

All GIFs are generated from reproducible VHS tapes in demo/.

What it does

  • File reads (MCP): cached + mode-aware reads (full, map, signatures, diff, …)
  • Shell output (hook): compresses noisy CLI output via 90+ patterns (git, npm, cargo, docker, …)
  • Session memory (CCP): persist task/facts/decisions across chats for faster cold starts
  • HTTP mode: lean-ctx serve for Streamable HTTP MCP + /v1/tools/call (used by the Cookbook + SDK)
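The HTTP mode above exposes a generic /v1/tools/call endpoint. A minimal sketch of preparing a call to it follows; the payload field names and the port are assumptions, not the documented schema (check the Cookbook and SDK for the real one):

```shell
# Build a hypothetical /v1/tools/call payload (field names are assumptions;
# consult the Cookbook/SDK for the actual request schema).
cat > /tmp/leanctx-call.json <<'EOF'
{"tool": "ctx_read", "arguments": {"path": "src/main.rs", "mode": "map"}}
EOF
python3 -m json.tool /tmp/leanctx-call.json   # sanity-check the JSON

# Then POST it to a running `lean-ctx serve` instance (port is an assumption):
# curl -s -X POST http://localhost:8080/v1/tools/call \
#      -H 'Content-Type: application/json' -d @/tmp/leanctx-call.json
```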

How it works (30 seconds)

AI tool  →  (MCP tools + shell commands)  →  lean-ctx  →  your repo + CLI
  • MCP server: exposes ctx_* tools (read modes, caching, deltas, search, memory, multi-agent)
  • Shell hook: transparently compresses common commands so the LLM sees less noise
  • CCP: persists session state so long-running work doesn’t “cold start” every chat
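Because lean-ctx is a standard MCP server, registering it with a client boils down to a stdio entry in the client's MCP config (this is what `lean-ctx init --agent <tool>` automates). A minimal sketch of what such an entry typically looks like; the file location, key names, and the `mcp` subcommand are assumptions that vary by client:

```json
{
  "mcpServers": {
    "lean-ctx": {
      "command": "lean-ctx",
      "args": ["mcp"]
    }
  }
}
```

In practice, prefer `lean-ctx init --agent <tool>`, which writes the correct entry for each supported client.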

Get started (60 seconds)

# 1) Install (pick one)
curl -fsSL https://leanctx.com/install.sh | sh      # universal (no Rust needed)
brew tap yvgude/lean-ctx && brew install lean-ctx    # macOS / Linux
npm install -g lean-ctx-bin                          # Node.js
cargo install lean-ctx                               # Rust

# 2) Setup (shell + auto-detected AI tools)
lean-ctx setup

# 3) Verify
lean-ctx doctor

# 4) See the payoff
lean-ctx gain --live
lean-ctx wrapped --week

After setup, restart your shell and your editor/AI tool once so the MCP + hooks are active.

Troubleshooting / Safety
  • Disable immediately (current shell): lean-ctx-off
  • Run a single command uncompressed: lean-ctx -c --raw "git status"
  • Update: lean-ctx update
  • Diagnose (shareable): lean-ctx doctor --json

Supported IDEs & AI tools

lean-ctx is a standard MCP server, so it works with any MCP-compatible client.

For first-class integration, run:

lean-ctx init --agent <tool>

Supported <tool> values (24):

  • Cursor (cursor)
  • Claude Code (claude)
  • GitHub Copilot (copilot)
  • Windsurf (windsurf)
  • Codex CLI (codex)
  • Gemini CLI (gemini)
  • Cline (cline)
  • Roo Code (roo)
  • OpenCode (opencode)
  • Crush (crush)
  • Amazon Q Developer (amazonq)
  • AWS Kiro (kiro)
  • Antigravity (antigravity)
  • Hermes (hermes)
  • Qwen (qwen)
  • Trae (trae)
  • Verdent (verdent)
  • Pi (pi)
  • Aider (aider)
  • Amp (amp)
  • JetBrains IDEs (jetbrains)
  • Emacs (emacs)
  • Neovim (neovim)
  • Sublime Text (sublime)

Also supported via MCP config (auto-detected in setup): VS Code and Zed.

When to use (and when not to)

Great fit if you…

  • use AI coding tools daily and your sessions are shell-heavy (git/tests/builds)
  • work in medium/large repos (50+ files / monorepos)
  • want a local-first layer with no telemetry by default

Skip it if you…

  • mostly work in tiny repos and rarely call the shell from your AI tool
  • always need raw/unfiltered logs (you can still use --raw, but ROI is lower)

Demo

Try these in any repo:

lean-ctx read rust/src/server/mod.rs -m map
lean-ctx -c "git log -n 5 --oneline"
lean-ctx gain --live
lean-ctx benchmark report .
  • The repo ships the exact tapes used to render the GIFs in demo/
  • Regenerate locally:
vhs demo/leanctx.tape
vhs demo/gain.tape
vhs demo/benchmark.tape

Benchmarks

lean-ctx benchmark report .

Privacy & security

  • No telemetry by default
  • Optional anonymous stats sharing (opt-in during setup)
  • Update check can be disabled (set update_check_disabled = true in the config, or LEAN_CTX_NO_UPDATE_CHECK=1)
  • Runs locally; your code never leaves your machine unless you explicitly enable cloud sync
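For example, to turn off the update check in the current shell using the documented environment variable (the config-file path in the comment is an assumption; check the docs for the real location):

```shell
# Disable the update check via the documented environment variable:
export LEAN_CTX_NO_UPDATE_CHECK=1

# Or persist it in lean-ctx's config file (path below is an assumption):
# echo 'update_check_disabled = true' >> ~/.config/lean-ctx/config.toml
```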

See SECURITY.md.

Uninstall

lean-ctx-off       # disable immediately (current shell session)
lean-ctx uninstall # remove hooks + editor configs + data dir

# Remove the binary (pick your install method)
brew uninstall lean-ctx
npm uninstall -g lean-ctx-bin
cargo uninstall lean-ctx

Contributing

Start with CONTRIBUTING.md. Easy first PR: propose a new CLI compression pattern via the issue template.

License

Apache License 2.0 — see LICENSE.

Portions of this software were originally released under the MIT License. See LICENSE-MIT and NOTICE.
