lmcode
A local coding agent CLI powered by LM Studio.
Open source, fully private, no cloud required.



Warning

Honest disclaimer: This is a personal side project. I'm not a LinkedIn AI guru with 47 certifications. I'm building this with Claude Code, so yes — there's probably some AI slop hiding somewhere in here. 🤖🫠

PRs welcome. Learning out loud and figuring things out as I go also welcome. Stack Overflow-style "this question doesn't belong here, marked as duplicate of a 2009 thread" energy — not so much.


lmcode is a coding agent for your terminal that runs entirely on your machine using LM Studio as the inference backend. Think Claude Code or Aider, but local, open source, and extensible via plugins and MCP servers.

This project is under active development. The API and features are not stable yet.


Why

Cloud coding assistants are powerful, but they send your code to external servers. Local models have gotten good enough to be genuinely useful for coding tasks, yet no good agentic layer exists for LM Studio — it only provides inference. lmcode fills that gap.

LM Studio   →   lmcode agent   →   your codebase
(inference)     (tools + loop)      (stays local)

Features

  • Agent loop — iterative tool-calling loop powered by model.act() from the LM Studio Python SDK
  • Coding tools — read files, write files, list files, run shell commands, search code (ripgrep), git operations
  • LMCODE.md — per-repo memory file, like CLAUDE.md; injected into the system prompt automatically
  • Animated spinner — state labels (thinking… / working… / finishing…) with tool name + path during tool calls
  • Tool output panels — syntax-highlighted file previews, side-by-side diff blocks for edits, IN/OUT panels for shell commands
  • Ghost-text autocomplete — fish-shell style: type /h → dim elp appears, Tab accepts
  • Persistent history — Ctrl+R and Up-arrow recall prompts across sessions (~/.lmcode/history)
  • Permission modes — ask (confirm each tool), auto (run freely), strict (read-only); Tab cycles between them
  • /compact — summarises conversation history via the model, resets the chat, and injects the summary as context
  • /tokens — session-wide prompt (↑) and generated (↓) token totals with context arc (◔ 38% 14.2k / 32k tok)
  • /history [N] — show last N conversation turns as bordered panels (default 5)
  • /hide-model — toggle model name visibility in the live prompt
  • Cycling tips — tips below the spinner rotate every 8 s through a shuffled list
  • Context arc indicator — ○◔◑◕● with percentage in /status and /tokens; warns at 80 % usage
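
The ghost-text behaviour above can be sketched as a plain prefix match over the registered slash commands. This is a hypothetical helper, not lmcode's actual implementation — the real version hooks into the prompt's key bindings:

```python
# Fish-style ghost-text completion for slash commands: given what the
# user has typed, return the dim "ghost" suffix that Tab would accept.

SLASH_COMMANDS = [
    "/help", "/status", "/tokens", "/compact", "/history",
    "/hide-model", "/verbose", "/tips", "/stats", "/tools",
]

def ghost_suffix(typed: str, commands: list[str] = SLASH_COMMANDS) -> str:
    """Return the suffix to render as ghost text, or '' if no match."""
    if not typed.startswith("/"):
        return ""  # ghost text only applies to slash commands
    matches = [c for c in commands if c.startswith(typed)]
    # Complete to the first match in registration order.
    return matches[0][len(typed):] if matches else ""
```

With the list above, typing /h ghosts "elp" (from /help), matching the behaviour described in the feature list.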

Slash commands

Command        Description
/help          Show all available slash commands
/status        Show session stats, model info, and context window usage
/tokens        Show session prompt (↑) and generated (↓) token totals with context arc
/compact       Summarise history, reset chat, inject summary as context
/history [N]   Show last N conversation turns as panels (default 5)
/hide-model    Toggle model name visibility in the live prompt
/verbose       Toggle verbose tool-call output
/tips          Toggle cycling tips below the thinking spinner
/stats         Toggle per-turn stats line after each response
/tools         List all registered tools
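
The context arc shown by /status and /tokens maps usage to one of five glyphs. A plausible sketch of the mapping — the glyph thresholds and label format are assumptions, except the 80 % warning, which the feature list states:

```python
ARC_GLYPHS = "○◔◑◕●"  # empty, quarter, half, three-quarters, full

def context_arc(used: int, window: int) -> str:
    """Render a status-line fragment like '◑ 44% 14.2k / 32k tok'."""
    pct = used / window
    # Bucket the usage fraction into one of the five arc glyphs.
    glyph = ARC_GLYPHS[min(int(pct * len(ARC_GLYPHS)), len(ARC_GLYPHS) - 1)]
    label = f"{glyph} {pct:.0%} {used / 1000:.1f}k / {window // 1000}k tok"
    # Warn once usage crosses 80 % of the context window.
    return label + ("  ⚠ context nearly full" if pct >= 0.8 else "")
```

For example, 14,200 tokens used of a 32k window renders as `◑ 44% 14.2k / 32k tok`.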

Status

Component                               Status
CLI skeleton (Typer + Rich)             ✅ done
LM Studio adapter (model.act)           ✅ done
Agent loop + basic tools                ✅ done
Slash commands + UX polish              ✅ done
Animated spinner + state labels         ✅ done
Tool output panels (file, diff, shell)  ✅ done
Ghost-text autocomplete + history       ✅ done
Ctrl+C interrupt mid-generation         ✅ done
Graceful LM Studio disconnect handling  ✅ done
Streaming Markdown output               🔶 in progress
Interactive permission UI (ask mode)    🔲 planned
Session recorder (JSONL)                🔲 planned
Session viewer (Textual TUI)            🔲 planned
MCP client                              🔲 planned
Plan mode / Agent mode                  🔲 planned
VSCode extension                        🔲 planned

Requirements

  • Python 3.11+
  • LM Studio with a model loaded. The lms CLI (ships with LM Studio) is the easiest way to get there:

    lms get Qwen2.5-Coder-7B-Instruct@q4_k_m   # download the recommended model (~4.5 GB)
    lms load Qwen2.5-Coder-7B-Instruct         # load it into memory
    lms ps                                     # confirm it is running

    See the LM Studio CLI reference for full documentation. lms controls the inference infrastructure — model download, load/unload, server lifecycle. lmcode is the agentic coding layer on top.
  • uv (recommended) or pip

Install

Not published to PyPI yet — install from source:

git clone https://github.com/VforVitorio/lmcode
cd lmcode
uv sync
uv run lmcode --help

Once published, the usual installers will work:

# with uv (recommended)
uv tool install lmcode

# or with pipx
pipx install lmcode

# or with pip
pip install lmcode

Quick start

Make sure LM Studio is running with a model loaded and the local server enabled.

# start a chat session in the current repo
lmcode chat

The agent will connect to LM Studio automatically. Type your request and press Enter. Use /help to see all slash commands.

Recommended model: Qwen2.5-Coder-7B-Instruct (Q4_K_M, ~4.5 GB VRAM) — best function calling for code tasks at the 7B size.
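
LMCODE.md works like CLAUDE.md: whatever you put in it is injected into the system prompt at session start, so the model knows your repo's conventions without being told each time. A hypothetical example for a small Python service (every detail below is illustrative):

```markdown
# LMCODE.md

## Project
FastAPI service; the entry point is src/app/main.py.

## Conventions
- Use uv for dependency management, never pip directly.
- All new code must pass `ruff check` and `mypy`.
- Tests live in tests/ and run with `uv run pytest`.

## Don't
- Touch migrations/ without asking first.
```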


How it works

LM Studio (inference backend)
     │   managed by the `lms` CLI — model load/unload, server, logs
     │
     ▼
lmcode chat  ──  agent loop (model.act)
     │
     └── Tool Runner
            ├── read_file / write_file / list_files
            ├── run_shell
            ├── search_code (ripgrep)
            └── git (status, diff, commit)

lmcode vs lms: LM Studio's official lms CLI handles infrastructure — downloading models, loading/unloading them, controlling the HTTP server, streaming logs. lmcode is the agentic coding layer: it drives the tool-calling loop, manages conversation context, renders diffs and file panels, and enforces permission modes. The two tools are complementary.
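
The Tool Runner's permission gate can be sketched as follows. All names here are hypothetical (the real registry lives under src/lmcode/tools/); the point is the mode semantics from the feature list: strict allows only read-style tools, ask requires per-call confirmation, auto runs everything.

```python
from typing import Callable

# Tools considered safe in strict (read-only) mode — an assumption.
READ_ONLY = {"read_file", "list_files", "search_code"}

class PermissionDenied(Exception):
    """Raised when the permission mode blocks a tool call."""

def run_tool(
    name: str,
    tools: dict[str, Callable[..., str]],
    mode: str = "ask",                                # "ask" | "auto" | "strict"
    confirm: Callable[[str], bool] = lambda n: True,  # UI hook used in ask mode
    **kwargs: object,
) -> str:
    """Dispatch one tool call through the permission gate."""
    if mode == "strict" and name not in READ_ONLY:
        raise PermissionDenied(f"{name} is not allowed in strict mode")
    if mode == "ask" and not confirm(name):
        raise PermissionDenied(f"user declined {name}")
    return tools[name](**kwargs)
```

In ask mode the `confirm` callback would be wired to the interactive prompt; in auto it is never consulted.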


Project structure

src/lmcode/
├── cli/          # Typer commands
├── agent/        # agent loop and context management
├── tools/        # built-in coding tools
├── mcp/          # MCP client + OpenAPI → MCP dynamic servers
├── plugins/      # pluggy hookspecs and manager
├── session/      # recorder, storage, event models
├── ui/           # Textual TUI session viewer
└── config/       # settings and LMCODE.md handling

Contributing

This project is in early development. Contributions, feedback, and ideas are very welcome.

  • Open an issue to discuss ideas before opening a PR
  • Keep PRs focused — one thing at a time
  • All code is formatted with ruff and type-checked with mypy

To set up a dev environment and run the tests:

git clone https://github.com/VforVitorio/lmcode
cd lmcode
uv sync --all-extras
uv run pytest

Roadmap

v0.1.0 — Basic chat ✅

  • lmcode chat with LM Studio connection
  • Agent loop (model.act) + basic tools
  • Auto-connect to LM Studio

v0.2.0 — Full tool suite ✅

  • write_file, list_files, run_shell, search_code tools
  • Banner + status bar
  • Slash commands (/help, /status, /verbose, /tools, /compact)

v0.3.0 — UX polish ✅

  • /compact — summarise history and reset chat
  • /tokens — session token totals and context arc
  • /hide-model — toggle model name in prompt
  • Cycling tips — rotate every 8 s during thinking
  • Context window indicator — arc ○◔◑◕● + % in /status and /tokens

v0.4.0 — Input & display ✅

  • Animated spinner with state labels (thinking… / working… / finishing…)
  • Ghost-text slash autocomplete (fish-shell style, Tab to accept)
  • Persistent history — Ctrl+R / Up-arrow across sessions
  • read_file syntax panel (one-dark, line numbers, violet border)
  • write_file side-by-side diff block (Codex/Catppuccin palette)
  • run_shell IN/OUT panel with separator
  • /history [N] — show last N conversation turns

v0.5.0 — Agent modes ✅

  • Ctrl+C interrupt mid-generation — returns to prompt, shows ^C / interrupted (#60)
  • Verbose tool panels always shown — fixed positional-arg merge in _wrap_tool_verbose
  • write_file escape sequences — literal \n/\t unescaped before writing
  • SDK channel noise suppression after Ctrl+C

Deferred from this milestone: streaming Markdown output (#56), the interactive permission UI (#40), Plan mode (#21), and Agent mode (#22) moved to later releases — see v0.7.0 and the Status table.
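
The write_file escape fix above amounts to unescaping literal backslash sequences that smaller models sometimes emit in place of real control characters. A minimal sketch with a hypothetical helper name — the heuristic (trust text that already contains real newlines) is an assumption, not the shipped logic:

```python
def unescape_model_output(text: str) -> str:
    """Turn literal \\n and \\t sequences from the model into real
    newlines and tabs, leaving already-correct text untouched."""
    if "\n" in text:
        # Real newlines present: assume the model escaped correctly.
        return text
    return text.replace("\\n", "\n").replace("\\t", "\t")
```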

v0.6.0 — Stability ✅

  • Graceful LM Studio disconnect handling (#70)
  • SDK WebSocket JSON noise suppression

v0.6.1 — Refactor & polish ✅

  • agent/core.py split into focused submodules (_noise, _display, _prompt)
  • write_file mixed newline unescape fix (Qwen 7B compatibility)
  • Full test coverage for display and noise modules

v0.7.0 — In progress 🔶

  • Streaming Markdown output (#56)
  • Interactive permission UI — diff view + arrow-key confirm in ask mode (#40)
  • /model mid-session switch (#19)
  • Enriched startup banner via lms ps --json

v1.0

  • Session recorder + Textual TUI viewer
  • MCP client
  • Stable API + docs site
  • VSCode extension

License

MIT — see LICENSE
