Formerly NanoCoder — renamed to avoid confusion with Nano-Collective/nanocoder. All links from the old repo redirect here automatically.
中文 | English | Claude Code Architecture Deep Dive (7 articles)
512,000 lines of TypeScript → ~1,400 lines of Python.
I spent two days reverse-engineering the leaked Claude Code source — all half a million lines. Then I stripped it down to the load-bearing walls and rebuilt them in Python. The result: every key architectural pattern from Claude Code, in a codebase you can read in one sitting.
CoreCoder is not another AI coding tool. It's a blueprint — the nanoGPT of coding agents. Read it, fork it, build your own.
$ corecoder -m kimi-k2.5
You > read main.py and fix the broken import
> read_file(file_path='main.py')
> edit_file(file_path='main.py', ...)
--- a/main.py
+++ b/main.py
@@ -1 +1 @@
-from utils import halper
+from utils import helper
Fixed: halper → helper.
Claude Code's 512K lines distilled into ~1,400 lines across 7 patterns that actually matter:
| Pattern | Claude Code | CoreCoder |
|---|---|---|
| Search-and-replace editing (unique match + diff) | FileEditTool | tools/edit.py — 70 lines |
| Parallel tool execution | StreamingToolExecutor (530 lines) | agent.py — ThreadPool |
| 3-layer context compression | HISTORY_SNIP → Microcompact → CONTEXT_COLLAPSE | context.py — 145 lines |
| Sub-agent with isolated context | AgentTool (1,397 lines) | tools/agent.py — 50 lines |
| Dangerous command blocking | BashTool (1,143 lines) | tools/bash.py — 95 lines |
| Session persistence | QueryEngine (1,295 lines) | session.py — 65 lines |
| Dynamic system prompt | prompts.ts (914 lines) | prompt.py — 35 lines |
Every pattern is a real, runnable implementation — not a diagram or a blog post.
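The first pattern above, unique-match search-and-replace with a returned diff, fits in a short sketch. This mirrors the idea, not tools/edit.py line for line:

```python
# Unique-match search-and-replace: the edit applies only if old_str
# occurs exactly once, so the model cannot silently hit the wrong
# site; the caller also gets a unified diff of the change.
import difflib

def edit_text(text: str, old_str: str, new_str: str) -> tuple[str, str]:
    count = text.count(old_str)
    if count == 0:
        raise ValueError("old_str not found in file")
    if count > 1:
        raise ValueError(f"old_str matches {count} times; add context to make it unique")
    new_text = text.replace(old_str, new_str)
    diff = "\n".join(difflib.unified_diff(
        text.splitlines(), new_text.splitlines(),
        fromfile="a", tofile="b", lineterm=""))
    return new_text, diff
```

The "exactly once" rule is the load-bearing part: it forces the model to quote enough surrounding context to pin down a single edit site.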
pip install corecoder

Pick your model — any OpenAI-compatible API works. You can export env vars or drop a .env file in your project root:
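A minimal .env uses the same two variables as the exports below (values here are placeholders):

```
OPENAI_API_KEY=your-key
OPENAI_BASE_URL=https://api.moonshot.ai/v1
```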
# Kimi K2.5
export OPENAI_API_KEY=your-key OPENAI_BASE_URL=https://api.moonshot.ai/v1
corecoder -m kimi-k2.5
# Claude Opus 4.6 (via OpenRouter)
export OPENAI_API_KEY=your-key OPENAI_BASE_URL=https://openrouter.ai/api/v1
corecoder -m anthropic/claude-opus-4-6
# OpenAI GPT-5
export OPENAI_API_KEY=sk-...
corecoder -m gpt-5
# DeepSeek V3
export OPENAI_API_KEY=sk-... OPENAI_BASE_URL=https://api.deepseek.com
corecoder -m deepseek-chat
# Qwen 3.5
export OPENAI_API_KEY=sk-... OPENAI_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
corecoder -m qwen-max
# Ollama (local)
export OPENAI_API_KEY=ollama OPENAI_BASE_URL=http://localhost:11434/v1
corecoder -m qwen3:32b
# One-shot mode
corecoder -p "add error handling to parse_config()"

The whole thing fits in your head:
corecoder/
├── cli.py REPL + commands 218 lines
├── agent.py Agent loop + parallel tools 122 lines
├── llm.py Streaming client + retry 156 lines
├── context.py 3-layer compression 196 lines
├── session.py Save/resume 68 lines
├── prompt.py System prompt 33 lines
├── config.py Env config 55 lines
└── tools/
    ├── bash.py Shell + safety + cd tracking 115 lines
    ├── edit.py Search-replace + diff 85 lines
    ├── read.py File reading 53 lines
    ├── write.py File writing 36 lines
    ├── glob_tool.py File search 47 lines
    ├── grep.py Content search 78 lines
    └── agent.py Sub-agent spawning 58 lines
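The parallel tool execution that agent.py handles with a thread pool reduces, in spirit, to a few lines. The function shape and call format here are illustrative, not CoreCoder's actual API:

```python
# Minimal sketch of parallel tool execution: when the model returns
# several tool calls in one turn, run them concurrently and collect
# results in the original call order.
from concurrent.futures import ThreadPoolExecutor

def run_tool_calls(tools: dict, calls: list[dict]) -> list[str]:
    """calls: [{"name": ..., "args": {...}}, ...] -> results, in order."""
    def run_one(call):
        try:
            return str(tools[call["name"]](**call["args"]))
        except Exception as e:  # one failed tool shouldn't kill the turn
            return f"error: {e}"
    with ThreadPoolExecutor(max_workers=8) as pool:
        # pool.map preserves input order, so result i matches call i
        return list(pool.map(run_one, calls))
```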
Use it as a library:

from corecoder import Agent, LLM
llm = LLM(model="kimi-k2.5", api_key="your-key", base_url="https://api.moonshot.ai/v1")
agent = Agent(llm=llm)
response = agent.chat("find all TODO comments in this project and list them")

Write your own tool by subclassing Tool:

from corecoder.tools.base import Tool
class HttpTool(Tool):
    name = "http"
    description = "Fetch a URL."
    parameters = {"type": "object", "properties": {"url": {"type": "string"}}, "required": ["url"]}

    def execute(self, url: str) -> str:
        import urllib.request
        return urllib.request.urlopen(url).read().decode()[:5000]

Slash commands:

/model Show current model
/model <name> Switch model mid-conversation
/compact Compress context (like Claude Code's /compact)
/tokens Token usage + cost estimate
/diff Show files modified this session
/save Save session to disk
/sessions List saved sessions
/reset Clear history
quit Exit
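What /compact triggers, and what context.py's cheapest layer does automatically, can be sketched as snipping old tool outputs while keeping recent messages verbatim. Thresholds and message shape below are assumptions for illustration, not context.py's actual code:

```python
# Hedged sketch of one compaction layer: keep the last N messages
# untouched and truncate large tool outputs in everything older,
# before more aggressive layers (summaries, full collapse) kick in.
def compact(messages: list[dict], keep_recent: int = 6,
            tool_limit: int = 200) -> list[dict]:
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    compacted = []
    for m in old:
        if m.get("role") == "tool" and len(m.get("content", "")) > tool_limit:
            # copy, don't mutate: the caller may still hold the original
            m = {**m, "content": m["content"][:tool_limit] + "\n[...snipped]"}
        compacted.append(m)
    return compacted + recent
```

Old tool results are the best thing to drop first: they are bulky, rarely re-read, and re-fetchable by calling the tool again.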
| | Claude Code | Claw-Code | Aider | CoreCoder |
|---|---|---|---|---|
| Code | 512K lines (closed) | 100K+ lines | 50K+ lines | ~1,400 lines |
| Models | Anthropic only | Multi | Multi | Any OpenAI-compatible |
| Readable? | No | Hard | Medium | One afternoon |
| Purpose | Use it | Use it | Use it | Understand it, build yours |
I wrote 7 articles breaking down Claude Code's architecture — the agent loop, tool system, context compression, streaming executor, multi-agent, and 44 hidden feature flags. If you want to understand why CoreCoder is designed this way, start there.
Does CoreCoder support Skills / Subagents / MCP?
No, and that's intentional. CoreCoder is the minimal runnable core — agent loop, tools, streaming, compaction. Skills, Subagents, MCP, hooks, and plugins are upper-layer features that Claude Code layers on top; if CoreCoder had them too it would stop being a teaching artifact. The architecture articles above cover how those systems work in Claude Code, so you can add them yourself if you need to.
If you want Skills specifically, the recipe is small: scan ~/.claude/skills/*.md at startup, list their titles in the system prompt, and let the agent ask for a skill by name before you inline that file's body into the conversation.
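That recipe can be sketched in a few functions. Names and the skills-directory layout here are illustrative, not CoreCoder or Claude Code API:

```python
# Sketch of the Skills recipe above: scan a skills directory at
# startup, list skill names in the system prompt, and inline a
# skill's full body only when the agent asks for it by name.
from pathlib import Path

def load_skills(skills_dir: str) -> dict[str, str]:
    """Map skill name (filename stem) -> full markdown body."""
    return {p.stem: p.read_text(encoding="utf-8")
            for p in sorted(Path(skills_dir).glob("*.md"))}

def skills_prompt_section(skills: dict[str, str]) -> str:
    """One line per skill for the system prompt; empty if none."""
    if not skills:
        return ""
    lines = "\n".join(f"- {name}" for name in skills)
    return "Available skills (request one by name):\n" + lines

def expand_skill(skills: dict[str, str], name: str) -> str:
    """Body to inline into the conversation when the agent asks."""
    return skills.get(name, f"Unknown skill: {name}")
```

The point of the two-step dance is token economy: the prompt pays only for the titles, and a skill's full body enters the context only on demand.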
- CodeJoust — a CLI arena that races Claude Code, aider, Codex, and Gemini (Cursor + OpenHands next) on the same bug in isolated git worktrees, scores them by tests + cost + diff + time, and hands you the winning patch. If you've ever wondered which AI coding CLI is actually better for your task, CodeJoust answers it empirically.
- AnyCoder — a practical terminal AI coding agent built on the same architecture as CoreCoder but with litellm, session persistence, and 100+ model support. Use this one if you want a tool; use CoreCoder if you want to read source.
- LiteBench — one-command LLM / agent benchmark. Ships 7 built-in tasks (HumanEval/GSM8K/MMLU/...) and YAML-defined custom tasks, with a single-file HTML dashboard.
- RepoWiki — open-source DeepWiki alternative.
pip install repowiki, then one command turns any local or GitHub repo into a wiki with a dependency graph, architecture diagram, and LLM-generated module pages.
MIT. Fork it, learn from it, ship something better. A mention of this project is appreciated.
Built by Yufeng He · Agentic AI Researcher @ Moonshot AI (Kimi)
Claude Code Source Analysis — 170K+ reads, 6000 bookmarks on Zhihu