A fork of HKUDS/nanobot with Honcho AI-native memory built in.
nanobot is an ultra-lightweight personal AI assistant (~4,000 lines of core code). This fork adds Honcho for persistent, cross-session user memory: your bot remembers who you are, what you care about, and what you've talked about, without you having to repeat yourself.
- Cross-session memory: context carries over between conversations automatically
- User modeling: builds a representation of the user over time (preferences, history, patterns)
- Prefetch context: relevant user context is injected into the system prompt before each LLM call
- Session rotation: `/clear` or `/new` starts a fresh conversation while preserving long-term memory
- Automatic migration: existing local sessions and memory files migrate to Honcho on first use
Honcho is enabled by default in this fork. The bot still works without it (graceful degradation), but memory won't persist across sessions.
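A degradation pattern along these lines keeps the bot usable when memory is disabled or unreachable. This is purely illustrative: `fetch_user_context` and the `client` object are hypothetical stand-ins, not nanobot's or Honcho's actual API.

```python
def fetch_user_context(client, query):
    """Return user context from memory, or empty context on failure.

    `client` is a hypothetical memory-client object; real code would
    catch specific error types rather than a bare Exception.
    """
    if client is None:  # Honcho disabled in config
        return ""
    try:
        return client.get_context(query)
    except Exception:
        # Graceful degradation: memory is an enhancement, not a hard
        # dependency, so failures fall back to a memory-less prompt.
        return ""
```

The design choice here is that a memory outage should cost you recall quality, never availability of the assistant itself.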
```
git clone https://github.com/plastic-labs/nanobot-honcho.git
cd nanobot-honcho
pip install -e ".[honcho]"
```
```
nanobot onboard
```

Add your LLM provider key and Honcho key to `~/.nanobot/config.json`:

```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-..."
    }
  }
}
```

Set up Honcho (free key from app.honcho.dev):
```
nanobot honcho enable --api-key YOUR_HONCHO_KEY
```

```
nanobot agent                 # interactive mode
nanobot agent -m "Hello!"     # single message
nanobot gateway               # start gateway (Telegram, Discord, etc.)
```

| Command | Description |
|---|---|
| `/new` | Start a new conversation (consolidates memory) |
| `/clear` | Clear session and start fresh (rotates Honcho session) |
| `/help` | Show available commands |
Config file: `~/.nanobot/config.json`
| Field | Default | Description |
|---|---|---|
| `honcho.enabled` | `true` | Enable Honcho memory integration |
| `honcho.workspaceId` | `"nanobot"` | Honcho workspace identifier |
| `honcho.prefetch` | `true` | Inject user context into system prompt |
| `honcho.environment` | `"production"` | Honcho environment |
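Putting the defaults together, the Honcho section of `~/.nanobot/config.json` would look like this, assuming the dotted field names in the table nest under a top-level `honcho` key (other keys, such as `providers`, omitted):

```json
{
  "honcho": {
    "enabled": true,
    "workspaceId": "nanobot",
    "prefetch": true,
    "environment": "production"
  }
}
```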
```
nanobot honcho enable --api-key KEY   # Enable + save API key
nanobot honcho enable --migrate       # Migrate local sessions to Honcho
nanobot honcho disable                # Revert to local file-based sessions
nanobot status                        # Check Honcho connection status
```

This fork supports all upstream providers. See the upstream README for the full list:
OpenRouter, Anthropic, OpenAI, DeepSeek, Groq, Gemini, MiniMax, AiHubMix, DashScope, Moonshot, Zhipu, vLLM, OpenAI Codex (OAuth), and any OpenAI-compatible endpoint.
All upstream channels are supported. See upstream docs for setup:
Telegram, Discord, WhatsApp, Feishu, Mochat, DingTalk, Slack, Email, QQ.
nanobot supports MCP for external tool servers. Config format is compatible with Claude Desktop / Cursor. See upstream docs.
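Since the config format is compatible with Claude Desktop / Cursor, an MCP server entry should look roughly like the standard `mcpServers` shape below. The `filesystem` server and its exact location within nanobot's config are illustrative; see upstream docs for specifics.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
```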
```
User message
     |
     v
[Honcho prefetch]  -- retrieve user context from memory
     |
     v
[System prompt + user context + conversation history]
     |
     v
[LLM generates response]
     |
     v
[Honcho sync]  -- save exchange to persistent memory
     |
     v
Response to user
```
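The loop above can be sketched in a few lines of Python. Every name here (`FakeMemory`, `handle_message`, `get_context`, `save_exchange`) is an illustrative stand-in, not nanobot's or Honcho's actual API:

```python
class FakeMemory:
    """Stand-in for the Honcho-backed persistent memory layer."""

    def __init__(self):
        self.exchanges = []

    def get_context(self, query):
        # Real Honcho retrieves relevant facts about the user; this
        # stub just joins the user side of stored exchanges.
        return "; ".join(u for u, _ in self.exchanges) or "no prior context"

    def save_exchange(self, user_msg, reply):
        self.exchanges.append((user_msg, reply))


def handle_message(user_msg, memory, history, llm):
    # 1. Prefetch: pull user context from persistent memory.
    context = memory.get_context(user_msg)
    # 2. Assemble system prompt + user context + conversation history.
    prompt = [{"role": "system", "content": f"User context: {context}"}]
    prompt += history + [{"role": "user", "content": user_msg}]
    # 3. Generate a response.
    reply = llm(prompt)
    # 4. Sync: persist the exchange so future sessions can recall it.
    memory.save_exchange(user_msg, reply)
    return reply
```

Because the sync happens after every exchange, a `/clear` can rotate the session without losing anything: long-term memory already holds the conversation.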
On first message per session, local JSONL history and MEMORY.md/HISTORY.md files are automatically migrated to Honcho. After migration, Honcho is the single source of truth for conversation persistence.
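The first step of that migration, reading a local JSONL history back into exchanges, might look like the sketch below. The `role`/`content` record schema and the eventual Honcho upload call are assumptions, not nanobot's actual code:

```python
import json


def load_local_session(jsonl_text):
    """Parse a local JSONL session into (role, content) pairs.

    Each non-empty line is assumed to be one JSON object with "role"
    and "content" keys; the real migration would then push these
    messages to Honcho.
    """
    messages = []
    for line in jsonl_text.splitlines():
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        messages.append((record["role"], record["content"]))
    return messages
```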
This fork tracks HKUDS/nanobot and merges upstream regularly. The Honcho integration is maintained as an additive layer: all upstream features (MCP, providers, channels, skills, cron, heartbeat) work as documented.
To contribute Honcho-specific changes, open a PR against this repo. For general nanobot improvements, consider contributing upstream.
MIT, same as upstream.