fagent is a graph-aware, memory-first AI agent runtime for long-running work: layered memory, shadow-context compression, subagents, workflow execution, provider/model-role routing, and a local Graph UI in one inspectable Python stack.
- Why this exists
- What makes fagent different
- Architecture at a glance
- Quick start
- Key CLI commands
- Providers and model roles
- Documentation hub
- Built on nanobot
- Development
Many agent stacks can answer well in a single chat window but lose coherence when work stretches across multiple sessions, tools, and interruptions.
fagent is built around a different target:
- memory should be layered, inspectable, and query-aware
- the main model should receive a compact working brief instead of raw memory clutter
- relationships between tasks, entities, blockers, and decisions should survive between turns
- tool-heavy workflows should not burn main-model tokens on every small repair or orientation step
That is why fagent combines file memory, vector retrieval, graph memory, workflow state, task graph state, experience memory, shadow briefs, and routing logic inside the runtime itself.
fagent does not treat memory as only a transcript. It combines:
- file memory for transparent on-disk artifacts
- vector memory for semantic recall
- graph memory for entities and relationships
- workflow state for in-progress execution snapshots
- task graph state for goals, blockers, and decisions
- experience memory for repeated recoveries
- shadow context for compression before the main model runs
- query-aware routing so retrieval changes by intent
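The query-aware routing idea can be sketched as a small intent classifier that decides which stores to consult. The store names and keyword rules below are purely illustrative, not fagent's actual API:

```python
# Hypothetical sketch of query-aware routing: store names and intent
# rules are made up for illustration, not fagent's real interface.

def route_query(query: str) -> list[str]:
    """Pick which memory stores to consult for a query, by intent."""
    q = query.lower()
    stores = ["vector"]  # semantic recall is the default baseline
    if any(w in q for w in ("depends", "blocker", "related", "caused")):
        stores.append("graph")          # relationship-shaped questions
    if any(w in q for w in ("task", "goal", "decision")):
        stores.append("task_graph")     # task/decision state
    if any(w in q for w in ("workflow", "step", "resume")):
        stores.append("workflow_state") # in-progress execution snapshots
    return stores
```

A real router would likely use the model itself or embeddings rather than keywords, but the shape is the same: retrieval changes with intent instead of always hitting one store.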
Vector search helps with semantic similarity. It is weaker at questions like:
- what depends on what
- which decision caused this blocker
- which workflow node relates to this fact
- which task state should remain grouped
fagent treats those as graph problems, not just text-search problems. The runtime already stores task and relationship structure, so the agent preserves more than a linear checklist. That graph-shaped continuity lays the groundwork for richer graph planning and, eventually, automatic relevance links; to be clear, full automation of those links does not exist yet.
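A question like "which decision caused this blocker" is a traversal over typed edges, not a similarity search. A minimal sketch, with node ids and edge types invented for the example:

```python
# Illustrative only: a tiny task graph with typed edges, showing why
# "what is connected to this task" is a traversal problem.
from collections import deque

EDGES = {
    "task:ship-v2": [("blocked_by", "blocker:flaky-ci")],
    "blocker:flaky-ci": [("caused_by", "decision:skip-retries")],
    "decision:skip-retries": [],
}

def connected(node: str, kind: str) -> list[str]:
    """BFS over typed edges, collecting nodes whose id starts with `kind`."""
    seen, out, queue = {node}, [], deque([node])
    while queue:
        current = queue.popleft()
        for _edge, target in EDGES.get(current, []):
            if target not in seen:
                seen.add(target)
                if target.startswith(kind):
                    out.append(target)
                queue.append(target)
    return out
```

Vector search over the same three facts could surface similar text, but it cannot answer "reachable via which edges" without the graph.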
Before the main model reasons, fagent can retrieve memory from multiple stores and compress it into a shadow brief.
That means the main model spends fewer tokens on:
- searching memory
- reconstructing task state from noisy history
- re-discovering known facts
- initial low-value “figure out what is going on” steps
Instead, it starts with directed context: summary, relevant facts, open questions, contradictions, citations, and confidence.
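The brief fields listed above (summary, facts, open questions, contradictions, citations, confidence) can be pictured as a small structure built from raw retrieval hits. The class and builder below are an illustrative sketch, not fagent's real interface:

```python
# Sketch of a shadow-brief shape: the names and the score threshold
# are assumptions made for this example.
from dataclasses import dataclass, field

@dataclass
class ShadowBrief:
    summary: str
    facts: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)
    contradictions: list[str] = field(default_factory=list)
    citations: list[str] = field(default_factory=list)
    confidence: float = 0.0

def build_brief(retrieved: list[dict]) -> ShadowBrief:
    """Compress raw retrieval hits into a compact working brief."""
    facts = [r["text"] for r in retrieved if r.get("score", 0) >= 0.5]
    cites = [r["source"] for r in retrieved if r.get("score", 0) >= 0.5]
    conf = min(1.0, sum(r.get("score", 0) for r in retrieved) / max(len(retrieved), 1))
    return ShadowBrief(
        summary=f"{len(facts)} relevant facts selected from {len(retrieved)} hits",
        facts=facts,
        citations=cites,
        confidence=round(conf, 2),
    )
```

The point of the shape is that the main model receives one compact, typed object instead of a pile of raw memory snippets.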
fagent supports:
- subagents for bounded background tasks
- `run_workflow` for ordered tool chains
- `workflowLight` as a separate lighter model role for repair and recovery
This lets the main model stay focused on high-value reasoning while smaller operational fixes happen in a constrained execution lane.
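The "constrained execution lane" can be sketched as an ordered chain where a failed step goes to a cheaper repair handler instead of escalating to the main model. Everything below is hypothetical structure, not fagent's code:

```python
# Illustrative sketch: ordered steps with a light repair lane.
# Function signatures here are assumptions for the example.
from typing import Callable

def run_workflow(
    steps: list[Callable[[], str]],
    repair: Callable[[int, Exception], str],
) -> list[str]:
    """Run steps in order; failed steps go to the light repair lane."""
    results = []
    for i, step in enumerate(steps):
        try:
            results.append(step())
        except Exception as err:
            # workflowLight-style lane: a small fix happens here,
            # without spending main-model tokens on the repair.
            results.append(repair(i, err))
    return results
```

The main model only sees the finished results list, not every intermediate recovery.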
fagent ships with a local Graph UI server/editor. You can open graph memory in a browser, inspect nodes and edges, and debug why relationship-oriented recall happened.
That turns graph memory from a black box into something you can inspect and tune.
```
User / CLI / Chat channel
    |
    v
Main Agent Loop
    |
    +--> Tool Registry
    |      +--> shell / web / files / MCP / workflow / moa / spawn
    |
    +--> Memory Orchestrator
    |      +--> file memory
    |      +--> vector memory
    |      +--> graph memory
    |      +--> workflow state
    |      +--> task graph
    |      +--> experience patterns
    |      +--> shadow brief builder
    |      +--> query-aware router
    |
    +--> Subagent Manager
    |
    +--> Provider / Model Role Resolver
           +--> main / shadow / workflowLight / graphExtract / graphNormalize / embeddings / autoSummarize
```
Core implementation modules:
- `fagent/memory/orchestrator.py`
- `fagent/memory/router.py`
- `fagent/memory/shadow.py`
- `fagent/memory/graph_ui.py`
- `fagent/agent/subagent.py`
- `fagent/agent/tools/workflow.py`
Clone and install:
```
git clone https://github.com/fresed05/fagent.git
cd fagent
pip install -e .
```

Or install from PyPI:

```
pip install fagent-ai
```

```
uv tool install fagent-ai
```

Then run onboarding:

```
fagent onboard
```

`fagent onboard` writes the full default config tree to `~/.fagent/config.json` and creates the default workspace under `~/.fagent/workspace`.
```json
{
  "providers": {
    "openrouter_main": {
      "providerKind": "openrouter",
      "apiKey": "sk-or-v1-xxx"
    }
  },
  "models": {
    "profiles": {
      "opus_main": {
        "provider": "openrouter_main",
        "model": "anthropic/claude-opus-4-5"
      }
    },
    "roles": {
      "main": "opus_main"
    }
  }
}
```

Run the agent:

```
fagent agent
fagent agent -m "Summarize this repository"
fagent status
```

Inspect memory:

```
fagent memory doctor
fagent memory query-v2 "what blockers are connected to the current task"
fagent memory inspect-task-graph cli:direct
fagent memory inspect-experience
fagent memory rebuild-graph
fagent memory graph-ui --open
fagent memory graph-ui --query "workflowLight"
```

Key CLI commands:

- `fagent onboard`
- `fagent agent`
- `fagent gateway`
- `fagent status`
- `fagent auth login --provider ...`
- `fagent channels login`
- `fagent memory doctor`
- `fagent memory query-v2`
- `fagent memory inspect-task-graph`
- `fagent memory inspect-experience`
- `fagent memory rebuild-graph`
- `fagent memory graph-ui`
Provider instances describe where traffic goes. Model roles describe which model handles which kind of work.
Important built-in roles:
- `main`
- `shadow`
- `workflowLight`
- `graphExtract`
- `graphNormalize`
- `embeddings`
- `autoSummarize`
This allows setups such as:
- strong reasoning model for `main`
- cheaper repair model for `workflowLight`
- separate embedding endpoint for `embeddings`
- separate graph extraction profile for `graphExtract`
- different summarization model for `autoSummarize`
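A role mapping of that kind could look like the fragment below. The profile names and model ids are placeholders chosen for illustration, not shipped defaults:

```json
{
  "models": {
    "profiles": {
      "opus_main":  { "provider": "openrouter_main", "model": "anthropic/claude-opus-4-5" },
      "light_fix":  { "provider": "openrouter_main", "model": "some-cheaper-model" },
      "embed_only": { "provider": "openrouter_main", "model": "some-embedding-model" }
    },
    "roles": {
      "main": "opus_main",
      "workflowLight": "light_fix",
      "embeddings": "embed_only"
    }
  }
}
```

Any role not mapped explicitly would presumably fall back to defaults; check the configuration reference for the exact resolution rules.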
See `docs/providers-and-model-roles.md` and `CONFIGURATION.md`.
Start here for deeper guides:
- Docs index
- Memory architecture
- Subagents and workflows
- Graph memory and Graph UI
- Providers and model roles
- CLI and observability
- Why fagent
- Configuration reference
This project is based on nanobot, which established the lightweight agent packaging and multi-channel runtime foundation used here.
fagent extends that base with:
- memory-first positioning
- graph-aware memory and local graph inspection
- subagents and workflow orchestration
- provider/model-role separation
- workspace bootstrap and runtime ergonomics around `~/.fagent`
If you want the upstream base project, see HKUDS/nanobot. If you want the forked runtime described in this documentation set, stay here.
```
pip install -e ".[dev]"
pytest
python -m fagent --version
fagent --version
fagent onboard
fagent memory doctor
```

Defaults:

- default workspace path: `~/.fagent/workspace`
- default config path: `~/.fagent/config.json`
- default gateway port: `18790`
- WhatsApp uses a local Node.js bridge under `~/.fagent/bridge`