Fleki Knowledge is a local-first knowledge graph runtime for agent workflows.
It gives you a knowledge CLI plus packaged skill installs for Codex, Hermes, and OpenClaw, so the same graph can be searched, traced, saved, and rebuilt across multiple agent runtimes.
- Search for exact or literal candidate pages instead of dumping raw files into a folder.
- Trace exact refs back to source records, provenance notes, and PDF render artifacts.
- Work with local markdown, images, and PDFs.
- Keep the graph inspectable on disk under a normal directory tree.
- Use one install flow to provision the CLI and the runtime-specific skill bundles.
Fleki is released under the MIT License.
The knowledge CLI supports five core workflows:
| Command | Purpose |
|---|---|
| `knowledge status` | Inspect the active graph root and rebuild backlog. |
| `knowledge search` | List deterministic candidate pages and `trace_ref` handoffs. |
| `knowledge trace` | Follow an exact page or section ref back to provenance and source records. |
| `knowledge save` | Commit a semantic ingestion decision with provenance. |
| `knowledge rebuild` | Apply page moves, lifecycle updates, and cleanup after larger changes. |
Install from this checkout with:
```
./install.sh
```

Requirements:

- `uv`
- `npx` if Codex is installed on the machine
- Node `>=22` and npm `>=10.9.2` if you want the optional review wiki
Useful install variants:
```
./install.sh --dry-run
./install.sh --surface codex
./install.sh --surface hermes --surface openclaw
```

What `./install.sh` does:
- regenerates the bundled runtime under `skills/knowledge/runtime/**`
- installs the `knowledge` CLI with PDF support through `docling`
- installs or refreshes the Codex skill when Codex is present
- copies the skill into each detected Hermes home
- copies the skill into each detected OpenClaw root
- writes or refreshes the install manifest under `$HOME/.fleki`
Install the optional local review wiki on this machine:
```
./install.sh --review-wiki
```

That installs the normal Fleki runtime pieces, then installs the Quartz-based review wiki as a per-user service on macOS or Linux. Run the same command again later to refresh the installed review wiki from this checkout. The site serves locally at:
http://127.0.0.1:4151
The review wiki renders the knowledge first, then renders preserved artifacts
inline under each topic or provenance page when possible, and keeps provenance
links below that. It exports only derived review content under
`~/.fleki/state/review-wiki`. It does not point Quartz at the raw live graph
directory.
Remove the review wiki later with:
```
./install.sh --remove-review-wiki
```

Quick smoke test after install:

```
knowledge status --json --no-receipt
```

Look for these fields in the output:

- `resolved_data_root`
- `install_manifest_path`
- `recent_topics`
- `ingests_with_confidence_caveats`
- `pdf_render_contract_gap_count`
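As a sketch, the smoke check can be automated by parsing the status JSON. The payload below is a hypothetical example of the shape, not real output; on a real machine you would feed in the stdout of `knowledge status --json --no-receipt`:

```python
import json

# Hypothetical example payload; real values come from
# `knowledge status --json --no-receipt` on your machine.
sample = json.loads("""
{
  "resolved_data_root": "/home/me/.fleki/knowledge",
  "install_manifest_path": "/home/me/.fleki/install-manifest.json",
  "recent_topics": ["customer.io"],
  "ingests_with_confidence_caveats": 0,
  "pdf_render_contract_gap_count": 0
}
""")

REQUIRED = [
    "resolved_data_root",
    "install_manifest_path",
    "recent_topics",
    "ingests_with_confidence_caveats",
    "pdf_render_contract_gap_count",
]

# The smoke test passes when every expected field is present.
missing = [key for key in REQUIRED if key not in sample]
assert not missing, f"smoke test failed, missing fields: {missing}"
```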
The review wiki is a local read-only view of the live knowledge graph. It does not become a second source of truth, and Quartz does not read the raw graph directory directly.
Quick start for the Quartz review site:
- Install Fleki normally with `./install.sh`.
- Install the review wiki with `./install.sh --review-wiki`.
- Open `http://127.0.0.1:4151`.
If the service is already installed and you pulled new review-wiki code, run
./install.sh --review-wiki again from this checkout to refresh the installed
Quartz workspace and service files.
What the installer sets up:
- installs the normal Fleki CLI and runtime bundles
- copies the repo-owned Quartz overlay from `templates/review-wiki/**` into `~/.fleki/state/review-wiki/quartz/`
- runs `npm install` inside that derived Quartz workspace
- copies the installed Quartz runtime scaffold from `node_modules/@jackyzha0/quartz/quartz/` into the same derived workspace
- installs one per-user background service that runs `python -m knowledge_graph.review_wiki.daemon`
Current Quartz requirement and pin:
- Quartz `4.5.2`
- Node `>=22`
- npm `>=10.9.2`
Current review-wiki service names:
- macOS: `~/Library/LaunchAgents/dev.fleki.review-wiki.plist`
- Linux: `~/.config/systemd/user/fleki-review-wiki.service`
Current derived state tree:
- `~/.fleki/state/review-wiki/quartz/content/`
- `~/.fleki/state/review-wiki/quartz/public/`
- `~/.fleki/state/review-wiki/export-digest.json`
- `~/.fleki/state/review-wiki/build.log`
How it works:
- Fleki exports `topics/**`, `topics/indexes/**`, and `provenance/**` into the derived Quartz `content/` tree
- topic pages and provenance pages keep the knowledge first, then render artifact evidence inline below it, then keep provenance links below that
- preserved artifacts render inline when that makes sense, and pointer-backed sources stay as metadata plus links
- Fleki keeps artifact detail pages as fallback targets and exports only the specific copied files those pages embed or link
- Fleki computes an export digest and skips rebuilds when the exported content did not change
- the daemon polls every 5 seconds
- when the digest changes, Fleki runs a Quartz build into a staging public tree and atomically replaces the live `public/` tree
- Fleki serves the built static files itself on `127.0.0.1:4151`
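The digest-and-skip step can be pictured with a short sketch. The hashing scheme and file layout here are illustrative assumptions, not Fleki's actual implementation:

```python
import hashlib
import json
import tempfile
from pathlib import Path

def export_digest(content_root: Path) -> str:
    """Hash every exported file path plus body into one stable digest."""
    h = hashlib.sha256()
    for path in sorted(content_root.rglob("*")):
        if path.is_file():
            h.update(str(path.relative_to(content_root)).encode())
            h.update(path.read_bytes())
    return h.hexdigest()

def rebuild_needed(content_root: Path, digest_file: Path) -> bool:
    """Compare a fresh digest against the last stored one; store it if changed."""
    new = export_digest(content_root)
    old = None
    if digest_file.exists():
        old = json.loads(digest_file.read_text()).get("digest")
    if new == old:
        return False  # nothing changed; skip the Quartz build
    digest_file.write_text(json.dumps({"digest": new}))
    return True

# Demo against a throwaway content tree.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp) / "content"
    root.mkdir()
    (root / "topic.md").write_text("# Topic")
    digest_file = Path(tmp) / "export-digest.json"
    assert rebuild_needed(root, digest_file) is True   # first run builds
    assert rebuild_needed(root, digest_file) is False  # unchanged: skip
```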
What it does not do:
- it does not point Quartz at `$HOME/.fleki/knowledge`
- it does not publish the whole `sources/**` tree, `receipts/**`, or `.record.json` files
- it exports only the copied artifacts and PDF render files that the generated review pages actually reference
- it does not use `quartz build --serve` as the installed long-running service
- it does not use a separate review-wiki config file in v1
Useful machine-level checks:
- `curl -I http://127.0.0.1:4151`
- macOS: `launchctl print gui/$(id -u)/dev.fleki.review-wiki`
- Linux: `systemctl --user status fleki-review-wiki`
If you want to expose the review wiki through another local reverse proxy or a tailnet-only Tailscale route, point that layer at:
http://127.0.0.1:4151
That machine-specific network setup is outside the repo installer.
Check the active graph:
```
knowledge status --json --no-receipt
```

Search what is already known:

```
knowledge search "customer.io" --json --no-receipt
```

If nothing matches, `knowledge search` returns zero results. It should not invent a nearest answer.
Then trace the returned trace_ref:
```
knowledge trace <trace_ref> --json --no-receipt
```

Public trace inputs are exact refs only:

- `knowledge_id`
- `knowledge_id#section_id`
- `current_path`
- page alias
- `current_path#section_alias`
Section aliases use deterministic normalization only. `current_understanding`, `current-understanding`, and `Current Understanding` resolve to the same stored alias. `knowledge_id#section_id` remains the stable machine ref.
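That normalization can be pictured with a few lines. The exact rules Fleki applies are not spelled out here; lowercasing and collapsing spaces and hyphens into underscores is an assumption that matches the three example aliases:

```python
import re

def normalize_section_alias(raw: str) -> str:
    """Assumed sketch: lowercase, collapse spaces/hyphens into underscores."""
    return re.sub(r"[\s\-]+", "_", raw.strip().lower())

# All three spellings collapse to one stored alias.
aliases = ["current_understanding", "current-understanding", "Current Understanding"]
assert len({normalize_section_alias(a) for a in aliases}) == 1
```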
Page-level `knowledge trace <page>` does not guess a best section. It returns aggregate page lineage plus a `supported_sections` summary so the agent can inspect the exact section refs.
Commit a save from local files:
```
knowledge save --bindings bindings.json --decision decision.json --json
```

Apply a rebuild plan:

```
knowledge rebuild --plan rebuild.json --json
```

`knowledge save` is the semantic write step.
It applies immediately. There is no preview, validate-only, or dry-run save path.
For a minimal valid save payload, start from the checked-in example templates:
- `skills/knowledge/references/examples/minimal-save-bindings.json`
- `skills/knowledge/references/examples/minimal-save-decision.json`
A few usage-critical rules:
- Use `fact` for plain observations unless another kind adds stronger semantic meaning.
- `ingest_summary.authority_tier` and `knowledge_units[].authority_posture` are different enums. Do not swap them.
- Each binding must declare `source_family`. Do not make the app infer it from `source_kind` or file suffixes.
- Each binding must declare `timestamp` as ISO 8601 source-observed time.
- If source-observed time is unknown, stop and say that plainly instead of inventing one.
- `knowledge rebuild` owns `stale` and delete.
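The binding rules above can be checked mechanically before calling `knowledge save`. This is an illustrative pre-flight validator, not part of the CLI, and the binding dict shape shown is an assumption:

```python
from datetime import datetime

def check_binding(binding: dict) -> list[str]:
    """Return human-readable problems for one hypothetical binding dict."""
    problems = []
    if "source_family" not in binding:
        problems.append("missing source_family (must be declared, never inferred)")
    ts = binding.get("timestamp")
    if ts is None:
        problems.append("missing timestamp (ISO 8601 source-observed time)")
    else:
        try:
            datetime.fromisoformat(ts)
        except ValueError:
            problems.append(f"timestamp is not ISO 8601: {ts!r}")
    return problems

good = {"source_family": "web", "timestamp": "2024-05-01T12:00:00+00:00"}
bad = {"timestamp": "yesterday"}  # no source_family, non-ISO timestamp
assert check_binding(good) == []
assert len(check_binding(bad)) == 2
```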
For the full save contract, see `skills/knowledge/references/save-ingestion.md`.
The canonical mutable graph is not stored in this repo checkout. By default it lives under:
`$HOME/.fleki/knowledge`
Installing or refreshing the CLI does not clear that graph. A fresh install can still attach to an already populated shared root.
The checked-in `knowledge/**` tree in this repo is reference content and a migration seed. It is not the live mutable graph.
The on-disk layout is simple:
- `topics/` holds semantic pages
- `provenance/` holds source-backed notes
- `sources/` holds copied files or durable pointers
- `assets/` holds derived-only render artifacts or extracted files
- `receipts/` holds command receipts
- `search/` is optional support state and is not required for correctness
`topics/indexes/**` and `search/` are support artifacts only. `knowledge search` and `knowledge trace` must work from the live graph data, not from a hidden index.
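A quick way to eyeball that layout on disk is a small directory walk. This is an illustrative sketch over an assumed graph root, not a Fleki command; the optional `search/` directory is deliberately excluded from the expected set:

```python
import tempfile
from pathlib import Path

# Top-level directories the layout above treats as core (search/ is optional).
EXPECTED = ["topics", "provenance", "sources", "assets", "receipts"]

def layout_report(root: Path) -> dict[str, bool]:
    """Map each expected top-level directory to whether it exists."""
    return {name: (root / name).is_dir() for name in EXPECTED}

# Demo against a throwaway root; a real check would use $HOME/.fleki/knowledge.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for name in ["topics", "provenance", "sources"]:
        (root / name).mkdir()
    report = layout_report(root)
    assert report["topics"] and not report["receipts"]
```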
Copied PDFs also persist a source-adjacent render bundle:
- `.render.md`
- `.render.manifest.json`
- optional `.assets/`

See CONTRIBUTING.md for the local setup, verification commands, and runtime sync rules.

If an older PDF source record predates that render contract, `knowledge trace` surfaces the gap and `knowledge status` reports it through `pdf_render_contract_gap_count`.
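A rough way to picture the `pdf_render_contract_gap_count` check is a scan for copied PDFs that lack a sibling render bundle. This sketch assumes `doc.pdf` sits next to `doc.render.md`; the real on-disk naming and scan may differ:

```python
import tempfile
from pathlib import Path

def render_contract_gaps(sources_root: Path) -> list[Path]:
    """List copied PDFs missing the assumed .render.md sibling."""
    gaps = []
    for pdf in sorted(sources_root.rglob("*.pdf")):
        render_md = pdf.with_suffix(".render.md")
        if not render_md.exists():
            gaps.append(pdf)
    return gaps

# Demo: one PDF with a render file, one older PDF without.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "old.pdf").write_bytes(b"%PDF-1.4")
    (root / "new.pdf").write_bytes(b"%PDF-1.4")
    (root / "new.render.md").write_text("# Rendered")
    assert render_contract_gaps(root) == [root / "old.pdf"]
```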
This project is designed to work across multiple agent runtimes while keeping one shared graph.
- Codex: installs the skill under `~/.agents/skills/knowledge`
- Hermes: copies the skill into each detected Hermes home
- OpenClaw: copies the skill into each detected OpenClaw root
The standalone CLI is still useful on its own. The integrations mainly ensure those runtimes can use the same knowledge workflow and shared data root.
If something looks wrong, start here:
```
knowledge status --json --no-receipt
```

Common fixes:

- Repair an installed bundle from the bundle itself:

  ```
  bash skills/knowledge/install/bootstrap.sh
  ```

- Backfill older PDF source records that predate the render-or-omission contract:

  ```
  .venv/bin/python scripts/backfill_pdf_render_contract.py --json
  ```
Notes:
- A non-empty graph after install is expected if `$HOME/.fleki/knowledge` already existed.
- `ingests_with_reading_limits` counts unread or missing content.
- `ingests_with_confidence_caveats` counts operator caveats that do not imply unread content.
If you are working in this repo directly:
- `src/knowledge_graph/**` is the Python implementation
- `skills/knowledge/**` is the human-edited skill package
- `skills/knowledge/runtime/**` is the generated bundled runtime
- `knowledge/README.md` describes the checked-in reference tree, not the live graph root