funcountry/fleki

Fleki Knowledge

Fleki Knowledge is a local-first knowledge graph runtime for agent workflows.

It gives you a knowledge CLI plus packaged skill installs for Codex, Hermes, and OpenClaw, so the same graph can be searched, traced, saved, and rebuilt across multiple agent runtimes.

Why use it

  • Search deterministic, exact-match candidate pages instead of dumping raw files into a folder.
  • Trace exact refs back to source records, provenance notes, and PDF render artifacts.
  • Work with local markdown, images, and PDFs.
  • Keep the graph inspectable on disk under a normal directory tree.
  • Use one install flow to provision the CLI and the runtime-specific skill bundles.

License

Fleki is released under the MIT License.

What it does

The knowledge CLI supports five core workflows:

Command             Purpose
knowledge status    Inspect the active graph root and rebuild backlog.
knowledge search    List deterministic candidate pages and trace_ref handoffs.
knowledge trace     Follow an exact page or section ref back to provenance and source records.
knowledge save      Commit a semantic ingestion decision with provenance.
knowledge rebuild   Apply page moves, lifecycle updates, and cleanup after larger changes.

Install

Install from this checkout with:

./install.sh

Requirements:

  • uv
  • npx if Codex is installed on the machine
  • Node >=22 and npm >=10.9.2 if you want the optional review wiki

Useful install variants:

./install.sh --dry-run
./install.sh --surface codex
./install.sh --surface hermes --surface openclaw

What ./install.sh does:

  • regenerates the bundled runtime under skills/knowledge/runtime/**
  • installs the knowledge CLI with PDF support through docling
  • installs or refreshes the Codex skill when Codex is present
  • copies the skill into each detected Hermes home
  • copies the skill into each detected OpenClaw root
  • writes or refreshes the install manifest under $HOME/.fleki

Install the optional local review wiki on this machine:

./install.sh --review-wiki

That installs the normal Fleki runtime pieces, then installs the Quartz-based review wiki as a per-user service on macOS or Linux. Run the same command again later to refresh the installed review wiki from this checkout. The site is served locally at:

http://127.0.0.1:4151

The review wiki renders the knowledge first, then renders preserved artifacts inline under each topic or provenance page when possible, and keeps provenance links below that. It exports only derived review content under ~/.fleki/state/review-wiki. It does not point Quartz at the raw live graph directory.

Remove the review wiki later with:

./install.sh --remove-review-wiki

Quick smoke after install:

knowledge status --json --no-receipt

Look for these fields in the output:

  • resolved_data_root
  • install_manifest_path
  • recent_topics
  • ingests_with_confidence_caveats
  • pdf_render_contract_gap_count
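As a sketch, the smoke check can be automated by asserting those fields are present. The field names come straight from the list above; the sample payload is fabricated for illustration, not real CLI output.

```python
# Sketch only: REQUIRED_FIELDS matches the documented smoke-check fields.
REQUIRED_FIELDS = {
    "resolved_data_root",
    "install_manifest_path",
    "recent_topics",
    "ingests_with_confidence_caveats",
    "pdf_render_contract_gap_count",
}

def missing_status_fields(status: dict) -> set:
    """Return the documented fields absent from a parsed status payload."""
    return REQUIRED_FIELDS - status.keys()

sample = {name: None for name in REQUIRED_FIELDS}  # stand-in for parsed JSON
print(missing_status_fields(sample))               # prints: set()
```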

Review wiki and Quartz

The review wiki is a local read-only view of the live knowledge graph. It does not become a second source of truth, and Quartz does not read the raw graph directory directly.

Quick start for the Quartz review site:

  1. Install Fleki normally with ./install.sh.
  2. Install the review wiki with ./install.sh --review-wiki.
  3. Open http://127.0.0.1:4151.

If the service is already installed and you pulled new review-wiki code, run ./install.sh --review-wiki again from this checkout to refresh the installed Quartz workspace and service files.

What the installer sets up:

  • installs the normal Fleki CLI and runtime bundles
  • copies the repo-owned Quartz overlay from templates/review-wiki/** into ~/.fleki/state/review-wiki/quartz/
  • runs npm install inside that derived Quartz workspace
  • copies the installed Quartz runtime scaffold from node_modules/@jackyzha0/quartz/quartz/ into the same derived workspace
  • installs one per-user background service that runs python -m knowledge_graph.review_wiki.daemon

Current Quartz requirement and pin:

  • Quartz 4.5.2
  • Node >=22
  • npm >=10.9.2

Current review-wiki service names:

  • macOS: ~/Library/LaunchAgents/dev.fleki.review-wiki.plist
  • Linux: ~/.config/systemd/user/fleki-review-wiki.service

Current derived state tree:

  • ~/.fleki/state/review-wiki/quartz/content/
  • ~/.fleki/state/review-wiki/quartz/public/
  • ~/.fleki/state/review-wiki/export-digest.json
  • ~/.fleki/state/review-wiki/build.log

How it works:

  • Fleki exports topics/**, topics/indexes/**, and provenance/** into the derived Quartz content/ tree
  • topic pages and provenance pages keep the knowledge first, then render artifact evidence inline below it, then keep provenance links below that
  • preserved artifacts render inline when that makes sense, and pointer-backed sources stay as metadata plus links
  • Fleki keeps artifact detail pages as fallback targets and exports only the specific copied files those pages embed or link
  • Fleki computes an export digest and skips rebuilds when the exported content did not change
  • the daemon polls every 5 seconds
  • when the digest changes, Fleki runs a Quartz build into a staging public tree and atomically replaces the live public/ tree
  • Fleki serves the built static files itself on 127.0.0.1:4151
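The digest-and-swap steps above can be sketched as follows. Only the export-digest and public/ names come from this README; the function names, digest scheme, and build callback are illustrative, not the daemon's actual internals.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def content_digest(content_dir: Path) -> str:
    """Hash every exported file so unchanged content can skip a rebuild."""
    h = hashlib.sha256()
    for f in sorted(content_dir.rglob("*")):
        if f.is_file():
            h.update(str(f.relative_to(content_dir)).encode())
            h.update(f.read_bytes())
    return h.hexdigest()

def rebuild_if_changed(state: Path, build) -> bool:
    """Build into a staging tree, then swap it in; skip when nothing changed."""
    digest_file = state / "export-digest.json"  # sketch stores a bare hex digest
    new = content_digest(state / "content")
    if digest_file.exists() and digest_file.read_text() == new:
        return False                                # content unchanged: skip
    staging = Path(tempfile.mkdtemp(dir=state))     # build beside the live tree
    build(staging)
    live = state / "public"
    if live.exists():
        shutil.rmtree(live)                         # drop the old tree...
    staging.replace(live)                           # ...and rename staging in
    digest_file.write_text(new)
    return True
```

Building into staging first means a crashed build never leaves a half-written public/ tree behind.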

What it does not do:

  • it does not point Quartz at $HOME/.fleki/knowledge
  • it does not publish the whole sources/** tree, receipts/**, or .record.json
  • it exports only the copied artifacts and PDF render files that the generated review pages actually reference
  • it does not use quartz build --serve as the installed long-running service
  • it does not use a separate review-wiki config file in v1

Useful machine-level checks:

  • curl -I http://127.0.0.1:4151
  • macOS: launchctl print gui/$(id -u)/dev.fleki.review-wiki
  • Linux: systemctl --user status fleki-review-wiki

If you want to expose the review wiki through another local reverse proxy or a tailnet-only Tailscale route, point that layer at:

http://127.0.0.1:4151

That machine-specific network setup is outside the repo installer.

Quick start

Check the active graph:

knowledge status --json --no-receipt

Search what is already known:

knowledge search "customer.io" --json --no-receipt

If nothing matches, knowledge search returns zero results rather than inventing a nearest answer.

Then trace the returned trace_ref:

knowledge trace <trace_ref> --json --no-receipt

Public trace inputs are exact refs only:

  • knowledge_id
  • knowledge_id#section_id
  • current_path
  • page alias
  • current_path#section_alias

Section aliases use deterministic normalization only. current_understanding, current-understanding, and Current Understanding resolve to the same stored alias. knowledge_id#section_id remains the stable machine ref.
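A minimal sketch of that normalization, assuming the rule is just lowercasing plus collapsing separators (the CLI's real rule may differ):

```python
import re

def normalize_alias(raw: str) -> str:
    # Lowercase, then collapse runs of spaces, underscores, and hyphens
    # into a single hyphen. Assumed rule, reproducing the documented examples.
    return re.sub(r"[\s_-]+", "-", raw.strip().lower())

# All three documented spellings resolve to the same stored alias:
for form in ("current_understanding", "current-understanding", "Current Understanding"):
    assert normalize_alias(form) == "current-understanding"
```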

Page-level knowledge trace <page> does not guess a best section. It returns aggregate page lineage plus a supported_sections summary so the agent can inspect the exact section refs.

Commit a save from local files:

knowledge save --bindings bindings.json --decision decision.json --json

Apply a rebuild plan:

knowledge rebuild --plan rebuild.json --json

Save workflow

knowledge save is the semantic write step. It applies immediately. There is no preview, validate-only, or dry-run save path.

For a minimal valid save payload, start from the checked-in example templates.

A few usage-critical rules:

  • Use the fact kind for plain observations unless another kind adds stronger semantic meaning.
  • ingest_summary.authority_tier and knowledge_units[].authority_posture are different enums. Do not swap them.
  • Each binding must declare source_family. Do not make the app infer it from source_kind or file suffixes.
  • Each binding must declare timestamp as ISO 8601 source-observed time.
  • If source-observed time is unknown, stop and say that plainly instead of inventing one.
  • knowledge rebuild owns stale and delete.

For the full save contract, see skills/knowledge/references/save-ingestion.md.
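As a hypothetical illustration of those rules (not the real schema; skills/knowledge/references/save-ingestion.md holds the actual contract), a binding entry must at minimum carry an explicit source_family and a source-observed ISO 8601 timestamp:

```python
import json
from datetime import datetime, timezone

# Hypothetical shape: only source_family and timestamp are named by this
# README; the values here are placeholders, not the real contract.
binding = {
    "source_family": "markdown",  # declared explicitly, never inferred
    "timestamp": datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc).isoformat(),
}
assert binding["timestamp"].endswith("+00:00")  # ISO 8601 with explicit offset
print(json.dumps(binding, indent=2))
```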

How data is stored

The canonical mutable graph is not stored in this repo checkout. By default it lives under:

$HOME/.fleki/knowledge

Installing or refreshing the CLI does not clear that graph. A fresh install can still attach to an already populated shared root.

The checked-in knowledge/** tree in this repo is reference content and a migration seed. It is not the live mutable graph.

The on-disk layout is simple:

  • topics/ holds semantic pages
  • provenance/ holds source-backed notes
  • sources/ holds copied files or durable pointers
  • assets/ holds derived-only render artifacts or extracted files
  • receipts/ holds command receipts
  • search/ is optional support state and is not required for correctness

topics/indexes/** and search/ are support artifacts only. knowledge search and knowledge trace must work from the live graph data, not from a hidden index.

Copied PDFs also persist a source-adjacent render bundle:

  • .render.md
  • .render.manifest.json
  • optional .assets/

If an older PDF source record predates that render contract, knowledge trace surfaces the gap and knowledge status reports it through pdf_render_contract_gap_count.

Contributing

See CONTRIBUTING.md for the local setup, verification commands, and runtime sync rules.

Runtime integrations

This project is designed to work across multiple agent runtimes while keeping one shared graph.

  • Codex: installs the skill under ~/.agents/skills/knowledge
  • Hermes: copies the skill into each detected Hermes home
  • OpenClaw: copies the skill into each detected OpenClaw root

The standalone CLI is still useful on its own. The integrations mainly ensure those runtimes can use the same knowledge workflow and shared data root.

Maintenance and troubleshooting

If something looks wrong, start here:

knowledge status --json --no-receipt

Common fixes:

  • Repair an installed bundle from the bundle itself:

    bash skills/knowledge/install/bootstrap.sh
  • Backfill older PDF source records that predate the render-or-omission contract:

    .venv/bin/python scripts/backfill_pdf_render_contract.py --json
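As an illustrative sketch of what a render-contract gap means on disk (the exact bundle naming next to a copied PDF is assumed here, not confirmed by this README):

```python
from pathlib import Path

# Assumed layout: a copied PDF sits next to a .render.md and
# .render.manifest.json bundle; a missing bundle is a contract gap.
def has_render_contract_gap(pdf: Path) -> bool:
    bundle = [
        pdf.parent / (pdf.name + ".render.md"),
        pdf.parent / (pdf.name + ".render.manifest.json"),
    ]
    return not all(p.exists() for p in bundle)
```

A backfill pass would walk the copied PDF sources and repair exactly the records this kind of predicate flags.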

Notes:

  • A non-empty graph after install is expected if $HOME/.fleki/knowledge already existed.
  • ingests_with_reading_limits counts unread or missing content. ingests_with_confidence_caveats counts operator caveats that do not imply unread content.

Repo map

If you are working in this repo directly:

  • src/knowledge_graph/** is the Python implementation
  • skills/knowledge/** is the human-edited skill package
  • skills/knowledge/runtime/** is the generated bundled runtime
  • knowledge/README.md describes the checked-in reference tree, not the live graph root
