Anchor is an early-stage, model-agnostic memory layer for AI agents.
It helps agents keep factual continuity across interactions by combining:
- a protocol-driven reasoning loop (`REMEMBER`, `CLARIFY`, `DONE`)
- semantic retrieval from stored memory chunks
- ingest-time question generation to improve recall
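Ingest-time question generation can be sketched roughly as follows. This is an illustrative sketch only: `generate_questions` stands in for a model call, and none of these names come from Anchor's actual API.

```python
# Hedged sketch: for each memory chunk, store synthetic questions alongside
# it so question-shaped queries can match at retrieval time.
def generate_questions(chunk: str) -> list[str]:
    # Stand-in for an LLM call; here we just template a single question.
    return [f"What does this note say about {chunk.split()[0].lower()}?"]

def ingest(chunks: list[str]) -> dict[str, str]:
    """Map each generated question back to its source chunk."""
    index: dict[str, str] = {}
    for chunk in chunks:
        for question in generate_questions(chunk):
            index[question] = chunk
    return index

index = ingest(["Paris is the capital of France"])
```

At query time, the retriever can embed and match against these stored questions as well as the raw chunks, which tends to improve recall for conversational queries.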
At a high level:
- Anchor receives a user query.
- It proactively decomposes the query into semantic retrieval queries.
- Retrieved memory is synthesized into compact context.
- The core model responds using protocol markers:
  - `REMEMBER` when more memory lookup is needed
  - `CLARIFY` when user intent is ambiguous
  - `DONE` when the answer is complete
- Anchor loops until completion or a configured remember limit.
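The loop above can be sketched in Python. This is a minimal sketch under assumptions: `model_call`, `retrieve`, and the regex-based marker parsing are stand-ins, not Anchor's actual orchestrator API.

```python
import re

# The three protocol markers the core model can emit (from the docs above).
MARKER_RE = re.compile(r"\b(REMEMBER|CLARIFY|DONE)\b")

def parse_marker(response: str) -> str:
    """Return the first protocol marker in a model response, defaulting to DONE."""
    match = MARKER_RE.search(response)
    return match.group(1) if match else "DONE"

def run_loop(query: str, model_call, retrieve, remember_limit: int = 3):
    """Loop until the model emits DONE/CLARIFY or the remember limit is hit."""
    context = retrieve(query)
    response = ""
    for _ in range(remember_limit):
        response = model_call(query, context)
        marker = parse_marker(response)
        if marker == "REMEMBER":
            context += retrieve(response)  # fetch more memory, then retry
            continue
        return marker, response            # DONE or CLARIFY ends the loop
    return "DONE", response                # limit reached: answer as-is
```

The key design point is that the remember limit bounds the number of model calls, so an agent that keeps asking for more memory still terminates.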
- Python package with a low-level orchestrator API
- Local-first workflows (for example Ollama + Chroma), but provider-agnostic by design
- Official custom override pattern for teams that need to swap core components
- Deterministic tests in CI and model evals run manually
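The custom override pattern mentioned above might look like the following dependency-injection sketch. `Anchor`, `Retriever`, and `KeywordRetriever` are illustrative names only, not the package's real classes; see `examples/custom_anchor.py` for the official pattern.

```python
from typing import Protocol

class Retriever(Protocol):
    """Interface a swapped-in retrieval component would satisfy (assumed)."""
    def retrieve(self, query: str) -> list[str]: ...

class KeywordRetriever:
    """Toy retriever: returns stored chunks sharing a word with the query."""
    def __init__(self, chunks: list[str]):
        self.chunks = chunks

    def retrieve(self, query: str) -> list[str]:
        words = set(query.lower().split())
        return [c for c in self.chunks if words & set(c.lower().split())]

class Anchor:
    """Orchestrator that accepts an injected retriever component."""
    def __init__(self, retriever: Retriever):
        self.retriever = retriever

    def context_for(self, query: str) -> str:
        return "\n".join(self.retriever.retrieve(query))

anchor = Anchor(KeywordRetriever(["the sky is blue", "grass is green"]))
print(anchor.context_for("sky color"))  # → the sky is blue
```

The point of the pattern is that teams swap one component (here, the retriever) without touching the orchestration loop, which is why the components are injected rather than constructed internally.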
Memory write-back and richer ingestion/UI workflows are planned next-stage work.
```shell
# Python environment + project tooling
uv sync --group dev

# Install local git hooks
uv run pre-commit install
```

For local examples and evals, install and run Ollama:
```shell
# Install Ollama (macOS)
brew install ollama

# Start Ollama in a separate terminal
ollama serve

# Pull models used by examples/evals
# examples/chat.py
ollama pull qwen3:1.7b
ollama pull bge-m3

# examples/custom_anchor.py (default model)
ollama pull qwen3:4b-instruct

# tests/evals fixtures
ollama pull qwen3:0.6b
```

For Linux and Windows install steps, use the official Ollama docs: https://docs.ollama.com/
If you only run the deterministic tests (`-m "not eval"`), Ollama is optional.
```shell
# Run the default chat example
uv run python examples/chat.py
```

```shell
# Default integration path
uv run python examples/chat.py

# Official custom override pattern
uv run python examples/custom_anchor.py
```

```shell
# Deterministic tests (CI path)
uv run pytest -m "not eval"

# Live model evals (manual)
uv run pytest -m eval tests/evals
```

```shell
# Show loaded/running models
ollama ps

# Stop a loaded model
ollama stop qwen3:4b-instruct
```

If you started Ollama with `ollama serve`, stop it with Ctrl+C in that terminal.
```shell
# Fork the repository on GitHub, then clone your fork.

# Create a feature branch in your fork before running full hooks
git checkout -b yourname/short-topic

# Run hooks on all files
uv run pre-commit run --all-files

# Optional: docs/text whitespace checks
git diff --check
```

- `no-commit-to-branch` failed: this hook blocks direct commits to `main`. In your fork, create and commit on a feature branch, for example `git checkout -b yourname/short-topic`.
- `end-of-file-fixer` modified files: re-run `uv run pre-commit run --all-files` and commit the hook changes.
- Contributor workflow: CONTRIBUTING.md
- Maintainer operations: docs/maintainers.md
- Security disclosure path: SECURITY.md
- Project direction: docs/vision.md, docs/roadmap.md