Autonomous information research agent — like autoresearch, but for knowledge instead of ML experiments.
Write a research brief, run the agent, get a verified report.
```
brief.md → Researcher (web search) → Verifier (cross-check) → Summarizer → report.md
               ↑                            |
               └── refine low-confidence ───┘
```
Three agent roles, all backed by the same LLM with different system prompts:

- Researcher — uses the `web_search` and `web_fetch` tools to gather claims with sources
- Verifier — cross-checks each claim independently, assigns confidence scores, and discards unreliable ones
- Summarizer — synthesizes the verified claims into a report matching your preferred format
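The verify-and-refine loop can be sketched in a few lines of Rust. This is an illustrative sketch, not the crate's actual API: `Claim`, `verify`, and `THRESHOLD` are hypothetical names; the real tool drives this loop with LLM calls.

```rust
// Hypothetical sketch of the refine loop; names are illustrative.
#[derive(Debug)]
struct Claim {
    text: String,
    confidence: f64,
}

// Matches the --threshold default of 0.6.
const THRESHOLD: f64 = 0.6;

/// Split claims into (accepted, needs-refinement) by confidence.
fn verify(claims: Vec<Claim>) -> (Vec<Claim>, Vec<Claim>) {
    claims.into_iter().partition(|c| c.confidence >= THRESHOLD)
}

fn main() {
    let claims = vec![
        Claim { text: "well-sourced claim".into(), confidence: 0.9 },
        Claim { text: "shaky claim".into(), confidence: 0.3 },
    ];
    let (accepted, refine) = verify(claims);
    // Low-confidence claims go back to the Researcher for another pass.
    println!("accepted: {}, refine: {}", accepted.len(), refine.len());
}
```

In the actual pipeline, the second element of the partition is what flows back along the "refine low-confidence" arrow in the diagram above.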
To run inforesearch you need:

- Ollama running locally
- An Ollama API key (free tier available) for web search
```sh
# Set your Ollama API key
export OLLAMA_API_KEY="your-key"

# Build
cargo build --release

# Run with the example brief
cargo run -- --brief brief.md
```

Reports are saved to reports/ with timestamps.
```
inforesearch [OPTIONS]

Options:
  -b, --brief <FILE>       Path to research brief [default: brief.md]
  -c, --config <FILE>      Path to config file [default: config.toml]
  -o, --output <DIR>       Output directory for reports [default: reports]
      --threshold <FLOAT>  Confidence threshold to accept claims [default: 0.6]
  -h, --help               Print help
```
Create a markdown file with these sections:
```md
# Research Brief

## Topic
The subject you want researched

## Angle
Optional perspective or focus area

## Output Format
- Length: brief | detailed | comprehensive
- Tone: technical | casual | executive
- Language: English | 繁體中文 | etc.

## Constraints
Optional requirements (e.g. recency, source count)
```

See brief.md for a full example.
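As a concrete illustration, a filled-in brief might look like the following. The topic and constraints here are made up for the example; they are not shipped with the repo.

```md
# Research Brief

## Topic
Adoption of server-side WebAssembly runtimes

## Output Format
- Length: brief
- Tone: technical
- Language: English

## Constraints
- Prefer sources from the last 12 months
- Cite at least 5 independent sources
```

The Angle section is optional and can be omitted entirely, as it is here.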
config.toml:

```toml
[ollama]
chat_url = "http://localhost:11434/v1"  # local LLM inference
search_url = "https://ollama.com/api"   # cloud web search
# api_key read from OLLAMA_API_KEY env var

[models]
researcher = "gemma4:31b-cloud"
verifier = "gemma4:31b-cloud"
summarizer = "gemma4:31b-cloud"

[research]
max_tool_calls = 10
```

You can use different models per role — e.g. a smaller model for research, a stronger one for verification.
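For example, a per-role split might look like the fragment below. The smaller model tag shown here is illustrative; use whatever model names your Ollama setup actually provides.

```toml
[models]
researcher = "gemma4:9b-cloud"   # cheaper model for broad gathering (illustrative tag)
verifier = "gemma4:31b-cloud"    # stronger model for cross-checking
summarizer = "gemma4:31b-cloud"
```

Verification benefits most from the stronger model, since a bad confidence score lets unreliable claims into the report.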
The CLI is designed to be called from cron or launchd:

```sh
# Run nightly at 2am
0 2 * * * OLLAMA_API_KEY=xxx /path/to/inforesearch --brief briefs/topic.md --output reports/
```

MIT