inforesearch

Autonomous information research agent — like autoresearch, but for knowledge instead of ML experiments.

Write a research brief, run the agent, get a verified report.

How it works

brief.md  →  Researcher (web search)  →  Verifier (cross-check)  →  Summarizer  →  report.md
                    ↑                          |
                    └── refine low-confidence ──┘

Three agent roles, same LLM with different system prompts:

  1. Researcher — uses web_search and web_fetch tools to gather claims with sources
  2. Verifier — cross-checks each claim independently, assigns confidence scores, discards unreliable ones
  3. Summarizer — synthesizes verified claims into a report matching your preferred format
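The "same LLM, different system prompts" idea can be sketched in a few lines of Rust. This is a hypothetical illustration, not the actual source: the role names match the list above, but the prompt strings and the `Role` struct are invented for the example.

```rust
// Hypothetical sketch: all three roles hit the same model endpoint and
// differ only in their system prompt (prompt text is illustrative).
pub struct Role {
    pub name: &'static str,
    pub system_prompt: &'static str,
}

pub const ROLES: [Role; 3] = [
    Role {
        name: "researcher",
        system_prompt: "Use web_search and web_fetch to gather claims, each with a source URL.",
    },
    Role {
        name: "verifier",
        system_prompt: "Cross-check each claim independently and assign a confidence score in [0, 1].",
    },
    Role {
        name: "summarizer",
        system_prompt: "Synthesize the verified claims into a report in the requested format.",
    },
];

fn main() {
    // Each pipeline stage would send its role's system prompt plus the
    // running context to the one configured chat endpoint.
    for role in &ROLES {
        println!("{}: {}", role.name, role.system_prompt);
    }
}
```

Because the roles are just prompt variants, swapping in a different model per role (see Configuration below) only changes which endpoint each stage calls.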

Requirements

  • Ollama running locally
  • An Ollama API key (free tier available) for web search

Quick start

# Set your Ollama API key
export OLLAMA_API_KEY="your-key"

# Build
cargo build --release

# Run with the example brief
cargo run -- --brief brief.md

Reports are saved to reports/ with timestamps.

Usage

inforesearch [OPTIONS]

Options:
  -b, --brief <FILE>        Path to research brief [default: brief.md]
  -c, --config <FILE>       Path to config file [default: config.toml]
  -o, --output <DIR>        Output directory for reports [default: reports]
      --threshold <FLOAT>   Confidence threshold to accept claims [default: 0.6]
  -h, --help                Print help
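The `--threshold` option gates which claims survive verification: anything the verifier scores below the cutoff is dropped before summarization. A minimal sketch of that filter, assuming a `Claim` shape that the real code may differ from:

```rust
// Hypothetical sketch of the --threshold filter. The Claim struct is
// invented for illustration; only the 0.6 default comes from the docs.
#[derive(Debug)]
pub struct Claim {
    pub text: String,
    pub confidence: f64, // assigned by the verifier, in 0.0..=1.0
}

// Keep only claims at or above the confidence threshold.
pub fn filter_claims(claims: Vec<Claim>, threshold: f64) -> Vec<Claim> {
    claims
        .into_iter()
        .filter(|c| c.confidence >= threshold)
        .collect()
}

fn main() {
    let claims = vec![
        Claim { text: "well-sourced claim".into(), confidence: 0.9 },
        Claim { text: "shaky claim".into(), confidence: 0.4 },
    ];
    let kept = filter_claims(claims, 0.6); // the documented default
    println!("{} claim(s) accepted", kept.len());
}
```

Raising the threshold trades report coverage for reliability; low-confidence claims can also be routed back to the Researcher for refinement, as in the diagram above.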

Writing a brief

Create a markdown file with these sections:

# Research Brief

## Topic
The subject you want researched

## Angle
Optional perspective or focus area

## Output Format
- Length: brief | detailed | comprehensive
- Tone: technical | casual | executive
- Language: English | 繁體中文 | etc.

## Constraints
Optional requirements (e.g. recency, source count)

See brief.md for a full example.

Configuration

config.toml:

[ollama]
chat_url = "http://localhost:11434/v1"   # local LLM inference
search_url = "https://ollama.com/api"    # cloud web search
# api_key read from OLLAMA_API_KEY env var

[models]
researcher = "gemma4:31b-cloud"
verifier = "gemma4:31b-cloud"
summarizer = "gemma4:31b-cloud"

[research]
max_tool_calls = 10

You can use different models per role — e.g. a smaller model for research, a stronger one for verification.
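Per-role model selection amounts to a lookup with a shared fallback. A sketch under assumed names (the real config loader is not shown here):

```rust
use std::collections::HashMap;

// Hypothetical sketch: resolve a role's model from the [models] table,
// falling back to a shared default when the role is not listed.
pub fn model_for<'a>(
    models: &HashMap<&str, &'a str>,
    role: &str,
    default: &'a str,
) -> &'a str {
    models.get(role).copied().unwrap_or(default)
}

fn main() {
    let mut models = HashMap::new();
    models.insert("researcher", "small-model");
    models.insert("verifier", "strong-model");
    // "summarizer" is unset, so it falls back to the default.
    let m = model_for(&models, "summarizer", "gemma4:31b-cloud");
    println!("summarizer uses {}", m);
}
```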

Scheduling

The CLI is designed to be called from cron or launchd:

# Run nightly at 2am
0 2 * * * OLLAMA_API_KEY=xxx /path/to/inforesearch --brief briefs/topic.md --output reports/

License

MIT
