An intelligent agent built with nanoagent that curates personalized Hacker News digests based on your interests.
- 15+ LLM Providers - OpenAI, Anthropic, DeepSeek, Groq, Ollama, and many more
- Extremely Affordable - Use DeepSeek at $0.14/1M tokens (400x cheaper than GPT-4!)
- Free Options - OpenRouter free tier or 100% free local Ollama
- Smart Story Fetching - Retrieves top, new, best, or trending stories from Hacker News
- Intelligent Filtering - Filters by score, comments, keywords, and time range
- AI-Powered Analysis - Analyzes stories for relevance and importance
- Customizable Preferences - Define your tech interests and filtering criteria
- Multiple Output Formats - Markdown, JSON, or plain text reports
- Automated Daily Digests - Can be scheduled to run daily
- Privacy-Focused - Option to run 100% locally with Ollama
- Bun >= 1.2.0 (or Node.js >= 18)
- An LLM API key (see LLM Providers below) OR local Ollama installation
# Install dependencies
bun install
# or with npm
npm install
# Set up environment variables
cp .env.example .env
# Edit .env and add your API key

# Run with default (DeepSeek - cheapest)
bun run dev
# Run with specific examples
bun run examples/daily-hn-cheap.ts # Affordable models
bun run examples/daily-hn-ollama.ts # Local/free models
bun run examples/compare-models.ts # Compare different models
# With npm
npm run dev

import { runHNAgent, extractReport } from "./src/index.ts";
const result = await runHNAgent({
model: {
provider: "openai",
model: "gpt-4",
apiKey: process.env.OPENAI_API_KEY,
},
preferences: {
interests: ["AI", "machine learning", "web development"],
min_score: 100,
max_stories: 10,
report_format: "markdown",
},
});
const report = extractReport(result);
console.log(report);

interface HNAgentConfig {
model: any; // LLM model configuration
preferences?: {
interests: string[]; // Topics you're interested in
min_score: number; // Minimum story score (default: 50)
min_comments: number; // Minimum comments (default: 10)
exclude_keywords: string[]; // Keywords to exclude
max_stories: number; // Max stories in report (default: 10)
report_format: "markdown" | "json" | "text";
time_range_hours: number; // Only recent stories (default: 24)
};
storyType?: "top" | "new" | "best" | "ask" | "show";
maxIterations?: number; // Max agent steps (default: 20)
}

import { runHNAgent } from "./src/index.ts";
import { Recommended, DeepSeek, Groq, Ollama, OpenAI } from "./src/config/models.ts";
// Recommended: DeepSeek (best value)
const result = await runHNAgent({ model: Recommended.CHEAPEST });
// Free: OpenRouter or Ollama
const result = await runHNAgent({ model: Recommended.FREE });
// Fastest: Groq
const result = await runHNAgent({ model: Recommended.FASTEST });
// Private: local Ollama
const result = await runHNAgent({ model: Ollama.LLAMA_3_2 });
// Premium: OpenAI/Anthropic
const result = await runHNAgent({ model: OpenAI.GPT4 });

See the LLM Providers section below for the complete list and pricing.
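The HNAgentConfig interface above documents several preference fields that the earlier example leaves at their defaults. The sketch below sets all of them; the raw `provider: "deepseek"` config shape mirrors the Quick Start example and is an assumption here — prefer the presets exported from `src/config/models.ts` where they fit.

```typescript
import { runHNAgent, extractReport } from "./src/index.ts";

// Sketch: a fuller configuration exercising every field from HNAgentConfig.
// The provider/model/apiKey shape mirrors the Quick Start example; the
// "deepseek" provider id is assumed, not confirmed by the nanoagent docs.
const result = await runHNAgent({
  model: {
    provider: "deepseek",
    model: "deepseek-chat",
    apiKey: process.env.DEEPSEEK_API_KEY,
  },
  preferences: {
    interests: ["Rust", "databases", "security"],
    min_score: 75,                          // skip low-traction stories
    min_comments: 20,                       // require an active discussion
    exclude_keywords: ["crypto", "hiring"],
    max_stories: 5,
    report_format: "json",
    time_range_hours: 48,                   // look back two days
  },
  storyType: "best",
  maxIterations: 15,
});

console.log(extractReport(result));
```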
Project structure:

nanoagent/
├── src/
│   ├── agents/
│   │   └── hackernews-agent.ts   # Main agent orchestration
│   ├── tools/
│   │   ├── fetch-hn-stories.ts   # Fetch stories from HN API
│   │   ├── filter-stories.ts     # Filter and rank stories
│   │   ├── analyze-story.ts      # AI story analysis
│   │   └── generate-report.ts    # Report generation
│   ├── utils/
│   │   ├── hn-api.ts             # HN API client
│   │   ├── scoring.ts            # Scoring algorithms
│   │   └── formatting.ts         # Output formatting
│   ├── types/
│   │   └── hackernews.ts         # TypeScript types
│   └── index.ts                  # Main exports
├── examples/
│   └── daily-hn.ts               # Example usage
└── package.json
# Run tests
bun test
# Lint code
bun run lint
# Format code
bun run format
# Build
bun run build

The agent follows this workflow:
- Fetch Stories - Retrieves stories from the HN API using the `fetch_hn_stories` tool
- Filter Stories - Applies user criteria using the `filter_stories` tool
- Analyze Stories - Analyzes top stories using the `analyze_story` tool
- Generate Report - Creates a formatted report using the `generate_report` tool
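For readers curious what the first two steps boil down to, here is a standalone sketch against the official Hacker News API. It is not the source of the `fetch_hn_stories` or `filter_stories` tools — just an illustration of the same idea, using the default thresholds from the preferences section.

```typescript
// Illustration only: roughly what fetching and filtering look like when done
// directly against the official Hacker News API.
interface HNStory {
  id: number;
  title: string;
  score: number;
  descendants?: number; // comment count
  time: number;         // unix seconds
  url?: string;
}

const HN_API = "https://hacker-news.firebaseio.com/v0";

async function fetchTopStories(limit = 30): Promise<HNStory[]> {
  const res = await fetch(`${HN_API}/topstories.json`);
  const ids = (await res.json()) as number[];
  return Promise.all(
    ids.slice(0, limit).map(async (id) => {
      const item = await fetch(`${HN_API}/item/${id}.json`);
      return (await item.json()) as HNStory;
    }),
  );
}

// Apply the same kind of criteria the preferences block describes.
function filterStories(stories: HNStory[], minScore = 50, minComments = 10, hours = 24) {
  const cutoff = Date.now() / 1000 - hours * 3600;
  return stories.filter(
    (s) => s.score >= minScore && (s.descendants ?? 0) >= minComments && s.time >= cutoff,
  );
}

const candidates = filterStories(await fetchTopStories());
console.log(`${candidates.length} stories pass the default filters`);
```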
Example output:

# 📰 Your Daily Hacker News Digest
**Date:** Friday, November 7, 2025
**Stories Found:** 10
---
### 1. New Claude 4 Released with Improved Reasoning
**Score:** 847 | **Comments:** 312 | **Posted:** Fri Nov 07 2025 08:23:15
**Topics:** AI, machine learning
**Relevance:** 95%
**Why it matters:** Highly popular with significant community engagement;
active discussion with diverse perspectives
**Summary:** Story about AI, machine learning with 847 points and 312 comments.
**Links:** [Article](https://anthropic.com/...) | [HN Discussion](https://news.ycombinator.com/item?id=...)
---

You can schedule the agent to run daily using cron:
# Add to crontab (runs daily at 8 AM)
0 8 * * * cd /path/to/nanoagent && bun run examples/daily-hn.ts >> ~/hn-digest.log 2>&1

Or use a task scheduler on Windows.
We support 15+ LLM providers including many affordable and free options!
| Provider | Model | Cost per 1M tokens | Speed | Quality |
|---|---|---|---|---|
| DeepSeek ⭐ | deepseek-chat | $0.14 | Fast | Excellent |
| SiliconFlow | Qwen2.5-7B | $0.20 | Fast | Very Good |
| OpenRouter | Free models | FREE | Medium | Good |
| Groq | Llama 3.1 8B | FREE tier | Super Fast | Very Good |
| Ollama 🏠 | Any model | FREE | Varies | Good-Excellent |
| Together AI | Llama 3.1 70B | $0.88 | Fast | Excellent |
| Moonshot | moonshot-v1-8k | $1.20 | Medium | Very Good |
| GLM | glm-4 | $1.50 | Medium | Very Good |
| OpenAI | GPT-3.5 Turbo | $1.50 | Fast | Excellent |
| OpenAI | GPT-4 | $60.00 | Medium | Excellent |
| Anthropic | Claude 3 Sonnet | $15.00 | Fast | Excellent |
⭐ = Recommended for best value · 🏠 = Runs locally (100% free & private)
# Sign up at https://platform.deepseek.com/
export DEEPSEEK_API_KEY="your-key"

import { DeepSeek } from "./src/config/models.ts";
const result = await runHNAgent({ model: DeepSeek.CHAT });

**Pricing:** ~$0.14 per 1M tokens (400x cheaper than GPT-4!)
**Quality:** Excellent, comparable to GPT-3.5 Turbo
**Best for:** Daily use, high-volume tasks
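To put the price in perspective, here is a back-of-the-envelope estimate. The 20k tokens per run is an assumed figure, not a measurement — actual usage depends on how many stories get analyzed.

```typescript
// Assumed token usage; measure your own runs for real numbers.
const tokensPerRun = 20_000;
const runsPerMonth = 30;
const pricePerMillionTokens = 0.14; // DeepSeek, from the table above

const monthlyCost = (tokensPerRun * runsPerMonth / 1_000_000) * pricePerMillionTokens;
console.log(`~$${monthlyCost.toFixed(2)} per month`); // ~$0.08 for a daily digest
```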
# Sign up at https://openrouter.ai/
export OPENROUTER_API_KEY="your-key"

import { OpenRouter } from "./src/config/models.ts";
// Free models
const result = await runHNAgent({ model: OpenRouter.LLAMA_3_1_8B_FREE });
// Or paid models (still cheap)
const result = await runHNAgent({ model: OpenRouter.DEEPSEEK_CHAT });

**Pricing:** FREE tier available, paid models from $0.20/1M
**Benefits:** Access 100+ models through one API
**Best for:** Trying different models, free tier users
# Sign up at https://groq.com/
export GROQ_API_KEY="your-key"

import { Groq } from "./src/config/models.ts";
const result = await runHNAgent({ model: Groq.LLAMA_3_1_70B });

**Pricing:** FREE tier available, then paid
**Speed:** Up to 750 tokens/second (fastest!)
**Best for:** Real-time applications, quick responses
# Install from https://ollama.ai/
curl -fsSL https://ollama.ai/install.sh | sh
# Start Ollama
ollama serve
# Pull a model
ollama pull llama3.2

import { Ollama } from "./src/config/models.ts";
const result = await runHNAgent({ model: Ollama.LLAMA_3_2 });

**Pricing:** FREE (runs on your computer)
**Privacy:** 100% private, no data sent to the cloud
**Best for:** Privacy, offline use, unlimited usage
Available models:
- `llama3.2` - Good balance of speed/quality
- `qwen2.5` - Excellent for general use
- `deepseek-coder-v2` - Best for technical content
- `mistral` - Fast and efficient
- `gemma2` - Google's model
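If you want one of these other local models, a raw config should work the same way the Quick Start's `{ provider, model }` object does. This is an assumption about nanoagent's config shape, so check the `Ollama` presets in `src/config/models.ts` first.

```typescript
import { runHNAgent } from "./src/index.ts";

// Assumption: a raw { provider, model } object is accepted for Ollama just as it
// is for OpenAI in the Quick Start; no apiKey is needed for a local model.
const result = await runHNAgent({
  model: {
    provider: "ollama",
    model: "qwen2.5", // any model you have pulled with `ollama pull`
  },
});
```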
Moonshot AI:
export MOONSHOT_API_KEY="your-key"

- Long context support (up to 200K tokens)
- Sign up: https://platform.moonshot.cn/
SiliconFlow:
export SILICONFLOW_API_KEY="your-key"

- Hosts many open source models
- Sign up: https://siliconflow.cn/
GLM (Zhipu AI):
export GLM_API_KEY="your-key"

- ChatGLM models
- Sign up: https://open.bigmodel.cn/
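These providers follow the same pattern: export the key, then point the model config at them. The provider id below (`"moonshot"`) is an assumption — use whatever presets `src/config/models.ts` exports for these providers if they exist.

```typescript
import { runHNAgent } from "./src/index.ts";

// Sketch only: the provider id is assumed, not confirmed by the nanoagent docs.
const result = await runHNAgent({
  model: {
    provider: "moonshot",
    model: "moonshot-v1-8k",
    apiKey: process.env.MOONSHOT_API_KEY,
  },
});
```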
OpenAI:
export OPENAI_API_KEY="sk-..."

Anthropic Claude:
export ANTHROPIC_API_KEY="sk-ant-..."

import { Recommended } from "./src/config/models.ts";
// Best for cost-effectiveness
const model = Recommended.CHEAPEST; // DeepSeek
// Best free option
const model = Recommended.FREE; // OpenRouter free tier
// Best for privacy
const model = Recommended.PRIVATE; // Ollama local
// Best for speed
const model = Recommended.FASTEST; // Groq
// Best for coding/tech content
const model = Recommended.CODING; // DeepSeek Coder
// Best for Chinese language
const model = Recommended.CHINESE; // Moonshot

Compare different models with the benchmark script:
bun run examples/compare-models.ts

This will test multiple models and show:
- Response time
- Quality comparison
- Cost estimates
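The real comparison logic lives in `examples/compare-models.ts`; the sketch below only shows the shape of such a benchmark (wall-clock time per model), with cost and quality left to the pricing table and your own judgment.

```typescript
import { runHNAgent, extractReport } from "./src/index.ts";
import { Recommended, Ollama } from "./src/config/models.ts";

// Rough benchmark sketch — not the actual compare-models.ts script.
const candidates = {
  cheapest: Recommended.CHEAPEST, // DeepSeek
  fastest: Recommended.FASTEST,   // Groq
  local: Ollama.LLAMA_3_2,        // Ollama
};

for (const [name, model] of Object.entries(candidates)) {
  const start = performance.now();
  const result = await runHNAgent({ model });
  const seconds = ((performance.now() - start) / 1000).toFixed(1);
  console.log(`${name}: ${seconds}s, report is ${extractReport(result).length} chars`);
}
```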
To schedule the digest on Linux/macOS, edit your crontab:

# Edit crontab
crontab -e
# Add this line (runs daily at 8 AM)
0 8 * * * cd /path/to/nanoagent && bun run examples/daily-hn-cheap.ts >> ~/hn-digest.log 2>&1

On Windows, use Task Scheduler instead:

- Open Task Scheduler
- Create Basic Task
- Set trigger: Daily at 8:00 AM
- Action: Start a program
- Program: `bun`
- Arguments: `run examples/daily-hn-cheap.ts`
- Start in: `/path/to/nanoagent`
Alternatively, use a systemd user timer:

# Create ~/.config/systemd/user/hn-digest.service
[Unit]
Description=Daily HN Digest
[Service]
Type=oneshot
WorkingDirectory=/path/to/nanoagent
ExecStart=/usr/bin/bun run examples/daily-hn-cheap.ts

# Create ~/.config/systemd/user/hn-digest.timer
[Unit]
Description=Daily HN Digest Timer
[Timer]
OnCalendar=*-*-* 08:00:00
Persistent=true
[Install]
WantedBy=timers.target

# Enable and start
systemctl --user enable hn-digest.timer
systemctl --user start hn-digest.timer

Contributions are welcome! Please feel free to submit issues or pull requests.
MIT
- Built with nanoagent
- Uses the official Hacker News API
- Supports 15+ LLM providers for maximum flexibility