
πŸ“° Hacker News Agent

An intelligent agent built with nanoagent that curates personalized Hacker News digests based on your interests.

✨ Features

  • 15+ LLM Providers - OpenAI, Anthropic, DeepSeek, Groq, Ollama, and many more
  • Extremely Affordable - Use DeepSeek at $0.14/1M tokens (400x cheaper than GPT-4!)
  • Free Options - OpenRouter free tier or 100% free local Ollama
  • Smart Story Fetching - Retrieves top, new, best, or trending stories from Hacker News
  • Intelligent Filtering - Filters by score, comments, keywords, and time range
  • AI-Powered Analysis - Analyzes stories for relevance and importance
  • Customizable Preferences - Define your tech interests and filtering criteria
  • Multiple Output Formats - Markdown, JSON, or plain text reports
  • Automated Daily Digests - Can be scheduled to run daily
  • Privacy-Focused - Option to run 100% locally with Ollama

πŸš€ Quick Start

Prerequisites

  • Bun >= 1.2.0 (or Node.js >= 18)
  • An LLM API key (see LLM Providers below) OR local Ollama installation

Installation

# Install dependencies
bun install
# or with npm
npm install

# Set up environment variables
cp .env.example .env
# Edit .env and add your API key

Run the Agent

# Run with default (DeepSeek - cheapest)
bun run dev

# Run with specific examples
bun run examples/daily-hn-cheap.ts  # Affordable models
bun run examples/daily-hn-ollama.ts # Local/free models
bun run examples/compare-models.ts  # Compare different models

# With npm
npm run dev

πŸ“– Usage

Basic Example

import { runHNAgent, extractReport } from "./src/index.ts";

const result = await runHNAgent({
  model: {
    provider: "openai",
    model: "gpt-4",
    apiKey: process.env.OPENAI_API_KEY,
  },
  preferences: {
    interests: ["AI", "machine learning", "web development"],
    min_score: 100,
    max_stories: 10,
    report_format: "markdown",
  },
});

const report = extractReport(result);
console.log(report);
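
To keep the digest around, for example for the scheduled runs described later, you can append a couple of lines that write the report to disk (a minimal sketch using Node's built-in fs module, which also works under Bun):

import { writeFileSync } from "node:fs";

// Save today's digest as a dated Markdown file, e.g. hn-digest-2025-11-07.md
const date = new Date().toISOString().slice(0, 10);
writeFileSync(`hn-digest-${date}.md`, report);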

Configuration Options

interface HNAgentConfig {
  model: any; // LLM model configuration
  preferences?: {
    interests: string[]; // Topics you're interested in
    min_score: number; // Minimum story score (default: 50)
    min_comments: number; // Minimum comments (default: 10)
    exclude_keywords: string[]; // Keywords to exclude
    max_stories: number; // Max stories in report (default: 10)
    report_format: "markdown" | "json" | "text";
    time_range_hours: number; // Only recent stories (default: 24)
  };
  storyType?: "top" | "new" | "best" | "ask" | "show";
  maxIterations?: number; // Max agent steps (default: 20)
}
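
As a concrete example, here is a fuller configuration that exercises most of these fields (the values are illustrative, not recommendations):

import { runHNAgent } from "./src/index.ts";

const result = await runHNAgent({
  model: {
    provider: "openai",
    model: "gpt-4",
    apiKey: process.env.OPENAI_API_KEY,
  },
  preferences: {
    interests: ["Rust", "databases", "security"],
    min_score: 75,                      // skip low-traction stories
    min_comments: 20,                   // require an active discussion
    exclude_keywords: ["crypto", "NFT"],
    max_stories: 5,
    report_format: "json",
    time_range_hours: 48,               // look back two days
  },
  storyType: "best",
  maxIterations: 20,
});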

Using Different Models

import { runHNAgent } from "./src/index.ts";
import { Recommended, DeepSeek, Groq, Ollama, OpenAI } from "./src/config/models.ts";

// πŸ† Recommended: DeepSeek (Best value)
const cheapest = await runHNAgent({ model: Recommended.CHEAPEST });

// πŸ†“ Free: OpenRouter or Ollama
const free = await runHNAgent({ model: Recommended.FREE });

// ⚑ Fastest: Groq
const fastest = await runHNAgent({ model: Recommended.FASTEST });

// 🏠 Private: Local Ollama
const local = await runHNAgent({ model: Ollama.LLAMA_3_2 });

// πŸ’Ž Premium: OpenAI/Anthropic
const premium = await runHNAgent({ model: OpenAI.GPT4 });

See the LLM Providers section below for the complete list and pricing.

πŸ› οΈ Project Structure

nanoagent/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ agents/
β”‚   β”‚   └── hackernews-agent.ts    # Main agent orchestration
β”‚   β”œβ”€β”€ config/
β”‚   β”‚   └── models.ts              # LLM provider presets (Recommended, DeepSeek, Ollama, ...)
β”‚   β”œβ”€β”€ tools/
β”‚   β”‚   β”œβ”€β”€ fetch-hn-stories.ts    # Fetch stories from HN API
β”‚   β”‚   β”œβ”€β”€ filter-stories.ts      # Filter and rank stories
β”‚   β”‚   β”œβ”€β”€ analyze-story.ts       # AI story analysis
β”‚   β”‚   └── generate-report.ts     # Report generation
β”‚   β”œβ”€β”€ utils/
β”‚   β”‚   β”œβ”€β”€ hn-api.ts              # HN API client
β”‚   β”‚   β”œβ”€β”€ scoring.ts             # Scoring algorithms
β”‚   β”‚   └── formatting.ts          # Output formatting
β”‚   β”œβ”€β”€ types/
β”‚   β”‚   └── hackernews.ts          # TypeScript types
β”‚   └── index.ts                   # Main exports
β”œβ”€β”€ examples/
β”‚   β”œβ”€β”€ daily-hn.ts                # Example usage
β”‚   β”œβ”€β”€ daily-hn-cheap.ts          # Daily digest with affordable models
β”‚   β”œβ”€β”€ daily-hn-ollama.ts         # Daily digest with local Ollama
β”‚   └── compare-models.ts          # Model comparison benchmark
└── package.json

πŸ”§ Development

# Run tests
bun test

# Lint code
bun run lint

# Format code
bun run format

# Build
bun run build

πŸ“‹ Agent Workflow

The agent follows this workflow:

  1. Fetch Stories - Retrieves stories from HN API using fetch_hn_stories tool
  2. Filter Stories - Applies user criteria using filter_stories tool
  3. Analyze Stories - Analyzes top stories using analyze_story tool
  4. Generate Report - Creates formatted report using generate_report tool
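
Steps 1 and 2 can also be reproduced outside the agent against the public Hacker News Firebase API. The snippet below is a standalone sketch of that logic for illustration, not the actual code in fetch-hn-stories.ts or filter-stories.ts:

// Fetch the current top story IDs, hydrate the first 30, then apply the same
// kind of criteria the agent uses (min score, min comments, keywords, recency).
type HNItem = { id: number; title?: string; score?: number; descendants?: number; time?: number; url?: string };

const ids: number[] = await (await fetch("https://hacker-news.firebaseio.com/v0/topstories.json")).json();

const items: HNItem[] = await Promise.all(
  ids.slice(0, 30).map(async (id) =>
    (await fetch(`https://hacker-news.firebaseio.com/v0/item/${id}.json`)).json()
  )
);

const dayAgo = Date.now() / 1000 - 24 * 60 * 60;
const interesting = items.filter(
  (s) =>
    (s.score ?? 0) >= 100 &&
    (s.descendants ?? 0) >= 10 &&
    (s.time ?? 0) >= dayAgo &&
    /\b(ai|machine learning)\b/i.test(s.title ?? "")
);

console.log(interesting.map((s) => `${s.score} points - ${s.title}`).join("\n"));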

🎯 Example Output

# πŸ“° Your Daily Hacker News Digest

**Date:** Friday, November 7, 2025
**Stories Found:** 10

---

### 1. New Claude 4 Released with Improved Reasoning

**Score:** 847 | **Comments:** 312 | **Posted:** Fri Nov 07 2025 08:23:15
**Topics:** AI, machine learning
**Relevance:** 95%

**Why it matters:** Highly popular with significant community engagement;
active discussion with diverse perspectives

**Summary:** Story about AI, machine learning with 847 points and 312 comments.

**Links:** [Article](https://anthropic.com/...) | [HN Discussion](https://news.ycombinator.com/item?id=...)

---

πŸ€– LLM Providers Support

We support 15+ LLM providers including many affordable and free options!

πŸ’° Cost Comparison

| Provider | Model | Cost per 1M tokens | Speed | Quality |
|---|---|---|---|---|
| DeepSeek ⭐ | deepseek-chat | $0.14 | Fast | Excellent |
| SiliconFlow | Qwen2.5-7B | $0.20 | Fast | Very Good |
| OpenRouter | Free models | FREE | Medium | Good |
| Groq | Llama 3.1 8B | FREE tier | Super Fast | Very Good |
| Ollama 🏠 | Any model | FREE | Varies | Good-Excellent |
| Together AI | Llama 3.1 70B | $0.88 | Fast | Excellent |
| Moonshot | moonshot-v1-8k | $1.20 | Medium | Very Good |
| GLM | glm-4 | $1.50 | Medium | Very Good |
| OpenAI | GPT-3.5 Turbo | $1.50 | Fast | Excellent |
| OpenAI | GPT-4 | $60.00 | Medium | Excellent |
| Anthropic | Claude 3 Sonnet | $15.00 | Fast | Excellent |

⭐ = Recommended for best value 🏠 = Runs locally (100% free & private)
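
To put the table in perspective, here is a rough back-of-the-envelope monthly cost estimate for a daily digest (the token count per run is an assumption; adjust it for your settings):

const TOKENS_PER_DIGEST = 50_000;          // assumption: tokens consumed per digest run
const runsPerMonth = 30;

const costPerMonth = (pricePerMillion: number) =>
  (TOKENS_PER_DIGEST / 1_000_000) * pricePerMillion * runsPerMonth;

console.log(`DeepSeek: $${costPerMonth(0.14).toFixed(2)}/month`); // ~= $0.21
console.log(`GPT-4:    $${costPerMonth(60).toFixed(2)}/month`);   // ~= $90.00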

πŸ“ Setup Instructions by Provider

πŸ† DeepSeek (Recommended - Cheapest!)

# Sign up at https://platform.deepseek.com/
export DEEPSEEK_API_KEY="your-key"

import { DeepSeek } from "./src/config/models.ts";
const result = await runHNAgent({ model: DeepSeek.CHAT });

Pricing: ~$0.14 per 1M tokens (400x cheaper than GPT-4!)
Quality: Excellent, comparable to GPT-3.5 Turbo
Best for: Daily use, high-volume tasks

πŸ†“ OpenRouter (Free Models Available)

# Sign up at https://openrouter.ai/
export OPENROUTER_API_KEY="your-key"

import { OpenRouter } from "./src/config/models.ts";

// Free models
const freeResult = await runHNAgent({ model: OpenRouter.LLAMA_3_1_8B_FREE });

// Or paid models (still cheap)
const paidResult = await runHNAgent({ model: OpenRouter.DEEPSEEK_CHAT });

Pricing: FREE tier available, paid models from $0.20/1M
Benefits: Access 100+ models through one API
Best for: Trying different models, free tier users

⚑ Groq (Super Fast!)

# Sign up at https://groq.com/
export GROQ_API_KEY="your-key"

import { Groq } from "./src/config/models.ts";
const result = await runHNAgent({ model: Groq.LLAMA_3_1_70B });

Pricing: FREE tier available, then paid
Speed: Up to 750 tokens/second (fastest!)
Best for: Real-time applications, quick responses

🏠 Ollama (100% Free & Private)

# Install from https://ollama.ai/
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama
ollama serve

# Pull a model
ollama pull llama3.2

import { Ollama } from "./src/config/models.ts";
const result = await runHNAgent({ model: Ollama.LLAMA_3_2 });

Pricing: FREE (runs on your computer)
Privacy: 100% private, no data sent to the cloud
Best for: Privacy, offline use, unlimited usage

Available models:

  • llama3.2 - Good balance of speed/quality
  • qwen2.5 - Excellent for general use
  • deepseek-coder-v2 - Best for technical content
  • mistral - Fast and efficient
  • gemma2 - Google's model
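
If a model from this list doesn't have a preset in src/config/models.ts, you could pass a model config directly. The provider id and field names below are assumptions modeled on the OpenAI example earlier, so check src/config/models.ts for the actual shape:

import { runHNAgent } from "./src/index.ts";

// Hypothetical raw config for a locally pulled model (e.g. `ollama pull qwen2.5`).
// The "ollama" provider id and the field names are assumptions; verify them
// against the presets in src/config/models.ts.
const result = await runHNAgent({
  model: {
    provider: "ollama",
    model: "qwen2.5",
  },
  preferences: { interests: ["AI", "privacy"], max_stories: 5 },
});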

πŸ‡¨πŸ‡³ Chinese Providers (Affordable)

Moonshot AI:

export MOONSHOT_API_KEY="your-key"

SiliconFlow:

export SILICONFLOW_API_KEY="your-key"

GLM (Zhipu AI):

export GLM_API_KEY="your-key"

πŸ’Ž Premium Providers

OpenAI:

export OPENAI_API_KEY="sk-..."

Anthropic Claude:

export ANTHROPIC_API_KEY="sk-ant-..."

🎯 Recommended Configurations

import { Recommended } from "./src/config/models.ts";

// Best for cost-effectiveness
const cheapest = Recommended.CHEAPEST;     // DeepSeek

// Best free option
const free = Recommended.FREE;             // OpenRouter free tier

// Best for privacy
const privateLocal = Recommended.PRIVATE;  // Ollama local

// Best for speed
const fastest = Recommended.FASTEST;       // Groq

// Best for coding/tech content
const coding = Recommended.CODING;         // DeepSeek Coder

// Best for Chinese language
const chinese = Recommended.CHINESE;       // Moonshot

πŸ“Š Benchmark Your Models

Compare different models with the benchmark script:

bun run examples/compare-models.ts

This will test multiple models and show:

  • Response time
  • Quality comparison
  • Cost estimates
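
If you want a rough idea of what that script measures, the timing part boils down to something like the following simplified sketch (not the actual compare-models.ts):

import { runHNAgent } from "./src/index.ts";
import { Recommended } from "./src/config/models.ts";

// Time the same digest run across a couple of presets and print the results.
const candidates = {
  cheapest: Recommended.CHEAPEST, // DeepSeek
  fastest: Recommended.FASTEST,   // Groq
};

for (const [name, model] of Object.entries(candidates)) {
  const start = performance.now();
  await runHNAgent({ model, maxIterations: 10 });
  console.log(`${name}: ${((performance.now() - start) / 1000).toFixed(1)}s`);
}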

πŸ”„ Scheduling Daily Digests

Using Cron (Linux/Mac)

# Edit crontab
crontab -e

# Add this line (runs daily at 8 AM)
0 8 * * * cd /path/to/nanoagent && bun run examples/daily-hn-cheap.ts >> ~/hn-digest.log 2>&1

Using Windows Task Scheduler

  1. Open Task Scheduler
  2. Create Basic Task
  3. Set trigger: Daily at 8:00 AM
  4. Action: Start a program
  5. Program: bun
  6. Arguments: run examples/daily-hn-cheap.ts
  7. Start in: /path/to/nanoagent

Using systemd Timer (Linux)

# Create ~/.config/systemd/user/hn-digest.service
[Unit]
Description=Daily HN Digest

[Service]
Type=oneshot
WorkingDirectory=/path/to/nanoagent
ExecStart=/usr/bin/bun run examples/daily-hn-cheap.ts

# Create ~/.config/systemd/user/hn-digest.timer
[Unit]
Description=Daily HN Digest Timer

[Timer]
# Runs daily at 8:00 AM
OnCalendar=*-*-* 08:00:00
Persistent=true

[Install]
WantedBy=timers.target

# Reload units, then enable and start the timer
systemctl --user daemon-reload
systemctl --user enable hn-digest.timer
systemctl --user start hn-digest.timer
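
If you prefer not to touch OS schedulers at all, a long-running script can trigger the digest itself. This is a minimal cross-platform sketch; it assumes the process is kept alive by something like pm2 or a container:

import { writeFileSync } from "node:fs";
import { runHNAgent, extractReport } from "./src/index.ts";
import { Recommended } from "./src/config/models.ts";

const DAY_MS = 24 * 60 * 60 * 1000;

async function digest() {
  const result = await runHNAgent({ model: Recommended.CHEAPEST });
  const date = new Date().toISOString().slice(0, 10);
  writeFileSync(`hn-digest-${date}.md`, extractReport(result));
}

// Run once now, then every 24 hours.
await digest();
setInterval(digest, DAY_MS);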

🀝 Contributing

Contributions are welcome! Please feel free to submit issues or pull requests.

πŸ“„ License

MIT

πŸ™ Acknowledgments
