yannickrocks/Contract-Guard

This repository was archived by the owner on Feb 20, 2026. It is now read-only.

Autonomous Bug Bounty Hunter

24/7 automated smart contract vulnerability detection and PoC generation for Immunefi bug bounties.

🏁 Archived - This project is no longer actively maintained. It is open source for reference and learning. You can run it locally by following the setup instructions below.




🎯 What It Does

Monitors Immunefi 24/7 → Detects vulnerabilities → Classifies V01-V13+ (Immunefi Top 10 + 2025 patterns + dynamic categories) → Generates PoCs (automated for highest severity, manual per-vulnerability) → Verifies exploits → Learns from outcomes → Sends scrape summaries → Notifies via Telegram

Your only action: Review Telegram alerts and submit bugs to Immunefi for bounties.

NEW: Self-improving detection - The system learns from your submissions and automatically improves its patterns!

NEW: 📊 Scrape summaries - Get detailed reports after each scraping job showing programs processed, contracts found, and audits queued!


⚡ Quick Start

1. Run Database Migrations (5 minutes) - REQUIRED

Go to: Your Supabase project → SQL Editor (https://supabase.com/dashboard)

Run the complete schema:

Copy/paste scripts/database/fresh-start-schema.sql → Execute

This single script includes all tables, indexes, functions, triggers, RLS policies, views, and seed data needed for the complete system.

Verify:

SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public' AND table_name IN (
  'immunefi_programs', 'audits', 'vulnerabilities',
  'bug_bounty_pocs', 'submission_queue', 'unsupported_chains',
  'detection_patterns', 'bounty_outcomes', 'vulnerability_classifications', 'immunefi_categories'
);
-- Should return 10 rows (6 original + 4 new learning system tables)

1b. Install Data Protection Safeguards (RECOMMENDED)

Protect against accidental data loss:

-- Run in Supabase SQL Editor
scripts/database/protection/data-protection-safeguards.sql

This adds:

  • 🛡️ Bulk delete prevention (blocks deletes >10 rows)
  • 📝 Deletion audit logging (tracks all deletions)
  • 🔄 Recovery functions (restore deleted data)

See README-DATA-PROTECTION.md for details.

Done! System is live and will start hunting bugs automatically.

2. Deploy Frontend to Vercel (Always-On Dashboard)

The frontend provides a web dashboard to view bug bounty results, manage chains, and track audits. Deploy to Vercel for an always-accessible dashboard instead of running it locally.

Prerequisites:

  • Vercel account (free tier works)
  • All environment variables ready

Deploy to Vercel:

# Install Vercel CLI (if not already installed)
npm install -g vercel

# Login to Vercel
vercel login

# Deploy to production
vercel --prod

Environment Variables Setup:

After deployment, add these environment variables in Vercel dashboard (Settings → Environment Variables):

Required:

  • NEXT_PUBLIC_SUPABASE_URL - Your Supabase project URL
  • SUPABASE_SERVICE_ROLE_KEY - Supabase service role key (for API routes)
  • NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY - Clerk authentication key
  • CLERK_SECRET_KEY - Clerk secret key
  • ANTHROPIC_API_KEY - For AI analysis features

Blockchain RPCs (for contract fetching):

  • ETHEREUM_RPC_URL
  • POLYGON_RPC_URL
  • ARBITRUM_RPC_URL
  • BASE_RPC_URL

Block Explorer APIs:

  • ETHERSCAN_API_KEY - Works for all Etherscan-based explorers

API Rate Limiting (Optimized for Free Tier):

The system uses distributed rate limiting via Redis to prevent API rate-limit errors. The per-chain defaults are conservative, staying below the 5 calls/second cap of free-tier explorer APIs. You can override the rate per chain:

  • ETHEREUM_API_RATE_LIMIT - Calls per second for Ethereum (default: 4)
  • BSC_API_RATE_LIMIT - Calls per second for BNB Chain (default: 4)
  • POLYGON_API_RATE_LIMIT - Calls per second for Polygon (default: 4)
  • ARBITRUM_API_RATE_LIMIT - Calls per second for Arbitrum (default: 4)
  • BASE_API_RATE_LIMIT - Calls per second for Base (default: 4)
  • OPTIMISM_API_RATE_LIMIT - Calls per second for Optimism (default: 4)
  • AVALANCHE_API_RATE_LIMIT - Calls per second for Avalanche (default: 4)

Examples:

  • Free tier: No configuration needed - the conservative defaults stay within free-tier limits
  • Premium tier (10 calls/sec): Set BSC_API_RATE_LIMIT=8 (leave a small safety margin)
  • Advanced tier (20 calls/sec): Set BSC_API_RATE_LIMIT=18 (leave a small safety margin)

Note: The defaults are tuned for free-tier API keys. If you upgrade to a premium plan, raise the limits accordingly.
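The per-chain limiting described above follows a token-bucket pattern. Below is a minimal in-memory sketch of that idea; the real system uses a distributed limiter backed by Upstash Redis, and the names here (`ChainRateLimiter`, `tryAcquire`) are illustrative, not the repo's actual API.

```typescript
type Bucket = { tokens: number; lastRefill: number };

// In-memory token bucket per chain (the real limiter keeps this state in Redis
// so that all workers share one budget per chain).
class ChainRateLimiter {
  private buckets = new Map<string, Bucket>();

  constructor(private callsPerSecond: number) {}

  /** Returns true if a call is allowed for `chain` right now. */
  tryAcquire(chain: string, now: number = Date.now()): boolean {
    const b = this.buckets.get(chain) ?? { tokens: this.callsPerSecond, lastRefill: now };
    // Refill proportionally to elapsed time, capped at the one-second budget.
    const elapsed = (now - b.lastRefill) / 1000;
    b.tokens = Math.min(this.callsPerSecond, b.tokens + elapsed * this.callsPerSecond);
    b.lastRefill = now;
    const allowed = b.tokens >= 1;
    if (allowed) b.tokens -= 1;
    this.buckets.set(chain, b);
    return allowed;
  }
}
```

Each chain gets its own bucket, so a burst of Ethereum calls never starves Polygon audits.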

Redis (for rate limiting):

  • UPSTASH_REDIS_REST_URL
  • UPSTASH_REDIS_REST_TOKEN

Run Locally (Alternative):

npm install
# Install Playwright browsers (required for scraping feature)
npx playwright install --with-deps chromium
npm run dev  # http://localhost:3001

Note: The scraping feature requires Playwright browsers to be installed. If you see errors about missing libnss3.so or "Executable doesn't exist", run npx playwright install --with-deps chromium to install the browsers and system dependencies.

Frontend URLs:

  • Main Dashboard: /dashboard
  • Bug Bounty Dashboard: /dashboard/bug-bounty
  • DeFi Vulnerability Taxonomy Intelligence: /dashboard/bug-bounty/immunefi-top10 - NEW! (V01-V13+)
  • Category Management Admin: /dashboard/bug-bounty/categories-admin - NEW! (Add/edit categories dynamically)
  • Chain Support Tracker: /dashboard/bug-bounty/chains
  • Reports: /dashboard/reports
  • Audit History: /dashboard/audits

Post-Deployment:

  1. Visit your Vercel URL
  2. Sign in with Clerk authentication
  3. Access /dashboard/bug-bounty to view autonomous bug hunting results
  4. Access /dashboard/bug-bounty/chains to see unsupported chains

Note: Workers run independently on Railway. Frontend is only for viewing results and managing chain support.


🏗️ System Architecture

System Overview (High-Level)

graph TB
    Start([Daily Cron Trigger]) --> Scraper[Immunefi Scraper<br/>Playwright + Claude Haiku]
    Scraper -->|New/Changed Contracts| Redis1[Redis: audits queue]

    Redis1 --> Router[Router Worker<br/>Chain Distribution]
    Router -->|Route by Chain| ChainQueues[Chain-Specific Queues<br/>audits-ethereum, audits-polygon, etc.]

    ChainQueues --> Audit[Audit Processors<br/>14 Chain Workers<br/>Sonnet 4.5 → Opus 4.5]
    Audit -->|Vulnerabilities Found| DB1[(Supabase: vulnerabilities)]
    Audit -->|Critical/High In-Scope| Redis2[Redis: pocs queue]

    Redis2 --> PoC[PoC Executor<br/>Foundry + Opus 4.5]
    PoC -->|Verified PoCs| DB2[(Supabase: bug_bounty_pocs)]
    PoC -->|Successful Exploits| Redis3[Redis: submissions queue]

    Redis3 --> Telegram[Telegram Notifier<br/>Batch Notifications]
    Telegram -->|Bug Reports| User[Telegram Alert]

    DB1 --> Frontend[Vercel Frontend<br/>Dashboard & Reports]
    DB2 --> Frontend

    style Scraper fill:#e1f5ff
    style Router fill:#ffe1f5
    style ChainQueues fill:#fff4e1
    style Audit fill:#fff4e1
    style PoC fill:#ffe1f5
    style Telegram fill:#e1ffe1
    style Frontend fill:#f0e1ff

Railway Services (All Deployed ✅)

1. Immunefi Scraper (Dockerfile.scraper)

graph LR
    Input[Input:<br/>Daily Cron Trigger<br/>Immunefi Website] --> Stage1[Stage 1: Browser Launch<br/>Headless Chromium]
    Stage1 --> Stage2[Stage 2: Program Scraping<br/>Top 20 Programs<br/>Sorted by Last Updated]
    Stage2 --> Stage3[Stage 3: Contract Extraction<br/>Claude Haiku AI<br/>Extract Addresses]
    Stage3 --> Stage4[Stage 4: Chain Detection<br/>14 EVM Chains Supported]
    Stage4 --> Stage5[Stage 5: Change Detection<br/>Compare with Database<br/>Find New/Updated Contracts]
    Stage5 --> Stage6[Stage 6: Queue Creation<br/>Create Audit Records]
    Stage6 --> Stage7[Stage 7: Stats Finalization<br/>Calculate Totals]
    Stage7 --> Stage8[Stage 8: Send Notification<br/>Telegram Summary]
    Stage8 --> Output[Output:<br/>Redis: audits queue<br/>Telegram: Summary message]

    Stage3 -.->|No Contracts Found| Lucky[Roll the Dice<br/>Lucky Assets Feature]
    Lucky --> Stage5

    style Input fill:#e1f5ff
    style Output fill:#e1ffe1
    style Stage1 fill:#fff4e1
    style Stage2 fill:#fff4e1
    style Stage3 fill:#ffe1f5
    style Stage4 fill:#fff4e1
    style Stage5 fill:#fff4e1
    style Stage6 fill:#fff4e1
    style Stage7 fill:#ffe1f5
    style Stage8 fill:#e1ffe1
  • Container: Playwright-based browser automation
  • Schedule: Daily cron (20 programs per run)
  • Tech Stack: Playwright v1.57.0, Node.js 20, Chromium browser
  • Responsibilities:
    • Scrapes Immunefi bug bounty programs
    • Detects new contracts and proxy implementation changes
    • Extracts contract addresses via AI (Claude Haiku)
    • Respects robots.txt with rate limiting
    • Queues contracts to Redis job queue (audits queue)
  • Output: Creates audit records in audits table with status: 'pending'
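The change-detection stage above queues a contract when it is new, or when its proxy implementation slot now points at a different address than the one stored from the previous scrape. A hedged sketch of that decision (function and type names are illustrative; the EIP-1967 slot constant is standard):

```typescript
// EIP-1967 implementation slot: keccak256("eip1967.proxy.implementation") - 1.
const EIP1967_IMPL_SLOT =
  "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc";

interface KnownContract {
  address: string;          // proxy (or plain) contract address
  implementation?: string;  // last-seen implementation, if it is a proxy
}

/** Decide whether a freshly scraped contract needs a new audit. */
function needsAudit(
  known: Map<string, KnownContract>,
  address: string,
  currentImplementation?: string,
): boolean {
  const prev = known.get(address.toLowerCase());
  if (!prev) return true; // brand-new contract: always audit
  // Proxy whose implementation slot now points somewhere else: re-audit.
  return (
    currentImplementation !== undefined &&
    prev.implementation?.toLowerCase() !== currentImplementation.toLowerCase()
  );
}
```

In the real scraper, `currentImplementation` would come from reading the EIP-1967 slot via an RPC call; here it is passed in directly.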

2. Audit Processor (Dockerfile.worker)

graph TB
    Input[Input:<br/>Redis: audits queue<br/>Audit records<br/>status: pending] --> Stage1[Stage 1: Source Fetching<br/>Resolve Proxy Implementations<br/>EIP-1967, EIP-1822<br/>Etherscan API + GitHub]

    Stage1 --> Stage2[Stage 2: Pass 1 - Initial Scan<br/>Sonnet 4.5 AI<br/>~30 seconds<br/>$0.50/contract<br/>Detects Critical/High/Medium]

    Stage2 --> Stage3[Stage 3: Pass 2 - Out-of-Scope Filter<br/>Database Lookup<br/>$0 cost<br/>Filters Before Expensive Verification]

    Stage3 -->|In-Scope Only| Stage4[Stage 4: Pass 3 - Deep Verification<br/>Opus 4.5 AI<br/>~2 minutes<br/>$3.00/contract<br/>Only Critical/High In-Scope]

    Stage3 -->|Out-of-Scope| Skip[Skip Opus<br/>Cost Optimization]

    Stage4 --> Stage5[Stage 5: Classification<br/>DeFi Vulnerability Taxonomy<br/>V01-V13+ Dynamic Categories<br/>Database-Driven Keywords]

    Stage5 --> Stage6[Stage 6: Storage<br/>Store Vulnerabilities<br/>Compress Source Code gzip<br/>Update Audit Status]

    Stage6 -->|Critical/High In-Scope| Output[Output:<br/>Redis: pocs queue<br/>Database: vulnerabilities table<br/>Database: vulnerability_classifications<br/>Audit status: completed]

    Skip --> Stage5

    style Input fill:#e1f5ff
    style Output fill:#e1ffe1
    style Stage1 fill:#fff4e1
    style Stage2 fill:#ffe1f5
    style Stage3 fill:#e1ffe1
    style Stage4 fill:#ffe1f5
    style Stage5 fill:#fff4e1
    style Stage6 fill:#fff4e1
    style Skip fill:#ffcccc
  • Container: Node.js worker with BullMQ
  • Type: Always-on worker (polls Redis queue)
  • Tech Stack: Node.js 20, BullMQ, Anthropic Claude API
  • Responsibilities:
    • Processes audits queue from Redis
    • Fetches contract source from Etherscan/GitHub APIs
    • Single Pass: Vulnerability Scan - Sonnet 4.5 comprehensive scan (~60-90s, $0.75-1.50/contract)
      • Finds AND verifies exploitable vulnerabilities in one comprehensive pass
      • Detects Critical/High/Medium severity (based on program bounty tiers)
      • Uses Anthropic's SCONE-bench methodology for high-quality detection
      • Only reports vulnerabilities that can be proven with PoCs showing financial impact (≥0.1 ETH)
    • Filtering Pipeline:
      • Out-of-scope validation (cost optimization - filters before expensive operations)
      • Non-exploitable filtering (removes informational/theoretical findings)
    • Classifies vulnerabilities into DeFi Vulnerability Taxonomy (V01-V13+: Immunefi Top 10 + 2025 patterns + dynamic categories)
      • Uses database-driven keyword matching for classification
      • Supports dynamic categories (V14, V15, etc.) added via admin UI
    • Validates out-of-scope, known issues, and impact thresholds
    • Stores findings in vulnerabilities table with Immunefi category classification
    • Compresses source code after analysis (gzip)
    • Updates audit status: queued → fetching_source → scanning → vulnerabilities_found
  • Concurrency: 1 audit at a time (prevents API rate limit bursts)
  • Status Flow: See AUTONOMOUS_BUG_BOUNTY_HUNTER.md for complete status definitions and UI mappings
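The status flow above can be guarded with a simple transition table, so a worker never skips or reverses a stage. This is an illustrative sketch only; the authoritative status definitions live in AUTONOMOUS_BUG_BOUNTY_HUNTER.md, and the exact set of states may differ.

```typescript
// Allowed next-states for an audit record (illustrative; see
// AUTONOMOUS_BUG_BOUNTY_HUNTER.md for the real definitions).
const NEXT: Record<string, string[]> = {
  pending: ["queued"],
  queued: ["fetching_source"],
  fetching_source: ["scanning"],
  scanning: ["vulnerabilities_found", "completed"],
  vulnerabilities_found: ["completed"],
  completed: [],
};

/** Returns true when moving an audit from `from` to `to` is a legal step. */
function canTransition(from: string, to: string): boolean {
  return NEXT[from]?.includes(to) ?? false;
}
```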

3. PoC Executor (Dockerfile.poc)

✨ NEW: Interactive Generation with Iterative Refinement - The PoC executor now uses interactive generation for non-trivial vulnerabilities. Instead of generating complete code and testing separately, the AI agent works within an isolated Foundry sandbox, building exploits incrementally through test-fix loops. The agent reads existing code, makes targeted patches, tests immediately, and iterates until success. This approach is more efficient and produces higher quality PoCs. See AUTONOMOUS_BUG_BOUNTY_HUNTER.md for details.

✨ Checkpoint System - The PoC executor includes a checkpoint system that saves progress after each successful stage. If a PoC generation fails or crashes, it automatically resumes from the last successful checkpoint instead of starting over. This significantly improves reliability and reduces wasted API costs. See AUTONOMOUS_BUG_BOUNTY_HUNTER.md for details.

graph TB
    Input[Input:<br/>Redis: pocs queue<br/>Vulnerabilities<br/>Critical/High In-Scope] --> Stage1[Stage 1: PoC Generation<br/>Opus 4.5 AI<br/>Generate Exploit.t.sol<br/>~4 minutes<br/>$1.50/PoC]

    Stage1 --> Stage2[Stage 2: Foundry Setup<br/>Create Temp Directory<br/>forge init<br/>Install forge-std<br/>Patch safeconsole.sol<br/>Configure foundry.toml]

    Stage2 --> Stage3[Stage 3: Compilation<br/>forge build<br/>Auto solc Version Fallback<br/>0.8.31 → 0.8.0<br/>Update pragma if needed]

    Stage3 -->|Compilation Success| Stage4[Stage 4: Execution<br/>forge test --fork-url<br/>Mainnet Fork Testing<br/>Parse Test Results<br/>Validate Exploit Success]

    Stage3 -->|Compilation Failed| Fail1[Mark as Failed<br/>All Versions Tried]

    Stage4 -->|Test Passed| Stage5[Stage 5: Storage<br/>Update Execution Status<br/>Store Logs<br/>Compress Source Code]

    Stage4 -->|Test Failed| Retry{Retry Count<br/>< 3?}
    Retry -->|Yes| Stage4
    Retry -->|No| Fail2[Mark as Failed<br/>Max Retries Reached]

    Stage5 --> Output[Output:<br/>Redis: submissions queue<br/>Database: bug_bounty_pocs<br/>execution_status: success]

    Stage1 -.->|GitHub Contract| Manual[Manual Verification<br/>Required<br/>No Deployed Address]
    Manual --> Fail1

    style Input fill:#e1f5ff
    style Output fill:#e1ffe1
    style Stage1 fill:#ffe1f5
    style Stage2 fill:#fff4e1
    style Stage3 fill:#fff4e1
    style Stage4 fill:#fff4e1
    style Stage5 fill:#fff4e1
    style Fail1 fill:#ffcccc
    style Fail2 fill:#ffcccc
    style Manual fill:#ffcccc
    style Retry fill:#ffffcc
  • Container: Foundry-enabled Docker container
  • Type: Always-on worker (polls Redis queue)
  • Tech Stack: Node.js 20, Foundry (forge, cast, anvil), BullMQ
  • Responsibilities:
    • Processes pocs queue from Redis
    • Generates Foundry test exploits using Opus 4.5 (~4min, $1.50/PoC)
    • Automated service: Generates PoC for highest severity vulnerability (cost optimization)
    • Manual service: Generates PoC for specific vulnerability when requested via UI
    • Initializes Foundry projects in isolated temp directories
    • Compiles Solidity code with automatic solc version fallback
    • Executes PoCs on mainnet fork (3 retries per PoC)
    • Parses test results and updates execution status
    • Handles GitHub contracts (manual verification required)
    • Compresses source code after PoC completion
  • Resource Requirements: 2 vCPU, 2GB RAM (for Foundry compilation)
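The "automatic solc version fallback" above can be pictured as: if `forge build` fails under the current pragma, rewrite the pragma to the next-older candidate and retry. A hedged sketch of the pragma-rewriting step (the candidate list and function name are illustrative, not the repo's code):

```typescript
// Candidate compiler versions to fall back through, newest first (illustrative).
const SOLC_CANDIDATES = ["0.8.31", "0.8.20", "0.8.0"];

/** Rewrite the pragma to the next candidate version, or return null if exhausted. */
function downgradePragma(source: string): string | null {
  const match = source.match(/pragma solidity [^;]*?(\d+\.\d+\.\d+)/);
  if (!match) return null;
  const idx = SOLC_CANDIDATES.indexOf(match[1]);
  if (idx < 0 || idx + 1 >= SOLC_CANDIDATES.length) return null; // no fallback left
  const next = SOLC_CANDIDATES[idx + 1];
  return source.replace(/pragma solidity [^;]*;/, `pragma solidity ^${next};`);
}
```

The executor would call this between failed `forge build` attempts until compilation succeeds or the list is exhausted, at which point the PoC is marked failed.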

4. Telegram Notifier (Dockerfile.telegram)

graph TB
    Input[Input:<br/>Redis: submissions queue<br/>Verified PoCs<br/>Successful Exploits] --> Stage1[Stage 1: Data Fetching<br/>Fetch Audit Record<br/>Vulnerabilities<br/>Program Data]

    Stage1 --> Stage2[Stage 2: Report Generation<br/>Extract Contract Names<br/>Calculate Bounty Estimate<br/>Format Markdown Report]

    Stage2 --> Stage3[Stage 3: PDF Generation<br/>Generate PDF Report<br/>Store in Database]

    Stage3 --> Stage4[Stage 4: Storage<br/>Create submission_queue Entry<br/>Store Report Data]

    Stage4 --> Decision{Severity?}

    Decision -->|Critical| Immediate[Immediate Notification<br/>Send Right Away]
    Decision -->|High| Batch[Batch Notification<br/>Wait 30 Minutes<br/>Group Multiple Bugs]

    Immediate --> Stage5[Stage 5: Send Telegram<br/>Format Message<br/>Program Name<br/>Contract Address<br/>Severity<br/>Bounty Estimate]

    Batch --> Stage5

    Stage5 --> Stage6[Stage 6: Update Status<br/>Mark as notified<br/>Update Database]

    Stage6 --> Output[Output:<br/>Telegram Message<br/>Database: submission_queue<br/>status: notified]

    style Input fill:#e1f5ff
    style Output fill:#e1ffe1
    style Stage1 fill:#fff4e1
    style Stage2 fill:#fff4e1
    style Stage3 fill:#fff4e1
    style Stage4 fill:#fff4e1
    style Stage5 fill:#ffe1f5
    style Stage6 fill:#fff4e1
    style Immediate fill:#ffcccc
    style Batch fill:#ffffcc
    style Decision fill:#e1f5ff
  • Container: Lightweight Node.js worker
  • Type: Always-on worker (polls Redis queue)
  • Tech Stack: Node.js 20, node-telegram-bot-api, BullMQ
  • Responsibilities:
    • Processes submissions queue from Redis
    • Batches notifications every 30 minutes
    • Sends immediate alerts for Critical bugs
    • Formats reports with bounty estimates
    • Updates submission_queue table with status: 'notified'
  • Output: Telegram messages to configured chat ID
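The severity routing above (Critical sends immediately, High waits for the 30-minute batch window) reduces to a small partition step each time the notifier wakes up. A sketch under those assumptions; names are illustrative:

```typescript
const BATCH_WINDOW_MS = 30 * 60 * 1000; // High-severity batching window

interface Finding {
  severity: "critical" | "high";
  queuedAt: number; // epoch ms when the finding entered the queue
}

/** Partition pending findings into those to send now and those still batching. */
function partition(findings: Finding[], now: number) {
  const sendNow: Finding[] = [];
  const hold: Finding[] = [];
  for (const f of findings) {
    // Critical goes out immediately; High waits until its batch window elapses.
    const due = f.severity === "critical" || now - f.queuedAt >= BATCH_WINDOW_MS;
    (due ? sendNow : hold).push(f);
  }
  return { sendNow, hold };
}
```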

Database (Supabase PostgreSQL)

Core Tables:

  • immunefi_programs - Programs monitored (active/inactive status)
  • audits - Audit records with status tracking (pending → scanning → completed)
  • vulnerabilities - Found vulnerabilities with severity and classification
  • bug_bounty_pocs - PoC code, execution logs, and test results
  • submission_queue - Ready-to-submit bug reports with PDF generation

Learning System Tables:

  • detection_patterns - Learned patterns with success metrics and confidence scores
  • bounty_outcomes - Submission outcomes for continuous learning
  • vulnerability_classifications - DeFi Vulnerability Taxonomy mapping (V01-V13)
  • immunefi_categories - Top 10 reference data and bounty ranges

Supporting Tables:

  • unsupported_chains - Track chains we don't support yet

Job Queue System (BullMQ + Redis)

Architecture:

The system uses a router pattern for simplified job distribution:

  1. Main Queue (audits): Single entry point from API/scraper
  2. Router Worker: Distributes jobs to chain-specific queues based on chainId
  3. Chain Queues: 14 separate queues (audits-ethereum, audits-polygon, etc.)
  4. Chain Workers: One worker per chain (concurrency: 1) for parallel processing

Queues:

  • audits - Main entry queue (API → router worker)
  • audits-{chain} - Chain-specific queues (router → chain workers)
  • pocs - PoC generation/execution jobs (audit processor → PoC executor)
  • submissions - Notification jobs (PoC executor → telegram notifier)

Benefits:

  • Simple API: Single queue for all audits
  • Parallel Processing: Up to 14 audits simultaneously (1 per chain)
  • Rate Limit Compliance: Each chain worker respects API rate limits
  • Clean Separation: Router handles distribution, workers handle processing

Redis: Upstash (free tier) for distributed job queue management
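The router pattern above boils down to mapping a job's `chainId` onto its chain-specific queue name. A minimal sketch of that mapping, with the BullMQ wiring shown only as a comment (the chain map and function name are illustrative; the chain IDs themselves are the standard EVM ones):

```typescript
// Standard EVM chain IDs mapped to their chain-specific queue names (illustrative subset).
const CHAIN_QUEUES: Record<number, string> = {
  1: "audits-ethereum",
  10: "audits-optimism",
  56: "audits-bsc",
  137: "audits-polygon",
  8453: "audits-base",
  42161: "audits-arbitrum",
  43114: "audits-avalanche",
};

/** Resolve the chain-specific queue for a job, or null if the chain is unsupported. */
function routeAudit(chainId: number): string | null {
  return CHAIN_QUEUES[chainId] ?? null;
}

// In the real router worker this would be wired roughly like:
// new Worker("audits", async (job) => {
//   const queue = routeAudit(job.data.chainId);
//   if (!queue) { /* record in unsupported_chains */ return; }
//   await new Queue(queue).add("audit", job.data);
// });
```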

AI Agent Architecture (Optimized & Efficient)

The system uses a centralized, efficient AI agent architecture powered by Vercel AI SDK:

Core Components:

  • AgentPool (lib/ai/agents/agent-pool.ts) - Singleton pattern for agent reuse

    • Pre-configured agents (initial-scan, deep-verification, poc-generation)
    • Eliminates redundant agent instantiation
    • Reduces memory overhead and improves performance
  • AgentContext (lib/ai/context/agent-context.ts) - Per-audit context management

    • Replaces global state hacks with proper encapsulation
    • Manages source code caching per audit
    • Handles tool result caching with proper cleanup
    • Prevents memory leaks with explicit context clearing
  • Consolidated Prompts (lib/ai/prompts/index.ts) - Single source of truth

    • Initial scan prompts (Sonnet 4.5)
    • Deep verification prompts (Opus 4.5)
    • PoC generation prompts
    • Eliminates prompt duplication across codebase
  • Streamlined Tools (lib/ai/tools/) - Essential tools only

    • Source code tools (section reading, function extraction)
    • Pattern detection tools (vulnerability database lookup)
    • Removed "echo" tools that added no value
    • Improved descriptions with "WHEN TO USE" guidance

Benefits:

  • Efficient - Agents reused across audits, not recreated each time
  • Clean - No global state, proper encapsulation
  • Type-safe - Full TypeScript support
  • Maintainable - Single source of truth for prompts and tools
  • Tested - Comprehensive unit tests for architecture components
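The AgentPool singleton described above can be sketched as one lazily created, pre-configured agent per role, handed back to every audit that asks. This mirrors the description only; the actual lib/ai/agents/agent-pool.ts may differ, and the model names below are illustrative:

```typescript
type AgentRole = "initial-scan" | "deep-verification" | "poc-generation";

interface Agent {
  role: AgentRole;
  model: string;
}

class AgentPool {
  private static instance: AgentPool;
  private agents = new Map<AgentRole, Agent>();

  /** Singleton accessor: every caller shares one pool. */
  static get(): AgentPool {
    return (AgentPool.instance ??= new AgentPool());
  }

  /** Return the shared agent for a role, creating it on first use. */
  agent(role: AgentRole): Agent {
    let a = this.agents.get(role);
    if (!a) {
      // Illustrative model assignment: cheap model for the initial scan,
      // stronger model for verification and PoC generation.
      const model = role === "initial-scan" ? "claude-sonnet" : "claude-opus";
      a = { role, model };
      this.agents.set(role, a);
    }
    return a;
  }
}
```

Because the pool is a singleton and agents are cached by role, repeated audits reuse the same configured agent objects instead of re-instantiating them.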

📊 What Happens After Migrations

  • Now - Workers waiting for first scrape
  • +24 hours - First Immunefi scrape runs
  • +24 hours - First audits complete
  • +48 hours - First PoC verified
  • +48 hours - 🎯 First Telegram alert!

💰 Cost Breakdown

  • Railway: ~$20/month (4 workers)
  • Anthropic AI: ~$110-150/month
    • Sonnet 4.5: $0.50/contract
    • Opus 4.5: $3/contract (verification only)
    • PoC generation: ~$1.50/contract
  • Other Services: Free tier

Total: ~$130-170/month ✅

Monitor Costs


📋 Environment Variables

All configured in Railway services. Stored locally in RAILWAY_COMPLETE.env (gitignored).

Key services:

  • ✅ Supabase (database)
  • ✅ Anthropic (AI)
  • ✅ Alchemy (RPC for 14 chains)
  • ✅ Etherscan V2 Unified API (supports all 14 chains)
  • ✅ GitHub (source fetching)
  • ✅ Upstash Redis (job queue)
  • ✅ Telegram Bot (notifications)

🔍 Monitoring

Health Check API

Monitor system health in real-time:

# Check system health (no auth required)
curl https://your-domain.com/api/health

# Returns:
# - Service status (Redis, Supabase, Anthropic)
# - Response times for each service
# - Overall system status (healthy/degraded/unhealthy)

Response:

{
  "status": "healthy",
  "timestamp": "2026-01-XX...",
  "services": [
    { "service": "redis", "healthy": true, "responseTime": 45 },
    { "service": "supabase", "healthy": true, "responseTime": 120 },
    { "service": "anthropic", "healthy": true, "responseTime": 890 }
  ],
  "summary": { "total": 3, "healthy": 3, "unhealthy": 0 }
}
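The overall status in the response above can be derived from the individual service checks: healthy when everything passes, degraded when some services pass, unhealthy when none do. A sketch of that aggregation (service-check functions are stand-ins; only the shape follows the example response):

```typescript
interface ServiceHealth {
  service: string;
  healthy: boolean;
  responseTime: number; // ms
}

/** Derive the overall health payload from individual service check results. */
function summarize(services: ServiceHealth[]) {
  const healthy = services.filter((s) => s.healthy).length;
  const status =
    healthy === services.length ? "healthy" : healthy > 0 ? "degraded" : "unhealthy";
  return {
    status,
    services,
    summary: { total: services.length, healthy, unhealthy: services.length - healthy },
  };
}
```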

Cost Tracking API

Monitor AI spending:

# Get cost data (requires auth)
curl -H "Authorization: Bearer <token>" \
  https://your-domain.com/api/monitoring/costs?days=30

# Returns daily breakdown:
# - Sonnet calls and costs
# - Opus calls and costs
# - Total estimated costs
# - Summary for the period

Railway Logs

Check that each service's logs show:

  • Scraper: "Waiting for cron trigger"
  • Audit: "Polling audits table"
  • PoC: "Waiting for vulnerabilities"
  • Telegram: "Bot connected"

Database Queries

-- Active programs
SELECT COUNT(*) FROM immunefi_programs WHERE status = 'active';

-- Audits by status
SELECT status, COUNT(*) FROM audits GROUP BY status;

-- Vulnerabilities by severity
SELECT severity, COUNT(*) FROM vulnerabilities GROUP BY severity;

-- PoC results
SELECT execution_status, COUNT(*) FROM bug_bounty_pocs GROUP BY execution_status;

-- Cost tracking (via API or direct query)
SELECT * FROM api_cost_tracking ORDER BY date DESC LIMIT 30;

🎯 Key Features

  • ✅ Autonomous 24/7 monitoring
  • Dynamic DeFi Vulnerability Taxonomy (V01-V13+) - Add new patterns via admin UI
  • Database-driven classification - Detection keywords stored in database, no code changes needed
  • Admin category management - Create/edit/disable categories via web interface
  • Medium severity detection - Automatically enabled when programs offer medium bounties
  • Self-learning system - Improves from every submission
  • ✅ Three-pass AI verification (cost-optimized: Sonnet → Out-of-scope filter → Opus)
  • ✅ Automated PoC generation with Foundry
  • ✅ Mainnet fork execution
  • ✅ Telegram notifications
  • Intelligence dashboard - Track patterns and learning
  • ✅ Single-user access (RLS enforced)
  • ✅ Ethical scraping (respects robots.txt)

📚 Documentation


🔗 Quick Links


🚀 You're Ready!

After running database migrations, the system is fully operational.

Next: Wait for Telegram notifications with bug reports ready to submit! 🎯

📊 Scrape Summary Notifications

After each scraping job completes (daily cron or manual trigger), you'll receive a Telegram message showing:

  • Programs processed (new vs existing with last audit dates)
  • Contracts found (new vs already scanned)
  • Total audits queued for processing
  • Processing duration

Example notification:

✅ SCRAPE COMPLETED - Jan 14, 2026 10:30 AM
Duration: 8 minutes 32 seconds

📊 PROGRAMS (10 processed)

🆕 New Programs (3):
• Ether.Fi (5 contracts queued)
• Compound Finance (8 contracts queued)
• Aave V3 (2 contracts queued)

🔄 Existing Programs (7):
• MakerDAO (last audited: 2 days ago) - 3 contracts queued
• Curve Finance (last audited: 1 week ago) - 4 contracts queued
• Uniswap V3 (last audited: 3 days ago) - 0 contracts (all scanned)
• Lido (last audited: 5 days ago) - 2 contracts queued
... and 3 more

📦 CONTRACTS (45 found)
• New: 15 contracts
• Already scanned: 30 contracts

⚡ AUDITS QUEUED: 20 new audits

This allows you to monitor scraping activity and understand what's being processed without checking logs or the dashboard.

