24/7 automated smart contract vulnerability detection and PoC generation for Immunefi bug bounties.
🏁 Archived - This project is no longer actively maintained. It is open source for reference and learning. You can run it locally by following the setup instructions below.
Monitors Immunefi 24/7 → Detects vulnerabilities → Classifies V01-V13+ (Immunefi Top 10 + 2025 patterns + dynamic categories) → Generates PoCs (automated for highest severity, manual per-vulnerability) → Verifies exploits → Learns from outcomes → Sends scrape summaries → Notifies via Telegram
Your only action: Review Telegram alerts and submit bugs to Immunefi for bounties.
NEW: ✨ Self-improving detection - System learns from your submissions and automatically improves patterns!
NEW: 📊 Scrape summaries - Get detailed reports after each scraping job showing programs processed, contracts found, and audits queued!
Go to: Your Supabase project → SQL Editor (https://supabase.com/dashboard)
Run the complete schema:
Copy/paste scripts/database/fresh-start-schema.sql → Execute
This single script includes all tables, indexes, functions, triggers, RLS policies, views, and seed data needed for the complete system.
Verify:
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public' AND table_name IN (
'immunefi_programs', 'audits', 'vulnerabilities',
'bug_bounty_pocs', 'submission_queue', 'unsupported_chains',
'detection_patterns', 'bounty_outcomes', 'vulnerability_classifications', 'immunefi_categories'
);
-- Should return 10 rows (6 original + 4 new learning system tables)

Protect against accidental data loss:
-- Run in Supabase SQL Editor
scripts/database/protection/data-protection-safeguards.sql

This adds:
- 🛡️ Bulk delete prevention (blocks deletes >10 rows)
- 📝 Deletion audit logging (tracks all deletions)
- 🔄 Recovery functions (restore deleted data)
See README-DATA-PROTECTION.md for details.
✅ Done! System is live and will start hunting bugs automatically.
The frontend provides a web dashboard to view bug bounty results, manage chains, and track audits. Deploy to Vercel for always-accessible dashboard instead of running locally.
Prerequisites:
- Vercel account (free tier works)
- All environment variables ready
Deploy to Vercel:
# Install Vercel CLI (if not already installed)
npm install -g vercel
# Login to Vercel
vercel login
# Deploy to production
vercel --prod

Environment Variables Setup:
After deployment, add these environment variables in Vercel dashboard (Settings → Environment Variables):
Required:
NEXT_PUBLIC_SUPABASE_URL - Your Supabase project URL
SUPABASE_SERVICE_ROLE_KEY - Supabase service role key (for API routes)
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY - Clerk authentication key
CLERK_SECRET_KEY - Clerk secret key
ANTHROPIC_API_KEY - For AI analysis features
Blockchain RPCs (for contract fetching):
ETHEREUM_RPC_URL
POLYGON_RPC_URL
ARBITRUM_RPC_URL
BASE_RPC_URL
Block Explorer APIs:
ETHERSCAN_API_KEY - Works for all Etherscan-based explorers
API Rate Limiting (Optimized for Free Tier):
The system uses distributed rate limiting via Redis to prevent API rate limit errors. By default, it uses 2 calls/second per chain (very conservative for free tier APIs with 5 calls/second limit). You can override this per chain:
ETHEREUM_API_RATE_LIMIT - Calls per second for Ethereum (default: 4)
BSC_API_RATE_LIMIT - Calls per second for BNB Chain (default: 4)
POLYGON_API_RATE_LIMIT - Calls per second for Polygon (default: 4)
ARBITRUM_API_RATE_LIMIT - Calls per second for Arbitrum (default: 4)
BASE_API_RATE_LIMIT - Calls per second for Base (default: 4)
OPTIMISM_API_RATE_LIMIT - Calls per second for Optimism (default: 4)
AVALANCHE_API_RATE_LIMIT - Calls per second for Avalanche (default: 4)
Examples:
- Free tier (default): No configuration needed - uses 2 calls/sec automatically
- Premium tier (10 calls/sec): Set BSC_API_RATE_LIMIT=8 (use 8 to be safe)
- Advanced tier (20 calls/sec): Set BSC_API_RATE_LIMIT=18 (use 18 to be safe)
Note: The default 2 calls/second is optimized for free tier. If you upgrade to premium, increase the limits accordingly.
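The per-chain limit described above can be sketched as a token bucket. This in-memory version is illustrative only: the real system enforces limits through Redis so every worker shares one budget, and `ChainRateLimiter`/`rateLimitFor` are hypothetical names, not the project's actual API.

```typescript
// Illustrative in-memory token bucket; the real system does this through
// Redis so all workers share one budget.
class ChainRateLimiter {
  private tokens: number;

  constructor(private callsPerSecond: number, private lastRefill = Date.now()) {
    this.tokens = callsPerSecond; // start with a full bucket
  }

  /** Consume one token if available; refill proportionally to elapsed time. */
  tryAcquire(now = Date.now()): boolean {
    const elapsedSec = Math.max(0, (now - this.lastRefill) / 1000);
    this.tokens = Math.min(this.callsPerSecond, this.tokens + elapsedSec * this.callsPerSecond);
    this.lastRefill = now;
    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}

// Per-chain override via `{CHAIN}_API_RATE_LIMIT`, falling back to the
// conservative free-tier default of 2 calls/sec.
function rateLimitFor(chain: string, env: Record<string, string | undefined>): number {
  const raw = env[`${chain.toUpperCase()}_API_RATE_LIMIT`];
  const parsed = raw === undefined ? NaN : Number(raw);
  return Number.isFinite(parsed) && parsed > 0 ? parsed : 2;
}
```

With `BSC_API_RATE_LIMIT=8` set, `rateLimitFor("bsc", process.env)` yields 8; with nothing set it falls back to 2.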
Redis (for rate limiting):
UPSTASH_REDIS_REST_URL
UPSTASH_REDIS_REST_TOKEN
Run Locally (Alternative):
npm install
# Install Playwright browsers (required for scraping feature)
npx playwright install --with-deps chromium
npm run dev # http://localhost:3001

Note: The scraping feature requires Playwright browsers to be installed. If you see errors about missing libnss3.so or "Executable doesn't exist", run npx playwright install --with-deps chromium to install the browsers and system dependencies.
Frontend URLs:
- Main Dashboard: /dashboard
- Bug Bounty Dashboard: /dashboard/bug-bounty
- DeFi Vulnerability Taxonomy Intelligence: /dashboard/bug-bounty/immunefi-top10 ← NEW! (V01-V13+)
- Category Management Admin: /dashboard/bug-bounty/categories-admin ← NEW! (Add/edit categories dynamically)
- Chain Support Tracker: /dashboard/bug-bounty/chains
- Reports: /dashboard/reports
- Audit History: /dashboard/audits
Post-Deployment:
- Visit your Vercel URL
- Sign in with Clerk authentication
- Access /dashboard/bug-bounty to view autonomous bug hunting results
- Access /dashboard/bug-bounty/chains to see unsupported chains
Note: Workers run independently on Railway. Frontend is only for viewing results and managing chain support.
graph TB
Start([Daily Cron Trigger]) --> Scraper[Immunefi Scraper<br/>Playwright + Claude Haiku]
Scraper -->|New/Changed Contracts| Redis1[Redis: audits queue]
Redis1 --> Router[Router Worker<br/>Chain Distribution]
Router -->|Route by Chain| ChainQueues[Chain-Specific Queues<br/>audits-ethereum, audits-polygon, etc.]
ChainQueues --> Audit[Audit Processors<br/>14 Chain Workers<br/>Sonnet 4.5 → Opus 4.5]
Audit -->|Vulnerabilities Found| DB1[(Supabase: vulnerabilities)]
Audit -->|Critical/High In-Scope| Redis2[Redis: pocs queue]
Redis2 --> PoC[PoC Executor<br/>Foundry + Opus 4.5]
PoC -->|Verified PoCs| DB2[(Supabase: bug_bounty_pocs)]
PoC -->|Successful Exploits| Redis3[Redis: submissions queue]
Redis3 --> Telegram[Telegram Notifier<br/>Batch Notifications]
Telegram -->|Bug Reports| User[Telegram Alert]
DB1 --> Frontend[Vercel Frontend<br/>Dashboard & Reports]
DB2 --> Frontend
style Scraper fill:#e1f5ff
style Router fill:#ffe1f5
style ChainQueues fill:#fff4e1
style Audit fill:#fff4e1
style PoC fill:#ffe1f5
style Telegram fill:#e1ffe1
style Frontend fill:#f0e1ff
1. Immunefi Scraper (Dockerfile.scraper)
graph LR
Input[Input:<br/>Daily Cron Trigger<br/>Immunefi Website] --> Stage1[Stage 1: Browser Launch<br/>Headless Chromium]
Stage1 --> Stage2[Stage 2: Program Scraping<br/>Top 20 Programs<br/>Sorted by Last Updated]
Stage2 --> Stage3[Stage 3: Contract Extraction<br/>Claude Haiku AI<br/>Extract Addresses]
Stage3 --> Stage4[Stage 4: Chain Detection<br/>14 EVM Chains Supported]
Stage4 --> Stage5[Stage 5: Change Detection<br/>Compare with Database<br/>Find New/Updated Contracts]
Stage5 --> Stage6[Stage 6: Queue Creation<br/>Create Audit Records]
Stage6 --> Stage7[Stage 7: Stats Finalization<br/>Calculate Totals]
Stage7 --> Stage8[Stage 8: Send Notification<br/>Telegram Summary]
Stage8 --> Output[Output:<br/>Redis: audits queue<br/>Telegram: Summary message]
Stage3 -.->|No Contracts Found| Lucky[Roll the Dice<br/>Lucky Assets Feature]
Lucky --> Stage5
style Input fill:#e1f5ff
style Output fill:#e1ffe1
style Stage1 fill:#fff4e1
style Stage2 fill:#fff4e1
style Stage3 fill:#ffe1f5
style Stage4 fill:#fff4e1
style Stage5 fill:#fff4e1
style Stage6 fill:#fff4e1
style Stage7 fill:#ffe1f5
style Stage8 fill:#e1ffe1
- Container: Playwright-based browser automation
- Schedule: Daily cron (20 programs per run)
- Tech Stack: Playwright v1.57.0, Node.js 20, Chromium browser
- Responsibilities:
- Scrapes Immunefi bug bounty programs
- Detects new contracts and proxy implementation changes
- Extracts contract addresses via AI (Claude Haiku)
- Respects robots.txt with rate limiting
- Queues contracts to Redis job queue (audits queue)
- Output: Creates audit records in audits table with status: 'pending'
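Stages 3-4 of the scraper (address extraction and chain detection) can be sketched as follows. The real pipeline asks Claude Haiku to extract addresses; the regex here is a simplified stand-in, and the explorer-host to chain table is a hypothetical subset of the 14 supported chains.

```typescript
// Simplified stand-in for Stages 3-4: regex extraction instead of Claude
// Haiku, and a small illustrative explorer-host -> chain lookup table.
const EVM_ADDRESS = /0x[a-fA-F0-9]{40}/g;

const CHAIN_BY_HOST: Record<string, string> = {
  "etherscan.io": "ethereum",
  "polygonscan.com": "polygon",
  "arbiscan.io": "arbitrum",
  "basescan.org": "base",
};

function extractContracts(pageText: string): { address: string; chain: string }[] {
  const results: { address: string; chain: string }[] = [];
  for (const line of pageText.split("\n")) {
    // Stage 4: infer the chain from an explorer link on the same line, if any.
    const host = Object.keys(CHAIN_BY_HOST).find((h) => line.includes(h));
    for (const address of line.match(EVM_ADDRESS) ?? []) {
      results.push({ address, chain: host ? CHAIN_BY_HOST[host] : "unknown" });
    }
  }
  return results;
}
```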
2. Audit Processor (Dockerfile.worker)
graph TB
Input[Input:<br/>Redis: audits queue<br/>Audit records<br/>status: pending] --> Stage1[Stage 1: Source Fetching<br/>Resolve Proxy Implementations<br/>EIP-1967, EIP-1822<br/>Etherscan API + GitHub]
Stage1 --> Stage2[Stage 2: Pass 1 - Initial Scan<br/>Sonnet 4.5 AI<br/>~30 seconds<br/>$0.50/contract<br/>Detects Critical/High/Medium]
Stage2 --> Stage3[Stage 3: Pass 2 - Out-of-Scope Filter<br/>Database Lookup<br/>$0 cost<br/>Filters Before Expensive Verification]
Stage3 -->|In-Scope Only| Stage4[Stage 4: Pass 3 - Deep Verification<br/>Opus 4.5 AI<br/>~2 minutes<br/>$3.00/contract<br/>Only Critical/High In-Scope]
Stage3 -->|Out-of-Scope| Skip[Skip Opus<br/>Cost Optimization]
Stage4 --> Stage5[Stage 5: Classification<br/>DeFi Vulnerability Taxonomy<br/>V01-V13+ Dynamic Categories<br/>Database-Driven Keywords]
Stage5 --> Stage6[Stage 6: Storage<br/>Store Vulnerabilities<br/>Compress Source Code gzip<br/>Update Audit Status]
Stage6 -->|Critical/High In-Scope| Output[Output:<br/>Redis: pocs queue<br/>Database: vulnerabilities table<br/>Database: vulnerability_classifications<br/>Audit status: completed]
Skip --> Stage5
style Input fill:#e1f5ff
style Output fill:#e1ffe1
style Stage1 fill:#fff4e1
style Stage2 fill:#ffe1f5
style Stage3 fill:#e1ffe1
style Stage4 fill:#ffe1f5
style Stage5 fill:#fff4e1
style Stage6 fill:#fff4e1
style Skip fill:#ffcccc
- Container: Node.js worker with BullMQ
- Type: Always-on worker (polls Redis queue)
- Tech Stack: Node.js 20, BullMQ, Anthropic Claude API
- Responsibilities:
- Processes audits queue from Redis
- Fetches contract source from Etherscan/GitHub APIs
- Single Pass: Vulnerability Scan - Sonnet 4.5 comprehensive scan (~60-90s, $0.75-1.50/contract)
- Finds AND verifies exploitable vulnerabilities in one comprehensive pass
- Detects Critical/High/Medium severity (based on program bounty tiers)
- Uses Anthropic's SCONE-bench methodology for high-quality detection
- Only reports vulnerabilities that can be proven with PoCs showing financial impact (≥0.1 ETH)
- Filtering Pipeline:
- Out-of-scope validation (cost optimization - filters before expensive operations)
- Non-exploitable filtering (removes informational/theoretical findings)
- Classifies vulnerabilities into DeFi Vulnerability Taxonomy (V01-V13+: Immunefi Top 10 + 2025 patterns + dynamic categories)
- Uses database-driven keyword matching for classification
- Supports dynamic categories (V14, V15, etc.) added via admin UI
- Validates out-of-scope, known issues, and impact thresholds
- Stores findings in vulnerabilities table with Immunefi category classification
- Compresses source code after analysis (gzip)
- Updates audit status: queued → fetching_source → scanning → vulnerabilities_found
- Concurrency: 1 audit at a time (prevents API rate limit bursts)
- Status Flow: See AUTONOMOUS_BUG_BOUNTY_HUNTER.md for complete status definitions and UI mappings
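Stage 1's proxy resolution relies on EIP-1967, which fixes the implementation address at the storage slot keccak256("eip1967.proxy.implementation") - 1. A sketch of decoding that slot is below; the actual fetch would use `eth_getStorageAt(proxy, EIP1967_IMPLEMENTATION_SLOT, "latest")` via an RPC client, which is omitted here, and `implementationFromSlotValue` is an illustrative name.

```typescript
// EIP-1967 standard implementation slot:
// keccak256("eip1967.proxy.implementation") - 1
const EIP1967_IMPLEMENTATION_SLOT =
  "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc";

// eth_getStorageAt returns a 32-byte word; the implementation address is the
// low 20 bytes. A zero word means the contract is not an EIP-1967 proxy
// (or the slot is uninitialized).
function implementationFromSlotValue(word: string): string | null {
  const hex = word.replace(/^0x/, "").padStart(64, "0");
  const address = "0x" + hex.slice(24); // last 20 bytes = 40 hex chars
  return /^0x0{40}$/.test(address) ? null : address;
}
```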
3. PoC Executor (Dockerfile.poc)
✨ NEW: Interactive Generation with Iterative Refinement - The PoC executor now uses interactive generation for non-trivial vulnerabilities. Instead of generating complete code and testing separately, the AI agent works within an isolated Foundry sandbox, building exploits incrementally through test-fix loops. The agent reads existing code, makes targeted patches, tests immediately, and iterates until success. This approach is more efficient and produces higher quality PoCs. See AUTONOMOUS_BUG_BOUNTY_HUNTER.md for details.
✨ Checkpoint System - The PoC executor includes a checkpoint system that saves progress after each successful stage. If a PoC generation fails or crashes, it automatically resumes from the last successful checkpoint instead of starting over. This significantly improves reliability and reduces wasted API costs. See AUTONOMOUS_BUG_BOUNTY_HUNTER.md for details.
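The checkpoint idea can be sketched as persisting the last completed stage and resuming at the next one. Stage names below mirror the pipeline stages shown in this section, but the in-memory store, `saveCheckpoint`, and `resumeFrom` are illustrative, not the real schema or API.

```typescript
// Hedged sketch of checkpointed resumption: a crashed PoC run restarts at
// the stage after its last successful one instead of from scratch.
type Stage = "generation" | "setup" | "compilation" | "execution" | "storage";
const STAGES: Stage[] = ["generation", "setup", "compilation", "execution", "storage"];

interface Checkpoint {
  pocId: string;
  lastCompleted: Stage | null;
}

// Illustrative in-memory store; the real system persists checkpoints.
const store = new Map<string, Checkpoint>();

function saveCheckpoint(pocId: string, stage: Stage): void {
  store.set(pocId, { pocId, lastCompleted: stage });
}

/** Resume from the stage after the last successful one (or the start). */
function resumeFrom(pocId: string): Stage {
  const cp = store.get(pocId);
  if (!cp || cp.lastCompleted === null) return STAGES[0];
  const idx = STAGES.indexOf(cp.lastCompleted);
  return STAGES[Math.min(idx + 1, STAGES.length - 1)];
}
```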
graph TB
Input[Input:<br/>Redis: pocs queue<br/>Vulnerabilities<br/>Critical/High In-Scope] --> Stage1[Stage 1: PoC Generation<br/>Opus 4.5 AI<br/>Generate Exploit.t.sol<br/>~4 minutes<br/>$1.50/PoC]
Stage1 --> Stage2[Stage 2: Foundry Setup<br/>Create Temp Directory<br/>forge init<br/>Install forge-std<br/>Patch safeconsole.sol<br/>Configure foundry.toml]
Stage2 --> Stage3[Stage 3: Compilation<br/>forge build<br/>Auto solc Version Fallback<br/>0.8.31 → 0.8.0<br/>Update pragma if needed]
Stage3 -->|Compilation Success| Stage4[Stage 4: Execution<br/>forge test --fork-url<br/>Mainnet Fork Testing<br/>Parse Test Results<br/>Validate Exploit Success]
Stage3 -->|Compilation Failed| Fail1[Mark as Failed<br/>All Versions Tried]
Stage4 -->|Test Passed| Stage5[Stage 5: Storage<br/>Update Execution Status<br/>Store Logs<br/>Compress Source Code]
Stage4 -->|Test Failed| Retry{Retry Count<br/>< 3?}
Retry -->|Yes| Stage4
Retry -->|No| Fail2[Mark as Failed<br/>Max Retries Reached]
Stage5 --> Output[Output:<br/>Redis: submissions queue<br/>Database: bug_bounty_pocs<br/>execution_status: success]
Stage1 -.->|GitHub Contract| Manual[Manual Verification<br/>Required<br/>No Deployed Address]
Manual --> Fail1
style Input fill:#e1f5ff
style Output fill:#e1ffe1
style Stage1 fill:#ffe1f5
style Stage2 fill:#fff4e1
style Stage3 fill:#fff4e1
style Stage4 fill:#fff4e1
style Stage5 fill:#fff4e1
style Fail1 fill:#ffcccc
style Fail2 fill:#ffcccc
style Manual fill:#ffcccc
style Retry fill:#ffffcc
- Container: Foundry-enabled Docker container
- Type: Always-on worker (polls Redis queue)
- Tech Stack: Node.js 20, Foundry (forge, cast, anvil), BullMQ
- Responsibilities:
- Processes pocs queue from Redis
- Generates Foundry test exploits using Opus 4.5 (~4min, $1.50/PoC)
- Automated service: Generates PoC for highest severity vulnerability (cost optimization)
- Manual service: Generates PoC for specific vulnerability when requested via UI
- Initializes Foundry projects in isolated temp directories
- Compiles Solidity code with automatic solc version fallback
- Executes PoCs on mainnet fork (3 retries per PoC)
- Parses test results and updates execution status
- Handles GitHub contracts (manual verification required)
- Compresses source code after PoC completion
- Resource Requirements: 2 vCPU, 2GB RAM (for Foundry compilation)
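The automatic solc version fallback in Stage 3 can be sketched as walking a candidate list until a build succeeds. The version list is an illustrative subset and the `compile` callback is injected here so the logic can be shown without invoking Foundry; in the real worker it would run `forge build` with the chosen version (patching the pragma when needed).

```typescript
// Illustrative candidate versions, newest to oldest.
const CANDIDATE_VERSIONS = ["0.8.31", "0.8.20", "0.8.0"];

// Try each version until one compiles; `compile` stands in for `forge build`.
function compileWithFallback(compile: (solcVersion: string) => boolean): string | null {
  for (const version of CANDIDATE_VERSIONS) {
    if (compile(version)) return version; // first version that builds wins
  }
  return null; // all versions tried -> PoC marked as failed
}
```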
4. Telegram Notifier (Dockerfile.telegram)
graph TB
Input[Input:<br/>Redis: submissions queue<br/>Verified PoCs<br/>Successful Exploits] --> Stage1[Stage 1: Data Fetching<br/>Fetch Audit Record<br/>Vulnerabilities<br/>Program Data]
Stage1 --> Stage2[Stage 2: Report Generation<br/>Extract Contract Names<br/>Calculate Bounty Estimate<br/>Format Markdown Report]
Stage2 --> Stage3[Stage 3: PDF Generation<br/>Generate PDF Report<br/>Store in Database]
Stage3 --> Stage4[Stage 4: Storage<br/>Create submission_queue Entry<br/>Store Report Data]
Stage4 --> Decision{Severity?}
Decision -->|Critical| Immediate[Immediate Notification<br/>Send Right Away]
Decision -->|High| Batch[Batch Notification<br/>Wait 30 Minutes<br/>Group Multiple Bugs]
Immediate --> Stage5[Stage 5: Send Telegram<br/>Format Message<br/>Program Name<br/>Contract Address<br/>Severity<br/>Bounty Estimate]
Batch --> Stage5
Stage5 --> Stage6[Stage 6: Update Status<br/>Mark as notified<br/>Update Database]
Stage6 --> Output[Output:<br/>Telegram Message<br/>Database: submission_queue<br/>status: notified]
style Input fill:#e1f5ff
style Output fill:#e1ffe1
style Stage1 fill:#fff4e1
style Stage2 fill:#fff4e1
style Stage3 fill:#fff4e1
style Stage4 fill:#fff4e1
style Stage5 fill:#ffe1f5
style Stage6 fill:#fff4e1
style Immediate fill:#ffcccc
style Batch fill:#ffffcc
style Decision fill:#e1f5ff
- Container: Lightweight Node.js worker
- Type: Always-on worker (polls Redis queue)
- Tech Stack: Node.js 20, node-telegram-bot-api, BullMQ
- Responsibilities:
- Processes submissions queue from Redis
- Batches notifications every 30 minutes
- Sends immediate alerts for Critical bugs
- Formats reports with bounty estimates
- Updates submission_queue table with status: 'notified'
- Output: Telegram messages to configured chat ID
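The severity-based dispatch decision above (Critical sent immediately, High held for a 30-minute batch window) can be sketched as a single predicate. Field and type names here are illustrative, not the real submission_queue schema.

```typescript
// 30-minute batch window for High-severity findings.
const BATCH_WINDOW_MS = 30 * 60 * 1000;

interface QueuedFinding {
  severity: "critical" | "high";
  queuedAt: number; // epoch milliseconds
}

// Critical -> immediate alert; High -> wait out the batch window so several
// findings can be grouped into one message.
function shouldSendNow(finding: QueuedFinding, now: number): boolean {
  if (finding.severity === "critical") return true;
  return now - finding.queuedAt >= BATCH_WINDOW_MS;
}
```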
Core Tables:
immunefi_programs - Programs monitored (active/inactive status)
audits - Audit records with status tracking (pending → scanning → completed)
vulnerabilities - Found vulnerabilities with severity and classification
bug_bounty_pocs - PoC code, execution logs, and test results
submission_queue - Ready-to-submit bug reports with PDF generation
Learning System Tables:
detection_patterns - Learned patterns with success metrics and confidence scores
bounty_outcomes - Submission outcomes for continuous learning
vulnerability_classifications - DeFi Vulnerability Taxonomy mapping (V01-V13)
immunefi_categories - Top 10 reference data and bounty ranges
Supporting Tables:
unsupported_chains - Track chains we don't support yet
Architecture:
The system uses a router pattern for simplified job distribution:
- Main Queue (audits): Single entry point from API/scraper
- Router Worker: Distributes jobs to chain-specific queues based on chainId
- Chain Queues: 14 separate queues (audits-ethereum, audits-polygon, etc.)
- Chain Workers: One worker per chain (concurrency: 1) for parallel processing
Queues:
audits - Main entry queue (API → router worker)
audits-{chain} - Chain-specific queues (router → chain workers)
pocs - PoC generation/execution jobs (audit processor → PoC executor)
submissions - Notification jobs (PoC executor → telegram notifier)
Benefits:
- Simple API: Single queue for all audits
- Parallel Processing: Up to 14 audits simultaneously (1 per chain)
- Rate Limit Compliance: Each chain worker respects API rate limits
- Clean Separation: Router handles distribution, workers handle processing
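The router's distribution step reduces to mapping a job's chain onto its `audits-{chain}` queue name. The chain set below is an illustrative subset of the 14 supported chains and `routeQueueName` is a hypothetical helper; in the real worker an unsupported chain is recorded in the unsupported_chains table rather than thrown.

```typescript
// Illustrative subset of the 14 supported chains.
const SUPPORTED_CHAINS = new Set(["ethereum", "polygon", "arbitrum", "base"]);

// Map a chain to its chain-specific queue, following the audits-{chain}
// naming convention described above.
function routeQueueName(chain: string): string {
  const normalized = chain.toLowerCase();
  if (!SUPPORTED_CHAINS.has(normalized)) {
    throw new Error(`Unsupported chain: ${chain}`);
  }
  return `audits-${normalized}`;
}
```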
Redis: Upstash (free tier) for distributed job queue management
The system uses a centralized, efficient AI agent architecture powered by Vercel AI SDK:
Core Components:
- AgentPool (lib/ai/agents/agent-pool.ts) - Singleton pattern for agent reuse
  - Pre-configured agents (initial-scan, deep-verification, poc-generation)
  - Eliminates redundant agent instantiation
  - Reduces memory overhead and improves performance
- AgentContext (lib/ai/context/agent-context.ts) - Per-audit context management
  - Replaces global state hacks with proper encapsulation
  - Manages source code caching per audit
  - Handles tool result caching with proper cleanup
  - Prevents memory leaks with explicit context clearing
- Consolidated Prompts (lib/ai/prompts/index.ts) - Single source of truth
  - Initial scan prompts (Sonnet 4.5)
  - Deep verification prompts (Opus 4.5)
  - PoC generation prompts
  - Eliminates prompt duplication across codebase
- Streamlined Tools (lib/ai/tools/) - Essential tools only
  - Source code tools (section reading, function extraction)
  - Pattern detection tools (vulnerability database lookup)
  - Removed "echo" tools that added no value
  - Improved descriptions with "WHEN TO USE" guidance
Benefits:
- ✅ Efficient - Agents reused across audits, not recreated each time
- ✅ Clean - No global state, proper encapsulation
- ✅ Type-safe - Full TypeScript support
- ✅ Maintainable - Single source of truth for prompts and tools
- ✅ Tested - Comprehensive unit tests for architecture components
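The AgentPool singleton idea can be sketched as below. Agent construction is reduced to a plain object and the model identifier strings are assumptions for illustration; the real pool wires up Vercel AI SDK agents with their tools and prompts.

```typescript
type AgentName = "initial-scan" | "deep-verification" | "poc-generation";

interface Agent {
  name: AgentName;
  model: string;
}

// Singleton pool: each named agent is created once and reused across audits.
class AgentPool {
  private static instance: AgentPool | null = null;
  private agents = new Map<AgentName, Agent>();

  static shared(): AgentPool {
    return (AgentPool.instance ??= new AgentPool());
  }

  /** Return the cached agent, creating it on first use. */
  get(name: AgentName): Agent {
    let agent = this.agents.get(name);
    if (agent === undefined) {
      // Model assignment mirrors the pipeline description; ids are illustrative.
      const model = name === "initial-scan" ? "claude-sonnet-4.5" : "claude-opus-4.5";
      agent = { name, model };
      this.agents.set(name, agent);
    }
    return agent;
  }
}
```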
| Time | Event |
|---|---|
| Now | Workers waiting for first scrape |
| +24 hours | First Immunefi scrape runs |
| +24 hours | First audits complete |
| +48 hours | First PoC verified |
| +48 hours | 🎯 First Telegram alert! |
- Railway: ~$20/month (4 workers)
- Anthropic AI: ~$110-150/month
- Sonnet 4.5: $0.50/contract
- Opus 4.5: $3/contract (verification only)
- PoC generation: ~$1.50/contract
- Other Services: Free tier
Total: ~$130-170/month ✅
- Anthropic: https://console.anthropic.com/settings/billing
- Railway: https://railway.app/dashboard
All configured in Railway services. Stored locally in RAILWAY_COMPLETE.env (gitignored).
Key services:
- ✅ Supabase (database)
- ✅ Anthropic (AI)
- ✅ Alchemy (RPC for 14 chains)
- ✅ Etherscan V2 Unified API (supports all 14 chains)
- ✅ GitHub (source fetching)
- ✅ Upstash Redis (job queue)
- ✅ Telegram Bot (notifications)
Monitor system health in real-time:
# Check system health (no auth required)
curl https://your-domain.com/api/health
# Returns:
# - Service status (Redis, Supabase, Anthropic)
# - Response times for each service
# - Overall system status (healthy/degraded/unhealthy)

Response:
{
"status": "healthy",
"timestamp": "2026-01-XX...",
"services": [
{ "service": "redis", "healthy": true, "responseTime": 45 },
{ "service": "supabase", "healthy": true, "responseTime": 120 },
{ "service": "anthropic", "healthy": true, "responseTime": 890 }
],
"summary": { "total": 3, "healthy": 3, "unhealthy": 0 }
}

Monitor AI spending:
# Get cost data (requires auth)
curl -H "Authorization: Bearer <token>" \
https://your-domain.com/api/monitoring/costs?days=30
# Returns daily breakdown:
# - Sonnet calls and costs
# - Opus calls and costs
# - Total estimated costs
# - Summary for the period

Check each service shows:
- Scraper: "Waiting for cron trigger"
- Audit: "Polling audits table"
- PoC: "Waiting for vulnerabilities"
- Telegram: "Bot connected"
-- Active programs
SELECT COUNT(*) FROM immunefi_programs WHERE status = 'active';
-- Audits by status
SELECT status, COUNT(*) FROM audits GROUP BY status;
-- Vulnerabilities by severity
SELECT severity, COUNT(*) FROM vulnerabilities GROUP BY severity;
-- PoC results
SELECT execution_status, COUNT(*) FROM bug_bounty_pocs GROUP BY execution_status;
-- Cost tracking (via API or direct query)
SELECT * FROM api_cost_tracking ORDER BY date DESC LIMIT 30;

- ✅ Autonomous 24/7 monitoring
- ✅ Dynamic DeFi Vulnerability Taxonomy (V01-V13+) - Add new patterns via admin UI
- ✅ Database-driven classification - Detection keywords stored in database, no code changes needed
- ✅ Admin category management - Create/edit/disable categories via web interface
- ✅ Medium severity detection - Automatically enabled when programs offer medium bounties
- ✅ Self-learning system - Improves from every submission
- ✅ Three-pass AI verification (cost-optimized: Sonnet → Out-of-scope filter → Opus)
- ✅ Automated PoC generation with Foundry
- ✅ Mainnet fork execution
- ✅ Telegram notifications
- ✅ Intelligence dashboard - Track patterns and learning
- ✅ Single-user access (RLS enforced)
- ✅ Ethical scraping (respects robots.txt)
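The database-driven classification can be sketched as matching a finding's description against per-category keyword lists loaded from the immunefi_categories/vulnerability_classifications tables. The two sample categories and their keywords below are illustrative, not the full V01-V13+ taxonomy.

```typescript
interface Category {
  id: string; // e.g. "V01"
  keywords: string[];
}

// Illustrative sample of rows that would be loaded from the database,
// so new categories (V14, V15, ...) need no code changes.
const categories: Category[] = [
  { id: "V01", keywords: ["reentrancy", "callback"] },
  { id: "V02", keywords: ["oracle", "price manipulation"] },
];

// Tag a finding with the first category whose keywords match.
function classify(description: string): string | null {
  const text = description.toLowerCase();
  for (const category of categories) {
    if (category.keywords.some((k) => text.includes(k))) return category.id;
  }
  return null; // unclassified
}
```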
- AUTONOMOUS_BUG_BOUNTY_HUNTER.md - Complete technical documentation, troubleshooting, and advanced configuration
- Railway: https://railway.app/dashboard
- Supabase: https://supabase.com/dashboard
- Anthropic: https://console.anthropic.com
After running database migrations, the system is fully operational.
Next: Wait for Telegram notifications with bug reports ready to submit! 🎯
After each scraping job completes (daily cron or manual trigger), you'll receive a Telegram message showing:
- Programs processed (new vs existing with last audit dates)
- Contracts found (new vs already scanned)
- Total audits queued for processing
- Processing duration
Example notification:
✅ SCRAPE COMPLETED - Jan 14, 2026 10:30 AM
Duration: 8 minutes 32 seconds
📊 PROGRAMS (10 processed)
🆕 New Programs (3):
• Ether.Fi (5 contracts queued)
• Compound Finance (8 contracts queued)
• Aave V3 (2 contracts queued)
🔄 Existing Programs (7):
• MakerDAO (last audited: 2 days ago) - 3 contracts queued
• Curve Finance (last audited: 1 week ago) - 4 contracts queued
• Uniswap V3 (last audited: 3 days ago) - 0 contracts (all scanned)
• Lido (last audited: 5 days ago) - 2 contracts queued
... and 3 more
📦 CONTRACTS (45 found)
• New: 15 contracts
• Already scanned: 30 contracts
⚡ AUDITS QUEUED: 20 new audits
This allows you to monitor scraping activity and understand what's being processed without checking logs or the dashboard.
Built with:
- Next.js 16 - Frontend framework (App Router)
- Anthropic Claude - AI analysis (Sonnet 4.5 & Opus 4.5)
- Vercel AI SDK - AI agent orchestration and tool calling
- Supabase - PostgreSQL database with RLS
- Railway - Worker services (4 containers)
- Vercel - Frontend hosting
- Playwright - Browser automation for scraping
- Foundry - PoC execution and testing
- BullMQ - Job queue management
- Upstash Redis - Distributed Redis for queues
- Telegram Bot API - Notifications
- TypeScript - Type-safe development
- esbuild - Fast bundling for workers