A full-stack job search management tool that automatically finds job listings matching your resume, scores them with AI, and tracks your applications through a unified dashboard.
- Upload a resume (PDF) — Gemini extracts your skills, titles, experience, and industries
- Configure search filters — location, remote preference, seniority level, keywords, excluded companies, listing age
- Auto-discover jobs — a daily cron job queries Google Jobs via SerpAPI, deduplicates results, and filters by your preferences
- AI match scoring — jobs are scored 0–100 in batches against your resume with written explanations
- Track applications — move jobs through a pipeline (applied → screening → interview → offer / rejected / ghosted) with "furthest stage reached" tracking for granular rejection analytics
- Search, sort & filter — debounced search bars and sort dropdowns (by score, date, title, company) on both the job inbox and applications pages, combined with tab/status filters and paginated results
- Saved job reminders — amber badge on the sidebar shows saved count, and saved job cards display color-coded aging indicators (green ≤3 days, amber 4–7 days, red >7 days)
- Dashboard analytics — response rate, average time to hear back, weekly volume, status breakdown, rejection funnel by pipeline stage, and a skills profile
- Pipeline logs — persistent daily logs of every search run (SerpAPI results, Gemini model used, scoring, errors) viewable in-app
- Automated backups — daily database snapshots to Supabase Storage with 30-day retention
- Demo mode — read-only guest account with sample data for showcasing the app
| Layer | Technology |
|---|---|
| Framework | Next.js 16 (App Router, Server Components) |
| Language | TypeScript |
| Database | Supabase (PostgreSQL + Auth + Storage + RLS) |
| AI | Google Gemini (3 Flash → 2.5 Flash → 3.1 Flash Lite fallback chain) |
| Job Data | SerpAPI (Google Jobs engine) |
| Styling | Tailwind CSS 4 + Shadcn UI |
| Charts | Recharts |
| Testing | Vitest (115 unit tests) |
| Hosting | Vercel (with Cron for daily search) |
```
src/
  app/
    (app)/                # Authenticated pages
      page.tsx            # Dashboard with analytics + rejection funnel
      inbox/              # Job inbox with search, sort, filtering, pagination, bulk actions
      resumes/            # Resume management + search filter config
      applications/       # Application pipeline tracker with search + sort
      logs/               # Pipeline log viewer (admin only)
    api/
      jobs/               # CRUD + bulk update + manual search trigger
      resumes/            # Upload, delete, filter management
      applications/       # Application CRUD with status history
      stats/              # Dashboard analytics
      logs/               # Pipeline log list + detail
      cron/daily-search/  # Vercel Cron entrypoint
    auth/                 # Supabase auth callback
    login/                # Magic link authentication + demo login
  components/             # Resume card, upload dropzone, sidebar, Shadcn primitives
  lib/
    search/               # Core search pipeline
      query-builder.ts    # Resume data + filters -> optimized search queries
      serpapi.ts          # SerpAPI client + pagination + job normalization
      matcher.ts          # Gemini batch scoring (5 jobs per API call)
      location-filter.ts  # Post-fetch geographic filter + remote keyword detection
      execute.ts          # Orchestrator with structured pipeline logging
    resume-parser.ts      # Gemini-powered PDF resume extraction
    gemini.ts             # Shared Gemini client with automatic model fallback
    pipeline-logger.ts    # Structured log collection, markdown formatting, storage persistence
    db-backup.ts          # Daily database snapshots to Supabase Storage
    supabase/             # Server, browser, and service role client helpers
    date-utils.ts         # Timezone-safe date formatting
    types.ts              # Shared TypeScript interfaces
  proxy.ts                # Auth guard + demo account write protection (Next.js 16 proxy)
```
- Batch AI scoring — multiple jobs are scored in a single Gemini API call (batches of 5) to stay within rate limits while maintaining score quality
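The batching step can be sketched as follows. This is an illustrative outline, not the actual `matcher.ts` implementation: the `Job` shape and `scoreBatch` callback are assumptions, while the batch size of 5 comes from the README.

```typescript
interface Job {
  id: string;
  title: string;
  description: string;
}

const BATCH_SIZE = 5; // per the README: 5 jobs scored per Gemini API call

// Split an array into fixed-size groups.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// scoreBatch stands in for the real Gemini call; each batch is scored
// in a single request to stay within rate limits.
async function scoreJobs(
  jobs: Job[],
  scoreBatch: (batch: Job[]) => Promise<number[]>
): Promise<Map<string, number>> {
  const scores = new Map<string, number>();
  for (const batch of chunk(jobs, BATCH_SIZE)) {
    const results = await scoreBatch(batch);
    batch.forEach((job, i) => scores.set(job.id, results[i]));
  }
  return scores;
}
```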
- Shared search executor — the cron job and the "Search Now" button both call `executeJobSearch` directly, avoiding HTTP round-trips and auth issues
- Row Level Security — Supabase RLS policies enforce per-user data isolation at the database level; the demo account's data is completely separate
- Structured pipeline logging — search runs produce categorized markdown logs (SerpAPI results, filtering summaries, score distributions, errors) persisted to Supabase Storage with 14-day retention
- Database-level status tracking — a PostgreSQL `BEFORE UPDATE` trigger logs every application status change to `status_history`, updates `status_updated_at`, and auto-advances `furthest_stage` (the highest pipeline stage reached, used for rejection funnel analytics)
- Batched deduplication — URL-based dedup uses a single `IN` query per search instead of per-job queries, with a `Set` for O(1) cross-query tracking
- Graceful AI fallbacks — a three-model fallback chain (Gemini 3 Flash → 2.5 Flash → 3.1 Flash Lite) ensures API calls succeed even during outages; if all models fail during scoring, jobs default to a score of 50
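A minimal sketch of the fallback chain, assuming a generic async call signature; the model identifier strings are guesses derived from the README's chain, not confirmed Gemini API model IDs.

```typescript
// Models are tried in README order; the call shape is illustrative.
const MODEL_CHAIN = [
  "gemini-3-flash",       // assumed ID for "Gemini 3 Flash"
  "gemini-2.5-flash",     // assumed ID for "2.5 Flash"
  "gemini-3.1-flash-lite" // assumed ID for "3.1 Flash Lite"
];

const DEFAULT_SCORE = 50; // fallback score when every model fails

async function withModelFallback<T>(
  call: (model: string) => Promise<T>,
  fallback: T
): Promise<T> {
  for (const model of MODEL_CHAIN) {
    try {
      return await call(model);
    } catch {
      // Any failure (outage, rate limit) moves on to the next model.
    }
  }
  return fallback;
}
```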
- Post-fetch location filtering — non-remote jobs from distant locations are filtered after SerpAPI returns but before Gemini scoring, using keyword-based remote detection and state/city matching against the user's location filter
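The filtering idea can be sketched like this. The keyword list and the containment-based matching rule are assumptions for illustration; only the overall approach (keyword-based remote detection plus city/state matching, run before scoring) comes from the README.

```typescript
// Hypothetical remote-work keywords; the real list in location-filter.ts may differ.
const REMOTE_KEYWORDS = ["remote", "work from home", "anywhere"];

function isRemote(jobLocation: string, title: string): boolean {
  const haystack = `${jobLocation} ${title}`.toLowerCase();
  return REMOTE_KEYWORDS.some((kw) => haystack.includes(kw));
}

function matchesUserLocation(jobLocation: string, userFilter: string): boolean {
  // Naive containment check: filter "Austin, TX" matches "Austin, TX, USA".
  const loc = jobLocation.toLowerCase();
  return userFilter
    .toLowerCase()
    .split(",")
    .some((part) => loc.includes(part.trim()));
}

// Keep a job when it is remote or plausibly local, so distant on-site
// jobs never reach (and never consume) Gemini scoring.
function passesLocationFilter(
  job: { location: string; title: string },
  userFilter: string
): boolean {
  return isRemote(job.location, job.title) || matchesUserLocation(job.location, userFilter);
}
```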
- Saved job aging — saved cards show a color-coded "Saved X days ago" badge (green/amber/red) to discourage letting saved listings go stale
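The badge logic reduces to a small threshold function; the color cutoffs come straight from the README (green ≤ 3 days, amber 4–7 days, red > 7 days), while the function name and signature are illustrative.

```typescript
type BadgeColor = "green" | "amber" | "red";

// Map how long a job has been saved to its aging badge color.
function savedAgeBadge(savedAt: Date, now: Date = new Date()): BadgeColor {
  const MS_PER_DAY = 24 * 60 * 60 * 1000;
  const days = Math.floor((now.getTime() - savedAt.getTime()) / MS_PER_DAY);
  if (days <= 3) return "green"; // fresh
  if (days <= 7) return "amber"; // getting stale
  return "red";                  // act on it or dismiss it
}
```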
- Demo account isolation — middleware blocks all non-GET requests for the demo user; pipeline logs and admin features are hidden from demo sessions
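The write guard amounts to a one-line predicate; this is a reduced sketch of the check described above, not the actual `proxy.ts` code, and how the demo user is detected is left out.

```typescript
// Demo sessions may only read: any non-GET request is rejected.
function isWriteBlocked(method: string, isDemoUser: boolean): boolean {
  return isDemoUser && method.toUpperCase() !== "GET";
}
```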
The daily cron job (`/api/cron/daily-search`) runs the following steps in order:
- Database backup — snapshot all critical tables to Supabase Storage
- Prune old backups — delete backup files older than 30 days
- Clean up dismissed jobs — remove jobs dismissed more than 3 months ago
- Prune old pipeline logs — delete log files older than 14 days
- Execute job search — query SerpAPI, deduplicate, batch-score with Gemini, insert new jobs
- Persist pipeline logs — write the run's log to Supabase Storage
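The entrypoint can be sketched as a secret check followed by a sequential step runner. Vercel Cron sends an `Authorization: Bearer <CRON_SECRET>` header when `CRON_SECRET` is set, which is what the check below assumes; the step runner itself is an illustration, with the real step bodies stubbed out.

```typescript
type Step = { name: string; run: () => Promise<void> };

async function runDailyPipeline(
  authHeader: string | null,
  secret: string,
  steps: Step[]
): Promise<string[]> {
  // Reject requests that don't carry the expected Vercel Cron bearer token.
  if (authHeader !== `Bearer ${secret}`) {
    throw new Error("unauthorized cron request");
  }
  const completed: string[] = [];
  for (const step of steps) {
    await step.run(); // steps run sequentially, in the order listed above
    completed.push(step.name);
  }
  return completed;
}
```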
| Variable | Purpose |
|---|---|
| `NEXT_PUBLIC_SUPABASE_URL` | Supabase project URL |
| `NEXT_PUBLIC_SUPABASE_ANON_KEY` | Supabase anonymous key (client-side) |
| `SUPABASE_SERVICE_ROLE_KEY` | Supabase service role key (server-side, bypasses RLS) |
| `GEMINI_API_KEY` | Google Gemini API key |
| `SERPAPI_API_KEY` | SerpAPI key for Google Jobs searches |
| `CRON_SECRET` | Secret for authenticating Vercel Cron requests |
```bash
# Install dependencies
npm install

# Copy environment template and fill in values
cp .env.local.example .env.local

# Run development server
npm run dev

# Run tests
npm test
```
- Database schema — Run `supabase/setup.sql` in the Supabase SQL Editor. This creates all tables, indexes, RLS policies, and the status change trigger.
- Storage buckets — Create three private buckets in Supabase Storage:

  | Bucket | Purpose | Allowed MIME |
  |---|---|---|
  | `resumes` | PDF resume files | `application/pdf` |
  | `pipeline-logs` | Daily search run logs | `text/markdown` |
  | `db-backups` | Database snapshots | `application/json` |

  For each bucket, add Storage policies granting `authenticated` users SELECT, INSERT, UPDATE, and DELETE access.
- Authentication — Enable email/password auth in Supabase Auth settings. Add `http://localhost:3000` to the Redirect URLs list.
- (Optional) Demo account — To set up a read-only demo mode:
  - Create a user with email `demo@guidepostai.app` in Supabase Auth
  - Run `npx tsx scripts/seed-demo.ts` to populate sample data (this only affects the demo account via RLS)
Note: The `supabase/migrations/` directory contains the historical incremental migrations used during development. For fresh installs, use `supabase/setup.sql` instead.
MIT