AI-powered job matching that analyzes your resume and surfaces roles that fit your trajectory — not just your keywords.
PathAI is a full-stack job platform where candidates upload a resume and receive AI-ranked job matches. The matching engine combines GPT-4o-mini structured extraction with pgvector cosine similarity to surface roles that are genuinely relevant — correctly levelled by seniority, ranked by semantic fit.
Built as a portfolio project demonstrating vector search, AI-driven personalization, and a polished multi-page product experience.
| Feature | Description |
|---|---|
| Resume Matching | Upload PDF/DOCX — GPT-4o-mini extracts your profile, pgvector ranks jobs by semantic similarity |
| Personalized Match Scores | Every job detail page computes your cosine similarity score on demand against your resume embedding |
| Apply Flow | External job links open the posting then prompt "Did you apply?" — confirmed applications land in your dashboard |
| ATS Dashboard | Track applications across stages: Applied → Phone Screen → Interview → Offer → Hired |
| Saved Jobs | Bookmark any job from the feed; saved roles appear in a dedicated dashboard section |
| Interview Prep | Role-specific interview questions generated per job |
| Company Logos | Clearbit logo API on all job cards with letter-initial fallback |
| Authentication | Supabase Auth with email/password, email confirmation flow, and protected routes |
Pure vector similarity has a well-known blind spot: a Junior Engineer and a VP of Engineering share nearly identical embeddings because the domain is the same. PathAI solves this with a two-layer approach.
### Layer 1 — Seniority Pre-Filter
On resume upload, GPT-4o-mini extracts structured metadata:
```json
{
  "seniority": "senior",
  "domain": "software_engineering",
  "skills": ["python", "react", "aws"],
  "years_experience": 8
}
```

Jobs are filtered to include only roles within a compatible seniority band before any vector math runs. A mid-level engineer never sees intern or VP postings.
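The band check can be sketched as a simple adjacency rule. This is a minimal illustration, not PathAI's actual code — the ladder of band names and the one-rung tolerance are assumptions:

```python
# Hypothetical seniority ladder; PathAI's real bands may differ.
SENIORITY_LADDER = ["intern", "junior", "mid", "senior", "staff", "vp"]

def compatible_bands(candidate_seniority: str, tolerance: int = 1) -> set[str]:
    """Return the seniority bands within `tolerance` rungs of the candidate."""
    idx = SENIORITY_LADDER.index(candidate_seniority)
    lo = max(0, idx - tolerance)
    hi = min(len(SENIORITY_LADDER), idx + tolerance + 1)
    return set(SENIORITY_LADDER[lo:hi])

def prefilter(jobs: list[dict], candidate_seniority: str) -> list[dict]:
    """Drop jobs outside the compatible band before any vector math runs."""
    bands = compatible_bands(candidate_seniority)
    return [job for job in jobs if job["seniority"] in bands]
```

For a mid-level candidate this yields `{"junior", "mid", "senior"}`, so intern and VP postings never reach the vector stage.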
### Layer 2 — Vector Similarity
Each job stores a pre-computed pgvector embedding of its title and description. The resume text is embedded at upload time via text-embedding-3-small. Cosine distance is computed in-database and the top matches are returned from the already-filtered pool.
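In production, pgvector's `<=>` operator computes cosine distance inside Postgres; the math it ranks by is plain cosine similarity, sketched here in pure Python for clarity (the `jobs` dict shape is illustrative, not PathAI's schema):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """1 minus cosine distance — the quantity `embedding <=> query` orders by."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_matches(resume_vec: list[float], jobs: list[dict], k: int = 20) -> list[tuple]:
    """Rank the already seniority-filtered pool by semantic fit, best first."""
    scored = [(cosine_similarity(resume_vec, job["embedding"]), job) for job in jobs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:k]
```

The in-database version avoids pulling every embedding over the wire: Postgres orders by `embedding <=> :resume_embedding` and returns only the top `k` rows.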
The result: semantically relevant roles, correctly levelled.
```
┌──────────────────────┐         ┌──────────────────────┐
│     Next.js 16       │ ──────► │   FastAPI Backend    │
│     (Vercel)         │ ◄────── │     (Railway)        │
└──────────────────────┘         └──────────┬───────────┘
                                            │
                                            ▼
                                 ┌──────────────────────┐
                                 │      Supabase        │
                                 │ PostgreSQL + pgvector│
                                 └──────────┬───────────┘
                                            │
                                            ▼
                                 ┌──────────────────────┐
                                 │     OpenAI API       │
                                 │  text-embedding-3-   │
                                 │  small + gpt-4o-mini │
                                 └──────────────────────┘
```
### Frontend
| Library | Purpose |
|---|---|
| Next.js 16 | App Router, SSR, routing |
| React 19 | UI framework |
| TypeScript | Type safety |
| Tailwind CSS | Styling |
| Framer Motion | Animations |
### Backend
| Library | Purpose |
|---|---|
| FastAPI | REST API |
| Python 3.14 | Runtime |
| SQLAlchemy 2 | ORM |
### Data & AI
| Service | Purpose |
|---|---|
| Supabase | PostgreSQL, Auth |
| pgvector | Vector similarity search |
| OpenAI | Embeddings + extraction |
```
PathAI/
├── frontend/                      # Next.js 16 — App Router
│   └── src/
│       ├── app/
│       │   ├── page.tsx           # Landing page
│       │   ├── jobs/page.tsx      # Job listings with filters + pagination
│       │   ├── jobs/[id]/page.tsx # Job detail + AI Match Analysis widget
│       │   ├── resume/page.tsx    # Resume upload + profile analysis
│       │   ├── dashboard/page.tsx # ATS dashboard, saved jobs, applications
│       │   ├── about/page.tsx     # About / how it works
│       │   └── auth/page.tsx      # Sign in / sign up
│       ├── components/
│       │   └── CompanyLogo.tsx    # Clearbit logo with letter-initial fallback
│       ├── context/
│       │   └── AuthContext.tsx    # Supabase session management
│       └── lib/
│           ├── api.ts             # All API calls (jobs, resume, match score, saved)
│           └── supabase.ts        # Supabase client
│
└── backend/                       # FastAPI (Python)
    ├── main.py                    # All routes + middleware
    ├── models.py                  # SQLAlchemy ORM models
    ├── schemas.py                 # Pydantic request / response schemas
    ├── auth.py                    # Supabase JWT verification
    ├── seed_data.py               # Synthetic job generator
    └── services/
        ├── matching.py            # Dual-layer matching logic
        ├── resume_parser.py       # PDF/DOCX extraction + GPT-4o-mini parsing
        └── embedding.py           # OpenAI embeddings wrapper
```
- Python 3.11+
- Node.js 18+
- Supabase project (free tier works)
- OpenAI API key
```bash
cd backend
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt
python seed_data.py       # Populate DB with synthetic jobs
uvicorn main:app --reload # http://localhost:8000
```

`backend/.env`:

```env
SUPABASE_URL=
SUPABASE_SERVICE_KEY=
SUPABASE_JWT_SECRET=
OPENAI_API_KEY=
DATABASE_URL=
```

```bash
cd frontend
npm install
npm run dev # http://localhost:3000
```

`frontend/.env.local`:

```env
NEXT_PUBLIC_SUPABASE_URL=
NEXT_PUBLIC_SUPABASE_ANON_KEY=
NEXT_PUBLIC_API_URL=http://localhost:8000
```