This repository is structured as a full-stack scaffold with a shared project workspace:
- `backend/`: FastAPI + LangGraph runtime, auth, persistence, and observability
- `frontend/`: reserved frontend workspace for the product UI
- `workspace/`: shared context, PRD, design system, and feature execution docs
The backend is runnable today. The frontend is intentionally left framework-neutral so you can drop in the UI stack you want for the next project.
```
backend/    # API runtime, Docker stack, tests, observability configs
frontend/   # frontend app workspace placeholder
workspace/  # shared project docs for backend + frontend work
```
Project knowledge should stay in the shared workspace rather than being duplicated per layer:
- `workspace/context.md`: project context, terminology, and architecture boundaries
- `workspace/prd.md`: project-level product source of truth
- `workspace/design.md`: global design rules for future frontend work
- `workspace/features/<feature-slug>.md`: one feature's execution document
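A new feature's execution document follows the `workspace/features/<feature-slug>.md` convention. A minimal sketch of bootstrapping one (the slug and section headings here are illustrative, not prescribed by the repo):

```shell
slug="example-feature"   # illustrative slug, not a real feature in this repo
mkdir -p workspace/features

# Seed the execution doc; point back at the shared docs instead of duplicating them.
cat > "workspace/features/${slug}.md" <<'EOF'
# Example Feature

## Context
Link back to workspace/prd.md and workspace/context.md rather than copying them.

## Execution notes
EOF
```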
Install the backend dependencies:

```shell
make install
```

Create the backend development env file:

```shell
cp backend/.env.example backend/.env.development
```

Fill in the required backend values:

```
OPENAI_API_KEY=...
JWT_SECRET_KEY=...
```

Start the default full stack:

```shell
make docker-compose-up ENV=development
```

Or run the backend locally with Docker Postgres:

```shell
cd backend
docker compose --env-file .env.development up -d db
cd ..
make dev
```

The default backend Docker stack includes:
- FastAPI app
- PostgreSQL with pgvector image
- Prometheus
- Grafana
- Loki
- Promtail
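Before (re)starting the stack, it can help to confirm the required env values are actually set. A minimal sketch, assuming the key names from the setup steps above (`check_env` is a hypothetical helper, not part of the repo's Makefile):

```shell
# check_env FILE KEY...: succeed only if every KEY has a non-empty value in FILE.
check_env() {
  file="$1"; shift
  for key in "$@"; do
    grep -Eq "^${key}=.+" "$file" || { echo "missing: ${key}" >&2; return 1; }
  done
  echo "env ok"
}

# Usage: check_env backend/.env.development OPENAI_API_KEY JWT_SECRET_KEY
```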
Optional host metrics are available through:

```shell
make docker-compose-observability-up ENV=development
```

Where to start:

- backend runtime changes: start in `backend/`
- frontend product UI changes: start in `frontend/`
- product definition, design, and feature execution: start in `workspace/`

Conventions:

- Root `make` commands proxy into `backend/`
- The frontend directory is intentionally minimal until you pick a concrete UI stack
- Use relative paths when extending repo documentation so the scaffold stays portable
This repo includes a local Ralph loop runner for Linear-driven implementation:

```shell
./ralph.sh --epic "YOUR-EPIC-ID-OR-NAME"
```

Useful flags:

- `--max-iterations 25`
- `--model gpt-5.2`
- `--allow-dirty`
- `--dry-run`
The runner uses `RALPH_PROMPT.md` plus `ralph_output.schema.json` to make Codex work one issue at a time inside the specified Epic, stopping on `done`, `no_ready_issue`, or `blocked`.
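The stop-condition behavior can be sketched as a simple loop (a simplification of what `ralph.sh` does; `run_codex_iteration` is a hypothetical stand-in for one Codex pass, and the real runner parses Codex output against `ralph_output.schema.json`):

```shell
# Hypothetical stand-in for one Codex pass over the next ready issue.
# Here it pretends the third issue completes the epic.
run_codex_iteration() {
  [ "$i" -ge 2 ] && echo "done" || echo "in_progress"
}

max_iterations=25
i=0
status="in_progress"
while [ "$i" -lt "$max_iterations" ]; do
  status=$(run_codex_iteration)
  case "$status" in
    done|no_ready_issue|blocked) break ;;  # the runner's stop conditions
  esac
  i=$((i + 1))
done
echo "stopped with status: $status"
```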
Ralph Loop is TDD-first:
- use Red-Green-Refactor for code changes
- prefer integration-style tests through public interfaces
- do not mark an issue complete if required validation cannot be run
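In miniature, a Red-Green cycle against a public interface looks like this (a language-agnostic shell sketch; `slugify` is a hypothetical function, and the backend's real tests live under `backend/`):

```shell
# Red: the test exists before the implementation and fails.
slugify() { :; }   # deliberately unimplemented stub

test_slugify() {
  [ "$(slugify 'My Feature')" = "my-feature" ]
}

test_slugify && echo "green" || echo "red"    # prints "red"

# Green: the minimal implementation that makes the test pass.
slugify() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-'
}

test_slugify && echo "green" || echo "red"    # prints "green"

# Refactor: improve internals while test_slugify stays green.
```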