dddleader/ideaForge_agent

Full-Stack AI Product Scaffold

This repository is structured as a full-stack scaffold with a shared project workspace:

  • backend/: FastAPI + LangGraph runtime, auth, persistence, and observability
  • frontend/: reserved frontend workspace for the product UI
  • workspace/: shared context, PRD, design system, and feature execution docs

The backend is runnable today. The frontend is intentionally left framework-neutral so you can drop in the UI stack you want for the next project.

Repository Layout

backend/    # API runtime, Docker stack, tests, observability configs
frontend/   # frontend app workspace placeholder
workspace/  # shared project docs for backend + frontend work

Shared Workspace Documents

Project knowledge should stay in the shared workspace/ directory rather than being duplicated in each layer.

Quick Start

Install the backend dependencies:

make install

Create the backend development env file:

cp backend/.env.example backend/.env.development
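If you re-run setup, a small guard keeps an existing env file from being clobbered. This is a convenience sketch, not part of the repo's Makefile:

```shell
# Copy the example env file only if no development env file exists yet,
# so re-running setup never overwrites local values
[ -f backend/.env.development ] || cp backend/.env.example backend/.env.development
```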

Fill in the required backend values:

OPENAI_API_KEY=...
JWT_SECRET_KEY=...

Start the default full stack:

make docker-compose-up ENV=development

Or run the backend locally with Docker Postgres:

cd backend
docker compose --env-file .env.development up -d db
cd ..
make dev
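When the API runs outside Docker, the database container may still be initializing when make dev starts. A hedged variant that waits for Postgres first (it assumes the compose service is named db, as above, and that pg_isready is available inside the image):

```shell
cd backend
docker compose --env-file .env.development up -d db
# Poll until Postgres accepts connections, then start the API
until docker compose --env-file .env.development exec db pg_isready -q; do
  sleep 1
done
cd ..
make dev
```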

Default Platform Stack

The default backend Docker stack includes:

  • FastAPI app
  • PostgreSQL with pgvector image
  • Prometheus
  • Grafana
  • Loki
  • Promtail

Optional host metrics are available through:

make docker-compose-observability-up ENV=development

Where To Work

  • backend runtime changes: start in backend/
  • frontend product UI changes: start in frontend/
  • product definition, design, and feature execution: start in workspace/

Notes

  • Root make commands proxy into backend/
  • The frontend directory is intentionally minimal until you pick a concrete UI stack
  • Use relative paths when extending repo documentation so the scaffold stays portable

Ralph Loop

This repo includes a local Ralph loop runner for Linear-driven implementation:

./ralph.sh --epic "YOUR-EPIC-ID-OR-NAME"

Useful flags:

  • --max-iterations 25
  • --model gpt-5.2
  • --allow-dirty
  • --dry-run
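Combining the flags above, a cautious first run might dry-run the loop with a low iteration cap ("MY-EPIC" is a placeholder for your Epic ID or name):

```shell
# Preview what the loop would do without letting it run away
./ralph.sh --epic "MY-EPIC" --dry-run --max-iterations 5
```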

The runner uses RALPH_PROMPT.md plus ralph_output.schema.json to make Codex work one issue at a time inside the specified Epic and stop on done, no_ready_issue, or blocked.

Ralph Loop is TDD-first:

  • use Red-Green-Refactor for code changes
  • prefer integration-style tests through public interfaces
  • do not mark an issue complete if required validation cannot be run
