A step-by-step vibe-coding framework with prompts, templates, and best practices.
A practical, end‑to‑end method to ship with AI like an experienced engineering manager: clear intent → tight architecture → strict conventions → iterative delivery. No fluff, just steps, templates, and prompts you can paste into your AI tools.
- Create the docs listed below in your repo(s). Keep them short, living, and versioned.
- Use the provided prompts verbatim or adapt them to your project. IMPORTANT: iterate on the output until you are satisfied. A prompt is a starting point; you will often have to ask for adjustments before accepting a result. If you're not sure about a result, ask another LLM for a third opinion.
- Treat AI as a senior engineer paired with you: specify, review, iterate.
- Solve one problem well. Cut scope until it’s trivial to explain in 30 seconds.
- KISS/YAGNI/MVP. Favor the simplest thing that works; evolve from there.
- Context is king. Persist idea/spec/architecture/rules in the repo. Feed them into every AI prompt.
- Determinism beats vibes. Write checklists and acceptance criteria. Make AI confirm the plan before coding.
- Tight loops. Propose → agree → implement → test → document → release.
Define the problem, audience, and constraints. Keep it on a single page.
Later, once project structure is defined, we will put it under docs/idea.md.
It will be the main context for the project.
It’s important to include a monetization strategy, as it can affect the project specification.
Below you can find an example of an idea description:
And here are some questions to crystallise your idea:
# Idea one‑pager
- Working name:
- Problem (one sentence):
- Who has it (persona):
- Why now:
- Non‑goals (out of scope):
- Core workflow (3–5 steps):
- Surfaces: web / iOS / Android / others
- Monetization (if any):
- Success metrics (1–3): activation, retention day‑7, conversion, etc.
- Risks & assumptions:
- Constraints: time, budget, data, compliance
We are going to work like pros: iterate step by step and keep every change recorded. For this purpose we will use a Git repository to store all project-related documentation, code, and other artefacts. If you aren’t familiar with Git yet, learn the basics before continuing.
You don't have to choose a structure. Give the AI your idea and constraints, and let it propose (and scaffold) the best layout for you.
Prompt:
Using the attached `docs/idea.md`, propose and scaffold the repository layout that best fits this idea and constraints. Decide between a true monorepo or a constellation of repos and briefly justify the choice. Output: (1) a folder tree, (2) a short rationale, (3) commands to initialize repos/workspaces, (4) baseline `README.md` files, and (5) locations for `docs/`, `prompts/`, `rules/`, `decisions/`, and any shared packages. Create a `docs/structure.md` document that describes the project structure, repository layout, etc.
A technical specification (or project specification) is needed to document all project requirements, and define resources, timelines, and costs. This document serves as the foundation for further planning and execution, minimizes misunderstandings, and helps ensure that the final result meets our expectations. A concise technical spec is the single source of truth.
Example Specification Structure (what it covers):
- Summary: problem, target user, success metrics.
- Architecture overview: modules, boundaries, data flow diagram.
- Domain model: entities, relationships, invariants.
- API contracts: request/response examples, status codes, errors.
- Testing strategy: unit, component/API, and e2e smoke; coverage expectations; acceptance criteria mapping to tests.
- Data storage: schema, indexes, migrations, retention.
- Security: authZ/authN, secrets, PII handling, rate limits.
- Performance budgets: p95 targets, payload sizes, SLIs/SLOs.
- Observability: logging, tracing, metrics, dashboards.
- Feature flags & config: envs, toggles, kill switches.
- i18n & accessibility (a11y).
- Analytics: events, properties, funnels.
- Non‑goals & future work.
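To make the “API contracts” bullet concrete, here is a minimal sketch of one contract entry expressed as TypeScript types. The route, field names, and limits are hypothetical examples, not part of any real spec:

```typescript
// Hypothetical contract entry: POST /api/notes.
// Request/response shapes as TypeScript types so both the spec and the
// implementation can reference the same definitions.

type CreateNoteRequest = {
  title: string; // 1-120 chars
  body: string;  // markdown, max 10 000 chars
};

type CreateNoteResponse =
  | { status: 201; note: { id: string; title: string; createdAt: string } }
  | { status: 400; error: { code: "VALIDATION_ERROR"; message: string } }
  | { status: 401; error: { code: "UNAUTHORIZED"; message: string } };

// A tiny validator mirroring the contract, so acceptance criteria such as
// "rejects empty titles" map directly to tests.
function validateCreateNote(input: unknown): input is CreateNoteRequest {
  if (typeof input !== "object" || input === null) return false;
  const { title, body } = input as Record<string, unknown>;
  return (
    typeof title === "string" && title.length >= 1 && title.length <= 120 &&
    typeof body === "string" && body.length <= 10000
  );
}
```

Keeping the contract in the spec this way makes it trivial to ask the AI to generate tests against it later.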
Prompt:
I want to build the app described in `docs/idea.md`. I’m going to follow the structure defined in `docs/structure.md`. Write a concise, implementable technical specification that will act as the roadmap and documentation. The specification should include the technical stack, project architecture, domain and data model, API, testing strategy, storage, security, performance budgets, observability & logging strategy, feature flags, i18n/a11y, analytics, non‑goals, and deployment strategy. Make it detailed enough to cover every piece of the project. Think it through. Ask questions if something is unclear or if multiple options are possible. At the same time, don’t over-engineer: follow the KISS principle and focus on a production-ready MVP. Save the specification at `docs/specification.md`.
Hint: You don’t need to accept the result as is. Iterate and ask for changes until you are satisfied. If you aren’t technical at all, copy the specification into another LLM (e.g. OpenAI/Grok/Claude) and ask what it thinks of it. This gives you a third opinion and shows whether any changes are needed.
When working on a large project with a big team, it’s crucial that all code follows a consistent style. Consistency reduces confusion, makes the code easier to read, speeds up onboarding, and helps developers quickly understand and change parts of the system – even those they didn’t originally write.
That’s why companies create coding guidelines and best practices. New developers are expected to follow these standards rather than write code however they’re used to.
The same goes for AI: if we want consistent, maintainable code, we should define the rules and approaches it must follow from the start. Conventions remove ambiguity and make AI output consistent.
Example Convention Structure:
- Language & stack: TypeScript everywhere; Next.js fullstack; Supabase for DB/auth/storage.
- Code style: Prettier + ESLint; strict TS; no `any`; explicit return types.
- Naming: kebab-case files, PascalCase components, camelCase vars, UPPER_SNAKE envs.
- Project structure: `app/`, `lib/`, `components/`, `db/`, `server/`, `tests/`.
- State & data: React Server Components where possible; client state isolated; server actions for mutations; Zod for IO schemas.
- Errors & logging: never swallow; typed errors; `logger.error` with request id; user‑safe messages.
- Security: input validation at edges; auth on server; least privilege; no secrets in clients.
- Testing: unit (Vitest/Jest), component (React Testing Library), API (supertest), and a Playwright smoke path where applicable. Every PR must add/adjust tests and pass CI.
- Commits/PRs: Conventional Commits; small PRs; checklists; screenshots for UI.
- Docs: every module gets a `README.md` + usage examples.
- Performance: avoid N+1; stream where possible; image optimization by default.
- Accessibility: semantic HTML; labels/roles; focus management; color‑contrast.
- Don’ts: no hidden coupling; no global singletons; no random libs; no TODO‑land in prod.
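As a sketch of how the “typed errors, user-safe messages” convention might look in code: `AppError` and `toUserResponse` are illustrative names (not from the conventions above), and the real logger call is stubbed out in a comment:

```typescript
// Sketch of "typed errors + user-safe messages". AppError carries a
// machine-readable code and a message safe to show in the UI; internal
// details stay server-side, keyed by request id.

class AppError extends Error {
  constructor(
    public readonly code: string,        // e.g. "NOTE_NOT_FOUND"
    public readonly userMessage: string, // safe to render to the user
    public readonly detail?: unknown,    // internal detail, logged only
  ) {
    super(userMessage);
  }
}

function toUserResponse(err: unknown, requestId: string) {
  if (err instanceof AppError) {
    // In real code: logger.error({ requestId, code: err.code, detail: err.detail })
    return { requestId, code: err.code, message: err.userMessage };
  }
  // Unknown errors are never leaked to the client.
  return { requestId, code: "INTERNAL", message: "Something went wrong." };
}
```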
Prompt:
Generate `docs/convention.md` capturing our main development rules, aligned with `docs/specification.md` and `docs/structure.md`. Keep it concise and enforceable. Include code style, naming, structure, validation, error handling, logging, mandatory testing and CI gates, commits/PRs, security, performance, and a clear “Do/Don’t” list.
These are the “Game Rules”. They define exactly how the AI develops and iterates: when to ask for permission and when to act on its own; how to know that a feature is ready and an issue can be closed; how to update progress; and so on. We make the process explicit so the AI can follow it.
Example Workflow:
- Definition of Ready (DoR): problem, scope, acceptance criteria, test plan, rollout plan.
- Definition of Done (DoD): code + tests + docs + PR merged + feature flag defaulted as specified + telemetry added, CI green.
- Iteration loop: 1) propose solution; 2) agree; 3) implement; 4) tests; 5) docs; 6) PR review; 7) release; 8) measure.
- Branching: trunk‑based or short‑lived feature branches.
- CI gates (blocking): typecheck, lint, unit, component/API, e2e smoke, bundle size, accessibility checks.
- Progress reporting: markdown table in `docs/progress.md` with statuses.
- Approvals: product + eng sign‑off before coding on big items.
- Commits: link to issue.
Prompt:
Create `docs/workflow.md` that instructs an AI coding assistant to work strictly by our list of issues defined in `docs/issues.md`. Before each iteration, propose the solution for agreement; after approval, implement with tests and docs; open a PR; then request the next step. Include CI gates and DoR/DoD checklists.
That’s a detailed technical implementation plan for our project.
Target: clean modular architecture that’s easy to maintain and extend.
Prompt (if you don’t know which stack to pick – AI will decide for you):
Write a staged implementation plan and full project composition using a clean modular architecture for the idea described in `docs/idea.md`, taking into account the project specification in `docs/specification.md` and the project structure in `docs/structure.md`. Select the tech stack that best fits the idea, specification, and structure described earlier. For each step provide a copy‑pastable instruction (aka prompt) that tells the AI exactly what files to create, types/interfaces/routes, migrations, tests, and acceptance criteria. Each copy‑pastable instruction must reference the initial idea in `docs/idea.md` and the selected project structure in `docs/structure.md`. Save the plan at `docs/architecture.md`.
Hint: Project architecture is a very important part. For non-techies it can be tricky to create a good, solid architecture or to spot the places that need improvement. In that case, copy `architecture.md` into a few other LLMs and ask what they think about it, what’s missing, and what can be improved. This way you can get a solid architecture for your project.
Prompt (if you know the stack you’d like to use – just replace the one in the example with your own stack description):
Write a staged implementation plan and full project composition using a clean modular architecture for the idea described in `docs/idea.md`, taking into account the project specification in `docs/specification.md` and the project structure in `docs/structure.md`. Also don’t forget about the stack: Next.js fullstack; DB/Auth/Storage: Supabase; Mobile: native shells with WebView bridges (expose native functions like in‑app purchase via evaluateJavaScript). For each step provide a copy‑pastable instruction that tells the AI exactly what files to create, types/interfaces/routes, migrations, tests, and acceptance criteria. Each copy‑pastable instruction must reference the initial idea in `docs/idea.md` and the selected project structure in `docs/structure.md`. Save the plan at `docs/architecture.md`.
Create granular issues with close‑instructions and import them in bulk to GitHub.
GitHub Issues JSON example:

```json
[
  {
    "title": "W-01 – Scaffold Next.js app",
    "body": "Create Next.js app with TS, ESLint, Prettier, RSC enabled. Configure routes /health, /about.",
    "labels": ["setup", "backend", "web"],
    "assignees": [],
    "milestone": null
  }
]
```

Prompt:
Based on `docs/architecture.md` and `docs/specification.md` created for `docs/idea.md`, generate a prioritised list of issues for the chosen repo layout according to `docs/structure.md`. For each issue include: title (prefixed with a tag and a running number – “W-01 – …” for the first web-app issue, “I-01 – …” for iOS, etc.), description, acceptance criteria, estimate, labels, dependencies, and a “copy‑pastable prompt” that can be pasted into the AI to complete the work. The prompt must include the required project context from `docs/idea.md` and all the required technical details according to `docs/specification.md` and `docs/architecture.md`. After generation: (1) save a valid JSON array of issues ready for bulk import via the GitHub CLI to `docs/issues.md`; (2) save all issues to `docs/issues.md`; (3) save instructions on how to bulk-upload the issues to GitHub using the CLI to `docs/issues_upload_instructions.md`. Run the GitHub CLI instructions to actually upload all the issues to GitHub.
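As a sketch of the bulk-import step, the snippet below turns an issues array (in the JSON shape shown earlier) into `gh issue create` commands. It only builds the command strings; actually running them, and properly escaping quotes inside titles and bodies, is left to the reader:

```typescript
// Sketch: build `gh issue create` commands from the issues JSON.
// Assumes the JSON shape shown in the example above; quoting/escaping of
// titles and bodies is deliberately naive here.

type Issue = { title: string; body: string; labels: string[] };

function buildIssueCommands(issues: Issue[]): string[] {
  return issues.map((issue) => {
    // `--label` can be repeated, once per label.
    const labels = issue.labels.map((l) => ` --label "${l}"`).join("");
    return `gh issue create --title "${issue.title}" --body "${issue.body}"${labels}`;
  });
}
```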
Most AI editors have “rules” that govern how they work. These rules let us automatically apply the earlier created docs/convention.md and docs/workflow.md to every LLM request.
Depending on the editor you use, the steps can vary a bit.
Here is what to do for Cursor:
- Create a directory `.cursor/rules`
- Copy `docs/convention.md` to `.cursor/rules/convention.mdc`, and `docs/workflow.md` to `.cursor/rules/workflow.mdc`
- For each rule, set “Always Apply”.
—
For Copilot in VSCode:
- Create a `.github/copilot-instructions.md` file
- Copy the content of `docs/convention.md` and `docs/workflow.md` into `.github/copilot-instructions.md`
Goal: a new engineer (or AI) can be productive in 30 minutes.
Include:
- System overview diagram.
- Local setup (commands, envs, seeds).
- Common workflows (run tests, run e2e, add route, add table, add migration).
- Troubleshooting FAQ.
Prompt:
Generate `docs/intro.md` that explains the architecture described in `docs/architecture.md`, how to run locally, how to add a new module, and how to troubleshoot common issues. Add links to key files, code samples, and diagrams.
Prompt (start work on the first issue):
Start implementing `docs/idea.md` according to the `docs/architecture.md` plan by working on an issue from `docs/issues.md`. The issue to work on is defined by `docs/progress.md`. If `docs/progress.md` doesn’t exist yet, start from the first issue. For each issue create a new branch named `feat/issue-<issue_running_number>-<title>`. Once an issue is implemented, create a commit, open a PR into the `develop` branch, and mark the issue as completed in `docs/progress.md`. `docs/progress.md` holds information about completed issues in JSON format. If the branch is merged and the PR is closed, the issue can be marked as `completed`; otherwise it should be marked as `in_progress`.
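The prompt leaves the exact shape of `docs/progress.md` to the AI. One plausible sketch of the JSON format and an update helper, with assumed field names, might be:

```typescript
// Sketch of a possible docs/progress.md JSON format. The field names
// (issue, status, branch) are assumptions; adjust to what your AI emits.

type IssueStatus = "in_progress" | "completed";
type ProgressEntry = { issue: string; status: IssueStatus; branch: string };

// Add a new entry or update the status of an existing one.
function upsertProgress(
  progress: ProgressEntry[],
  issue: string,
  status: IssueStatus,
  branch: string,
): ProgressEntry[] {
  const others = progress.filter((entry) => entry.issue !== issue);
  return [...others, { issue, status, branch }];
}
```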
- Validate inputs at the edge (Zod). Never trust client data.
- Store secrets in env/secret manager; rotate regularly.
- PII: minimum necessary, encryption at rest (DB), TLS in transit.
- Authorization on the server; feature‑level checks.
- Rate limits on auth and write endpoints; audit logs for critical actions.
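To illustrate the rate-limit bullet, here is a minimal fixed-window limiter sketch. In production you would back this with Redis or your platform’s middleware; the in-memory `Map` is only for illustration:

```typescript
// Minimal fixed-window rate limiter sketch: at most MAX_REQUESTS per key
// per WINDOW_MS. In-memory only; not suitable for multi-instance deployments.

const WINDOW_MS = 60_000;
const MAX_REQUESTS = 10;

const hits = new Map<string, { windowStart: number; count: number }>();

function allowRequest(key: string, now: number = Date.now()): boolean {
  const entry = hits.get(key);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(key, { windowStart: now, count: 1 }); // start a new window
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}
```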
- Logging: structured JSON; include request id, user id, latency, error code.
- Metrics: p50/p95 latency, error rate, signup conversion.
- Tracing: wrap critical paths (auth, checkout, content load).
- Analytics: define events in `docs/analytics.md` with owners.
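A structured log entry carrying the fields listed above (request id, user id, latency, error code) could be sketched like this; the exact field names are an assumption:

```typescript
// One JSON object per line keeps logs grep- and ingest-friendly.
// Field names (requestId, userId, latencyMs, errorCode) are illustrative.

type LogLevel = "info" | "error";

type LogFields = {
  requestId: string;
  msg: string;
  userId?: string;
  latencyMs?: number;
  errorCode?: string;
};

function logLine(
  level: LogLevel,
  fields: LogFields,
  ts: string = new Date().toISOString(),
): string {
  return JSON.stringify({ ts, level, ...fields });
}
```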
- Checklists: pre‑release (migrations applied, feature flags default), post‑release (smoke tests, dashboards green).
- Rollbacks: have a revert plan; keep migrations reversible.
- Changelog: human‑readable changes per release.
- Generating code without a spec or conventions.
- Dumping context in chat but not in the repo.
- Letting AI add dependencies at will.
- Skipping tests and docs “until later”.
- Building many features in parallel without shipping one.
Design‑first prompt
Before coding, propose the minimal design (files, functions, routes, types, tests). Wait for approval.
Close‑the‑issue prompt
Using the attached issue and our docs (docs/idea.md, docs/specification.md, docs/architecture.md, docs/convention.md, docs/workflow.md), implement the change, add tests and docs, open a PR with a summary, and list how acceptance criteria are met.
- `docs/idea.md` — project one‑pager
- `docs/structure.md` — project repo structure
- `docs/specification.md` — technical specification
- `docs/architecture.md` — modules & diagrams
- `docs/convention.md` — code/style rules
- `docs/workflow.md` — iteration rules
- `docs/issues.md` — issues backlog in JSON format
- `docs/intro.md` — onboarding document for newcomers
- `.cursor/rules` — AI guardrails
License: Code — MIT, Docs — CC BY 4.0