A file-based database and constraint engine for structured documentation. `.stem` files are the DDL: they define what valid documents look like, just as a SQL schema defines what valid rows look like.
| Database concept | Rootline equivalent |
|---|---|
| Table | Directory |
| Row / Record | Markdown file |
| Columns | Frontmatter fields |
| DDL Schema | .stem file |
| Constraint | Validation rule (required, enum, exists) |
| Domain type | domain: property (semantic type) |
Status: Engine and MCP server complete — all CLI commands and 9 MCP tools functional. 16 inference detectors (13 data + 3 governance).
Install:

```bash
# Linux / macOS
curl -fsSL https://raw.githubusercontent.com/pablontiv/rootline/master/install.sh | bash
```

```powershell
# Windows (PowerShell)
irm https://raw.githubusercontent.com/pablontiv/rootline/master/install.ps1 | iex
```

```bash
# Or install with Go
go install github.com/pablontiv/rootline/cmd/rootline@latest
```

Quick start:

```bash
# 1. Initialize — infer .stem rules from existing documents
rootline init docs/

# 2. Validate — check all documents against their rules
rootline validate --all

# 3. Describe — see what a valid document looks like
rootline describe docs/api/

# 4. Query — find documents by metadata (expr-lang syntax)
rootline query --where 'estado == "published"'

# 5. Scaffold — create a new document from the schema
rootline new docs/api/auth.md

# 6. Explain — trace why a field has a given value
rootline explain docs/api/auth.md

# 7. Graph — visualize document dependencies
rootline graph docs/ --check
```

Documentation already has structure. Rootline makes it explicit, inherited, and queryable.
- The directory tree defines hierarchy
- Rules flow from parent to child via `.stem` files
- Fields are derived via expressions; aggregates roll up from children to parents
- Documents link to each other via `[[wiki-links]]`, forming a dependency graph
- All output is stable JSON, suitable for CI, automation, and AI
Rootline does not render documentation. It models it.
A .stem file is the DDL schema for a directory. It defines what fields exist, what types they have, which are required, and how values are validated. Rootline resolves schemas using walk-up discovery + top-down merge:
- From the target path, walk up collecting `.stem` files until the `.git` root
- Merge them top-down (parent → child)
| YAML type | Behavior | Example |
|---|---|---|
| map | Key-level merge | Child adds or overrides keys |
| array | Replace | Child redefines entirely |
| scalar | Replace | Child overrides value |
| null | Remove | Child removes inherited key |
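The merge semantics in the table can be sketched in Go. This is a minimal illustration of the four rules, not Rootline's actual implementation:

```go
package main

import "fmt"

// merge applies child onto parent following the .stem merge rules:
// maps merge key-by-key, arrays and scalars replace, null removes.
func merge(parent, child map[string]any) map[string]any {
	out := make(map[string]any, len(parent))
	for k, v := range parent {
		out[k] = v
	}
	for k, v := range child {
		if v == nil {
			delete(out, k) // null in child removes the inherited key
			continue
		}
		cm, childIsMap := v.(map[string]any)
		pm, parentIsMap := out[k].(map[string]any)
		if childIsMap && parentIsMap {
			out[k] = merge(pm, cm) // key-level merge for maps
			continue
		}
		out[k] = v // arrays and scalars replace entirely
	}
	return out
}

func main() {
	parent := map[string]any{
		"status": map[string]any{"values": []string{"draft", "done"}, "required": true},
		"owner":  "docs-team",
	}
	child := map[string]any{
		"status": map[string]any{"values": []string{"draft", "review", "done"}},
		"owner":  nil, // remove the inherited field
	}
	fmt.Println(merge(parent, child))
}
```

Note that arrays replace rather than merge: a child that wants to extend an inherited enum must restate the full list.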
An example `.stem` file:

```yaml
version: 2
schema:
  title: { type: string, required: true, domain: title }
  status:
    domain: lifecycle_state  # semantic type — implies type: enum
    values: [draft, review, published]
    default: draft
  ejecutable_en: { type: string, required: true, match: "T*" }
  "## Summary": { type: section, required: true }
  "## Changelog": { type: section, default: "<!-- TODO -->" }
aggregate:
  completed: 'len(filter(descendants, .status == "published"))'
links:
  allowed: [blocks, depends]
```

Sections (`type: section`) are first-class schema fields — validated, defaulted, and queryable alongside frontmatter.
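To make the schema concrete, here is a hypothetical document that would validate against it (the file contents are invented for illustration):

```markdown
---
title: Auth service API
status: review
ejecutable_en: T017
---

## Summary

Describes the token endpoints and the refresh flow.

## Changelog

<!-- TODO -->
```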
Fields can declare a domain: — a semantic type that says what a field means, independent of its name. This is the rootline equivalent of SQL DOMAIN or JSON Schema format.
```yaml
schema:
  mi_estado:
    domain: lifecycle_state  # "this field IS the lifecycle state"
    values: [borrador, activo, cerrado]
  id:
    domain: identifier       # implies type: sequence
    prefix: "T"
    digits: 3
```

Twelve core domains ship with Rootline: `lifecycle_state`, `record_type`, `identifier`, `title`, `created_date`, `due_date`, `owner`, `parent_ref`, `priority`, `description`, `confidence`, `source`. Custom domains use a namespaced format: `acme/sprint_velocity`.
Why domains matter:
- Type inference: `domain: lifecycle_state` implies `type: enum` — no need to declare both
- Virtual aliases: `rootline query --where 'lifecycle_state == "activo"'` works regardless of the field's actual name
- Consumer tools: AI agents and MCP clients resolve fields by domain, not by name, so the same tooling works across projects with different naming conventions
- Governance: `rootline analyze` flags fields without domains as governance gaps
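The virtual-alias behavior can be sketched as a domain-to-field lookup. The mapping and function names below are illustrative, not Rootline's internal API:

```go
package main

import "fmt"

// fieldDomains maps actual frontmatter field names to their
// declared domains, as read from the effective schema.
var fieldDomains = map[string]string{
	"mi_estado": "lifecycle_state",
	"id":        "identifier",
}

// resolveField lets a query reference either the real field name
// or its domain alias, so 'lifecycle_state == "activo"' reaches
// the mi_estado field.
func resolveField(name string) string {
	for field, domain := range fieldDomains {
		if field == name || domain == name {
			return field
		}
	}
	return name // unknown names pass through unchanged
}

func main() {
	fmt.Println(resolveField("lifecycle_state")) // prints: mi_estado
	fmt.Println(resolveField("mi_estado"))       // prints: mi_estado
}
```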
Rootline ships as a single static Go binary with no dependencies.
Universal Filtering: Most commands support `--where 'expr'` (expr-lang syntax) to filter records before processing.
```bash
# Core
rootline validate [file|--all|--staged] [--where 'expr'] [--strict]  # Check documents against .stem rules
rootline query [path] --where 'expr' [--count] [--limit N]           # Search by metadata (expr-lang syntax)
rootline describe <path>                                             # Show effective schema for a directory
rootline tree [path] [--where 'expr']                                # Hierarchical view with completion counts
rootline stats [path] [--where 'expr']                               # Summary counts by estado and tipo
rootline graph [path] [--where 'expr']                               # Dependency graph (DOT, Mermaid, --check, --open)
rootline explain <file>                                              # Trace field origins, derivations, and errors

# Document lifecycle
rootline init [path] [--force] [--template owner/repo]               # Infer .stem or fetch from remote
rootline new <file> [--force] [--dry-run]                            # Scaffold document from effective schema
rootline set <file> field=value [...]                                # Mutate frontmatter and sections with validation
rootline fix [file|--all]                                            # Auto-repair: add fields, fix enums, propose changes
rootline validate --all --where 'expr'                               # Validate only records matching filter
rootline migrate [path]                                              # Detect schema changes, rename, split, --to-v2, --from-levels
rootline analyze [path] [--incremental]                              # Run 16 detectors (data + governance), produce report
rootline apply [file] [--dry-run]                                    # Apply inference results to .stem and docs

# Tooling
rootline hooks install|uninstall|status                              # Git pre-commit hook management
rootline completion bash|zsh|fish                                    # Shell completion scripts
rootline serve                                                       # MCP server (JSON-RPC 2.0 over stdio)
```

All commands support `--output json|table` and `--field` for dot-path extraction:
```bash
rootline describe docs/prd/ --field schema.id.next
# "T004"

rootline query --where 'estado == "Pending"' --field path
# docs/projects/P01/tasks/T005-deploy-grafana.md

rootline tree docs/epics/ --where 'estado != "Completed"'
rootline stats docs/epics/ --where 'tipo == "software-module"'
rootline graph docs/epics/ --where 'tipo != "feature"' --check
```

Queries use expr-lang/expr syntax. Multiple `--where` flags are combined with AND:
```bash
rootline query --where 'estado == "Pending"'
rootline query --where 'tipo in ["lxc", "vm"]' --where 'estado != "Completed"'
rootline query --where 'body contains "migration"'
rootline query --where 'tags != nil' --count
```

`.stem` files can define derived fields (computed per-record) and aggregates (rolled up from children to parent index files):
```yaml
derive:
  slug: 'slugify(titulo)'
  name_lower: 'lower(nombre)'
aggregate:
  total: 'len(descendants)'
  completed: 'len(filter(descendants, .estado == "Completed"))'
```

Derived and aggregated fields appear alongside frontmatter in query results, stats, and tree output.
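Conceptually, an aggregate like `completed` rolls up over child records. A minimal sketch, with types and names invented for illustration:

```go
package main

import "fmt"

// Child stands in for one descendant document's frontmatter.
type Child struct{ Estado string }

// filterChildren keeps descendants matching a predicate,
// mirroring filter(descendants, .estado == "Completed").
func filterChildren(cs []Child, keep func(Child) bool) []Child {
	var out []Child
	for _, c := range cs {
		if keep(c) {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	descendants := []Child{{"Completed"}, {"Pending"}, {"Completed"}}
	total := len(descendants) // len(descendants)
	completed := len(filterChildren(descendants, func(c Child) bool {
		return c.Estado == "Completed"
	})) // len(filter(descendants, .estado == "Completed"))
	fmt.Printf("total=%d completed=%d\n", total, completed)
	// prints: total=3 completed=2
}
```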
Documents reference each other via [[wiki-links]] in their body. Rootline extracts these links and builds a directed graph:
```bash
rootline graph docs/ --format mermaid  # Mermaid diagram
rootline graph docs/ --format dot      # Graphviz DOT
rootline graph docs/ --check           # Validate: detect cycles and broken links
rootline graph docs/ --open            # Open interactive diagram in browser
```

Link schemas in `.stem` files control which link types are allowed and validate targets against regex patterns.
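Link extraction can be approximated with a regular expression. The pattern below is a simplification for illustration and not necessarily the grammar Rootline uses (it ignores aliases and typed links):

```go
package main

import (
	"fmt"
	"regexp"
)

// wikiLink matches [[target]] references in a document body.
var wikiLink = regexp.MustCompile(`\[\[([^\]]+)\]\]`)

// extractLinks returns the targets referenced by a body, in order.
// Each target becomes an edge in the directed dependency graph.
func extractLinks(body string) []string {
	var targets []string
	for _, m := range wikiLink.FindAllStringSubmatch(body, -1) {
		targets = append(targets, m[1])
	}
	return targets
}

func main() {
	body := "Depends on [[T004-api-gateway]] and blocks [[T009-rollout]]."
	fmt.Println(extractLinks(body))
	// prints: [T004-api-gateway T009-rollout]
}
```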
`rootline fix` goes beyond adding missing fields; it proposes intelligent repairs:
```bash
rootline fix doc.md --dry-run  # Preview proposed changes
rootline fix --all             # Fix all files in scope
```

Proposals include: correcting misspelled enum values (Levenshtein matching), extending `.stem` enums for new valid values, migrating values with wiki-link insertion, and inferring fields from child documents.
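The enum-correction idea can be sketched with a plain Levenshtein distance; the distance threshold below is an arbitrary choice for illustration, not Rootline's actual heuristic:

```go
package main

import "fmt"

func min3(a, b, c int) int {
	if b < a {
		a = b
	}
	if c < a {
		a = c
	}
	return a
}

// levenshtein returns the edit distance between two strings
// using the classic two-row dynamic programming formulation.
func levenshtein(a, b string) int {
	prev := make([]int, len(b)+1)
	curr := make([]int, len(b)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(a); i++ {
		curr[0] = i
		for j := 1; j <= len(b); j++ {
			cost := 1
			if a[i-1] == b[j-1] {
				cost = 0
			}
			curr[j] = min3(curr[j-1]+1, prev[j]+1, prev[j-1]+cost)
		}
		prev, curr = curr, prev
	}
	return prev[len(b)]
}

// closestEnum proposes the nearest allowed value when a field's
// value is not in the enum, if one is within a small edit distance.
func closestEnum(value string, allowed []string) (string, bool) {
	best, bestDist := "", 3 // arbitrary threshold for the sketch
	for _, v := range allowed {
		if d := levenshtein(value, v); d < bestDist {
			best, bestDist = v, d
		}
	}
	return best, best != ""
}

func main() {
	fix, ok := closestEnum("publised", []string{"draft", "review", "published"})
	fmt.Println(fix, ok)
	// prints: published true
}
```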
`rootline explain` traces why a document has its current state, covering field origins, derivation expressions, aggregation sources, and validation errors:
```bash
rootline explain docs/projects/P01/F01/README.md
```

Shows each field's origin (frontmatter, schema default, derived, or aggregated) with the source `.stem` file and expression.
Rootline is designed as a structured knowledge source for AI assistants. All commands output stable JSON with "version": 1 contracts, making them suitable for tool use and automation.
`rootline serve` starts a Model Context Protocol (MCP) server over stdio, exposing 9 tools via JSON-RPC 2.0. AI assistants query Rootline using the same contracts as the CLI.
Configure in Claude Desktop or any MCP client:
```json
{
  "mcpServers": {
    "rootline": {
      "command": "rootline",
      "args": ["serve"]
    }
  }
}
```

Available tools: `query`, `validate`, `describe`, `tree`, `stats`, `explain`, `fix`, `graph`, `set`. See the MCP Server docs for the full tool catalog.
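Once connected, a client invokes a tool with a standard MCP `tools/call` request over stdio; the tool arguments shown here are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query",
    "arguments": { "where": "estado == \"Pending\"" }
  }
}
```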
| Topic | Description |
|---|---|
| Init | Schema inference from existing documents |
| Validate | Validation rules, batch mode, staged checks |
| Describe | Describe output, field extraction, source tracking |
| Query Engine | Query contract, operators, result shapes |
| New | Document scaffolding from effective schema |
| Set | Mutate frontmatter and sections with schema validation |
| Fix & Proposals | Auto-repair, enum correction, field inference |
| Explain | Field origin tracing, derivation chain, error diagnosis |
| Tree | Hierarchical view with completion counts |
| Stats | Summary counts by type and state |
| Dependency Graph | Wiki-links, link schema, cycle detection, DOT/Mermaid |
| Derivation Engine | Derive and aggregate expressions, builtins, linked fields |
| Schema Migration | Breaking change detection, field rename, v2 upgrade |
| Levels & Match | Hierarchical field scoping with match patterns |
| MCP Server | Tool catalog, setup, JSON-RPC protocol |
| Extensibility | Extractor architecture, future formats |
| Visual Identity | Logo, colors, usage guidelines |
| Distribution Pipeline | Marketplace distribution pipeline |
```bash
go build ./cmd/rootline/   # Build
go test ./... -race        # Tests with race detector
go vet ./...               # Static analysis
golangci-lint run ./...    # Full lint
```

Pre-commit hooks run golangci-lint and gofmt automatically. Commits follow Conventional Commits (`type(scope): description`), enforced by a commit-msg hook. To manually sync skills and rebuild after pulling: `bash .githooks/pre-push`.
PolyForm Noncommercial 1.0.0 — free for non-commercial use.