diff --git a/README.md b/README.md index 0f54709..34a4e1c 100644 --- a/README.md +++ b/README.md @@ -23,6 +23,9 @@ Development skills for AI coding agents. Plug into your favorite AI coding tool | `minimax-xlsx` | Open, create, read, analyze, edit, or validate Excel/spreadsheet files (.xlsx, .xlsm, .csv, .tsv). Covers creating new xlsx from scratch via XML templates, reading and analyzing with pandas, editing existing files with zero format loss, formula recalculation, validation, and professional financial formatting. | Official | | `minimax-docx` | Professional DOCX document creation, editing, and formatting using OpenXML SDK (.NET). Three pipelines: create new documents from scratch, fill/edit content in existing documents, or apply template formatting with XSD validation gate-check. | Official | | `vision-analysis` | Analyze, describe, and extract information from images using vision AI models. Supports describe, OCR, UI mockup review, chart data extraction, and object detection. Powered by MiniMax VL API with OpenAI GPT-4V fallback. | Community | +| `markdown-mcp` | MCP tool for efficient markdown file reading and editing. Replace Python scripts with direct file operations — get file info, grep, read sections, replace text, insert lines via MCP tools. Python stdlib only, no dependencies. | Community | +| `context-maintainer` | Maintain persistent project context across AI agent sessions. Creates and updates `.context/` files (goals, architecture, decisions, active files, stale files). Triggers on git commit, context threshold, or session end. Gitignored by default. | Community | +| `context-search` | Search and retrieve persistent context across sessions. Works with `.context/` files — get active context, search decisions, find file history. Optional QMD integration for semantic search. Export to Obsidian vault supported. 
| Community | | `minimax-multimodal-toolkit` | Generate voice, music, video, and image content via MiniMax APIs — the unified entry for MiniMax multimodal use cases. Covers TTS (text-to-speech, voice cloning, voice design, multi-segment), music (songs, instrumentals), video (text-to-video, image-to-video, start-end frame, subject reference, templates, long-form multi-scene), image (text-to-image, image-to-image with character reference), and media processing (convert, concat, trim, extract) via FFmpeg. | Official | ## Installation diff --git a/README_zh.md b/README_zh.md index 01374ce..d46a355 100644 --- a/README_zh.md +++ b/README_zh.md @@ -4,15 +4,15 @@ > **Beta** — 本项目正在积极开发中。技能内容、API 和配置格式可能会在不另行通知的情况下变更。欢迎反馈和贡献。 面向 AI 编程工具的开发技能库。接入你常用的 AI 编程工具,获得结构化的、生产级质量的前端、全栈、Android、iOS 和着色器开发指导。 ## 技能列表 | 技能                                 | 简介 | 来源 | |---------------------------------------|------|------| | `frontend-dev` | 全栈前端开发,融合高级 UI 设计、电影级动画(Framer Motion、GSAP)、通过 MiniMax API 生成媒体资源(图片、视频、音频、音乐、TTS)、基于 AIDA 框架的说服力文案、生成艺术(p5.js、Three.js、Canvas)。技术栈:React / Next.js、Tailwind CSS。 | Official | | `fullstack-dev` | 全栈后端架构与前后端集成。REST API 设计、认证流程(JWT、Session、OAuth)、实时功能(SSE、WebSocket)、数据库集成(SQL / NoSQL)、生产环境加固与发布清单。引导式工作流:需求收集 → 架构决策 → 实现。 | Official | | `android-native-dev` | 基于 Material Design 3 的 Android 原生应用开发。Kotlin / Jetpack Compose、自适应布局、Gradle 配置、无障碍(WCAG)、构建问题排查、性能优化与动效系统。 | Official | | `ios-application-dev` | iOS 应用开发指南,涵盖 UIKit、SnapKit 和 SwiftUI。触控目标、安全区域、导航模式、Dynamic Type、深色模式、无障碍、集合视图,符合 Apple HIG 规范。 | Official | | `flutter-dev` | Flutter 
跨平台开发指南,涵盖 Widget 模式、Riverpod/Bloc 状态管理、GoRouter 导航、性能优化与测试策略。 | Official | | `react-native-dev` | React Native 与 Expo 开发指南,涵盖组件、样式、动画、导航、状态管理、表单、网络请求、性能优化、测试、原生能力及工程化(项目结构、部署、SDK 升级、CI/CD)。 | Official | @@ -20,9 +20,12 @@ | `gif-sticker-maker` | 将照片(人物、宠物、物品、Logo)转换为 4 张带字幕的动画 GIF 贴纸。Funko Pop / Pop Mart 盲盒风格,基于 MiniMax 图片与视频生成 API。 | Official | | `minimax-pdf` | 基于 token 化设计系统生成、填写和重排 PDF 文档。支持三种模式:CREATE(从零生成,15 种封面风格)、FILL(填写现有表单字段)、REFORMAT(将已有文档重排为新设计)。排版与配色由文档类型自动推导,输出即可打印。 | Official | | `pptx-generator` | 生成、编辑和读取 PowerPoint 演示文稿。支持用 PptxGenJS 从零创建(封面、目录、内容、分节页、总结页),通过 XML 工作流编辑现有 PPTX,或用 markitdown 提取文本。 | Official | | `minimax-xlsx` | 打开、创建、读取、分析、编辑或验证 Excel/电子表格文件(.xlsx、.xlsm、.csv、.tsv)。支持通过 XML 模板从零创建 xlsx、使用 pandas 读取分析、零格式损失编辑现有文件、公式重算与验证、专业财务格式化。 | Official | | `minimax-docx` | 基于 OpenXML SDK(.NET)的专业 DOCX 文档创建、编辑与排版。三条流水线:从零创建新文档、填写/编辑现有文档内容、应用模板格式并通过 XSD 验证门控检查。 | Official | | `vision-analysis` | 使用视觉 AI 模型分析、描述和提取图像信息。支持描述、OCR 文字识别、UI 界面审查、图表数据提取和物体检测。基于 MiniMax VL API,OpenAI GPT-4V 作为备选。 | Community | +| `markdown-mcp` | 高效读取和编辑 Markdown 文件的 MCP 工具。无需编写 Python 脚本——通过 MCP 工具直接获取文件信息、搜索、读取章节、替换文本、插入行。纯 Python 标准库,无依赖。 | Community | +| `context-maintainer` | 在 AI Agent 会话之间维护持久化项目上下文。创建并更新 `.context/` 文件(目标、架构、决策、活动文件、陈旧文件)。在 git 提交、上下文阈值或会话结束时触发。默认加入 .gitignore。 | Community | +| `context-search` | 在会话之间搜索和检索持久化上下文。与 `.context/` 文件协同工作——获取活跃上下文、搜索决策、查找文件历史。可选 QMD 集成实现语义搜索。支持导出到 Obsidian vault。 | Community | | `minimax-multimodal-toolkit` | 通过 MiniMax API 生成语音、音乐、视频和图片内容 — MiniMax 多模态使用场景的统一入口。涵盖 TTS(文字转语音、声音克隆、声音设计、多段合成)、音乐(带词歌曲、纯音乐)、视频(文生视频、图生视频、首尾帧、主体参考、模板、长视频多场景)、图片(文生图、图生图含角色参考),以及基于 FFmpeg 的媒体处理(格式转换、拼接、裁剪、提取)。 | Official | ## 安装 diff --git 
a/skills/context-maintainer/SKILL.md b/skills/context-maintainer/SKILL.md new file mode 100644 index 0000000..a12ba30 --- /dev/null +++ b/skills/context-maintainer/SKILL.md @@ -0,0 +1,209 @@ +--- +name: context-maintainer +description: > + Maintain persistent project context across AI agent sessions using .context/ files. + Load this skill when: the user asks you to "understand this project" or "get context"; + you start a new session and need to understand the project; you complete significant work + and need to record it; the user mentions goals, decisions, or architecture; or you + encounter unfamiliar code and need context. + Triggers: understand this project, get context, maintain context, update goals, + record decision, log this change, what are the current goals, what's the architecture, + what have we done so far. + Also use when: context window is ~70% full (trigger checkpoint); you do a git commit; + you modify more than 30 lines across files; user asks about project structure or goals. +license: MIT +metadata: + version: "1.0" + category: productivity + tools: context-maintainer-mcp +--- + +# Context Maintainer + +Maintain persistent context files in `.context/` that survive across sessions. No more re-scanning the repo every new chat. 
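To make the on-disk shape concrete: the skeleton that ends up under `.context/` can be sketched in a few lines of Python. This is an illustrative stub only — the real files are generated, with much richer content, by the `init_context` tool in the bundled server — and `init_stub_context` is a hypothetical helper name, not part of the skill:

```python
import os

# The eight files the skill keeps under .context/ (matches the Core Files table).
CTX_FILES = [
    "project.md", "architecture.md", "decisions.md", "active-files.md",
    "stale-files.md", "goals.md", "relationships.md", "recent-commits.md",
]

def init_stub_context(root):
    """Create an empty .context/ skeleton under `root`; return created paths."""
    ctx_dir = os.path.join(root, ".context")
    os.makedirs(ctx_dir, exist_ok=True)
    paths = []
    for name in CTX_FILES:
        path = os.path.join(ctx_dir, name)
        if not os.path.exists(path):  # never clobber context that already exists
            with open(path, "w") as fh:
                fh.write("# " + name[:-3] + "\n")  # placeholder heading only
        paths.append(path)
    return paths
```

The never-clobber check matters: context files accumulate value across sessions, so re-initialization must not wipe them.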
+ +## Core Files + +`.context/` lives in the project root and contains: + +| File | What it holds | When to read/write | +|---|---|---| +| `project.md` | Project overview, key files | Read at session start | +| `architecture.md` | System design, data flow | Read on demand | +| `decisions.md` | Choices made and why | Write when decision made | +| `active-files.md` | Files being worked on + freshness | Update after significant changes | +| `stale-files.md` | Deprecated approaches | Write when flagging stale | +| `goals.md` | Long-term + short-term goals | Read at start, update on checkpoint | +| `relationships.md` | Key dependencies | Read on demand | +| `recent-commits.md` | Last 10 git commits | Update after commits | + +## Prerequisites + +The `context-maintainer-mcp` MCP server must be installed and configured. + +**OpenCode** — add to `~/.config/opencode/opencode.json` or `package.json`: +```json +{ + "mcp": { + "context-maintainer": { + "type": "local", + "command": ["uvx", "context-maintainer-mcp"], + "enabled": true + } + } +} +``` + +Restart the app. Verify with `/mcp` — `context-maintainer` should appear. + +## MCP Tools + +| Tool | When to call | +|---|---| +| `init_context` | User asks "understand this project" | +| `update_context` | After git commit OR significant change (>30 lines) | +| `checkpoint_goals` | Context window ~70% full OR session end | +| `flag_stale` | When a file/approach becomes outdated | +| `promote_stale` | When a stale file becomes relevant again | +| `gitignore_add` | After init_context (default: context is gitignored) | +| `gitignore_remove` | User wants to commit .context/ to repo | +| `gitignore_status` | Check if .context/ is tracked | + +--- + +## When to Update Context + +**NEVER update after every small edit.** Context updates are for significant milestones. 
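The "significant change" bar need not be guesswork; it can be measured. A minimal sketch, assuming you measure churn with `git diff --numstat` against HEAD (`count_numstat` and `should_update_context` are illustrative helpers, not tools the MCP server provides):

```python
import subprocess

SIGNIFICANT_LINES = 30  # the ">30 lines" threshold from this skill's triggers

def count_numstat(numstat):
    """Sum added + deleted lines from `git diff --numstat` output."""
    total = 0
    for line in numstat.splitlines():
        if not line.strip():
            continue
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" for both counts
            total += int(added) + int(deleted)
    return total

def should_update_context(repo):
    """True when uncommitted changes cross the significance threshold."""
    out = subprocess.run(
        ["git", "diff", "--numstat", "HEAD"],
        cwd=repo, capture_output=True, text=True,
    ).stdout
    return count_numstat(out) > SIGNIFICANT_LINES
```

Keeping the parsing separate from the `git` invocation makes the threshold logic easy to check without a repository.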
+ +**Call `update_context` when:** +- After a `git add + commit` — a commit signals meaningful work is done +- You modify > 30 lines across one or more files in one session +- You add a new file that changes project structure +- You remove a significant file or feature + +**Call `checkpoint_goals` when:** +- Context window is ~70% full (before it runs out) +- User says "save my progress" or ends the session +- You want to summarize what you've done before continuing + +**Call `flag_stale` when:** +- A file is deprecated and replaced by a new approach +- A previous decision is invalidated by a new one +- An approach described in active-files.md is no longer relevant + +--- + +## Workflows + +### New Project — First Time + +``` +1. Call init_context(path="/path/to/project") + → Creates .context/ with all files + recent commits +2. Call gitignore_add(path="/path/to/project") + → Adds .context/ to .gitignore (context is personal by default) +3. Read goals.md — update it if user has stated goals +4. Present a brief summary of what was created +``` + +### New Session — Resume Work + +``` +1. Call get_active_context(path="/path/to/project") [via context-search] + OR read goals.md + project.md directly +2. Ask: "What are the current goals?" (read goals.md) +3. Continue work +4. Call update_context when significant changes happen +``` + +### After Significant Change (e.g., after git commit) + +``` +1. Call update_context(path, file="path/to/file.md", summary="What changed") + → Updates active-files.md with freshness + git history +2. If a design decision was made, also record it: + → Append an entry to decisions.md (written by hand; there is no MCP tool for decisions) +3. Call checkpoint_goals if context is getting full +``` + +### Before Context Runs Out (~70% window) + +``` +1. Call checkpoint_goals(path, short_term=["current work"], long_term=["original goals"]) + → Updates goals.md with session history +2. 
This frees context space — the files persist and will be read next session +``` + +### Recording a Decision + +``` +1. Read decisions.md +2. Append entry: + ## [YYYY-MM-DD] Decision Title + **Status:** Accepted + **Context:** Problem or question + **Decision:** What was chosen + **Consequences:** What changed +``` + +--- + +## Goals.md Format + +```markdown +--- +last_updated: 2026-04-04T18:30:00Z +--- + +# Long-term Goals +- Migrate MPI halo exchange to non-blocking +- Add benchmarking suite + +# Short-term Goals +- [ACTIVE] Fix MPI halo exchange bug +- [DONE] Add Change 1a timing to opt_log.md + +# Session History +- [2026-04-04] Project context initialized +- [2026-04-05] Added Change 1a: MPI_Init timing +``` + +--- + +## Gitignore — Default: Excluded + +`.context/` is added to `.gitignore` by default (not committed). This keeps personal context private. + +**To share context with the team:** +``` +Call gitignore_remove(path) # removes .context/ from .gitignore +``` +Then commit `.context/` to the repo. + +**To keep context private:** +``` +Call gitignore_add(path) # re-adds .context/ to .gitignore +``` + +--- + +## Context Window Management + +The agent should track its own context usage. When it notices context is ~70% full: + +1. Call `checkpoint_goals` with current short-term and long-term goals +2. Summarize in-session progress into goals.md Session History +3. Continue working — next session will read goals.md and resume + +**Key rule:** Never let context run out mid-task. Checkpoint proactively. 
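Because the entry template in "Recording a Decision" is plain markdown, recording one is a straightforward append. A sketch of such a helper (`append_decision` is hypothetical and deliberately not an MCP tool — the skill leaves decisions.md to be written by hand):

```python
from datetime import date

# Mirrors the entry template from the "Recording a Decision" workflow.
DECISION_ENTRY = """
## [{day}] {title}
**Status:** Accepted
**Context:** {context}
**Decision:** {decision}
**Consequences:** {consequences}
"""

def append_decision(decisions_path, title, context, decision, consequences):
    """Append one decision entry to decisions.md and return the entry text."""
    entry = DECISION_ENTRY.format(
        day=date.today().isoformat(), title=title, context=context,
        decision=decision, consequences=consequences,
    )
    with open(decisions_path, "a") as fh:
        fh.write(entry)
    return entry
```

Appending (rather than rewriting the file) preserves the chronological log that makes decisions.md useful across sessions.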
+ +--- + +## Anti-Patterns + +- ❌ Calling `update_context` after every file edit +- ❌ Writing full file contents into chat instead of using context files +- ❌ Ignoring goals — always know what the user is trying to achieve +- ❌ Not checkpointing before context runs out +- ❌ Committing `.context/` to git by default (personal context) +- ❌ Letting stale files accumulate without flagging them diff --git a/skills/context-maintainer/scripts/pyproject.toml b/skills/context-maintainer/scripts/pyproject.toml new file mode 100644 index 0000000..feb54aa --- /dev/null +++ b/skills/context-maintainer/scripts/pyproject.toml @@ -0,0 +1,12 @@ +[project] +name = "context-maintainer-mcp" +version = "1.0.0" +description = "MCP server for maintaining persistent .context/ files across AI agent sessions" +requires-python = ">=3.9" +dependencies = [] + +[project.scripts] +context-maintainer-mcp = "server:main" + +[tool.upload] +distributions = ["sdist", "wheel"] diff --git a/skills/context-maintainer/scripts/requirements.txt b/skills/context-maintainer/scripts/requirements.txt new file mode 100644 index 0000000..6904a25 --- /dev/null +++ b/skills/context-maintainer/scripts/requirements.txt @@ -0,0 +1 @@ +# No external dependencies — Python stdlib only diff --git a/skills/context-maintainer/scripts/server.py b/skills/context-maintainer/scripts/server.py new file mode 100644 index 0000000..323cdff --- /dev/null +++ b/skills/context-maintainer/scripts/server.py @@ -0,0 +1,714 @@ +#!/usr/bin/env python3 +# SPDX-License-Identifier: MIT +""" +context-maintainer MCP server — manages .context/ files for AI coding agents. + +Usage: + uvx context-maintainer-mcp + +Provides tools for creating, updating, and managing project context files +that persist across AI agent sessions. + +Environment: + No external dependencies — Python stdlib only. 
+""" + +import os +import sys +import json +import subprocess +import re +from pathlib import Path +from datetime import datetime, timezone + +CONTEXT_DIR = ".context" +CTX_FILES = [ + "project.md", + "architecture.md", + "decisions.md", + "active-files.md", + "stale-files.md", + "goals.md", + "relationships.md", + "recent-commits.md", +] + + +def iso_now(): + return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ") + + +def read_message(): + while True: + line = sys.stdin.readline() + if not line: + return None + line = line.strip() + if line: + try: + return json.loads(line) + except json.JSONDecodeError: + continue + + +def send_response(resp): + sys.stdout.write(json.dumps(resp) + "\n") + sys.stdout.flush() + + +def send_error(req_id, code, message): + send_response( + {"jsonrpc": "2.0", "id": req_id, "error": {"code": code, "message": message}} + ) + + +def run_git(cmd, cwd=None): + try: + result = subprocess.run( + cmd, cwd=cwd, capture_output=True, text=True, timeout=10 + ) + return result.stdout.strip(), result.returncode + except Exception as e: + return str(e), 1 + + +def get_git_history(path, limit=10): + output, code = run_git(["git", "log", f"--oneline", f"-{limit}"], cwd=path) + if code != 0: + return [] + return [l.strip() for l in output.splitlines() if l.strip()] + + +def get_git_last_commit(path, rel_path): + output, code = run_git( + ["git", "log", "-1", "--format=%h %s %an", "--", rel_path], cwd=path + ) + if code == 0 and output: + return output.strip() + return "unknown" + + +def get_file_modified(path, rel_path): + full = os.path.join(path, rel_path) + if not os.path.exists(full): + return None + mtime = os.path.getmtime(full) + return datetime.fromtimestamp(mtime, tz=timezone.utc).strftime("%Y-%m-%d %H:%M") + + +def freshness_score(path, rel_path): + modified = get_file_modified(path, rel_path) + if not modified: + return "UNKNOWN" + try: + mtime = datetime.strptime(modified, "%Y-%m-%d %H:%M") + age = (datetime.now(timezone.utc) - 
mtime.replace(tzinfo=timezone.utc)).days + if age <= 1: + return "HIGH" + elif age <= 7: + return "MEDIUM" + else: + return "LOW" + except Exception: + return "UNKNOWN" + + +def scan_repo(path): + """Scan a repository and generate initial context files.""" + path = str(Path(path).expanduser().resolve()) + ctx_dir = os.path.join(path, CONTEXT_DIR) + + all_files = [] + code_files = [] + md_files = [] + + for root, dirs, files in os.walk(path): + dirs[:] = [ + d + for d in dirs + if not d.startswith(".") + and d + not in ( + "node_modules", + "__pycache__", + "venv", + ".venv", + "build", + "dist", + "target", + ) + ] + for f in files: + if f.startswith("."): + continue + fp = os.path.join(root, f) + rel = os.path.relpath(fp, path) + all_files.append(rel) + if f.endswith( + ( + ".py", + ".c", + ".cpp", + ".h", + ".hpp", + ".js", + ".ts", + ".jsx", + ".tsx", + ".go", + ".rs", + ".java", + ) + ): + code_files.append(rel) + elif f.endswith(".md"): + md_files.append(rel) + + os.makedirs(ctx_dir, exist_ok=True) + + recent_commits = get_git_history(path, limit=10) + recent_commits_md = ( + "\n".join(f"- {c}" for c in recent_commits) + if recent_commits + else "- No commits found" + ) + + project_md = f"""# Project Overview + +Generated: {iso_now()} + +## Project Root +`{path}` + +## Key Files +{len(all_files)} total files tracked. + +## Code Files ({len(code_files)}) +{chr(10).join(f"- {f}" for f in code_files[:30])} +{"... and more" if len(code_files) > 30 else ""} + +## Markdown Docs +{chr(10).join(f"- {f}" for f in md_files)} +""" + + architecture_md = """# Architecture + +## System Design + + +## Directory Structure + + +## Entry Points + +""" + + decisions_md = """# Decisions Log + +## Why This File? +This file records significant technical decisions made during the project — the options considered, the choice made, and the reasoning behind it. 
+ + +""" + + goals_md = f"""--- +last_updated: {iso_now()} +--- + +# Long-term Goals + + +# Short-term Goals + + +# Session History +- [{datetime.now().strftime("%Y-%m-%d")}] Project context initialized by context-maintainer +""" + + relationships_md = """# File Relationships + +## Key Dependencies + + +## Entry Point + +""" + + stale_files_md = f"""# Stale Files + +## What is Stale? +Files or approaches marked here are deprecated or outdated. They are kept for reference but should not be used as examples of current practice. + +## Deprecated Approaches + +""" + + active_files_md = f"""# Active Files + +## What is Active? +Files being currently worked on or frequently modified. These are the primary targets for the agent's attention. + +## Freshness Scores +- HIGH: modified < 24h +- MEDIUM: modified < 7 days +- LOW: modified > 7 days + +## Active Files + + +## Recently Modified +""" + + files_written = {} + contents = { + "project.md": project_md, + "architecture.md": architecture_md, + "decisions.md": decisions_md, + "goals.md": goals_md, + "relationships.md": relationships_md, + "stale-files.md": stale_files_md, + "active-files.md": active_files_md, + "recent-commits.md": f"# Recent Commits\n\nGenerated: {iso_now()}\n\n{recent_commits_md}", + } + + for fname, content in contents.items(): + fpath = os.path.join(ctx_dir, fname) + with open(fpath, "w") as fh: + fh.write(content) + files_written[fname] = len(content.splitlines()) + + gitignore_path = os.path.join(path, ".gitignore") + gitignore_content = "" + if os.path.exists(gitignore_path): + with open(gitignore_path) as f: + gitignore_content = f.read() + + already_ignored = ( + ".context/" in gitignore_content or f"{CONTEXT_DIR}/" in gitignore_content + ) + + return { + "success": True, + "context_dir": ctx_dir, + "files_created": list(contents.keys()), + "gitignore_note": "already in .gitignore" + if already_ignored + else ".context/ NOT in .gitignore — run gitignore_add to exclude from commits", + } + + +def 
update_context(path, file, summary, last_commit=None): + """Update or add an entry in active-files.md.""" + path = str(Path(path).expanduser().resolve()) + ctx_file = os.path.join(path, CONTEXT_DIR, "active-files.md") + if not os.path.exists(ctx_file): + return { + "success": False, + "error": f"Context not initialized. Run init_context first.", + } + + modified = get_file_modified(path, file) + freshness = freshness_score(path, file) + commit = last_commit or get_git_last_commit(path, file) + + marker = f"## {file}" + new_entry = f"""{marker} +- **modified:** {modified or "unknown"} +- **last_commit:** {commit} +- **freshness:** {freshness} +- **summary:** {summary} +""" + + with open(ctx_file) as f: + content = f.read() + + if marker in content: + existing_pattern = re.compile( + r"(## " + re.escape(file) + r".*?)(?=\n## |\n#|\Z)", re.DOTALL + ) + content = existing_pattern.sub(new_entry.strip(), content, count=1) + else: + content = content.rstrip() + "\n\n" + new_entry + + with open(ctx_file, "w") as f: + f.write(content) + + return { + "success": True, + "file": file, + "freshness": freshness, + "last_commit": commit, + } + + +def checkpoint_goals(path, short_term, long_term): + """Update goals.md with current goals.""" + path = str(Path(path).expanduser().resolve()) + goals_file = os.path.join(path, CONTEXT_DIR, "goals.md") + if not os.path.exists(goals_file): + return { + "success": False, + "error": "Context not initialized. 
Run init_context first.", + } + + with open(goals_file) as f: + content = f.read() + + lt_lines = ( + "\n".join(f"- {s}" for s in long_term) + if isinstance(long_term, list) + else str(long_term) + ) + st_lines = ( + "\n".join(f"- {s}" for s in short_term) + if isinstance(short_term, list) + else str(short_term) + ) + + content = re.sub( + r"(?<=^# Long-term Goals\n).*?(?=\n#)", + f"\n{lt_lines}\n", + content, + flags=re.DOTALL | re.MULTILINE, + ) + content = re.sub( + r"(?<=^# Short-term Goals\n).*?(?=\n#|\Z)", + f"\n{st_lines}\n", + content, + flags=re.DOTALL | re.MULTILINE, + ) + content = re.sub(r"last_updated:.*", f"last_updated: {iso_now()}", content) + + session_entry = f"- [{datetime.now().strftime('%Y-%m-%d')}] Goals checkpointed" + # goals.md templates use a top-level "# Session History" heading + if "# Session History" not in content: + content += f"\n\n# Session History\n{session_entry}\n" + else: + content = re.sub( + r"(?<=# Session History\n).*?(?=\n#|\Z)", + lambda m: (m.group() or "").rstrip() + f"\n{session_entry}\n", + content, + flags=re.DOTALL, + ) + + with open(goals_file, "w") as f: + f.write(content) + + return {"success": True, "last_updated": iso_now()} + + +def flag_stale(path, pattern, reason): + """Flag a file/pattern as stale in stale-files.md.""" + path = str(Path(path).expanduser().resolve()) + stale_file = os.path.join(path, CONTEXT_DIR, "stale-files.md") + active_file = os.path.join(path, CONTEXT_DIR, "active-files.md") + if not os.path.exists(stale_file): + return {"success": False, "error": "Context not initialized."} + + entry = f""" +### {pattern} +**Reason:** {reason} +**Date flagged:** {datetime.now().strftime("%Y-%m-%d")} +""" + with open(stale_file) as f: + content = f.read() + content = content.rstrip() + "\n" + entry + with open(stale_file, "w") as f: + f.write(content) + + if os.path.exists(active_file): + with open(active_file) as f: + active = f.read() + marker = f"## {pattern}" + if marker in active: + active = re.sub( + r"(## " + re.escape(pattern) + r".*?)(?=\n## |\n#|\Z)", + 
f"## {pattern} # STALE", + active, + count=1, + flags=re.DOTALL, + ) + with open(active_file, "w") as f: + f.write(active) + + return {"success": True, "pattern": pattern, "reason": reason} + + +def promote_stale(path, pattern): + """Remove stale flag from a pattern.""" + path = str(Path(path).expanduser().resolve()) + stale_file = os.path.join(path, CONTEXT_DIR, "stale-files.md") + active_file = os.path.join(path, CONTEXT_DIR, "active-files.md") + if not os.path.exists(stale_file): + return {"success": False, "error": "Context not initialized."} + + with open(stale_file) as f: + content = f.read() + entry_pattern = re.compile( + r"\n+### " + re.escape(pattern) + r".*?(?=\n### |\n#|\Z)", re.DOTALL + ) + removed = bool(entry_pattern.search(content)) + if removed: + content = entry_pattern.sub("", content) + with open(stale_file, "w") as f: + f.write(content) + + if os.path.exists(active_file): + with open(active_file) as f: + active = f.read() + active = active.replace(f"## {pattern} # STALE", f"## {pattern}") + freshness = freshness_score(path, pattern) + with open(active_file, "w") as f: + f.write(active) + + return {"success": True, "pattern": pattern, "removed_stale": removed} + + +def gitignore_add(path): + """Add .context/ to .gitignore.""" + path = str(Path(path).expanduser().resolve()) + gi_path = os.path.join(path, ".gitignore") + if os.path.exists(gi_path): + with open(gi_path) as f: + content = f.read() + else: + content = "" + if ".context/" not in content: + content += "\n.context/\n" + with open(gi_path, "w") as f: + f.write(content) + return {"success": True, "path": gi_path, "added": ".context/"} + + +def gitignore_remove(path): + """Remove .context/ from .gitignore (opt-in to track).""" + path = str(Path(path).expanduser().resolve()) + gi_path = os.path.join(path, ".gitignore") + if not os.path.exists(gi_path): + return {"success": False, "error": ".gitignore not found"} + with open(gi_path) as f: + lines = f.readlines() + new_lines = [l for l in 
lines if ".context/" not in l.rstrip("\n")] + with open(gi_path, "w") as f: + f.writelines(new_lines) + return {"success": True, "path": gi_path, "removed": ".context/"} + + +def gitignore_status(path): + """Check if .context/ is in .gitignore.""" + path = str(Path(path).expanduser().resolve()) + gi_path = os.path.join(path, ".gitignore") + if not os.path.exists(gi_path): + return { + "tracked": True, + "path": gi_path, + "note": ".gitignore doesn't exist — context would be tracked by default", + } + with open(gi_path) as f: + content = f.read() + ignored = ".context/" in content or f"{CONTEXT_DIR}/" in content + return {"tracked": not ignored, "path": gi_path} + + +TOOLS = [ + { + "name": "init_context", + "description": "Initialize .context/ directory in a project. Scans the repo, generates all context files, and optionally adds .context/ to .gitignore.", + "inputSchema": { + "type": "object", + "properties": { + "path": {"type": "string", "description": "Path to the project root"} + }, + "required": ["path"], + }, + }, + { + "name": "update_context", + "description": "Update or add an entry in active-files.md with file summary and git history.", + "inputSchema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "file": { + "type": "string", + "description": "File path relative to project root", + }, + "summary": { + "type": "string", + "description": "Brief description of the file's purpose or recent changes", + }, + "last_commit": { + "type": "string", + "description": "Optional: last git commit for this file", + }, + }, + "required": ["path", "file", "summary"], + }, + }, + { + "name": "checkpoint_goals", + "description": "Update goals.md with current short-term and long-term goals.", + "inputSchema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "short_term": { + "type": "array", + "items": {"type": "string"}, + "description": "Current short-term goals", + }, + "long_term": { + "type": "array", + "items": 
{"type": "string"}, + "description": "Long-term goals", + }, + }, + "required": ["path", "short_term", "long_term"], + }, + }, + { + "name": "flag_stale", + "description": "Mark a file or pattern as deprecated/stale.", + "inputSchema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "pattern": { + "type": "string", + "description": "File name or glob pattern", + }, + "reason": {"type": "string", "description": "Why it is stale"}, + }, + "required": ["path", "pattern", "reason"], + }, + }, + { + "name": "promote_stale", + "description": "Remove stale flag from a pattern, restoring it to active.", + "inputSchema": { + "type": "object", + "properties": {"path": {"type": "string"}, "pattern": {"type": "string"}}, + "required": ["path", "pattern"], + }, + }, + { + "name": "gitignore_add", + "description": "Add .context/ to .gitignore to prevent committing context files.", + "inputSchema": { + "type": "object", + "properties": {"path": {"type": "string"}}, + "required": ["path"], + }, + }, + { + "name": "gitignore_remove", + "description": "Remove .context/ from .gitignore to opt-in to committing context.", + "inputSchema": { + "type": "object", + "properties": {"path": {"type": "string"}}, + "required": ["path"], + }, + }, + { + "name": "gitignore_status", + "description": "Check whether .context/ is currently in .gitignore.", + "inputSchema": { + "type": "object", + "properties": {"path": {"type": "string"}}, + "required": ["path"], + }, + }, +] + + +def handle_request(req): + method = req.get("method") + req_id = req.get("id") + params = req.get("params", {}) + args = params.get("arguments", {}) + + if method == "initialize": + send_response( + { + "jsonrpc": "2.0", + "id": req_id, + "result": { + "protocolVersion": "2024-11-05", + "capabilities": {"tools": {}}, + "serverInfo": { + "name": "context-maintainer-mcp", + "version": "1.0.0", + }, + }, + } + ) + return + + if method == "tools/list": + send_response({"jsonrpc": "2.0", "id": req_id, 
"result": {"tools": TOOLS}}) + return + + if method == "tools/call": + tool_name = params.get("name")  # MCP spec: name is a sibling of arguments in params + tool_args = args + + if tool_name == "init_context": + result = scan_repo(**tool_args) + elif tool_name == "update_context": + result = update_context(**tool_args) + elif tool_name == "checkpoint_goals": + result = checkpoint_goals(**tool_args) + elif tool_name == "flag_stale": + result = flag_stale(**tool_args) + elif tool_name == "promote_stale": + result = promote_stale(**tool_args) + elif tool_name == "gitignore_add": + result = gitignore_add(**tool_args) + elif tool_name == "gitignore_remove": + result = gitignore_remove(**tool_args) + elif tool_name == "gitignore_status": + result = gitignore_status(**tool_args) + else: + result = {"error": f"Unknown tool: {tool_name}"} + + send_response( + { + "jsonrpc": "2.0", + "id": req_id, + "result": {"content": [{"type": "text", "text": json.dumps(result)}]}, + } + ) + return + + send_error(req_id, -32601, f"Method not found: {method}") + + +def main(): + while True: + msg = read_message() + if msg is None: + break + handle_request(msg) + + +if __name__ == "__main__": + main() diff --git a/skills/context-search/SKILL.md b/skills/context-search/SKILL.md new file mode 100644 index 0000000..64f682c --- /dev/null +++ b/skills/context-search/SKILL.md @@ -0,0 +1,167 @@ +--- +name: context-search +description: > + Search and retrieve persistent project context across AI agent sessions. + Load this skill when: starting a new session; encountering unfamiliar code; + needing to understand project structure or architecture; searching for a decision + or goal; wanting to find active files being worked on; or needing context on + a specific file. + Triggers: search context, find context, what's active, what files are we working on, + look up decision, what's the architecture, show me goals, what did we do last session, + context search, get active context. 
+ Also use when: you need to understand a file you haven't seen before; you want to + check if something is stale; you want to export context to Obsidian. +license: MIT +metadata: + version: "1.0" + category: productivity + tools: context-search-mcp +--- + +# Context Search + +Search and retrieve context from `.context/` files. Works alongside `context-maintainer` — `context-search` reads what `context-maintainer` writes. + +## Core Principle + +**Context files are written by `context-maintainer` during work. `context-search` retrieves them.** + +Never write to `.context/` files from this skill — use `context-maintainer` for that. + +## Prerequisites + +The `context-search-mcp` MCP server must be installed and configured. + +**OpenCode** — add to `~/.config/opencode/opencode.json` or `package.json`: +```json +{ + "mcp": { + "context-search": { + "type": "local", + "command": ["uvx", "context-search-mcp"], + "enabled": true + } + } +} +``` + +Restart the app. Verify with `/mcp` — `context-search` should appear. + +## MCP Tools + +| Tool | When to call | +|---|---| +| `get_active_context` | New session starts — get all non-stale context | +| `search_context` | Need to find something specific in context | +| `get_file_context` | Encountering an unfamiliar file | +| `export_to_vault` | User wants to explore context in Obsidian | +| `prune_context` | User asks to clean up old stale entries | +| `check_qmd` | Check if QMD is available for semantic search | + +--- + +## Workflows + +### New Session — Jump-Start + +``` +1. Call get_active_context(path="/path/to/project") + → Returns all active (non-stale) context files as readable summaries +2. Read goals.md output first — always know the goals before anything else +3. Read project.md — understand the project overview +4. Now you can work with full context of where the project is +``` + +### Encountering an Unfamiliar File + +``` +1. 
Call get_file_context(path="/path/to/project", filename="d2q9-bgk.c") + → Returns: last modified, last commit, freshness, what the file does +2. Call search_context(query="d2q9-bgk.c") if you need more detail +``` + +### Searching for a Decision or Goal + +``` +1. Call search_context(query="MPI halo exchange decision") + → Returns matching lines from decisions.md and other context files +2. If QMD is available, semantic search gives better results +``` + +### QMD Auto-Detection (Optional Enhancement) + +On first `search_context` call, the tool checks if QMD is available: + +- **QMD available + user approves:** Agent asks user: "QMD is available and enables semantic search for your context. Enable it?" + - If yes, agent runs: `qmd collection add /path/to/.context --name project-context` + `qmd embed` + - Subsequent `search_context` calls delegate to QMD's hybrid search +- **QMD not available:** Uses simple grep search (still works, just less semantic) + +### Export to Obsidian + +``` +1. Call export_to_vault(context_path="/path/to/project", vault_path="/path/to/obsidian-vault") + → Copies all .context/*.md files to the Obsidian vault +2. User can now open Obsidian and use its graph view to explore relationships +3. This is one-way (context → vault). External Obsidian changes need manual sync. +``` + +### Prune Old Stale Entries + +``` +1. Call prune_context(path="/path/to/project", older_than_days=30) + → Removes stale entries older than 30 days +2. 
Present the result: "Removed X old stale entries" +``` + +--- + +## When Context-Search Triggers + +**Always trigger on:** +- New session starts (`get_active_context`) +- Encountering a file you've never seen in the current session (`get_file_context`) +- User asks about architecture, goals, decisions, or project state +- Before starting a new feature, check existing context first + +**Never:** +- Re-read all context files before every edit (use targeted search) +- Dump full context files into chat — summarize and be selective + +--- + +## Context Window Efficiency + +**Token-efficient pattern:** +1. `get_active_context` → returns previews (not full contents) +2. Based on previews, selectively `get_file_context` for specific files +3. `search_context` for targeted lookups + +**Never load all context files fully into the chat** — use the summaries and search. + +--- + +## Example: Starting a New Session + +``` +You: "continue working on the MPI benchmark" + +Agent: + 1. get_active_context(path="/path/to/project") + → goals.md: "[ACTIVE] Fix MPI halo exchange bug" + → active-files.md: "d2q9-bgk.c, opt_log.md are HIGH freshness" + → decisions.md: "Chose non-blocking MPI_Isend over blocking MPI_Send" + 2. Good — I know what we're doing. Resume work. + 3. Make changes + 4. 
update_context (via context-maintainer) when done +``` + +--- + +## Anti-Patterns + +- ❌ Loading all context files fully into every response +- ❌ Writing to .context/ files from this skill (use context-maintainer) +- ❌ Searching without purpose — be targeted +- ❌ Ignoring freshness scores — HIGH freshness files deserve priority +- ❌ Forgetting to ask about goals at session start diff --git a/skills/context-search/scripts/pyproject.toml b/skills/context-search/scripts/pyproject.toml new file mode 100644 index 0000000..60f4639 --- /dev/null +++ b/skills/context-search/scripts/pyproject.toml @@ -0,0 +1,12 @@ +[project] +name = "context-search-mcp" +version = "1.0.0" +description = "MCP server for searching and retrieving .context/ files — optional QMD integration" +requires-python = ">=3.9" +dependencies = [] + +[project.scripts] +context-search-mcp = "server:main" + +[tool.upload] +distributions = ["sdist", "wheel"] diff --git a/skills/context-search/scripts/requirements.txt b/skills/context-search/scripts/requirements.txt new file mode 100644 index 0000000..57d52c9 --- /dev/null +++ b/skills/context-search/scripts/requirements.txt @@ -0,0 +1,2 @@ +# No external dependencies — Python stdlib only +# Optional: qmd for semantic search (install separately: npm install -g @tobilu/qmd) diff --git a/skills/context-search/scripts/server.py b/skills/context-search/scripts/server.py new file mode 100644 index 0000000..0d63fd7 --- /dev/null +++ b/skills/context-search/scripts/server.py @@ -0,0 +1,418 @@ +#!/usr/bin/env python3 +# SPDX-License-Identifier: MIT +""" +context-search MCP server — search .context/ files for AI coding agents. + +Usage: + uvx context-search-mcp + +Provides tools for searching and retrieving context from .context/ directories. +Includes optional QMD integration for hybrid semantic search. + +Environment: + No external dependencies — Python stdlib only. + QMD integration requires qmd to be installed separately. 
+""" + +import os +import sys +import json +import re +import shutil +import subprocess +from pathlib import Path +from datetime import datetime, timezone + +CONTEXT_DIR = ".context" + +QMD_CHECKED = False +QMD_AVAILABLE = False + + +def check_qmd(): + global QMD_CHECKED, QMD_AVAILABLE + if QMD_CHECKED: + return QMD_AVAILABLE + QMD_CHECKED = True + result = subprocess.run(["which", "qmd"], capture_output=True, text=True) + if result.returncode == 0: + QMD_AVAILABLE = True + return QMD_AVAILABLE + + +def read_message(): + while True: + line = sys.stdin.readline() + if not line: + return None + line = line.strip() + if line: + try: + return json.loads(line) + except json.JSONDecodeError: + continue + + +def send_response(resp): + sys.stdout.write(json.dumps(resp) + "\n") + sys.stdout.flush() + + +def send_error(req_id, code, message): + send_response( + {"jsonrpc": "2.0", "id": req_id, "error": {"code": code, "message": message}} + ) + + +def get_context_files(path): + ctx_dir = os.path.join(path, CONTEXT_DIR) + if not os.path.isdir(ctx_dir): + return None, f"Context directory not found: {ctx_dir}" + files = {} + for f in os.listdir(ctx_dir): + if f.endswith(".md"): + fpath = os.path.join(ctx_dir, f) + with open(fpath) as fh: + files[f] = fh.read() + return files, None + + +def grep_in_context(path, pattern, max_results=10): + """Simple grep-based search across all .context/ files.""" + files, err = get_context_files(path) + if err: + return {"results": [], "error": err} + + results = [] + pattern_lower = pattern.lower() + for fname, content in files.items(): + lines = content.splitlines() + for i, line in enumerate(lines, 1): + if pattern_lower in line.lower(): + snippet = line.strip()[:120] + context_before = [] + context_after = [] + for j in range(max(0, i - 3), i): + context_before.append((j + 1, lines[j].strip()[:120])) + for j in range(i, min(len(lines), i + 2)): + if j != i - 1: + context_after.append((j + 1, lines[j].strip()[:120])) + score = 1.0 if 
pattern_lower in snippet.lower() else 0.5 + results.append( + { + "file": fname, + "line": i, + "text": snippet, + "context_before": context_before, + "context_after": context_after, + "score": score, + } + ) + if len(results) >= max_results: + break + results.sort(key=lambda x: x["score"], reverse=True) + return {"results": results, "count": len(results)} + + +def search_context(query, path=None): + """Search .context/ files. If QMD is available, delegate to qmd query.""" + if not path: + return {"results": [], "error": "path is required"} + + ctx_dir = os.path.join(path, CONTEXT_DIR) + if not os.path.isdir(ctx_dir): + return {"results": [], "error": f"Context not initialized at {ctx_dir}"} + + if check_qmd(): + try: + result = subprocess.run( + ["qmd", "query", query, "--json", "-n", "10"], + cwd=ctx_dir, + capture_output=True, + text=True, + timeout=30, + ) + if result.returncode == 0: + parsed = json.loads(result.stdout) + return { + "results": parsed.get("results", []), + "engine": "qmd", + "note": "QMD hybrid search (BM25 + vector + reranking)", + } + except Exception: + pass + + results = grep_in_context(path, query) + results["engine"] = "grep" + results["note"] = "Simple grep search. Install qmd for hybrid semantic search." 
+ return results + + +def get_active_context(path): + """Return all non-stale context entries.""" + files, err = get_context_files(path) + if err: + return {"active": {}, "error": err} + + active = {} + for fname, content in files.items(): + stale_marker = "# STALE" in content or "# stale" in content.lower() + if not stale_marker: + preview = content[:300].strip() + active[fname] = {"preview": preview, "lines": len(content.splitlines())} + else: + active[fname] = { + "preview": "[STALE — see stale-files.md]", + "lines": len(content.splitlines()), + "stale": True, + } + + return {"active": active, "count": len(active)} + + +def get_file_context(path, filename): + """Get context entry for a specific file from active-files.md.""" + active_file = os.path.join(path, CONTEXT_DIR, "active-files.md") + if not os.path.exists(active_file): + return {"entry": None, "error": f"active-files.md not found at {active_file}"} + + with open(active_file) as f: + content = f.read() + + marker = f"## {filename}" + if marker not in content: + return {"entry": None, "found": False, "note": f"No entry found for {filename}"} + + entry = re.search( + r"## " + re.escape(filename) + r".*?(?=\n## |\n#|\Z)", content, re.DOTALL + ) + if entry: + return {"entry": entry.group().strip(), "found": True, "file": filename} + return {"entry": None, "found": False} + + +def export_to_vault(context_path, vault_path): + """Copy .context/ files to an Obsidian vault directory.""" + ctx_dir = os.path.join(context_path, CONTEXT_DIR) + if not os.path.isdir(ctx_dir): + return {"success": False, "error": f"No .context/ directory at {ctx_dir}"} + + vault = Path(vault_path).expanduser().resolve() + os.makedirs(vault, exist_ok=True) + + exported = [] + for f in os.listdir(ctx_dir): + if f.endswith(".md"): + src = os.path.join(ctx_dir, f) + dst = vault / f + shutil.copy2(src, dst) + exported.append(f) + + return { + "success": True, + "vault_path": str(vault), + "exported": exported, + "count": len(exported), 
"note": "Context exported to Obsidian vault. You can now use Obsidian's graph view to explore relationships.", + } + + +def prune_context(path, older_than_days=30): + """Remove stale entries older than N days from stale-files.md.""" + stale_file = os.path.join(path, CONTEXT_DIR, "stale-files.md") + if not os.path.exists(stale_file): + return {"pruned": 0, "error": "stale-files.md not found"} + + cutoff = datetime.now(timezone.utc).timestamp() - (older_than_days * 86400) + with open(stale_file) as f: + content = f.read() + + lines = content.splitlines() + new_lines = [] + removed = 0 + i = 0 + while i < len(lines): + line = lines[i] + if "**Date flagged:**" in line: + date_str = line.split("**Date flagged:**")[1].strip() + try: + flagged_date = ( + datetime.strptime(date_str, "%Y-%m-%d") + .replace(tzinfo=timezone.utc) + .timestamp() + ) + if flagged_date < cutoff: + while i < len(lines) and not ( + lines[i].startswith("### ") and i > 0 + ): + i += 1 + removed += 1 + continue + except ValueError: + pass + new_lines.append(line) + i += 1 + + with open(stale_file, "w") as f: + f.write("\n".join(new_lines)) + + return {"success": True, "pruned": removed, "older_than_days": older_than_days} + + +def check_qmd_status(): + """Check if QMD is available and what version.""" + if not check_qmd(): + return { + "available": False, + "note": "QMD not found. Install with: npm install -g @tobilu/qmd", + } + try: + result = subprocess.run(["qmd", "--version"], capture_output=True, text=True) + version = result.stdout.strip() if result.returncode == 0 else "unknown" + return {"available": True, "version": version} + except Exception: + return {"available": True, "note": "QMD found but version check failed"} + + +TOOLS = [ + { + "name": "search_context", + "description": "Search .context/ files for a query. 
Uses grep by default; if QMD is available, delegates to QMD for hybrid semantic search.", + "inputSchema": { + "type": "object", + "properties": { + "query": {"type": "string", "description": "Search query"}, + "path": {"type": "string", "description": "Path to project root"}, + }, + "required": ["query", "path"], + }, + }, + { + "name": "get_active_context", + "description": "Return all non-stale context entries as a readable summary.", + "inputSchema": { + "type": "object", + "properties": { + "path": {"type": "string", "description": "Path to project root"} + }, + "required": ["path"], + }, + }, + { + "name": "get_file_context", + "description": "Get context entry for a specific file from active-files.md.", + "inputSchema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "filename": { + "type": "string", + "description": "File name (not full path)", + }, + }, + "required": ["path", "filename"], + }, + }, + { + "name": "export_to_vault", + "description": "Copy .context/ files to an Obsidian vault directory for graph exploration.", + "inputSchema": { + "type": "object", + "properties": { + "context_path": { + "type": "string", + "description": "Path to project root containing .context/", + }, + "vault_path": { + "type": "string", + "description": "Path to Obsidian vault directory", + }, + }, + "required": ["context_path", "vault_path"], + }, + }, + { + "name": "prune_context", + "description": "Remove stale entries older than N days from stale-files.md.", + "inputSchema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "older_than_days": {"type": "integer", "default": 30}, + }, + "required": ["path"], + }, + }, + { + "name": "check_qmd", + "description": "Check if QMD is available on the system for optional semantic search.", + "inputSchema": {"type": "object", "properties": {}}, + }, +] + + +def handle_request(req): + method = req.get("method") + req_id = req.get("id") + params = req.get("params", {}) + args = 
params.get("arguments", {}) + + if method == "initialize": + send_response( + { + "jsonrpc": "2.0", + "id": req_id, + "result": { + "protocolVersion": "2024-11-05", + "capabilities": {"tools": {}}, + "serverInfo": {"name": "context-search-mcp", "version": "1.0.0"}, + }, + } + ) + return + + if method == "tools/list": + send_response({"jsonrpc": "2.0", "id": req_id, "result": {"tools": TOOLS}}) + return + + if method == "tools/call": + tool_name = args.get("name") + tool_args = {k: v for k, v in args.items() if k not in ("name",)} + + if tool_name == "search_context": + result = search_context(**tool_args) + elif tool_name == "get_active_context": + result = get_active_context(**tool_args) + elif tool_name == "get_file_context": + result = get_file_context(**tool_args) + elif tool_name == "export_to_vault": + result = export_to_vault(**tool_args) + elif tool_name == "prune_context": + result = prune_context(**tool_args) + elif tool_name == "check_qmd": + result = check_qmd_status() + else: + result = {"error": f"Unknown tool: {tool_name}"} + + send_response( + { + "jsonrpc": "2.0", + "id": req_id, + "result": {"content": [{"type": "text", "text": json.dumps(result)}]}, + } + ) + return + + send_error(req_id, -32601, f"Method not found: {method}") + + +def main(): + while True: + msg = read_message() + if msg is None: + break + handle_request(msg) + + +if __name__ == "__main__": + main() diff --git a/skills/markdown-mcp/SKILL.md b/skills/markdown-mcp/SKILL.md new file mode 100644 index 0000000..85c8183 --- /dev/null +++ b/skills/markdown-mcp/SKILL.md @@ -0,0 +1,191 @@ +--- +name: markdown-mcp +description: > + Use the markdown-mcp tool for all markdown file operations. Load this skill when you need to + read, edit, search, or modify any markdown (.md) file. Replaces the pattern of writing Python + scripts to manipulate files — use the MCP tools directly instead. 
+ Triggers: any time you need to edit a markdown file, read a markdown file, search within a + markdown file, add content to a markdown file, update a table in markdown, insert text into + a markdown file, or replace text in a markdown file. +license: MIT +metadata: + version: "1.0" + category: productivity + tools: markdown-mcp +--- + +# Markdown MCP Tools + +Use these tools for all markdown file operations. **Never write Python scripts to edit files.** + +## Prerequisites + +The `minimax-markdown-mcp` MCP server must be installed and configured. + +**OpenCode** — add to `~/.config/opencode/opencode.json` or `package.json`: +```json +{ + "mcp": { + "markdown-mcp": { + "type": "local", + "command": ["uvx", "minimax-markdown-mcp"], + "enabled": true + } + } +} +``` + +**Cursor** — add to MCP settings: +```json +{ + "mcpServers": { + "markdown-mcp": { + "command": "uvx", + "args": ["minimax-markdown-mcp"] + } + } +} +``` + +Restart the app after configuring. Verify with `/mcp` — `markdown-mcp` should appear. + +## Available Tools + +| Tool | When to use | +|---|---| +| `get_file_info` | Check line count, headers, tables — instant, no content read | +| `grep_lines` | Find a pattern in a file — returns line numbers + snippets | +| `read_section` | Read a specific section by pattern or line offsets | +| `replace_text` | Replace exact text — for small targeted changes | +| `insert_after_line` | Insert text after a specific line number | +| `append_line` | Append text to end of file | + +--- + +## Core Workflow + +**Never do this:** write a Python script → run it → debug → repeat. + +**Always do this:** + +### To understand a file before editing: + +``` +1. get_file_info(path="/path/to/file.md") + → tells you line count, section headers, table presence +2. grep_lines(path="/path/to/file.md", pattern="search term", context=2) + → shows line numbers + snippets around matches +3. 
read_section(path="/path/to/file.md", start_pat="## Section", end_pat="## Next Section") + → reads only that section +``` + +### To make a targeted change: + +``` +replace_text(path="/path/to/file.md", old_text="exact text to replace", new_text="new text", max_count=1) +``` + +### To insert content (e.g., new section after an existing one): + +``` +1. grep_lines(path="/path/to/file.md", pattern="## Previous Section") + → get the line number +2. insert_after_line(path="/path/to/file.md", after_line=N, text="## New Section\n\nContent.") +``` + +### To add to the end of a file: + +``` +append_line(path="/path/to/file.md", text="## New Section\n\nContent.") +``` + +--- + +## When to Use Each Tool + +### get_file_info — always start here +- No content read, just structure +- Tells you line count, headers, whether there are tables +- Use before grepping if you don't know the file structure + +### grep_lines — find content fast +- No full file read +- Returns line numbers + snippets with context lines +- Use to locate sections before editing + +### read_section — targeted reading +- Read only between two patterns, or between line offsets +- Never read an entire large file +- max 100 lines per call for large sections + +### replace_text — exact replacement +- For small text changes (a phrase, a table row, a code snippet) +- old_text must match EXACTLY including whitespace +- max_count=1 by default (change one occurrence at a time) + +### insert_after_line — precise insertion +- Insert new content after a specific line number +- Use grep_lines to find the right line first +- text can be multi-line (separate lines with \n) + +### append_line — end of file +- For adding new sections at the end +- Simpler than insert_after_line when you don't need precise placement + +--- + +## Common Tasks + +### Add a new change entry to an existing table + +``` +1. grep_lines(path="opt_log.md", pattern="Change\t", context=1) + → find where the table is +2. 
read_section(path="opt_log.md", start_pat="Change\t", end_pat="\n##") + → read the table +3. replace_text(path="opt_log.md", old_text="5c\t...", new_text="5c\t...\n5d\t...") +``` + +### Insert a new section between existing sections + +``` +1. grep_lines(path="file.md", pattern="## Existing Section") + → get line number +2. insert_after_line(path="file.md", after_line=N, text="## New Section\n\n...") +``` + +### Replace a specific code block + +``` +1. grep_lines(path="file.md", pattern="```c", context=3) + → find the code block +2. replace_text(path="file.md", old_text="```c\nOLD CODE\n```", new_text="```c\nNEW CODE\n```") +``` + +--- + +## File Size Rules + +| Size | Rule | +|---|---| +| Any size | Always `get_file_info` first | +| < 100 lines | Can `read_section` with wide offset | +| > 100 lines | Always `grep_lines` first, then targeted `read_section` | +| > 500 lines | Never read fully. Break into sections via grep | + +--- + +## Anti-Patterns + +- ❌ Writing and running a Python script to edit a file +- ❌ Reading the entire file with `read_section` using large offsets +- ❌ Using `replace_text` with old_text that doesn't exactly match +- ❌ Running `grep` via bash instead of using `grep_lines` +- ❌ Making multiple edits without checking each one's result + +## Output Rules + +After each tool call: +- If success: brief confirmation only ("Done.", "Replaced 1 occurrence.") +- If failure: state the error and what you tried ("replace_text failed: 'pattern' not found. 
Did you mean...?") +- Never output the full file content after editing diff --git a/skills/markdown-mcp/scripts/pyproject.toml b/skills/markdown-mcp/scripts/pyproject.toml new file mode 100644 index 0000000..f725d58 --- /dev/null +++ b/skills/markdown-mcp/scripts/pyproject.toml @@ -0,0 +1,12 @@ +[project] +name = "minimax-markdown-mcp" +version = "1.0.0" +description = "MCP server for efficient markdown file reading and editing — no Python scripts needed" +requires-python = ">=3.9" +dependencies = [] + +[project.scripts] +minimax-markdown-mcp = "server:main" + +[tool.upload] +distributions = ["sdist", "wheel"] diff --git a/skills/markdown-mcp/scripts/requirements.txt b/skills/markdown-mcp/scripts/requirements.txt new file mode 100644 index 0000000..cff12d3 --- /dev/null +++ b/skills/markdown-mcp/scripts/requirements.txt @@ -0,0 +1 @@ +# No external dependencies — uses only Python stdlib. diff --git a/skills/markdown-mcp/scripts/server.py b/skills/markdown-mcp/scripts/server.py new file mode 100644 index 0000000..e249c4c --- /dev/null +++ b/skills/markdown-mcp/scripts/server.py @@ -0,0 +1,425 @@ +#!/usr/bin/env python3 +# SPDX-License-Identifier: MIT +""" +minimax-markdown-mcp — MCP server for efficient markdown file reading and editing. + +Usage: + uvx minimax-markdown-mcp + +Environment: + MINIMAX_API_KEY (optional, not required for local file operations) + MINIMAX_API_HOST (optional, defaults to https://api.minimaxi.com) + +No external file operation dependencies — uses only stdlib. 
+""" + +import os +import sys +import json +import re +from pathlib import Path + +STDIO_HEADER = "application/json" + + +def read_message(): + """Read a JSON-RPC message from stdin.""" + while True: + line = sys.stdin.readline() + if not line: + return None + line = line.strip() + if line: + try: + return json.loads(line) + except json.JSONDecodeError: + continue + return None + + +def send_response(resp): + """Send a JSON-RPC response to stdout.""" + sys.stdout.write(json.dumps(resp) + "\n") + sys.stdout.flush() + + +def send_error(req_id, code, message): + """Send a JSON-RPC error.""" + send_response( + {"jsonrpc": "2.0", "id": req_id, "error": {"code": code, "message": message}} + ) + + +def read_section( + path: str, + start_pat: str = None, + end_pat: str = None, + start_offset: int = 0, + end_offset: int = None, +): + """ + Read a section of a file. + + Either by pattern (start_pat/end_pat) or by line offsets. + Returns {"content": "...", "start_line": N, "end_line": N, "total_lines": N} + """ + try: + p = Path(path).expanduser() + if not p.exists(): + return { + "content": "", + "start_line": 0, + "end_line": 0, + "total_lines": 0, + "error": f"File not found: {path}", + } + + lines = p.read_text().splitlines() + total = len(lines) + + if start_pat is not None: + start_idx = next((i for i, l in enumerate(lines) if start_pat in l), None) + if start_idx is None: + return { + "content": "", + "start_line": 0, + "end_line": total, + "total_lines": total, + "error": f"Pattern not found: {start_pat}", + } + if end_pat is not None: + end_idx = next( + ( + i + for i, l in enumerate( + lines[start_idx + 1 :], start=start_idx + 1 + ) + if end_pat in l + ), + None, + ) + if end_idx is None: + end_idx = total - 1 + else: + end_idx = min(start_idx + 100, total - 1) + else: + start_idx = start_offset + end_idx = ( + end_offset + if end_offset is not None + else min(start_idx + 100, total - 1) + ) + + end_idx = min(end_idx, total - 1) + content = "\n".join(lines[start_idx 
: end_idx + 1]) + return { + "content": content, + "start_line": start_idx + 1, + "end_line": end_idx + 1, + "total_lines": total, + } + + except Exception as e: + return { + "content": "", + "start_line": 0, + "end_line": 0, + "total_lines": 0, + "error": str(e), + } + + +def get_file_info(path: str): + """Get file structure info without reading content.""" + try: + p = Path(path).expanduser() + if not p.exists(): + return { + "exists": False, + "line_count": 0, + "headers": [], + "error": f"File not found: {path}", + } + + lines = p.read_text().splitlines() + headers = [] + for i, line in enumerate(lines): + stripped = line.strip() + if ( + stripped.startswith("# ") + or stripped.startswith("## ") + or stripped.startswith("### ") + ): + headers.append( + { + "level": len(stripped) - len(stripped.lstrip("#")), + "line": i + 1, + "text": stripped.lstrip("# ").strip()[:80], + } + ) + + return { + "exists": True, + "line_count": len(lines), + "size_bytes": p.stat().st_size, + "headers": headers[:50], + "has_tables": bool(re.search(r"\|.*\|.*\|", "\n".join(lines[:100]))), + } + except Exception as e: + return {"exists": False, "line_count": 0, "headers": [], "error": str(e)} + + +def grep_lines(path: str, pattern: str, context: int = 0): + """Find all lines matching a pattern. 
Returns line numbers + snippets.""" + try: + p = Path(path).expanduser() + if not p.exists(): + return {"matches": [], "count": 0, "error": f"File not found: {path}"} + + lines = p.read_text().splitlines() + matches = [] + for i, line in enumerate(lines): + if pattern in line: + snippet = line.strip()[:120] + if context > 0: + before = [ + (j + 1, lines[j].strip()[:120]) + for j in range(max(0, i - context), i) + ] + after = [ + (j + 1, lines[j].strip()[:120]) + for j in range(i + 1, min(len(lines), i + context + 1)) + ] + else: + before, after = [], [] + matches.append( + {"line": i + 1, "text": snippet, "before": before, "after": after} + ) + + return {"matches": matches, "count": len(matches)} + except Exception as e: + return {"matches": [], "count": 0, "error": str(e)} + + +def insert_after_line(path: str, after_line: int, text: str): + """Insert text after a specific line number. Lines are 1-indexed.""" + try: + p = Path(path).expanduser() + if not p.exists(): + return {"success": False, "error": f"File not found: {path}"} + + lines = p.read_text().splitlines() + total = len(lines) + + if after_line < 1 or after_line > total: + return { + "success": False, + "error": f"Line {after_line} out of range (1-{total})", + } + + new_lines = lines[:after_line] + new_lines.extend(text.splitlines()) + new_lines.extend(lines[after_line:]) + + p.write_text("\n".join(new_lines) + "\n") + return { + "success": True, + "new_total_lines": len(new_lines), + "inserted_after": after_line, + } + except Exception as e: + return {"success": False, "error": str(e)} + + +def replace_text(path: str, old_text: str, new_text: str, max_count: int = 1): + """Replace exact old_text with new_text. 
max_count=0 means replace all.""" + try: + p = Path(path).expanduser() + if not p.exists(): + return {"success": False, "replaced": 0, "error": f"File not found: {path}"} + + content = p.read_text() + if old_text not in content: + return {"success": False, "replaced": 0, "error": "Text not found in file"} + + if max_count > 0: + new_content = content.replace(old_text, new_text, max_count) + else: + new_content = content.replace(old_text, new_text) + + replaced = ( + content.count(old_text) + if max_count == 0 + else min(content.count(old_text), max_count) + ) + p.write_text(new_content) + return {"success": True, "replaced": replaced} + except Exception as e: + return {"success": False, "replaced": 0, "error": str(e)} + + +def append_line(path: str, text: str): + """Append a line or block of text to the end of a file.""" + try: + p = Path(path).expanduser() + existing = p.read_text() if p.exists() else "" + if existing and not existing.endswith("\n"): + existing += "\n" + p.write_text(existing + text + "\n") + return {"success": True, "appended_after_line": len(existing.splitlines())} + except Exception as e: + return {"success": False, "error": str(e)} + + +def handle_request(req): + """Handle an MCP JSON-RPC request.""" + method = req.get("method") + req_id = req.get("id") + params = req.get("params", {}) + + if method == "initialize": + send_response( + { + "jsonrpc": "2.0", + "id": req_id, + "result": { + "protocolVersion": "2024-11-05", + "capabilities": {"tools": {}}, + "serverInfo": {"name": "minimax-markdown-mcp", "version": "1.0.0"}, + }, + } + ) + return + + if method == "tools/list": + send_response( + { + "jsonrpc": "2.0", + "id": req_id, + "result": { + "tools": [ + { + "name": "get_file_info", + "description": "Get file structure (line count, headers, tables) without reading content.", + "inputSchema": { + "type": "object", + "properties": {"path": {"type": "string"}}, + "required": ["path"], + }, 
}, + { + "name": "grep_lines", + "description": "Find lines matching a pattern. Returns line numbers + snippets.", + "inputSchema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "pattern": {"type": "string"}, + "context": {"type": "integer", "default": 0}, + }, + "required": ["path", "pattern"], + }, + }, + { + "name": "read_section", + "description": "Read a section of a file by pattern or line offsets.", + "inputSchema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "start_pat": {"type": "string"}, + "end_pat": {"type": "string"}, + "start_offset": {"type": "integer", "default": 0}, + "end_offset": {"type": "integer"}, + }, + "required": ["path"], + }, + }, + { + "name": "insert_after_line", + "description": "Insert text after a specific line number.", + "inputSchema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "after_line": {"type": "integer"}, + "text": {"type": "string"}, + }, + "required": ["path", "after_line", "text"], + }, + }, + { + "name": "replace_text", + "description": "Replace exact old_text with new_text in a file.", + "inputSchema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "old_text": {"type": "string"}, + "new_text": {"type": "string"}, + "max_count": {"type": "integer", "default": 1}, + }, + "required": ["path", "old_text", "new_text"], + }, + }, + { + "name": "append_line", + "description": "Append text to end of file.", + "inputSchema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "text": {"type": "string"}, + }, + "required": ["path", "text"], + }, + }, + ] + }, + } + ) + return + + if method == "tools/call": + tool_name = params.get("name") + tool_args = params.get("arguments", {}) + + if tool_name == "get_file_info": + result = get_file_info(**tool_args) + elif tool_name == "grep_lines": + result = grep_lines(**tool_args) + elif tool_name == "read_section": + result = read_section(**tool_args) + elif 
tool_name == "insert_after_line": + result = insert_after_line(**tool_args) + elif tool_name == "replace_text": + result = replace_text(**tool_args) + elif tool_name == "append_line": + result = append_line(**tool_args) + else: + result = {"error": f"Unknown tool: {tool_name}"} + + send_response( + { + "jsonrpc": "2.0", + "id": req_id, + "result": {"content": [{"type": "text", "text": json.dumps(result)}]}, + } + ) + return + + send_error(req_id, -32601, f"Method not found: {method}") + + +def main(): + """Main loop — read JSON-RPC messages from stdin.""" + while True: + msg = read_message() + if msg is None: + break + handle_request(msg) + + +if __name__ == "__main__": + main()