User Story
As a developer
I want to have an MCP integration
So that I can easily add new tools for the agent to use, with a protocol that is easier to audit.
Description
I want to use the Model Context Protocol (MCP) for adding capabilities to the agent. With the agentic loop and messaging plus MCP, it should be possible to call a tool via a message and run a command with it. The first tool should be a folder explorer so that the AI can read files. This is only a proof of concept for the MCP work, so the tool should be very simple: it only needs to show that MCP works and serves as the first stepping stone.
Here is some additional info about MCP:
```
Host App (Pengine)
└── MCP Client (built into Pengine)
    ├── MCP Server A (web search tool)
    ├── MCP Server B (file ops tool)
    └── MCP Server C (code sandbox tool)
```
| Role | In Pengine | What It Does |
| --- | --- | --- |
| Host | Pengine (Tauri/binary) | Owns the LLM connection, manages clients |
| Client | Built into Pengine | Connects to MCP servers, translates tool calls |
| Server | Docker container | Exposes tools via the MCP protocol |
Example server config (npm packages launched via npx):

```json
{
  "servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user"]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "your-key" }
    }
  }
}
```
Example server config (Docker container):

```json
{
  "servers": {
    "github": {
      "command": "docker",
      "args": ["run", "--rm", "-i",
        "-e", "GITHUB_PERSONAL_ACCESS_TOKEN=your-token",
        "mcp/github"]
    }
  }
}
```
Three messages that matter:
- initialize → handshake, negotiate version
- tools/list → discover what tools the server has
- tools/call → execute a tool, get text result back
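The three messages above can be sketched as JSON-RPC 2.0 requests framed for the stdio transport (newline-delimited JSON). A minimal sketch, assuming the protocol version string and the `pengine`/`list_directory` names as placeholders rather than final values:

```python
import json

def make_request(req_id, method, params=None):
    """Frame one JSON-RPC 2.0 request as a newline-terminated line."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

# 1. initialize → handshake, negotiate version
init = make_request(1, "initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "pengine", "version": "0.1.0"},
})

# 2. tools/list → discover what tools the server has
list_tools = make_request(2, "tools/list")

# 3. tools/call → execute a tool by name with arguments
call = make_request(3, "tools/call", {
    "name": "list_directory",
    "arguments": {"path": "/home/user"},
})
```

Each line is written to the server's stdin; the matching responses come back one per line on its stdout, correlated by id.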
Ollama bridge:
- MCP inputSchema → Ollama parameters (same JSON Schema, rename the key)
- Ollama tool_calls → MCP tools/call (same name + args)
- Best model: qwen3:8b (check `ollama show <model>` for "tools" capability)
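The bridge in both directions is a small rename-and-wrap. A sketch, assuming Ollama's OpenAI-style function wrapper and an illustrative `list_directory` tool:

```python
def mcp_tool_to_ollama(tool):
    """MCP tools/list entry → Ollama tool: inputSchema becomes parameters."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool["inputSchema"],  # same JSON Schema, new key
        },
    }

def ollama_call_to_mcp(tool_call):
    """Ollama tool_call → params for an MCP tools/call request."""
    fn = tool_call["function"]
    return {"name": fn["name"], "arguments": fn["arguments"]}

mcp_tool = {
    "name": "list_directory",
    "description": "List files in a directory",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}
ollama_tool = mcp_tool_to_ollama(mcp_tool)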
For Pengine:
- stdio transport, Docker with the -i flag, no ports needed
- reuse existing mcp/ servers from Docker Hub or npm
- build custom servers in any language (~60 lines of Python)
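To make the "~60 lines of Python" claim concrete, here is a minimal sketch of a stdio MCP server exposing one folder-explorer tool, matching the PoC described above. Server name, tool name, and protocol version are assumptions for illustration, and only the three messages that matter are handled:

```python
import json
import os
import sys

# The single tool this PoC server exposes: list a directory's entries.
TOOL = {
    "name": "list_directory",
    "description": "List the entries of a directory",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

def handle(req):
    """Map one JSON-RPC request dict to a JSON-RPC response dict."""
    method, req_id = req.get("method"), req.get("id")
    if method == "initialize":
        result = {
            "protocolVersion": "2024-11-05",
            "capabilities": {"tools": {}},
            "serverInfo": {"name": "folder-explorer", "version": "0.1.0"},
        }
    elif method == "tools/list":
        result = {"tools": [TOOL]}
    elif method == "tools/call":
        path = req["params"]["arguments"]["path"]
        listing = "\n".join(sorted(os.listdir(path)))
        result = {"content": [{"type": "text", "text": listing}]}
    else:
        return {"jsonrpc": "2.0", "id": req_id,
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": req_id, "result": result}

def main():
    # stdio transport: one JSON-RPC message per line, no ports needed.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)
```

Calling `main()` as the process entry point is all the wiring required; the host launches it with `-i` (Docker) or directly and talks over stdin/stdout.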
Acceptance Criteria