Automatic MCP Server & OpenAI Tools Bridge for apcore.
apcore-mcp turns any apcore-based project into an MCP Server and OpenAI tool provider — with zero code changes to your existing project.
┌──────────────────┐
│ django-apcore │ ← your existing apcore project (unchanged)
│ flask-apcore │
│ ... │
└────────┬─────────┘
│ extensions directory
▼
┌──────────────────┐
│ apcore-mcp │ ← just install & point to extensions dir
└───┬──────────┬───┘
│ │
▼ ▼
MCP OpenAI
Server Tools
- Zero intrusion — your apcore project needs no code changes, no imports, no dependencies on apcore-mcp
- Zero configuration — point to an extensions directory, everything is auto-discovered
- Pure adapter — apcore-mcp reads from the apcore Registry; it never modifies your modules
- Works with any xxx-apcore project — if it uses the apcore Module Registry, apcore-mcp can serve it
Install apcore-mcp alongside your existing apcore project:
pip install apcore-mcp

That's it. Your existing project requires no changes.
Requires Python 3.10+ and apcore >= 0.5.0.
The repo includes 5 example modules (class-based + binding.yaml) you can run immediately:
pip install -e .
PYTHONPATH=./examples/binding_demo python examples/run.py
# Open http://127.0.0.1:8000/explorer/

See examples/README.md for all run modes and module details.
If you already have an apcore-based project with an extensions directory, just run:
apcore-mcp --extensions-dir /path/to/your/extensions

All modules are auto-discovered and exposed as MCP tools. No code needed.
For tighter integration or when you need filtering/OpenAI output:
from apcore import Registry
from apcore_mcp import serve, to_openai_tools
registry = Registry(extensions_dir="./extensions")
registry.discover()
# Launch as MCP Server
serve(registry)
# Or export as OpenAI tools
tools = to_openai_tools(registry)

your-project/
├── extensions/ ← modules live here
│ ├── image_resize/
│ ├── text_translate/
│ └── ...
├── your_app.py ← your existing code (untouched)
└── ...
No changes to your project. Just run apcore-mcp alongside it:
# Install (one time)
pip install apcore-mcp
# Run
apcore-mcp --extensions-dir ./extensions

Your existing application continues to work exactly as before. apcore-mcp operates as a separate process that reads from the same extensions directory.
For OpenAI integration, a thin script is needed — but still no changes to your existing modules:
from apcore import Registry
from apcore_mcp import to_openai_tools
registry = Registry(extensions_dir="./extensions")
registry.discover()
tools = to_openai_tools(registry)
# Use with openai.chat.completions.create(tools=tools)

Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
{
"mcpServers": {
"apcore": {
"command": "apcore-mcp",
"args": ["--extensions-dir", "/path/to/your/extensions"]
}
}
}

Add to .mcp.json in your project root:
{
"mcpServers": {
"apcore": {
"command": "apcore-mcp",
"args": ["--extensions-dir", "./extensions"]
}
}
}

Add to .cursor/mcp.json in your project root:
{
"mcpServers": {
"apcore": {
"command": "apcore-mcp",
"args": ["--extensions-dir", "./extensions"]
}
}
}

apcore-mcp --extensions-dir ./extensions \
--transport streamable-http \
--host 0.0.0.0 \
--port 9000

Connect any MCP client to http://your-host:9000/mcp.
apcore-mcp --extensions-dir PATH [OPTIONS]
| Option | Default | Description |
|---|---|---|
| --extensions-dir | (required) | Path to apcore extensions directory |
| --transport | stdio | Transport: stdio, streamable-http, or sse |
| --host | 127.0.0.1 | Host for HTTP-based transports |
| --port | 8000 | Port for HTTP-based transports (1-65535) |
| --name | apcore-mcp | MCP server name (max 255 chars) |
| --version | package version | MCP server version string |
| --log-level | INFO | Logging: DEBUG, INFO, WARNING, ERROR |
| --explorer | off | Enable the browser-based Tool Explorer UI (HTTP only) |
| --explorer-prefix | /explorer | URL prefix for the explorer UI |
| --allow-execute | off | Allow tool execution from the explorer UI |
| --jwt-secret | — | JWT secret key for Bearer token auth (HTTP only) |
| --jwt-key-file | — | Path to PEM key file for JWT verification (e.g. RS256 public key) |
| --jwt-algorithm | HS256 | JWT signing algorithm |
| --jwt-audience | — | Expected JWT audience claim |
| --jwt-issuer | — | Expected JWT issuer claim |
| --jwt-require-auth | on | Require a valid token; use --no-jwt-require-auth for permissive mode |
| --exempt-paths | — | Comma-separated paths exempt from auth (e.g. /health,/metrics) |
JWT key resolution priority: --jwt-key-file > --jwt-secret > JWT_SECRET environment variable.
Exit codes: 0 normal, 1 invalid arguments, 2 startup failure.
from apcore_mcp import serve
serve(
registry_or_executor, # Registry or Executor
transport="stdio", # "stdio" | "streamable-http" | "sse"
host="127.0.0.1", # host for HTTP transports
port=8000, # port for HTTP transports
name="apcore-mcp", # server name
version=None, # defaults to package version
on_startup=None, # callback before transport starts
on_shutdown=None, # callback after transport completes
tags=None, # filter modules by tags
prefix=None, # filter modules by ID prefix
log_level=None, # logging level ("DEBUG", "INFO", etc.)
validate_inputs=False, # validate inputs against schemas
metrics_collector=None, # MetricsCollector for /metrics endpoint
explorer=False, # enable browser-based Tool Explorer UI
explorer_prefix="/explorer", # URL prefix for the explorer
allow_execute=False, # allow tool execution from the explorer
authenticator=None, # Authenticator for JWT/token auth (HTTP only)
require_auth=True, # False = permissive mode (no 401)
exempt_paths=None, # exact paths that bypass auth
)

Accepts either a Registry or an Executor. When a Registry is passed, an Executor is created automatically.
When explorer=True is passed to serve(), a browser-based Tool Explorer UI is mounted on HTTP transports. It provides an interactive page for browsing tool schemas and testing tool execution.
serve(registry, transport="streamable-http", explorer=True, allow_execute=True)
# Open http://127.0.0.1:8000/explorer/ in a browser

Endpoints:
| Endpoint | Description |
|---|---|
| GET /explorer/ | Interactive HTML page (self-contained, no external dependencies) |
| GET /explorer/tools | JSON array of all tools with name, description, annotations |
| GET /explorer/tools/<name> | Full tool detail with inputSchema |
| POST /explorer/tools/<name>/call | Execute a tool (requires allow_execute=True) |
- HTTP transports only (streamable-http, sse). Silently ignored for stdio.
- Execution disabled by default — set allow_execute=True to enable Try-it.
- Custom prefix — use explorer_prefix="/browse" to mount at a different path.
Optional Bearer token authentication for HTTP transports. Supports symmetric (HS256) and asymmetric (RS256) algorithms.
from apcore_mcp.auth import JWTAuthenticator
auth = JWTAuthenticator(key="my-secret")
serve(
registry,
transport="streamable-http",
authenticator=auth,
explorer=True,
allow_execute=True,
)

Permissive mode — allow unauthenticated access (identity is None when no token is provided):
serve(registry, transport="streamable-http", authenticator=auth, require_auth=False)

Path exemption — bypass auth for specific paths:
serve(registry, transport="streamable-http", authenticator=auth, exempt_paths={"/health", "/metrics"})

See examples/README.md for a runnable JWT demo with a pre-generated test token.
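For quick local testing against a symmetric-key (HS256) setup, a token can be minted with the standard library alone. This is a hedged sketch — real projects should use a maintained library such as PyJWT; `mint_hs256_token` and the claim values are illustrative only.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_hs256_token(secret: str, claims: dict) -> str:
    """Minimal HS256 JWT for local testing only (use PyJWT in real code)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

# Hypothetical claims; match the secret passed to JWTAuthenticator
token = mint_hs256_token("my-secret", {"sub": "demo", "exp": int(time.time()) + 3600})
# Send as: Authorization: Bearer <token>
```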
When metrics_collector is provided to serve(), a /metrics HTTP endpoint is exposed that returns metrics in Prometheus text exposition format.
- Available on HTTP-based transports only (streamable-http, sse). Not available with the stdio transport.
- Returns Prometheus text format with Content-Type text/plain; version=0.0.4; charset=utf-8.
- Returns 404 when no metrics_collector is configured.
from apcore.observability import MetricsCollector
from apcore_mcp import serve
collector = MetricsCollector()
serve(registry, transport="streamable-http", metrics_collector=collector)
# GET http://127.0.0.1:8000/metrics -> Prometheus text format

from apcore_mcp import to_openai_tools
tools = to_openai_tools(
registry_or_executor, # Registry or Executor
embed_annotations=False, # append annotation hints to descriptions
strict=False, # OpenAI Structured Outputs strict mode
tags=None, # filter by tags, e.g. ["image"]
prefix=None, # filter by module ID prefix, e.g. "image"
)

Returns a list of dicts directly usable with the OpenAI API:
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Resize the image to 512x512"}],
tools=tools,
)

Strict mode (strict=True): sets additionalProperties: false, makes all properties required (optional ones become nullable), and removes defaults.
Annotation embedding (embed_annotations=True): appends [Annotations: read_only, idempotent] to descriptions.
Filtering: tags=["image"] or prefix="text" to expose a subset of modules.
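The strict-mode rules above can be illustrated with a small transform. This is a sketch of the documented behavior, not the library's actual implementation, and it only handles simple single-type properties.

```python
import copy

def to_strict_schema(schema: dict) -> dict:
    """Sketch of the documented strict-mode rules: close the object,
    require every property, make formerly-optional ones nullable,
    and drop defaults."""
    s = copy.deepcopy(schema)
    props = s.get("properties", {})
    required = set(s.get("required", []))
    for name, prop in props.items():
        prop.pop("default", None)  # strict mode removes defaults
        if name not in required and "type" in prop:
            # Optional property becomes required-but-nullable
            prop["type"] = [prop["type"], "null"]
    s["required"] = list(props)
    s["additionalProperties"] = False
    return s

schema = {
    "type": "object",
    "properties": {
        "path": {"type": "string"},
        "width": {"type": "integer", "default": 512},
    },
    "required": ["path"],
}
strict = to_strict_schema(schema)
```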
If you need custom middleware, ACL, or execution configuration:
from apcore import Registry, Executor
registry = Registry(extensions_dir="./extensions")
registry.discover()
executor = Executor(registry)
serve(executor)
tools = to_openai_tools(executor)

- Auto-discovery — all modules in the extensions directory are found and exposed automatically
- Three transports — stdio (default, for desktop clients), Streamable HTTP, and SSE
- JWT authentication — optional Bearer token auth for HTTP transports with JWTAuthenticator, permissive mode, PEM key file support, and env var fallback
- Annotation mapping — apcore annotations (readonly, destructive, idempotent) map to MCP ToolAnnotations
- Schema conversion — JSON Schema $ref/$defs inlining, strict mode for OpenAI Structured Outputs
- Error sanitization — ACL errors and internal errors are sanitized; stack traces are never leaked
- Dynamic registration — modules registered/unregistered at runtime are reflected immediately
- Dual output — same registry powers both MCP Server and OpenAI tool definitions
- Tool Explorer — browser-based UI for browsing schemas and testing tools interactively, with Swagger-UI-style auth input
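The $ref/$defs inlining mentioned above can be sketched as a recursive walk. This is a simplified illustration that handles only local, non-recursive references, not the library's SchemaConverter.

```python
import copy

def inline_refs(schema: dict) -> dict:
    """Sketch of local $ref/$defs inlining: replace each
    {"$ref": "#/$defs/Name"} with a copy of the named definition."""
    defs = schema.get("$defs", {})

    def walk(node):
        if isinstance(node, dict):
            ref = node.get("$ref", "")
            if ref.startswith("#/$defs/"):
                # Substitute the definition body in place of the reference
                return walk(copy.deepcopy(defs[ref.split("/")[-1]]))
            # Drop $defs from the output since everything is inlined
            return {k: walk(v) for k, v in node.items() if k != "$defs"}
        if isinstance(node, list):
            return [walk(v) for v in node]
        return node

    return walk(schema)

schema = {
    "type": "object",
    "properties": {"size": {"$ref": "#/$defs/Size"}},
    "$defs": {"Size": {"type": "integer", "minimum": 1}},
}
inlined = inline_refs(schema)
```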
| apcore | MCP |
|---|---|
| module_id | Tool name |
| description | Tool description |
| input_schema | inputSchema |
| annotations.readonly | ToolAnnotations.readOnlyHint |
| annotations.destructive | ToolAnnotations.destructiveHint |
| annotations.idempotent | ToolAnnotations.idempotentHint |
| annotations.open_world | ToolAnnotations.openWorldHint |
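The annotation mapping above amounts to a simple field rename. The sketch below illustrates it with plain dicts (`to_tool_annotations` is a hypothetical helper, not part of the package's API).

```python
def to_tool_annotations(annotations: dict) -> dict:
    """Sketch of the apcore -> MCP ToolAnnotations field mapping
    from the table above, using plain dicts."""
    field_map = {
        "readonly": "readOnlyHint",
        "destructive": "destructiveHint",
        "idempotent": "idempotentHint",
        "open_world": "openWorldHint",
    }
    # Only annotations that are actually set are carried over
    return {mcp: annotations[ap] for ap, mcp in field_map.items() if ap in annotations}

hints = to_tool_annotations({"readonly": True, "idempotent": True})
```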
| apcore | OpenAI |
|---|---|
| module_id (image.resize) | name (image-resize) |
| description | description |
| input_schema | parameters |
Module IDs with dots are normalized to dashes for OpenAI compatibility (bijective mapping).
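The dot-to-dash normalization can be sketched in two lines. Note the hedge: a plain character swap is bijective only if module IDs never contain dashes themselves, which this sketch assumes (the package's IDNormalizer may handle this differently).

```python
def to_openai_name(module_id: str) -> str:
    """Dots -> dashes; bijective only if module IDs contain no dashes."""
    return module_id.replace(".", "-")

def to_module_id(openai_name: str) -> str:
    """Inverse mapping: dashes -> dots."""
    return openai_name.replace("-", ".")

# Round-trips under the no-dashes assumption
assert to_module_id(to_openai_name("image.resize")) == "image.resize"
```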
Your apcore project (unchanged)
│
│ extensions directory
▼
apcore-mcp (separate process / library call)
│
├── MCP Server path
│ SchemaConverter + AnnotationMapper
│ → MCPServerFactory → ExecutionRouter → TransportManager
│
└── OpenAI Tools path
SchemaConverter + AnnotationMapper + IDNormalizer
→ OpenAIConverter → list[dict]
git clone https://github.com/aipartnerup/apcore-mcp-python.git
cd apcore-mcp-python
pip install -e ".[dev]"
pytest # 450 tests
pytest --cov # with coverage report

src/apcore_mcp/
├── __init__.py # Public API: serve(), to_openai_tools()
├── __main__.py # CLI entry point
├── adapters/
│ ├── schema.py # JSON Schema conversion ($ref inlining)
│ ├── annotations.py # Annotation mapping (apcore → MCP/OpenAI)
│ ├── errors.py # Error sanitization
│ └── id_normalizer.py # Module ID normalization (dot ↔ dash)
├── auth/
│ ├── __init__.py # Auth exports
│ ├── protocol.py # Authenticator protocol
│ ├── jwt.py # JWTAuthenticator with ClaimMapping
│ └── middleware.py # ASGI AuthMiddleware + extract_headers()
├── converters/
│ └── openai.py # OpenAI tool definition converter
├── explorer/
│ ├── __init__.py # create_explorer_mount() entry point
│ ├── routes.py # Starlette route handlers
│ └── html.py # Self-contained HTML/CSS/JS page
└── server/
├── factory.py # MCP Server creation and tool building
├── router.py # Tool call → Executor routing
├── transport.py # Transport management (stdio/HTTP/SSE)
└── listener.py # Dynamic module registration listener
Apache-2.0