An MCP (Model Context Protocol) server that enables LM Studio and other MCP-compatible clients to generate images using ComfyUI's Stable Diffusion workflows.
ImageMCP bridges the gap between AI language models and image generation by providing a clean MCP interface to ComfyUI. This allows language models to generate images on-demand during conversations.
💡 NOTE: The documentation below includes development details. If you just want to jump in and use ImageMCP, see the Quick Start Guide.
LM Studio → MCP Protocol (HTTP) → ImageMCP Server → WebSocket → ComfyUI API
Key Components:
- MCP Server: Handles JSON-RPC 2.0 protocol via HTTP (EDMCP-style)
- Workflow Template Manager: Parses and modifies ComfyUI JSON workflows
- ComfyUI Client: WebSocket/HTTP communication with ComfyUI
- Auto-format Detection: Handles both UI and API format workflows
- ✅ HTTP MCP Support: JSON-RPC 2.0 over HTTP for LM Studio integration
- ✅ ComfyUI Workflow Integration: Parse, modify, and execute ComfyUI workflows
- ✅ Smart Prompt Injection: Automatically finds and replaces CLIP Text Encode nodes
- ✅ Auto-format Detection: Works with both UI and API format workflows
- ✅ Configurable Templates: Support for custom workflow JSON files
- ✅ WebSocket Monitoring: Real-time progress tracking via ComfyUI WebSocket
- ✅ Base64 Image Return: Direct image data in MCP format
- ✅ Comprehensive Logging: Detailed execution tracking with Microsoft.Extensions.Logging
- ✅ Retry Logic: Automatic retry for history retrieval
- ✅ Error Handling: Graceful timeout and error management
- .NET 10 SDK - Download
- ComfyUI - Running instance with WebSocket API enabled
- Download from: https://github.com/comfyanonymous/ComfyUI
- Default URL: `ws://127.0.0.1:8188`
- LM Studio - For MCP client support
- Download from: https://lmstudio.ai/
- Stable Diffusion Models - At least one checkpoint model in ComfyUI
```bash
# Clone ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# Install dependencies (Python 3.10+)
pip install -r requirements.txt

# Download a model (e.g., Stable Diffusion XL)
# Place models in: ComfyUI/models/checkpoints/
# Example: sd_xl_base_1.0_0.9vae.safetensors
```

Installing the ComfyUI desktop app is optional but recommended for ease of use; it replaces the manual setup above.

```bash
# From the ComfyUI directory
python main.py

# ComfyUI will start on: http://127.0.0.1:8188
# WebSocket available at: ws://127.0.0.1:8188/ws
```

(Or just run the ComfyUI desktop app, if installed.)
```bash
# Clone this repository
git clone <your-repo-url>
cd ImageMCP

# Restore dependencies
dotnet restore

# Build the project
dotnet build

# Run tests (optional)
dotnet test
```

(Or download and install a build package.)
Edit `appsettings.json`:

```json
{
  "ComfyUI": {
    "ApiEndpoint": "ws://127.0.0.1:8188",
    "DefaultTemplate": "workflows/default_workflow.json",
    "TimeoutSeconds": 300,
    "PollIntervalSeconds": 1
  },
  "MCP": {
    "ServerName": "ImageMCP",
    "ServerVersion": "1.0.0-dev",
    "HttpPort": 5243,
    "HttpUrl": "http://localhost:5243"
  }
}
```

Important: Use API format workflows for best results!
1. Design a workflow in ComfyUI's web interface
2. Click "Save (API Format)" or export via the API
3. Save it to `workflows/default_workflow.json`

The workflow must contain:

- At least one `CLIPTextEncode` node for the positive prompt
- Optionally a second `CLIPTextEncode` node for the negative prompt
- A `SaveImage` node for output
Note: ImageMCP auto-detects workflow format and handles both UI and API formats, but API format is recommended for reliability.
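For orientation, a heavily stripped-down API-format template satisfying those requirements could look like the sketch below (node IDs, prompts, and link values are placeholders; a real workflow also needs checkpoint loader, sampler, and VAE decode nodes):

```json
{
  "6": {
    "class_type": "CLIPTextEncode",
    "inputs": { "text": "a serene mountain landscape", "clip": ["4", 1] }
  },
  "7": {
    "class_type": "CLIPTextEncode",
    "inputs": { "text": "text, watermark, low quality", "clip": ["4", 1] }
  },
  "9": {
    "class_type": "SaveImage",
    "inputs": { "filename_prefix": "ImageMCP", "images": ["8", 0] }
  }
}
```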
```bash
dotnet run --project ImageMCP
```

Or from the project directory:

```bash
dotnet run
```

Or, on Windows:

```bash
start_server.bat
```

Or, on Linux:

```bash
./start_server.sh
```

Output:

```
info: Startup[0]
      ComfyUI settings loaded: ApiEndpoint=ws://127.0.0.1:8188, DefaultTemplate=workflows/default_workflow.json
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://localhost:5243
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
```
ImageMCP uses HTTP mode (EDMCP-style) for LM Studio integration:
1. Open LM Studio
2. Go to the Developer tab → MCP Servers
3. Click "Configure" or edit `mcp.json`
4. Add the ImageMCP server configuration:

```json
{
  "mcpServers": {
    "imagemcp": {
      "url": "http://localhost:5243/mcp"
    }
  }
}
```

5. Click "Save" or save the file
6. Restart LM Studio or reload the MCP configuration
The server should connect and show as available in LM Studio.
In LM Studio chat, ask the model:
"Generate an image of a serene mountain landscape at sunset with vibrant colors"
The model will:

1. Recognize the image generation request
2. Call the `generate_image` tool with your prompt
3. ImageMCP will inject the prompt into the workflow
4. ComfyUI will generate the image (this may take 1-2 minutes)
5. The image will be returned and displayed in LM Studio
Example output:
```
info: ImageGen[0]
      Template is already in API format, injecting prompts directly
info: ImageGen[0]
      Submitting workflow with prompt: serene mountain landscape at sunset with vibrant colors
info: ImageMCP.Services.ComfyUIClient[0]
      Workflow submitted successfully. Prompt ID: 628d6005-8bee-4236-b15b-eefdd39efeb6
info: ImageMCP.Services.ComfyUIClient[0]
      Workflow execution completed: 628d6005-8bee-4236-b15b-eefdd39efeb6
info: ImageMCP.Services.ComfyUIClient[0]
      Retrieved 1 images for prompt: 628d6005-8bee-4236-b15b-eefdd39efeb6
```
| Setting | Description | Default |
|---|---|---|
| `ApiEndpoint` | ComfyUI WebSocket URL | `ws://127.0.0.1:8188` |
| `DefaultTemplate` | Path to the default workflow JSON | `workflows/default_workflow.json` |
| `TimeoutSeconds` | Max execution time | `300` (5 minutes) |
| `PollIntervalSeconds` | WebSocket poll interval | `1` |
| Setting | Description | Default |
|---|---|---|
| `ServerName` | Display name for MCP | `ImageMCP` |
| `ServerVersion` | Server version | `1.0.0-dev` |
| `HttpPort` | HTTP server port | `5243` |
| `HttpUrl` | HTTP server URL | `http://localhost:5243` |
| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | ✅ Yes | Text description of the image to generate |
| `negative_prompt` | string | ❌ No | What to avoid in the image (default: `"text, watermark, low quality, blurry, distorted"`) |
| `template` | string | ❌ No | Custom workflow template path (default: from config) |
```
ImageMCP/
├── Models/
│   ├── ComfyUISettings.cs           # Configuration model
│   ├── ComfyWorkflow.cs             # Workflow JSON structure
│   ├── ComfyPromptRequest.cs        # API request/response models
│   ├── McpMessage.cs                # MCP protocol messages
│   └── McpSettings.cs               # MCP configuration
├── Services/
│   ├── ComfyUIClient.cs             # WebSocket/HTTP client for ComfyUI
│   └── WorkflowTemplateManager.cs   # Template loading and prompt injection
├── workflows/
│   └── default_workflow.json        # Default SDXL workflow (API format)
├── ImageMCP.Tests/
│   ├── Services/                    # Service layer tests
│   └── Integration/                 # End-to-end tests
├── Program.cs                       # Application entry point (HTTP server)
├── appsettings.json                 # Configuration file
└── README.md                        # This file
```
```bash
# Run all tests
dotnet test

# Unit tests only
dotnet test --filter Category=Unit

# Integration tests (requires ComfyUI running)
dotnet test --filter Category=Integration

# With coverage
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=opencover
```

Error: Could not connect to ComfyUI at ws://127.0.0.1:8188
Solutions:
- Verify ComfyUI is running: open `http://127.0.0.1:8188` in a browser
- Check firewall settings
- Verify the WebSocket endpoint in `appsettings.json`
- Ensure ComfyUI is accessible (not bound to a different interface)
Error: No history found for prompt after 10 attempts
Solutions:
- ComfyUI may be under heavy load - the retry logic waits up to 11 seconds
- Check ComfyUI console for errors during execution
- Verify the workflow completed successfully in ComfyUI's UI
- Check ComfyUI's `output` directory for generated images
Error: Workflow template not found: path/to/template.json
Solutions:
- Ensure `appsettings.json` is being copied to the output directory
- Check that the project file includes `<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>`
- Verify the path in `ComfyUI:DefaultTemplate` is relative to the executable
- Check that the `workflows` folder exists in the output directory
Error: Workflow does not contain nodes or Required input is missing
Solutions:
- Use API format workflows ("Save (API Format)" in ComfyUI)
- Ensure the workflow has all required nodes (CLIPTextEncode, SaveImage)
- Test the workflow in ComfyUI first before using as template
- Check the ImageMCP logs to see which format was detected
Error: Image generation completed but no images were produced
Solutions:
- Check ComfyUI console for errors
- Verify the workflow has a `SaveImage` node
- Check that a model is loaded in ComfyUI
- Ensure sufficient disk space for outputs
- Verify the SaveImage node is connected to the generation pipeline
Error: Workflow execution timed out after 300 seconds
Solutions:
- Increase `TimeoutSeconds` in `appsettings.json`
- Use a faster sampler (euler, deis, dpm++)
- Reduce `num_inference_steps` (try 20-30 for SDXL)
- Check GPU/CPU utilization
- Ensure ComfyUI isn't queuing multiple requests
Error: LM Studio shows "Server not responding" or timeout
Solutions:
- Verify ImageMCP is running (`http://localhost:5243/health` should return `{"status":"ok"}`)
- Check that the port isn't already in use
- Ensure `appsettings.json` has the correct `HttpUrl` and `HttpPort`
- Check LM Studio's Developer Console for error messages
- Restart both ImageMCP and LM Studio
ImageMCP runs a simple HTTP server (EDMCP-style) that exposes two endpoints:
- GET `/mcp`: returns server info (name, version, protocols)
- POST `/mcp`: handles JSON-RPC 2.0 requests (`initialize`, `tools/list`, `tools/call`)
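As a concrete sketch of the wire format (the JSON-RPC 2.0 framing and method name follow the MCP spec; the helper function and tool arguments are illustrative), the body of a `tools/call` request POSTed to `/mcp` can be built like this:

```python
import json

def make_tools_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 tools/call request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

body = make_tools_call(1, "generate_image",
                       {"prompt": "a serene mountain landscape at sunset"})
print(body)
```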
```csharp
var workflowJson = await File.ReadAllTextAsync(templatePath);
var template = JsonDocument.Parse(workflowJson);

// Auto-detect format
bool isApiFormat = !template.RootElement.TryGetProperty("nodes", out _);
```

ImageMCP automatically detects:

- API format: node IDs as top-level keys (e.g., `{"3": {...}, "6": {...}}`)
- UI format: has `nodes` and `links` arrays (exported from the ComfyUI UI)
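The detection rule is simple enough to restate as a standalone Python sketch (not the server's actual code):

```python
import json

def is_api_format(workflow_json: str) -> bool:
    """API-format workflows keep node IDs as top-level keys;
    UI exports carry top-level 'nodes' and 'links' arrays instead."""
    root = json.loads(workflow_json)
    return "nodes" not in root

api_style = '{"3": {"class_type": "KSampler"}, "6": {"class_type": "CLIPTextEncode"}}'
ui_style = '{"nodes": [], "links": []}'
print(is_api_format(api_style), is_api_format(ui_style))  # True False
```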
```csharp
// For API format workflows. JsonNode (System.Text.Json.Nodes) is used here
// because JsonElement is read-only; JsonNode allows in-place prompt injection.
var workflow = JsonNode.Parse(workflowJson)!.AsObject();
foreach (var (nodeId, node) in workflow)
{
    if ((string?)node?["class_type"] == "CLIPTextEncode")
    {
        node["inputs"]!["text"] = positivePrompt; // or negativePrompt
    }
}
```

The manager uses heuristics to identify positive vs. negative prompts:

- Negative: contains keywords like "worst quality", "bad", "watermark", "text,"
- Positive: everything else (the first `CLIPTextEncode` node found)
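The same heuristic as a standalone Python sketch (the keyword list mirrors the one above; the function name and return shape are illustrative, not the server's code):

```python
NEGATIVE_HINTS = ("worst quality", "bad", "watermark", "text,")

def classify_clip_nodes(workflow: dict) -> tuple:
    """Split CLIPTextEncode node IDs into (positive, negative) lists
    based on negative-prompt keywords in each node's text input."""
    positive, negative = [], []
    for node_id, node in workflow.items():
        if node.get("class_type") != "CLIPTextEncode":
            continue
        text = node.get("inputs", {}).get("text", "").lower()
        if any(hint in text for hint in NEGATIVE_HINTS):
            negative.append(node_id)
        else:
            positive.append(node_id)
    return positive, negative

wf = {
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": "a castle at dawn"}},
    "7": {"class_type": "CLIPTextEncode", "inputs": {"text": "text, watermark, worst quality"}},
    "9": {"class_type": "SaveImage", "inputs": {}},
}
print(classify_clip_nodes(wf))  # (['6'], ['7'])
```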
```csharp
var requestJson = $$"""
{
    "prompt": {{workflowJson}},
    "client_id": "{{_clientId}}"
}
""";
await httpClient.PostAsync($"{comfyUIEndpoint}/prompt", content);
// Returns a prompt_id for tracking
```

```csharp
await webSocket.ConnectAsync(
    new Uri($"ws://127.0.0.1:8188/ws?clientId={clientId}"), CancellationToken.None);

while (true)
{
    var message = await ReceiveWebSocketMessage();
    if (message.type == "executed" && message.data.prompt_id == promptId)
    {
        return true; // Execution complete
    }
}
```

The client listens for these WebSocket message types:

- `progress`: step updates during generation
- `executed`: completion signal
- `execution_error`: failures
- `execution_cached`: cached results
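The monitoring loop reduces to a small dispatch over these message types; here is a standalone Python sketch (message shapes assumed from ComfyUI's WebSocket payloads, function name illustrative):

```python
def handle_message(msg: dict, prompt_id: str):
    """Return 'done' or 'error' once the tracked prompt finishes, else None."""
    data = msg.get("data", {})
    if data.get("prompt_id") != prompt_id:
        return None  # message belongs to a different job
    msg_type = msg.get("type")
    if msg_type in ("executed", "execution_cached"):
        return "done"
    if msg_type == "execution_error":
        return "error"
    return None  # e.g. "progress" step updates

print(handle_message({"type": "progress", "data": {"prompt_id": "abc", "value": 5}}, "abc"))
print(handle_message({"type": "executed", "data": {"prompt_id": "abc"}}, "abc"))
```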
```csharp
// Wait 1 second for history to finalize
await Task.Delay(1000);

// Retry up to 10 times
for (int attempt = 0; attempt < 10; attempt++)
{
    var response = await httpClient.GetAsync($"/history/{promptId}");
    var history = await response.Content
        .ReadFromJsonAsync<Dictionary<string, JsonElement>>();

    if (history != null && history.ContainsKey(promptId))
    {
        var images = ExtractImages(history[promptId].GetProperty("outputs"));
        return images; // Success!
    }
    await Task.Delay(1000); // Wait and retry
}
```

This accounts for the slight delay between ComfyUI signaling completion and writing history.
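The retry-with-delay pattern generalizes; here it is as a standalone Python sketch, where the `fetch_history` callable stands in for the HTTP GET (an assumption for illustration):

```python
import time

def get_history_with_retry(fetch_history, prompt_id: str,
                           attempts: int = 10, delay: float = 1.0):
    """Poll fetch_history until the prompt shows up or attempts run out."""
    for _ in range(attempts):
        history = fetch_history(prompt_id)
        if prompt_id in history:
            return history[prompt_id]
        time.sleep(delay)  # history not written yet; wait and retry
    return None

# Simulated server: history appears on the third poll
calls = {"n": 0}
def fake_fetch(pid):
    calls["n"] += 1
    return {pid: {"outputs": {}}} if calls["n"] >= 3 else {}

print(get_history_with_retry(fake_fetch, "abc", delay=0.0))  # {'outputs': {}}
```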
- Local Only: Default configuration binds to localhost
- No Authentication: ComfyUI API is unauthenticated by default
- File Access: Templates can access local filesystem
- Resource Limits: Set appropriate timeouts to prevent abuse
- Support for multiple concurrent generations
- Queue management and prioritization
- Progress reporting to MCP client
- ControlNet and LoRA support
- Batch generation
- Image-to-image workflows
- Upscaling workflows
- Model selection via parameters
- Workflow caching
- Remote ComfyUI support with authentication
[Your License Here]
Contributions welcome! Please submit pull requests or open issues.
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- ComfyUI - The powerful Stable Diffusion workflow engine
- LM Studio - MCP client implementation
- Model Context Protocol - The protocol specification
Built with ❤️ for the AI community
