**Warning:** Work in progress!
Website URL: https://openwebwiki.com
- Task API where every task result becomes part of a PUBLIC index with keywords and a description (and possibly vector search or graph-based storage?)
- The Task API exposes an MCP server to search through the index it creates, described as the preferred tool of choice (internet search would be the fallback)
- Authentication: Uses X (Twitter) OAuth through Stripeflare for user auth and billing
- Task Processing: Uses Parallel API with environment-based API key
- Public Search: Keyword-based search through high-confidence completed tasks
- MCP Integration: Exposes search as Model Context Protocol tool
- Dual Format: All endpoints support both HTML and JSON responses
- Smart Metadata: LLM-generated titles, keywords, categories, and slugs
- Public Index: Completed tasks become searchable knowledge base
- `GET /` - Main landing page
- `GET /search/{query}` - Search tasks by keywords
- `GET /task/{id-or-slug}` - Get task by ID or slug
- `GET /mcp` - MCP (Model Context Protocol) endpoint
- `GET /openapi.json` - OpenAPI specification
- `POST /api/tasks` - Create new task
- `GET /api/tasks` - Get user's tasks
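As a sketch, creating a task through the authenticated API might look like the following. This is a hypothetical helper: the endpoint and bearer auth come from this document, but the request body fields (`processor`, `input`) are assumed from the task columns described below, not a confirmed payload schema.

```typescript
// Build a request description for POST /api/tasks (hypothetical helper;
// the body schema {processor, input} is an assumption).
function createTaskRequest(token: string, processor: string, input: string) {
  return {
    method: "POST",
    url: "https://openwebwiki.com/api/tasks",
    headers: {
      Authorization: `Bearer ${token}`, // token from the OAuth flow below
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ processor, input }),
  };
}
```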
All endpoints support both HTML and JSON:
- Add a `.html` suffix or send `Accept: text/html` for an HTML response
- Add a `.json` suffix or any other `Accept` header for a JSON response
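The negotiation rules above can be sketched as a small helper (hypothetical, not the actual implementation; it assumes an explicit suffix takes priority over the `Accept` header):

```typescript
type Format = "html" | "json";

// Decide the response format from the path suffix and Accept header.
function pickFormat(path: string, acceptHeader: string | null): Format {
  // Explicit suffixes win.
  if (path.endsWith(".html")) return "html";
  if (path.endsWith(".json")) return "json";
  // Otherwise Accept: text/html gets HTML; anything else defaults to JSON.
  if (acceptHeader && acceptHeader.includes("text/html")) return "html";
  return "json";
}
```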
Uses Stripeflare X OAuth:
- Redirect to: `https://x.stripeflare.com/authorize?client_id=openwebwiki.com&redirect_uri=https://openwebwiki.com/auth/callback&state=create-task`
- Get the authorization code from the callback
- Exchange it for a bearer token
- Use the token in the `Authorization: Bearer {token}` header
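The redirect and header steps can be sketched as below. The query parameters are taken verbatim from the URL above; everything else (helper names, the shape of the token exchange) is illustrative.

```typescript
// Build the Stripeflare X OAuth authorize URL (parameters per this document).
function authorizeUrl(state: string): string {
  const params = new URLSearchParams({
    client_id: "openwebwiki.com",
    redirect_uri: "https://openwebwiki.com/auth/callback",
    state,
  });
  return `https://x.stripeflare.com/authorize?${params}`;
}

// After exchanging the callback code for a token, authenticated
// requests carry it as a bearer header.
function authHeaders(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}` };
}
```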
Connect to the MCP server for AI tool integration:
`npx @modelcontextprotocol/inspector https://openwebwiki.com/mcp`

Available tools:

- `searchTasks` - Search through the public task index
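Under MCP's standard JSON-RPC transport, invoking the tool looks roughly like this; the `tools/call` method and `{name, arguments}` shape come from the MCP specification, while the argument name `query` is an assumption about this server:

```typescript
// Build an MCP tools/call request for the searchTasks tool
// (the "query" argument name is an assumption).
const callSearchTasks = (query: string, id = 1) => ({
  jsonrpc: "2.0" as const,
  id,
  method: "tools/call",
  params: { name: "searchTasks", arguments: { query } },
});
```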
Required environment variables:
- `PARALLEL_API_KEY` - Your Parallel API key
- `LLM_API_KEY` - Groq API key for metadata generation
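For local development these would typically live in the `.dev.vars` file mentioned in the setup steps below (placeholder values):

```
PARALLEL_API_KEY=your-parallel-api-key
LLM_API_KEY=your-groq-api-key
```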
Tasks table includes:
- Basic task info (id, user_id, processor, input, status)
- Results (result, result_content, confidence)
- Metadata (title, slug, keywords, category)
- Timestamps (created_at, completed_at)
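The fields above can be written out as a TypeScript shape. The field names follow the list; the exact column types, the non-`pending` status values, and the confidence values beyond `high`/`medium` are assumptions:

```typescript
interface Task {
  // Basic task info
  id: string;
  user_id: string;
  processor: string;
  input: string;
  status: "pending" | "running" | "completed" | "failed"; // only 'pending' is confirmed
  // Results
  result: string | null;
  result_content: string | null;
  confidence: "high" | "medium" | "low" | null;
  // LLM-generated metadata
  title: string | null;
  slug: string | null;
  keywords: string | null; // comma-separated terms
  category: string | null;
  // Timestamps
  created_at: string;
  completed_at: string | null;
}
```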
- User creates task via API with auth
- Task stored in database as 'pending'
- Parallel API run created asynchronously
- SSE events tracked and stored
- On completion, result extracted and stored
- LLM generates metadata (title, keywords, category)
- Task becomes publicly searchable
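The lifecycle above amounts to a simple status progression; a sketch (the intermediate `running` status is an assumption, since the document only names `pending`):

```typescript
type Status = "pending" | "running" | "completed";

// Advance one step in the task lifecycle (terminal states stay put).
function advance(status: Status): Status {
  if (status === "pending") return "running";   // Parallel API run created
  if (status === "running") return "completed"; // result extracted, metadata generated
  return status;                                // completed: publicly searchable
}
```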
Search endpoint (/search/{query}) finds tasks by:
- Keywords (comma-separated terms)
- Title text matching
- Category matching
- Result content matching
Only returns completed tasks with 'high' or 'medium' confidence ratings.
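The matching and filtering rules can be sketched as a predicate (hypothetical helper; the real query presumably runs in the database, and the exact matching semantics per field are assumptions):

```typescript
interface SearchableTask {
  status: string;
  confidence: string | null;
  keywords: string;       // comma-separated terms
  title: string;
  category: string;
  result_content: string;
}

// A task is returned only if it is completed with high or medium
// confidence AND the query hits keywords, title, category, or content.
function matches(task: SearchableTask, query: string): boolean {
  if (task.status !== "completed") return false;
  if (task.confidence !== "high" && task.confidence !== "medium") return false;
  const q = query.toLowerCase();
  return (
    task.keywords.toLowerCase().split(",").some((k) => k.trim().includes(q)) ||
    task.title.toLowerCase().includes(q) ||
    task.category.toLowerCase() === q ||
    task.result_content.toLowerCase().includes(q)
  );
}
```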
# Install dependencies
npm install
# Copy environment variables
cp .dev.vars.example .dev.vars
# Edit .dev.vars with your API keys
# Run locally
npx wrangler dev
# Deploy
npx wrangler deploy

The LLM automatically categorizes tasks into types like:
- research
- analysis
- extraction
- summary
- translation
- coding
- etc.
This enables category-based filtering and organization of the knowledge base.