A production-grade subsystem for discovering, invoking, and orchestrating AI agents and prompt pipelines across your organization.
Call is a minimal, extensible runtime that provides unified invocation syntax, consistent logging, and pluggable backends to execute agents and prompts defined in your repositories. It bridges multiple interfaces (CLI, REST API, MCP, Telegram) with a single source of truth.
Key capabilities:
- 🎯 Unified Discovery — SQLite-based index (`repo.db`) for projects, agents, and prompts with wildcard filtering
- 🔄 Multiple Interfaces — CLI, REST API (Actions), MCP server, Telegram bot with consistent behavior
- 📝 Metadata-Driven — Markdown-based prompt format with YAML metadata for model settings, orchestration hints
- 🔗 Smart Routing — Telegram integration with session tracking, reply context, and inline token resolution
- 🛡️ Structured Errors — Consistent error envelopes across all surfaces with detailed diagnostics
- 🧪 Battle-Tested — 153+ tests covering API, CLI, bot handlers, payload builders, and model settings precedence
- Quick Start
- Architecture Overview
- Installation & Setup
- Core Concepts
- Using Call
- Configuration
- Features
- YAML Formatting — Readable YAML output for MCP hooks and agent-as-tools
- MCP Message Cleanup — Auto-delete intermediate debug messages in Telegram
- MCP Hook Routing — Dual-channel message routing for debug and user messages
- Specialized Bots — Natural language interaction with project-specific bots
- Project-Level Prompts — Prompts attached directly to projects without agents
- MCP Config — MCP server configuration and setup
- Cards — Agent and prompt card system
- DB Diagnostics — Database diagnostics tool for troubleshooting
- Developer Guide
- Reference
- Changelog
- Security & Best Practices
- Roadmap
```bash
# List available agents and prompts
python -m call.cli.main list --project UxFab --format yaml

# Execute an agent with input
python -m call.cli.main call --target DialogPostAnalysis --input "Analyze user feedback"

# Pure GPT call without instructions (uses LLM_MODEL env, default: gpt-5)
python -m call.cli.main call --input "What's the current date and time?"

# Reload repository index
python -m call.cli.main reload --repos agent,prompt

# List prompts with filters
python -m call.cli.main prompts --project "*" --state ready --format table
```

```
┌───────────────────────────────────────────────────────────┐
│                       Call Subsystem                      │
├───────────────────────────────────────────────────────────┤
│                                                           │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐   │
│  │   CLI    │  │   REST   │  │   MCP    │  │ Telegram │   │
│  │          │  │ Actions  │  │  Server  │  │   Bot    │   │
│  └────┬─────┘  └────┬─────┘  └────┬─────┘  └────┬─────┘   │
│       │             │             │             │         │
│       └─────────────┴──────┬──────┴─────────────┘         │
│                            │                              │
│                     ┌──────▼──────┐                       │
│                     │  call.lib   │                       │
│                     │     API     │                       │
│                     └──────┬──────┘                       │
│                            │                              │
│         ┌──────────────────┼──────────────────┐           │
│         │                  │                  │           │
│    ┌────▼────┐      ┌──────▼──────┐     ┌─────▼────┐      │
│    │ repo_db │      │  discovery  │     │ app.call │      │
│    │ (index) │      │             │     │ (runtime)│      │
│    └─────────┘      └─────────────┘     └──────────┘      │
│                                                           │
└───────────────────────────────────────────────────────────┘
          │                  │                  │
     ┌────▼────┐        ┌────▼────┐        ┌────▼────┐
     │ repo.db │        │ agent/  │        │ prompt/ │
     │ (SQLite)│        │ (repo)  │        │ (repo)  │
     └─────────┘        └─────────┘        └─────────┘
```
- `src/call/lib/` — Core library exposing `api.py` (public interface), `repo_db.py` (SQLite queries), `repo_fs.py` (filesystem scanning), `discovery.py` (agent/prompt resolution), `logging.py`, `utils.py`
- `src/call/app/` — Application runtime (`call.py`) containing `build_and_run_agent()`, welcome banner logic, MCP integration, agents-as-tools wrappers
- `src/call/cli/` — Command-line interface (`main.py`) for interactive usage and scripting
- `src/call/actions/` — FastAPI REST API with bearer auth, OpenAPI schema generation
- `src/call/mcp/` — Model Context Protocol server (FastMCP SDK) exposing tools: `call`, `exec`, `agents`, `prompts`, `models`, `read`, `write`, `reload`
- `src/call/telegram_bot/` — Production Telegram bot with message parsing, reply context extraction, inline buttons
- `docs/` — Additional documentation (`cards.md`, `mcp_config.md`, integration assessments)
- `src/call/tools/` — Helper scripts (`repos.sh` for workspace synchronization)
- `src/call/actions/` publishes `src/call/actions/openapi.json` and mirrors `call.lib.api` helpers (`call`, `list`, `models`, etc.). Patch the schema whenever endpoints change so client generation stays accurate.
- `src/call/mcp/` exposes the same surface as REST (`call`, `exec`, `notify`, `reload`, `models`) via presets in `mcp_config.sample.yaml` (public template) and local overrides in `mcp_config.yaml` / `mcp_config.json`. Keep tool signatures aligned with the payload contract.
- `src/call/telegram_bot/` fronts the runtime with `/agents`, `/prompts`, `/call`, parses replies, and renders structured envelopes. Preserve HTML-safe output, welcome banners, and debug logging flows when adjusting handlers.
- `wallet/` stores deployment-time secrets such as `service-account-key.json`. Only commit placeholders.
- `windsurf/` centralizes IDE defaults; update it in lockstep with formatter/linter changes.
- `requirements.txt` pins runtime dependencies for CLI, bot, Actions API, and MCP server.
- Claude Desktop configs: when editing `claude_desktop_config.json`, include only servers with `enabled: true` in `mcp_config.yaml` and set filesystem catalog roots to `c:/home/tools` on Windows.
- Discovery — Scanner (`repo_fs`) indexes Markdown cards from `agent/` and `prompt/` repos into `repo.db` (default `.cache/call/repo.db`)
- Resolution — Selectors (`project`, `agent`, `prompt`, `target`) resolve to a `RunnableConfig` with instructions, model settings, metadata
- Execution — Runtime (`app.call.build_and_run_agent`) constructs an OpenAI Agents pipeline, applies model settings precedence (prompt > agent > project), invokes tools/MCPs
- Routing — Results delivered via Telegram (with session tracking), returned as structured JSON, or written to filesystem
- Format: `chat` or `chat:thread` (agent name is not part of the session id).
- When `session_id` is provided to `call()`/`call_async()` (or to Actions/MCP/CLI), it takes precedence and is used to derive Telegram routing:
  - The library parses `chat_id`/`thread_id` from the provided `session_id`.
  - Environment defaults are NOT used in this case.
- When `session_id` is not provided:
  - If `chat_id`/`thread_id` args are provided, they are used (with env defaults filling missing pieces).
  - If neither is provided, no session is created and no Telegram messages are sent. The response omits `session_id`.
- On success and on error, when a session is known, responses include `session_id`.
All library responses use a consistent envelope:

```json
{
  "ok": false,
  "error": {
    "code": 400,
    "message": "Your input exceeds the context window of this model. Please adjust your input and try again.",
    "type": "invalid_request_error",
    "param": "input",
    "provider_code": "context_length_exceeded"
  },
  "error_code": 400,
  "description": "Your input exceeds the context window of this model. Please adjust your input and try again.",
  "agent": "2-SplitByTopics",
  "project": "UxFab",
  "final_output": null,
  "echo": false
}
```

- Field order is stable (`ok`, `error`, `error_code`, `description`, ...). Consumers should read from the `error` object for structured diagnostics and use `description` for the primary message.
- `provider_code` carries the upstream provider identifier (when present). The legacy top-level `code` field has been removed; existing integrations should read numeric statuses from `error.code` instead.
- When no structured payload is available, `error` is omitted and `description` falls back to the raw string. The CLI mirrors this envelope in all formats (`json`, `yaml`, `text`).
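A consumer following these rules reads `error` first and falls back to `description`. A minimal sketch (the helper name is hypothetical):

```python
def describe_result(envelope: dict) -> str:
    """Return a one-line status string from a Call response envelope."""
    if envelope.get("ok"):
        return envelope.get("final_output") or ""
    err = envelope.get("error")  # may be absent for unstructured failures
    if err:
        return f"[{err['code']}/{err.get('provider_code', '?')}] {err['message']}"
    # No structured payload: description carries the raw string
    return envelope.get("description", "unknown error")
```

The same logic works across CLI, Actions, and MCP surfaces, since all mirror the envelope.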
The single JSON payload accepted by Actions and MCP is:

```
{ project?: string, agent?: string, prompt?: string, target?: string, context?: any, echo?: boolean, session_id?: string }
```

- Exactly one of `project` | `agent` | `prompt` | `target` must be provided.
- The full payload JSON is used as the input string for the agent pipeline.
- `echo` defaults to `false`. When omitted, the runtime returns the final text only; set `echo=true` explicitly to receive the full envelope.
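The "exactly one selector" rule can be checked before dispatch. A hedged sketch of such a guard (not the subsystem's actual validator):

```python
SELECTORS = ("project", "agent", "prompt", "target")

def validate_exec_payload(payload: dict) -> str:
    """Return the selector key used, or raise ValueError if not exactly one."""
    present = [k for k in SELECTORS if payload.get(k)]
    if len(present) != 1:
        raise ValueError(f"exactly one of {SELECTORS} required, got {present}")
    return present[0]
```

Callers can map the `ValueError` to the 400-style error envelope shown above.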
- Python 3.11+ with virtual environment support
- Git for repository management
- Environment variables configured in `.env` (see Configuration)
```bash
# Clone the repository
git clone https://github.com/strato-space/call.git
cd call

# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\Activate.ps1

# Install dependencies
pip install -r requirements.txt

# Configure environment
cp .env.example .env
# Edit .env with your API keys and paths

# Reload repository index (scans agent/ and prompt/ directories)
python -m call.cli.main reload --repos agent,prompt

# Verify installation
python -m call.cli.main models --format yaml
python -m call.cli.main list --project "*" --format text
```

If installed as a package (e.g., via `uv sync`), you can use the `call` console script instead of `python -m call.cli.main`.
Use `src/call/tools/repos.sh` to synchronize all Strato repositories:

```bash
# Clone or update all core repos (call, agent, prompt, server, rms, voice)
./src/call/tools/repos.sh

# Install Python dependencies
./src/call/tools/repos.sh --pip

# Install MCP servers (requires npm/uv)
./src/call/tools/repos.sh --mcp

# Codex preset (provisions /workspace/.venv)
./src/call/tools/repos.sh --codex
```

See also: `CHANGELOG.md` for recent changes, `AGENTS.md` for contributor guidelines.

- `--pip` ensures `.venv` exists, upgrades `pip`, and installs Python requirements for `call`, `voice`, and `server/mcp` using the workspace interpreter.
- `--mcp` installs `uv` when missing and provisions JavaScript MCP servers (`@modelcontextprotocol/server-sequential-thinking`, `@modelcontextprotocol/server-filesystem`) via `npm`.
- `--codex` clones/fetches sibling repos into `/workspace/` and builds a shared `/workspace/.venv` preset tailored for Codex sandboxes.
Projects are organizational units that contain related agents and prompts:
- Defined by `agent/<Project>/project.md` with METADATA including model defaults, tg routing
- Examples: `UxFab`, `AgentFab`, `FanFab`
Agents are executable workflows composed of prompts and tools:
- Located at `agent/<Project>/<Agent>/agent.md`
- METADATA includes: `id`, `name`, `aliases`, `prompts` (list of prompt ids), model settings, orchestration hints
- Example: `agent/UxFab/DialogPostAnalysis/agent.md`
- Discovery honors directory names exactly (KISS). No registry or alias expansion is performed beyond what cards declare, so use the on-disk casing.
- Agents can run with zero prompts (pure YAML instructions) or the first prompt listed in `prompts`; prompt metadata wins over agent/project metadata when merged.
Prompts are reusable instruction templates:
- Located in `prompt/ready/` (production) or `prompt/draft/` (development)
- Markdown format with YAML METADATA block and optional PROMPT section
- METADATA includes: `id`, `title`, `project`, `agent`, `model`, `engine`, `orchestration`
- Example: `prompt/ready/33-Questioning.md`
When you specify a target, Call resolves it with this precedence:
- Project — matches top-level project names (most secure, cannot be overridden)
- Agent — searches `agent/<Project>/<Agent>/` directories
- Prompt — checks `prompt/ready/` and `prompt/draft/` for matching id/name
Security rationale: This priority ensures prompts cannot hijack project or agent names, maintaining hierarchy integrity.
Executable vs Non-Executable Projects:
- Executable projects (with `<!-- PROMPT:START -->` section) run directly
- Non-executable projects (metadata only) require an agent to execute
Wildcards (*) are supported in all selectors for flexible filtering.
Forms supported:
- Plain names (`DialogPostAnalysis`) or wildcard fragments (`33-*`).
- Path-like notation (`path:Project/Agent/Prompt`) for precise scoping.
- Tokens are case-sensitive; ambiguity returns a `TOO_MANY_ROWS` error envelope with options to disambiguate.
Resolution tips:
- Exact project matches win before agent fuzzy matches when the name equals a project.
- Use unified `target` when unsure about type; explicit `project`/`agent`/`prompt` flags keep scope narrow and skip wildcard broadening.
- For debugging resolution issues, use the DB Diagnostics Tool.
Model configuration cascades with clear precedence:
prompt > agent > project > LLM_MODEL (env)
This allows:
- Project-level defaults for all agents
- Agent-level overrides for specific workflows
- Prompt-level overrides for fine-grained control
- Runtime overrides via `--model` flag or `model` in payload
Keys supported in METADATA:
- `model`: model identifier (e.g., `gpt-5`, `gpt-4.1`)
- `model-settings`: generic settings for all models
- `model-settings-<model>`: model-specific settings (recommended)
Settings fields:
- `temperature`, `top_p`, `frequency_penalty`, `presence_penalty`, `max_tokens`
- `verbosity`: `low|medium|high`
- `reasoning`: `{ effort: minimal|low|medium|high, summary?: auto|concise|detailed }`
Call maintains a single-source-of-truth index in `.cache/call/repo.db`:
Schema:
```sql
CREATE TABLE repo (
    target TEXT PRIMARY KEY,
    project TEXT,
    agent TEXT,
    prompt TEXT,
    path TEXT,
    state TEXT,          -- 'draft' or 'ready'
    engine TEXT,         -- 'openai', 'openai-agents', etc.
    orchestration TEXT,  -- 'llm', 'handoff', 'langgraph', etc.
    type TEXT,           -- 'project', 'agent', 'prompt'
    rel_path TEXT,
    url TEXT,
    goal TEXT,
    card TEXT            -- full card contents
)
```

Operations:

- `reload()` — rescans `agent/` and `prompt/` repos, rebuilds the index, and clears the in-memory `AGENT_CACHE` so cached agents/sub-agents pick up updated instructions on the next run
- `read(card_id)` — returns raw card text from DB
- `write(card_id, text)` — updates DB and filesystem atomically
- `list()` — hierarchical listing (projects → agents → prompts)
- `list_prompts()` — flat prompt listing with filters

Details:

- `target` stores the bare identifier (project, agent, or prompt) without prefixes; prompt and agent lookups share the same pool to enable wildcard resolution.
- `state` is inferred from path (`draft` if the file path contains `draft`, else `ready`).
- `engine` and `orchestration` are pulled from card METADATA when present so table queries can highlight runtime hints.
- `call.lib.api.read()`/`write()` operate against the DB first and then propagate to disk, mirroring the CLI `call read`/`call write` commands.
Follow the engineering principles documented in `AGENTS.md`. That file remains the canonical source for contributor expectations and elaborates on KISS, SOLID/DI, helper sizing, explicit failure paths, and observability.
The CLI provides a complete interface for discovery, execution, and management.
Common Commands:
```bash
# Discovery & Listing
call list [--project <name>] [--agent <name>] [--format json|yaml|text]
call agents [--project <name>] [--format text|json|yaml]
call prompts [--project <name>] [--agent <name>] [--state ready|draft] [--format table|json|yaml]
call models [--format yaml]

# Execution
call call --target <name> --input "<text>" [--model <model>] [--session-id <id>]
call exec --project <p> --agent <a> [--content-item <item>] [--model <model>]

# Management
call reload [--repos agent,prompt] [--format yaml]
call read <card_id>
call write <card_id> --card "<content>"

# Debugging
call call --target <name> --input "<text>" --echo              # Preview payload
call call --target <name> --print-instructions                 # Show instructions
call call --target <name> --trace 5 --trace-file trace.txt     # Stack dumps
```

Selection Modes:

- Keyword-based (`call`) — explicit selectors

  ```bash
  python -m call.cli.main call --project UxFab --agent DialogPostAnalysis --input "text"
  ```

- Unified target (`--target`) — resolved via precedence (prompt > agent > project)

  ```bash
  python -m call.cli.main call --target DialogPostAnalysis --input "text"
  python -m call.cli.main call --target "33-*" --input "text"  # Wildcard
  ```

- Payload-based (`exec`) — JSON payload with context items

  ```bash
  python -m call.cli.main exec --project UxFab --agent DialogPostAnalysis \
    --content-item "https://docs.google.com/document/d/FILE_ID/edit" \
    --content-item '{"type":"text","text":"Hello"}'
  ```

- Pure GPT (no selectors) — input-only mode

  ```bash
  python -m call.cli.main call --input "What time is it?"
  ```
Input Modes:
- `--input "text"` — raw text, no parsing
- `--parse-input "text @Token"` — Telegram-style parsing with token resolution; tokens (including wildcards like `@31-*`) resolve against the repo index and add context items automatically.
- `--download-context` — inline file contents in payload
Wildcard tokens:
- Each `@pattern` with `*` expands against the repo index, adding file references for the first match.
- Multiple tokens are deduplicated; add `--echo` to inspect the payload preview without execution.
Output Control:
- `--echo` — preview payload without execution
- `--print-instructions` — show resolved instructions
- `--print-card` — display full card (metadata + prompt)
- `--format json|yaml|text|table` — output format
- `--model <model>` — override effective model
Examples:
```bash
# List all ready prompts in UxFab project
python -m call.cli.main prompts --project UxFab --state ready --format table

# Execute with wildcards and echo
python -m call.cli.main call --target "31-*" --parse-input "@50-* review" --echo

# Direct card manipulation
python -m call.cli.main read 33-Questioning
python -m call.cli.main write 33-Questioning --card "# Updated\n\nNew content"

# Session management
python -m call.cli.main clear-session --chat-id -100123 --thread-id 10
```

See Command Line Interface for more examples.
Call exposes a FastAPI-based REST API with bearer authentication for external integrations.
Base URL: https://example.com
Authentication: Bearer token in Authorization header
Core Endpoints:
| Endpoint | Method | Description |
|---|---|---|
| `/call` | GET | Execute agent with query params (`name`, `input`, `model`, `session_id`) |
| `/exec` | POST | Execute with JSON payload (see Exec payload contract) |
| `/agents` | GET | List agents hierarchically |
| `/prompts` | GET | List prompts with filters (`project`, `agent`, `prompt`, `state`) |
| `/models` | GET | List available OpenAI models |
| `/read/{id}` | GET | Read raw card text (returns `text/plain`) |
| `/write/{id}` | POST | Write card text (accepts `text/plain` body) |
| `/reload` | POST | Rebuild repository index |
| `/notify` | POST | Send event notification (requires `event` field) |

On success, `/reload` returns `{"ok": true, "scanned": <count>}`.
Examples:
```bash
# List prompts for a project
curl -sS "https://example.com/prompts?project=AgentFab" \
  -H "Authorization: Bearer YOUR_TOKEN" | jq

# Execute an agent
curl -sS "https://example.com/exec" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"agent":"DialogPostAnalysis","context":{"text":"hi"},"model":"gpt-4o-mini"}'

# Read a card
curl -sS "https://example.com/read/33-Questioning" \
  -H "Authorization: Bearer YOUR_TOKEN"

# List available models
curl -sS "https://example.com/models" \
  -H "Authorization: Bearer YOUR_TOKEN" | jq
```

Model Override:

Pass `model` as a query parameter or in the JSON body to override the effective model for a request:

```bash
curl -sS "https://example.com/call?name=DialogPostAnalysis&input=hello&model=gpt-4o-mini" \
  -H "Authorization: Bearer YOUR_TOKEN" | jq
```

See `actions/openapi.json` for the complete OpenAPI specification.

`GET /prompts` parameters: `project`, `agent`, `prompt` (supports `*` wildcard), and `state` (`ready|draft|""`). Use `target` when unsure about identifier type; the API resolves prompts, agents, and projects in one call.

`POST /notify` contract: expects only an `event` string (selectors like `project` or `agent` are ignored). Include optional payload fields inside the JSON body as needed.
Call implements a Model Context Protocol server using the FastMCP SDK.
Tools exposed:
- `call(name, input, model?, session_id?)` — execute agent
- `exec(payload)` — execute with structured JSON payload
- `agents(query?, include_aliases?, grouped?)` — list agents
- `prompts(project?, agent?, prompt?, state?)` — list prompts
- `models()` — list available models
- `read(card_id)` — read raw card text
- `write(card_id, card_text)` — write card text
- `reload()` — rebuild repository index and clear the runtime `AGENT_CACHE` after a successful rescan
- `notify(event, payload?)` — send event notification
Configuration:
MCP server presets are defined in `mcp_config.sample.yaml` (shareable template), with local overrides in `mcp_config.yaml` / `mcp_config.json` for external MCP servers (filesystem, sequential thinking, Google Sheets, etc.).
Running:
```bash
# Standalone (stdio mode)
python -m call.mcp.server

# Via Claude Desktop (configured in claude_desktop_config.json)
# See mcp_config.sample.yaml for server definitions (copy to mcp_config.yaml for local edits)
```

Integration:
The runtime automatically loads external MCP servers when agents specify tools. MCP hook logging captures all tool invocations with YAML-formatted arguments and results.
MCP lifecycle and agent cache
- MCP servers are initialized once and reused between runs according to the lifecycle documented in `src/call/app/call.py` and `docs/mcp_sse_timeouts.md`.
- Agents (including agents-as-tools) are cached by name in a small in-memory `AGENT_CACHE` so they do not need to be re-instantiated on every call.
- On reuse, each cached agent receives a fresh `mcp_servers` list built for the current run. This ensures that agents never hold onto MCP server instances whose sessions have already been cleaned up (for example, after Streamable HTTP timeouts or MCP auto-reinitialization for remote servers like the Google Sheets `gsh` server).
- This design prevents follow-up calls from failing with `UserError("Server not initialized. Make sure you call connect() first.")` after an MCP reconnection, while still keeping agent construction overhead low.
See `docs/mcp_config.md` for a detailed MCP configuration guide.
Runtime events are durably appended to `.cache/call/call.db` for observability and future event streaming:
Operations:
- `call.lib.repo_db.push_event(event, payload)` — insert event, return sequence id
- `call.lib.repo_db.iter_events(after_id?, limit?)` — read events in batches
Use cases: Replay, audit trail, metrics export, Kafka/NATS migration path
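The append-then-iterate semantics can be sketched with a minimal SQLite-backed log. This is an illustrative re-implementation of the `push_event`/`iter_events` contract under an assumed schema, not the actual `call.lib.repo_db` code:

```python
import json
import sqlite3

def open_event_log(path: str) -> sqlite3.Connection:
    """Open an append-only event log (assumed single-table schema)."""
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS events ("
                "id INTEGER PRIMARY KEY AUTOINCREMENT, "
                "event TEXT NOT NULL, payload TEXT)")
    return con

def push_event(con: sqlite3.Connection, event: str, payload: dict) -> int:
    """Insert an event and return its monotonically increasing sequence id."""
    cur = con.execute("INSERT INTO events (event, payload) VALUES (?, ?)",
                      (event, json.dumps(payload)))
    con.commit()
    return cur.lastrowid

def iter_events(con: sqlite3.Connection, after_id: int = 0, limit: int = 100):
    """Read events strictly after `after_id`, in insertion order."""
    rows = con.execute("SELECT id, event, payload FROM events "
                       "WHERE id > ? ORDER BY id LIMIT ?", (after_id, limit))
    for eid, event, payload in rows:
        yield eid, event, json.loads(payload)
```

The sequence id makes resumable consumption (and a later Kafka/NATS bridge) straightforward: a consumer only stores the last id it processed.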
- Prompts and cards are Markdown-only. Each file follows the Strato Prompt Framework with a `METADATA` fenced YAML block and an optional `PROMPT` block.
- The parser tolerates cards that contain only a fenced `METADATA` block or are pure YAML files: in those cases the remaining body becomes the prompt text. Malformed YAML still raises a `BAD_CARD_FORMAT` error, and `_load_card()` logs the failure through the `call.api` logger.
- The index emits warnings for `.md` cards missing valid `METADATA`. In strict flows (e.g., CLI `--print-instructions`), malformed or missing metadata surfaces a 400 error to the caller.
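Splitting a card into its METADATA block and prompt body can be sketched as follows. This is an illustrative sketch only: it assumes the first fenced block in the card holds the YAML metadata, and the function name is hypothetical.

```python
import re

# First fenced block (optionally tagged yaml) is treated as METADATA
METADATA_RE = re.compile(r"```(?:yaml)?\s*\n(.*?)\n```", re.DOTALL)

def split_card(card_text: str) -> tuple[str, str]:
    """Return (metadata_yaml, prompt_body) from a Markdown card."""
    match = METADATA_RE.search(card_text)
    if not match:
        raise ValueError("BAD_CARD_FORMAT: no fenced METADATA block")
    body = card_text[match.end():].strip()  # remaining body becomes the prompt text
    return match.group(1), body
```

Feeding the returned YAML to a real parser (and raising `BAD_CARD_FORMAT` on parse failure) would mirror the documented strict behavior.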
- Keys
  - `model`: the selected model id (e.g., `gpt-5`, `gpt-4.1`).
  - `model-settings`: generic settings applicable across models.
  - `model-settings-<model>`: model-specific settings. This is the recommended, canonical form.
- Runtime precedence: when a prompt, agent, and project each declare a `model`, the runtime applies `prompt > agent > project > $LLM_MODEL` (environment default). Tests assert this ordering to prevent regressions.
- Runtime helpers: the runtime exposes `_send_welcome_banner()` and `_embed_files_in_user_input()` so the Telegram banner logic and JSON file embedding can be unit-tested. `test_runtime_helpers.py` covers both the units and how `build_and_run_agent()` wires them up.
- Excluded (do not use in new cards)
  - `model_params`, `modelParams` (generic) and `model_params_<model>`, `modelParams<model>` (model-suffixed) are not part of the documented schema and must be avoided. Use the hyphenated forms `model-settings` and `model-settings-<model>` instead.
- Recognized fields in params
  - `temperature`, `top_p`, `frequency_penalty`, `presence_penalty`, `max_tokens`, `verbosity` (`low|medium|high`)
  - `reasoning: { effort: minimal|low|medium|high, summary?: auto|concise|detailed }`
- Example

  ```yaml
  model: gpt-5
  model-settings-gpt-5:
    reasoning:
      effort: low
  model-settings-gpt-4.1:
    temperature: 0.2
    top_p: 0.9
  ```

Call provides a production Telegram bot with intelligent message parsing and context extraction.
Commands:
- `/start` — welcome message
- `/agents [project]` — list agents
- `/prompts [filters]` — list prompts with filters
- `/prompts_ready`, `/prompts_draft` — state-specific listings
- `/call [@Target] <input>` — execute agent/prompt
- `/reload` — rebuild repository index and clear cached agents (forces sub-agents to use updated prompts)
- `/clear` — clear conversation session
Message Parsing:
Project-specific "project bots" (also called specialized bots) follow the naming pattern `<ProjectName>Bot` (for example, `StratoProjectBot`). When you mention such a bot without providing an explicit `@Target`, it falls back to the project orchestrator (`project.md`).
Private DMs:
- Plain text (no `@`) → input-only execution (equivalent to `/call <input>`)
- `@Target <input>` → passed to library for resolution (priority: prompt > agent > project). Target must include the `@` prefix.
- `@ProjectNameBot [@Target] <input>` → bot name is stripped, `@Target` passed to the library. When no second `@` token is present the bot falls back to the project orchestrator.
- `@ <input>` → input-only (no target)
Group chats:
- Only messages that mention the bot handle explicitly are handled (either `@ProjectNameBot` for project bots or `@StratoSpaceAiBot` for the universal bot)
- `@Target <input>` → passed to library for resolution (target must start with `@`)
- `@ProjectNameBot <input>` → when no explicit `@Target` follows, the project orchestrator (`project.md`) is invoked (project bots only)
- `@ProjectNameBot @Target <input>` → `@Target` passed to library (same behavior as private chats)
- `@StratoSpaceAiBot ...` → universal bot (no default target, handles any project/agent/prompt)
- Messages without `@` are ignored
Target Resolution:
- Bot layer does NOT pre-validate targets beyond requiring the explicit `@` prefix
- All target resolution is delegated to `call_api.call_async()` (prompt > agent > project hierarchy)
- Unknown targets trigger errors from the library (not silently ignored by the bot)
- Project scoping derives from bot name (`AgentFabBot` → `AgentFab`). When only the bot is mentioned, project bots run their project orchestrator automatically.
- `StratoSpaceAiBot` is universal: it never injects a default project. Without `@Target` it runs the LLM in "void" mode (user input only, no card instructions). With `@Target` it can execute any prompt/agent/project found in the repository.
- Enable `CALL_DEBUG=1` to trace parsing decisions in logs (`[bot]` prefix).
Context Extraction:
When replying to messages, the bot builds structured JSON payloads:
```json
{
  "agent": "UxResearcherReq",
  "input": "optional user message text",
  "replay": "optional replay to message text",
  "context": [
    {
      "type": "text",
      "text": "foo headline line.\nbar summary line.\nbaz call-to-action button description.",
      "source": {
        "type": "file",
        "file_id": "13LlOsEr6AGw6n6YX1mzrUIVUdH3xT63-",
        "name": "foo-bar-document.docx"
      }
    },
    {
      "type": "text",
      "text": "foo question about service? bar cloud offer allows foo chain registration. baz on-prem build does not include that.",
      "source": {
        "type": "session",
        "_id": "68afe646ef46aed531a8ecc5",
        "name": "foo bar voicebot session"
      }
    },
    {
      "type": "session",
      "_id": "68c7ab4cab67ffbd365062f1"
    },
    {
      "type": "file",
      "file_id": "13LlOsEr6AGw6n6YX1mzrUIVUdH3xT63-"
    }
  ]
}
```

See the schema in the prompt repo: `prompt/schema/context-array.md`.
Inline Token Resolution:
Tokens like `@3-OnlineChunkSummarization` in message text are resolved to file context items:

```json
{
  "type": "file",
  "name": "3-OnlineChunkSummarization.md",
  "path": "prompt/draft/3-OnlineChunkSummarization.md",
  "content": "...",
  "mutable": true
}
```

Input Normalization:

- Trailing punctuation stripped: `@220-PM-Status!` → `220-PM-Status`
- Newlines preserved in multiline input
- `--echo` flags removed without collapsing whitespace
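These normalization rules can be sketched with two small helpers. This is an illustrative sketch of the documented behavior, not the bot's actual parser; the function names are hypothetical.

```python
import re

def normalize_target_token(token: str) -> str:
    """Strip the @ prefix and trailing punctuation from a target token."""
    return re.sub(r"[!?.,:;]+$", "", token.lstrip("@"))

def strip_echo_flag(text: str) -> str:
    """Remove --echo flags without collapsing surrounding whitespace/newlines."""
    return re.sub(r"(?:^|(?<=\s))--echo(?=\s|$)", "", text)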
MCP Hook Integration:
- All MCP tool invocations logged with YAML-formatted args/results
- Service messages sent in silent mode with expandable blockquotes
- Automatic cleanup after final result delivery
- Welcome banners use YAML format for readability
Agents-as-Tools:
When project cards expose helper agents via `attributes.agents`:

- Each tool invocation is logged with an `[Agent Tool][name]` prefix
- Input/output captured in YAML format
- Telegram banners sent best-effort when routing is active
- Tests: `test_agents_tool_wrapper.py`
Session Tracking:
Sessions use the format `chat:thread` (no agent name prefix). Routing precedence:

- Explicit `session_id` parameter
- Message chat/thread ids
- Agent YAML `output.tg.chat_id`/`thread_id`
- Environment defaults (`TELEGRAM_DEBUG_CHAT_ID`, `TELEGRAM_DEBUG_THREAD_ID`)
Configuration:
```bash
# Project-scoped tokens
TELEGRAM_TOKEN.StratoSpaceAi=111111:AAAAAA
TELEGRAM_TOKEN.AgentFab=222222:BBBBBB

# Default routing
TELEGRAM_DEBUG_CHAT_ID=-100123456789
TELEGRAM_DEBUG_THREAD_ID=10  # Optional
```

Additional Telegram context flags (optional):

- `CALL_INCLUDE_TELEGRAM_MESSAGE` — when set to `1`, the bot appends raw Telegram `telegram_message` items to the `context` array for the current message and its reply (when available). Default: `0` (do not include these items).
- `CALL_INCLUDE_TELEGRAM_BOT` — when set to `1`, the bot appends a `{"type": "telegram_bot", "bot": { ... }}` item (Telegram `getMe()` payload) to the `context` array on each request. Default: `0` (do not include this item).
- `TELEGRAM_PHOTO_VARIANT` — controls which Telegram photo size is turned into a `resource_link` for messages with `photo` arrays. Supported values: `smallest` or `min` — pick the smallest image by area; `largest` or `max` — pick the largest image by area (default); `first` — pick the first element in the `photo` array; `last` — pick the last element in the `photo` array.
Running:
```bash
# Start with specific bot identity
python -m call.telegram_bot.bot --bot-name StratoSpaceAiBot

# Debug mode
CALL_DEBUG=1 python -m call.telegram_bot.bot --bot-name AgentFabBot
```

See `tg-user-guide.md` for detailed user documentation.
When replying to messages, the bot builds a structured payload with context:
Context item types:
- `{"type": "text", "text": "..."}` — plain text content from replied message
- `{"type": "text", "url": "https://api.telegram.org/file/..."}` — document URLs resolved via `get_file()`
- `{"type": "file", "name": "...", "path": "...", "content": "..."}` — inline file content
- `{"type": "session", "_id": "..."}` — session references for conversation threading
- `{"type": "resource_link", "uri": "https://api.telegram.org/file/bot<token>/<path>", "name": "...", "description": "Telegram photo/document/video/voice/audio", "source": {"type": "telegram", "chat_id": 123456, "message_id": 30, "direction": "input|replay"}, "mimeType": "image/jpeg"}` — Telegram attachments (photos, documents, video, voice/audio) exposed as `context` entries with Telegram file URLs and Telegram-specific metadata.
When you send or reply to a Telegram message that contains media, the bot:
- resolves Telegram files via `get_file()` and `CALL_TELEGRAM_TOKEN`,
- creates one `resource_link` per attachment (including all photos in a media group/album),
- keeps the overall `{target, replay, input, context}` envelope unchanged for downstream agents.
Replay field:
- Convenience mirror of reply content
- Can be string (single reply) or array (multiple context items)
- Useful for simple reply-based workflows
Payload order guarantee:
```json
{
  "target": "...",
  "replay": "...",
  "input": "...",
  "context": [...]
}
```

Control agent conversation length via environment variable:

```bash
AGENTS_DEFAULT_MAX_TURNS=150  # Default value
```

How it works:
- At import, `call.app.call` sets `agents.run.DEFAULT_MAX_TURNS`
- Runtime uses this value for `Runner.run(..., max_turns=...)`
- Increase for longer conversations: `AGENTS_DEFAULT_MAX_TURNS=300`
- The SDK default is only 10; Call raises it to 150
When to adjust:
- Short tasks: 50-100 turns
- Standard workflows: 150-200 turns
- Complex multi-step: 300+ turns
- Monitor logs for "max turns reached" warnings
Function: `call.app.call.send_digest_notification(**kwargs)` sends rich Telegram messages with Telegraph fallback.
Parameters:
- `text` (str | None) — message body; falls back to welcome banner if empty
- `chat_id` (int | None) — explicit routing target
- `message_thread_id` (int | None) — explicit thread target
- `agent_name` (str | None) — for presentation and button macro resolution
- `agent_path` (str | Path | None) — loads `buttons` section from agent YAML
- `input_text` (str | None) — original user input for fallback banner
- `image_path` (str | Path | None) — sends photo with caption instead of text message
Behavior:
- Text ≥4000 chars → published to Telegraph, returns link banner
- Empty text → minimal banner with `input_text` in `<code>` block
- Button macros: `{{digest_url}}` replaced with Telegraph link
- Photo mode: truncates caption to 1024 chars, uses `text` as caption
- Returns `telegram.Message | None`
Example:

```python
from call.app.call import send_digest_notification

result = send_digest_notification(
    text="Long analysis result...",
    chat_id=-100123,
    message_thread_id=10,
    agent_name="DataAnalyzer",
    agent_path="agent/UxFab/DataAnalyzer/agent.md",
    input_text="Analyze Q3 sales",
    image_path="charts/q3_sales.png"
)
```

Call exposes a clean Python API for programmatic integration.
Import conventions:

```python
from call.lib import api as call_api
from call.lib import repo_db as call_repo
from call.lib.logging import configure_logging as call_logging
```

Core Functions:
```python
# Execute agent/prompt
result = call_api.call(
    project="UxFab",
    agent="DialogPostAnalysis",
    input="Analyze user feedback",
    chat_id=-100123,
    thread_id=10,
    session_id="chat:thread",  # Optional override
    echo=False
)
# Returns: {"ok": True, "agent": "...", "final_output": "...", "session_id": "..."}

# Async execution
result = await call_api.call_async(
    target="DialogPostAnalysis",
    input="text",
    model="gpt-4o-mini"
)

# List resources
projects = call_api.list(project="UxFab")
prompts = call_api.list_prompts(project="UxFab", state="ready")
models = call_api.models()

# Card operations
card_text = call_api.read("33-Questioning")
call_api.write("33-Questioning", "# Updated\n\nNew content")

# Reload index
result = call_api.reload(repos=["agent", "prompt"])

# Build runnable config
config, error = call_api.build_runnable_instructions_config(
    project="UxFab",
    agent="DialogPostAnalysis",
    prompt="33-Questioning"
)
# Returns: (RunnableConfig, None) or (None, error_dict)
```

RunnableConfig DTO:
```python
@dataclass
class RunnableConfig:
    # Identifiers
    id: str                 # Prompt/agent/project id
    name: str               # Display name
    type: str               # "project", "agent", "prompt"

    # Content
    instructions: str       # Resolved instructions text
    prompt_text: str        # Raw prompt body
    card_text: str          # Full card contents

    # Paths
    agent_yaml_path: str    # Path to agent.md
    base_dir: str           # Base directory
    path: str               # Full path to card
    url: str                # GitHub URL

    # Execution
    model: str              # Effective model (after precedence)
    attributes: dict        # Merged metadata

    # Metadata
    project: str
    agent: str
    prompt: str
    goal: str
    state: str              # "ready" or "draft"
    engine: str             # "openai", "openai-agents"
    orchestration: str      # "llm", "handoff", "langgraph"
```

Error Handling:
All library functions return structured envelopes:
```python
# Success
{"ok": True, "final_output": "...", "agent": "...", "session_id": "..."}

# Error
{
    "ok": False,
    "error": {
        "code": 404,
        "message": "no data found",
        "type": "invalid_request_error"
    },
    "error_code": 404,
    "description": "no data found",
    "code": "NO_DATA_FOUND"  # Machine-readable code
}
```

Standard error codes:
- `NO_DATA_FOUND` — resource not found
- `TOO_MANY_ROWS` — ambiguous selection (includes an `options` array)
- `BAD_CARD_FORMAT` — malformed METADATA YAML
- `INTERNAL_ERROR` — unexpected runtime error
- `PIPELINE_ERROR` — execution failure
- `REQUEST_FORBIDDEN` — upstream API rejection (403)
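A caller can dispatch on the machine-readable `code` field. A sketch of envelope handling — the `unwrap` helper is hypothetical, and where exactly the `options` array lives inside a `TOO_MANY_ROWS` envelope is an assumption:

```python
def unwrap(envelope: dict):
    """Return final_output on success; raise on an error envelope.
    Field names follow the structured envelopes shown above."""
    if envelope.get("ok"):
        return envelope.get("final_output")
    code = envelope.get("code", "INTERNAL_ERROR")
    if code == "TOO_MANY_ROWS":
        # Ambiguous selection: surface the candidate targets to the caller.
        options = envelope.get("options", [])
        raise LookupError(f"ambiguous target, options: {options}")
    raise RuntimeError(f"{code}: {envelope.get('description', 'unknown error')}")
```

This keeps the happy path (`result = unwrap(call_api.call(...))`) free of envelope plumbing while preserving the diagnostic codes in exceptions.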
Session Management:
```python
# Clear sessions
result = await call_api.clear_session(
    name="DialogPostAnalysis",
    chat_id=-100123,
    thread_id=10
)
```

Logging:

```python
from call.lib.logging import configure_logging, debug_print

# Configure at startup
configure_logging()  # DEBUG when CALL_DEBUG=1, else INFO

# Debug printing (gated by CALL_DEBUG)
debug_print("[module]", "[tag]", "message", data)
```

Repository Queries:
```python
from call.lib import repo_db

# Direct DB access
projects = repo_db.find_projects(project="UxFab")
agents = repo_db.find_agents(project="UxFab", agent="Dialog*")
prompts = repo_db.find_prompts(state="ready", target="33-*")

# Event log
event_id = repo_db.push_event("session_start", {"agent": "..."})
events = repo_db.iter_events(after_id=100, limit=50)
```

Call is configured via environment variables, typically loaded from `.env` in the repository root.
```bash
# OpenAI API
OPENAI_API_KEY=sk-...

# Repository paths
AGENT_REPO=/path/to/agent    # Default: ../agent (sibling dir)
PROMPT_REPO=/path/to/prompt  # Default: ../prompt (sibling dir)

# Model defaults
LLM_MODEL=gpt-5              # Default model for all executions
```

```bash
# Project-scoped bot tokens (dot notation)
TELEGRAM_TOKEN.StratoSpaceAi=111111:AAAAAA
TELEGRAM_TOKEN.AgentFab=222222:BBBBBB
TELEGRAM_TOKEN.FanFab=333333:CCCCCC

# Default routing (used when not specified in agent YAML or session_id)
TELEGRAM_DEBUG_CHAT_ID=-100123456789
TELEGRAM_DEBUG_THREAD_ID=10  # Optional; omit or set to 0 for no thread

# Telegraph integration
TELEGRAPH_TOKEN=your_telegraph_token
```

```bash
# Repository scanning (comma/semicolon-separated)
repos=agent,prompt
# Absolute paths are also accepted; repo kind is auto-detected and mapped to
# AGENT_REPO/PROMPT_REPO automatically.
# repos=/home/strato-space/agent,/home/strato-space/prompt

# Agent execution
AGENTS_DEFAULT_MAX_TURNS=150  # Max conversation turns (default: 150)

# Google Workspace (for document/sheet integrations)
GOOGLE_SERVICE_ACCOUNT_KEY=call/wallet/service-account-key.json
```

```bash
# Debug mode (verbose logging)
CALL_DEBUG=1

# JSON logging to stderr
CALL_LOG_JSON=1

# File logging (in addition to stderr)
CALL_LOG_FILE=logs/app.log

# Telegram live testing
TELEGRAM_LIVE=1          # Enable live Telegram send tests
TELEGRAM_LIVE_KIND=skip  # Skip Telegram tests in CI
```

```bash
# Override default SQLite paths
DB_PATH=.cache/call/repo.db        # Repository index (default)
EVENT_DB_PATH=.cache/call/call.db  # Event log (default)

# Override cache root and workspace discovery
CALL_CACHE_DIR=.cache/call         # Base directory for repo/event DBs
CALL_REPO_ROOT=/home/tools/call    # Repo root (pyproject.toml)
CALL_WORKSPACE_ROOT=/home/tools    # Workspace with prompt/agent repos
```

If `CALL_CACHE_DIR` is relative, it is resolved against `CALL_WORKSPACE_ROOT` (or the detected workspace root).

```bash
# Bearer authentication token
CALL_ACTIONS_TOKEN=your_bearer_token_here

# Public base URL for OpenAPI/Actions clients
CALL_ACTIONS_BASE_URL=https://example.com
```

Call uses the following precedence for environment loading:
1. Operating system environment variables
2. `call/.env` (project-level config)
3. `../.env` (workspace root config)

Startup behavior:
- If `call/.env` is missing, Call copies `../.env` to `call/.env` on first run
- Loading uses `override=False` to preserve OS environment variables
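The precedence above amounts to a layered merge where later layers never overwrite OS variables. A sketch with a hypothetical `load_env_layers` helper — it uses simplified `KEY=VALUE` parsing, not Call's actual loader:

```python
def load_env_layers(workspace_env: str, call_env: str, os_env: dict) -> dict:
    """Merge env layers in Call's documented precedence:
    OS environment > call/.env > ../.env (workspace root).
    Takes raw .env file contents; no quoting or interpolation rules."""
    merged = {}
    # Lowest precedence first, so later layers override earlier ones.
    for text in (workspace_env, call_env):
        for line in text.splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                merged[key.strip()] = value.strip()
    merged.update(os_env)  # OS variables always win (override=False)
    return merged
```

Here a key set in `call/.env` shadows the workspace `.env`, and anything already exported in the shell shadows both.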
Keep a plaintext .env locally (ignored by git) and commit a separate encrypted copy.
```bash
# Encrypt local .env into an encrypted copy
dotenvx encrypt -f .env --env-keys-file .env.keys --stdout > .env.enc

# Decrypt encrypted copy back to .env
dotenvx decrypt -f .env.enc --env-keys-file .env.keys --stdout > .env
```

Keep `.env.keys` private; it is required to decrypt.
```bash
# Core
OPENAI_API_KEY=sk-proj-...
LLM_MODEL=gpt-5
AGENT_REPO=c:/home/tools/agent
PROMPT_REPO=c:/home/tools/prompt
repos=agent,prompt

# Telegram
TELEGRAM_TOKEN.StratoSpaceAi=1234567890:ABCdefGHIjklMNOpqrSTUvwxYZ
TELEGRAM_DEBUG_CHAT_ID=-1001234567890
TELEGRAM_DEBUG_THREAD_ID=42
TELEGRAPH_TOKEN=your_token

# Runtime
AGENTS_DEFAULT_MAX_TURNS=200
CALL_DEBUG=1
CALL_LOG_FILE=logs/call.log

# Google Workspace
GOOGLE_SERVICE_ACCOUNT_KEY=call/wallet/service-account-key.json
```

Bot tokens use project-name-only notation:
```bash
TELEGRAM_TOKEN.<ProjectName>=<token>
```

How it works:
- CLI/Library: pass `project_name="AgentFab"` explicitly
- Bot runner: derives the project name by stripping suffixes (`Bot`, `_bot`, `-bot`) from `--bot-name`
  - `--bot-name StratoSpaceAiBot` → looks up `TELEGRAM_TOKEN.StratoSpaceAi`
  - `--bot-name AgentFabBot` → looks up `TELEGRAM_TOKEN.AgentFab`
No fallbacks: if the token key is missing, an error is raised. There is no fallback to a generic `TELEGRAM_TOKEN`.
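The suffix-stripping rule can be sketched as follows — `project_from_bot_name` is a hypothetical helper mirroring the documented behavior, not the bot runner's actual code:

```python
def project_from_bot_name(bot_name: str) -> str:
    """Strip a trailing Bot/_bot/-bot suffix to recover the project name
    used in the TELEGRAM_TOKEN.<ProjectName> lookup."""
    for suffix in ("_bot", "-bot", "Bot"):
        if bot_name.endswith(suffix):
            return bot_name[: -len(suffix)]
    return bot_name
```

So `--bot-name StratoSpaceAiBot` resolves to the `StratoSpaceAi` token key, and a name with no recognized suffix is used as-is.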
Call includes 153+ tests covering API, CLI, bot handlers, model settings precedence, and more.
Run all tests:

```bash
pytest
```

Run a specific test file:

```bash
pytest app/tests/test_builder_config.py -v
```

Run with coverage:

```bash
pytest --cov=call --cov-report=html
```

Environment for testing:

```bash
# Skip live Telegram tests in CI
TELEGRAM_LIVE_KIND=skip
TELEGRAM_BOT_TOKEN=""
TELEGRAM_DEBUG_CHAT_ID=""

# For local testing with real Telegram
TELEGRAM_LIVE=1
```

Test organization:
- `app/tests/` — application runtime, bot handlers, payload builders
- `cli/tests/` — CLI commands, card operations
- `actions/tests/` — REST API endpoints
- `lib/tests/` — core library functions, parsing
Platform-specific testing:
- Windows: activate the virtualenv and run `pytest` directly
- Linux/CI: use `TELEGRAM_LIVE_KIND=skip TELEGRAM_BOT_TOKEN="" TELEGRAM_DEBUG_CHAT_ID="" pytest` to skip live Telegram tests

Prefer the project venv interpreter for consistency across environments:
Windows (PowerShell):

```powershell
cd c:\home\strato-space\call
.venv\Scripts\Activate.ps1
python -m call.cli.main list --project UxFab
python -m call.cli.main call --target Vasil3 --input "tell me"
python -m call.cli.main call --target BusinessAnalyticAgent --input "bring @Vasil3 in line with the strato space prompt framework"
```

Linux/macOS (Bash):

```bash
cd ~/strato-space/call
source .venv/bin/activate
python -m call.cli.main list --project UxFab
python -m call.cli.main call --target Vasil3 --input "tell me"
python -m call.cli.main call --target BusinessAnalyticAgent --input "bring @Vasil3 in line with the strato space prompt framework"
```

Legacy positional syntax (still supported via `call.app.call`):

```bash
python -m call.app.call "Vasil3" "tell me"
python -m call.app.call "BusinessAnalyticAgent" "bring @Vasil3 in line with the strato space prompt framework"
```

Before submitting changes:
1. Run tests — ensure all tests pass:

   ```bash
   pytest --maxfail=1 --disable-warnings
   ```

2. Update documentation — update `README.md`, `CHANGELOG.md`, and inline docstrings
3. Follow conventions — see `AGENTS.md` for coding standards:
   - KISS principle
   - SOLID design with dependency injection
   - Small, focused helpers
   - Explicit failure paths
   - Observable errors (log everything)
4. Commit message format — imperative, scope-prefixed:

   ```text
   mcp: tighten tool auth
   cli: add --download-context flag
   bot: strip trailing punctuation from agent names
   ```

5. Update CHANGELOG — document user-facing changes with a date header
Adding a new CLI command:
1. Add a handler in `cli/main.py`
2. Delegate to `call.lib.api` for business logic
3. Add tests in `cli/tests/test_<feature>.py`
4. Update CLI examples in README

Adding a new API endpoint:
1. Define the route in `actions/main.py`
2. Proxy to `call.lib.api` helpers
3. Update the `actions/openapi.json` schema
4. Add tests in `actions/tests/test_<endpoint>.py`
5. Document in the REST API section

Adding a new MCP tool:
1. Define the tool in `mcp/server.py` with the `@mcp.tool()` decorator
2. Delegate to `call.lib.api`
3. Keep the signature aligned with REST/CLI
4. Update the MCP Server section in README
5. Reuse the shared warm-up helpers (`preinitialize_mcp_servers_async()` / `preinitialize_mcp_servers_sync()`) from `src/call/app/call.py` when adding new entrypoints to avoid cold starts

Modifying card format:
1. Update the parser in `lib/utils.py`
2. Update the `docs/cards.md` specification
3. Add tests for new/changed fields
4. Run the full test suite to catch regressions
Enable verbose logging:

```powershell
$env:CALL_DEBUG=1
python -m call.cli.main call --target MyAgent --input "test"
```

Inspect payload without execution:

```bash
python -m call.cli.main call --target MyAgent --input "test" --echo
```

Trace thread stacks:

```bash
python -m call.cli.main call --target MyAgent --input "test" --trace 5 --trace-file trace.txt
```

Check resolved instructions:

```bash
python -m call.cli.main call --project UxFab --agent MyAgent --print-instructions
```

View full card:

```bash
python -m call.cli.main read MyAgent
```

Monitor Telegram bot updates:

```powershell
# JSON logs
$env:CALL_LOG_JSON=1; $env:CALL_DEBUG=1
python -m call.telegram_bot.bot --bot-name TestBot | jq -r 'select(.logger=="call.bot") | .message'
```

Discovery Commands:
| Command | Description | Example |
|---|---|---|
| `list` | Hierarchical listing of projects/agents/prompts | `call list --project UxFab --format yaml` |
| `agents` | List agents (alias for `list`) | `call agents --project * --format text` |
| `prompts` | Flat prompt listing with filters | `call prompts --state ready --format table` |
| `models` | List available OpenAI models | `call models --format yaml` |
Execution Commands:
| Command | Description | Example |
|---|---|---|
| `call` | Execute with keyword selectors | `call call --target Agent --input "text"` |
| `exec` | Execute with payload | `call exec --agent Agent --content-item "url"` |
Management Commands:
| Command | Description | Example |
|---|---|---|
| `reload` | Rebuild repository index | `call reload --repos agent,prompt` |
| `read` | Read raw card text | `call read CardId` |
| `write` | Write card text | `call write CardId --card "content"` |
| `clear-session` | Clear conversation sessions | `call clear-session --chat-id -100123` |
Selection Flags:
- `--project <name>` — project filter (supports `*`)
- `--agent <name>` — agent filter (supports `*`)
- `--prompt <name>` — prompt filter (supports `*`)
- `--target <name>` — unified target (resolved via precedence)
- `--state ready|draft` — prompt state filter
- Project-only selections now probe the repo index before execution: if no agents are found you receive a `NO_DATA_FOUND` envelope, and ambiguous projects surface `TOO_MANY_ROWS` with `options`.
Input Flags:
--input "<text>"— raw text input (no parsing)--parse-input "<text>"— Telegram-style parsing with token resolution--download-context— inline file contents in payload
Output Flags:
- `--echo` — preview payload without execution
- `--print-instructions` — show resolved instructions
- `--print-card` — display full card
- `--format json|yaml|text|table` — output format
- `--model <model>` — override effective model
- `--session-id <id>` — session identifier (`chat:thread`)
Debugging Flags:
- `--trace <seconds>` — periodic thread stack dumps
- `--trace-file <path>` — write stack dumps to file
- `--json-logs` — force JSON log output
Success Response:
```json
{
  "ok": true,
  "agent": "AgentName",
  "agent_path": "/path/to/agent.md",
  "final_output": "...",
  "echo": false,
  "session_id": "-100123:10"
}
```

Error Response:
```json
{
  "ok": false,
  "error": {
    "code": 404,
    "message": "no data found",
    "type": "invalid_request_error",
    "param": null,
    "provider_code": null
  },
  "error_code": 404,
  "description": "no data found",
  "code": "NO_DATA_FOUND",
  "agent": null,
  "project": null,
  "final_output": null,
  "echo": false
}
```

Error Codes:
| Code | HTTP Status | Description |
|---|---|---|
| `NO_DATA_FOUND` | 404 | Resource not found |
| `TOO_MANY_ROWS` | 400 | Ambiguous selection (includes `options` array) |
| `BAD_CARD_FORMAT` | 400 | Malformed METADATA YAML |
| `BAD_REQUEST` | 400 | Invalid request parameters |
| `REQUEST_FORBIDDEN` | 403 | Upstream API rejection |
| `INTERNAL_ERROR` | 500 | Unexpected runtime error |
| `PIPELINE_ERROR` | 502 | Execution failure |
| `UPSTREAM_CONNECT_ERROR` | 502 | Connection to upstream service failed |
| Code | Meaning |
|---|---|
| `0` | Success (`ok: true`) |
| `1` | Error envelope (`ok: false`) |
Check exit code:

```powershell
# PowerShell
python -m call.cli.main call --target Agent --input "test"
echo $LASTEXITCODE
```

```bat
:: cmd.exe
python -m call.cli.main call --target Agent --input "test"
echo %ERRORLEVEL%
```

Markdown Structure:
~~~markdown
<!-- METADATA:START -->
```yaml
id: PromptId
title: Human Title
project: ProjectName
agent: AgentName
model: gpt-5
engine: openai
orchestration: llm
model-settings-gpt-5:
  temperature: 0.7
  reasoning:
    effort: medium
```

Your instruction text here...
~~~
Metadata Keys:
| Key | Type | Description |
|---|---|---|
| `id` | string | Unique identifier |
| `title` | string | Human-readable title |
| `project` | string | Project name |
| `agent` | string | Agent name |
| `model` | string | Model identifier (`gpt-5`, `gpt-4.1`) |
| `engine` | string | Runtime engine (`openai`, `openai-agents`) |
| `orchestration` | string | Control flow (`llm`, `handoff`, `langgraph`) |
| `model-settings` | object | Generic model settings |
| `model-settings-<model>` | object | Model-specific settings |
| `tags` | array | Classification tags |
| `version` | string | Card version |
Model Settings Fields:
| Field | Type | Description |
|---|---|---|
| `temperature` | float | Sampling temperature (0.0–2.0) |
| `top_p` | float | Nucleus sampling threshold |
| `frequency_penalty` | float | Token frequency penalty |
| `presence_penalty` | float | Token presence penalty |
| `max_tokens` | int | Maximum response length |
| `verbosity` | string | Output verbosity (`low`, `medium`, `high`) |
| `reasoning` | object | Reasoning configuration |
| `reasoning.effort` | string | Reasoning effort (`minimal`, `low`, `medium`, `high`) |
| `reasoning.summary` | string | Summary mode (`auto`, `concise`, `detailed`) |
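Model-specific settings (`model-settings-<model>`) override the generic `model-settings` block. A sketch of that merge — `effective_model_settings` is illustrative; the authoritative precedence logic lives inside Call:

```python
def effective_model_settings(metadata: dict, model: str) -> dict:
    """Merge generic model-settings with model-specific overrides,
    assuming model-settings-<model> wins key-by-key."""
    settings = dict(metadata.get("model-settings", {}))
    settings.update(metadata.get(f"model-settings-{model}", {}))
    return settings
```

For a card carrying both blocks, the model-specific `temperature` replaces the generic one while untouched keys (e.g. `top_p`) pass through.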
`call` (keyword-based):
- Uses explicit flags: `--project`, `--agent`, `--prompt`, `--target`
- `--input` passes raw text (no parsing)
- `--parse-input` uses the Telegram parser to build a JSON payload with context resolution
- `--print-instructions` shows the runnable instruction body (no execution)
- `--print-card` displays the full card with metadata
- `--echo` previews the payload and resolved selection snapshot
- `--model` overrides the effective model (highest priority)
- Project-only selections report `"agent": null` in the resolved payload

`exec` (payload-based):
- Merges selectors and content items into a single JSON payload
- Best for content buckets, Actions/MCP integrations
- Single selector mode: uses the `interpret_exec_payload()` validator
- Multiple selectors: falls back to the explicit call path with the full payload as input
- `CALL_DEBUG=1`: auto-reloads the index before payload building (picks up recent edits)
- `--echo` prints the payload and exits (no execution)
- `--model` embeds the override into the payload for downstream callers
- `--format json|yaml|text` controls output

When to use:
- `call`: interactive CLI usage, testing, simple invocations
- `exec`: programmatic integration, content processing, MCP/Actions workflows
Examples:

```bash
# call with raw input
python -m call.cli.main call --target AgentFab --input "as is text" --model gpt-4o-mini

# call with parsed input (Telegram-identical payload)
python -m call.cli.main call --target AgentFab --parse-input "@3-OnlineChunkSummarization" --echo

# exec with content items
python -m call.cli.main exec --project UxFab --agent DialogPostAnalysis \
  --content-item "https://docs.google.com/document/d/FILE_ID/edit" \
  --content-item '{"type":"text","text":"Hello"}'

# exec with multiple selectors (backward-compatible fallback)
python -m call.cli.main exec --project UxFab --agent DialogPostAnalysis --target 33-Questioning --echo
```

All selectors support `*` for pattern matching:
```bash
# Match any project
python -m call.cli.main list --project "*"

# Match prompts starting with "33-"
python -m call.cli.main prompts --prompt "33-*" --state ready

# Match agents in UxFab
python -m call.cli.main call --target "Dialog*" --input "test"

# Multiple wildcards in parsed input
python -m call.cli.main call --parse-input "@31-* @50-*" --echo
```

Wildcard resolution:
- Prompt lookup: `prompt/ready/` and `prompt/draft/`
- Agent lookup: `agent/<Project>/<Agent>/`
- Project lookup: top-level project directories
- Precedence: prompt > agent > project
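The lookup order can be sketched with Python's `fnmatch` — `resolve_target` is illustrative only; the real lookup queries the SQLite index:

```python
from fnmatch import fnmatch

def resolve_target(pattern, prompts, agents, projects):
    """Match a selector against prompts, then agents, then projects,
    returning the first kind that yields any hits (prompt > agent > project)."""
    for kind, names in (("prompt", prompts), ("agent", agents), ("project", projects)):
        hits = [n for n in names if fnmatch(n, pattern)]
        if hits:
            return kind, hits
    return None, []
```

A pattern like `33-*` matches prompts first, so an agent with a colliding name would never be reached without an explicit `--agent` selector.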
Log Levels:
- `INFO` — standard operational messages
- `DEBUG` — detailed diagnostic information (enabled via `CALL_DEBUG=1`)
- `WARNING` — non-fatal issues
- `ERROR` — execution failures
Log Formats:
```text
# Human-readable (default)
[2025-10-18 11:30:42] [INFO] [call.api] Loading agent: DialogPostAnalysis
```

```json
{"time":"2025-10-18T11:30:42+03:00","level":"INFO","logger":"call.api","message":"Loading agent: DialogPostAnalysis"}
```

Module Prefixes:
- `[app]` — application runtime, welcome banners, notifications
- `[discovery]` — repository scanning, index building
- `[bot]` — Telegram bot message parsing
- `[UPDATE]` — Telegram update summaries (in bot logs)
Enable JSON logs:

```powershell
$env:CALL_LOG_JSON=1
python -m call.cli.main call --target Agent --input "test"
```

File logging:

```powershell
$env:CALL_LOG_FILE="logs/app.log"
python -m call.telegram_bot.bot --bot-name TestBot
```

Filter logs:

```powershell
# PowerShell
Get-Content -Path .\logs\app.log -Wait | Select-String -Pattern "\[app\]"
```

```bash
# Bash with jq
tail -F logs/app.log | jq -r 'select(.level=="INFO") | .message'
```

Call is part of the Strato Space AI infrastructure and integrates with:
- agent — Agent definitions organized by project (`agent/<Project>/<Agent>/agent.md`)
- prompt — Reusable prompt templates (`prompt/ready/`, `prompt/draft/`)
- call — This repository (runtime & orchestration)
- server — MCP starters, nginx configs
- voice — Voice bot backend (lib, MCP, Actions, CLI)
- rms — Sample customer project repository
- OpenAI — LLM inference via official Python SDK
- Telegram — Bot API for user interactions and notifications
- Telegraph — Long-form content publishing (via the `telegraph` package)
- Google Workspace — Document/Sheet access via service account
- MCP Servers — Filesystem, sequential thinking, time, Google Sheets
- `CHANGELOG.md` — Detailed version history
- `AGENTS.md` — Contributor guidelines and coding standards
- `docs/cards.md` — Prompt card format specification
- `docs/mcp_config.md` — MCP configuration guide
- `tg-user-guide.md` — Telegram bot user manual
- `actions/openapi.json` — REST API specification
- Agent Orchestration — Execute multi-step AI workflows with tool calling
- Prompt Management — Version-controlled prompt library with metadata
- Telegram Integration — Natural language interface for agent invocation
- Content Processing — Document analysis, summarization, transformation
- Multi-Project Support — Isolated workspaces per project/team
- Session Tracking — Persistent conversation state across channels
- Observability — Structured logging, event streams, audit trails
- MCP Allowlist — Enforce per-prompt MCP allowlist to restrict tool access
- API Authentication — All REST API endpoints require bearer token authentication
- Credentials Management — Store sensitive credentials in the `wallet/` directory; never commit them to version control
- Input Validation — Validate payload sources (URLs, sheets) and sanitize outputs
- Access Control — Restrict powerful pipelines to authorized organization users
- Session Security — Session IDs do not include agent names for privacy
- Error Sanitization — Structured errors prevent information leakage via stack traces
Prompt Design:
- Keep METADATA and PROMPT sections separated
- Use model-specific settings (`model-settings-<model>`) for fine-grained control
- Document prompt purpose in the `title` and `goal` metadata fields
- Version prompts with the `version` field to track changes
Agent Organization:
- One agent per directory under `agent/<Project>/<Agent>/`
- Use `project.md` for project-wide defaults
- Leverage prompt precedence: prompt > agent > project
- Keep agent directories organized with related prompts
Development Workflow:
- Use `draft/` for development, `ready/` for production prompts
- Test with the `--echo` flag before execution
- Monitor logs with `CALL_DEBUG=1` during development
- Run the full test suite before committing changes
Production Deployment:
- Use environment-specific `.env` files
- Configure `CALL_LOG_FILE` for persistent logging
- Set `AGENTS_DEFAULT_MAX_TURNS` appropriately for the workload
- Monitor `.cache/call/call.db` and `.cache/call/repo.db` size; implement rotation if needed
✅ SQLite-based repository index
✅ Multi-interface support (CLI, REST, MCP, Telegram)
✅ Metadata-driven execution with model precedence
✅ Session tracking and routing
✅ Comprehensive test coverage (153+ tests)
- Metrics endpoint (Prometheus-compatible)
- Trace logging with unique chain IDs
- Event streaming to Kafka/NATS
- Real-time execution dashboard
- GitHub prompt resolver with version pinning
- Additional output adapters (Slack, Discord, Email)
- Google Workspace enhanced integration (Docs, Slides)
- S3/Azure Blob storage for artifacts
- Human-in-the-loop workflows
- Conditional branching and error handling
- Agent chaining syntax: `Agent1 --> Agent2 --> Output`
- Cost tracking and token usage analytics
- Node.js runtime parity (Express + TypeScript)
- Go client library
- Language-agnostic MCP protocol extensions
- Fine-grained authorization model (RBAC)
- Prompt marketplace and sharing
- A/B testing framework for prompts
- Distributed execution across regions
Proprietary — Strato-Space. Internal use within the organization unless explicitly permitted.
Call provides a production-ready runtime for AI agent orchestration with:
✅ Unified discovery via SQLite index (repo.db)
✅ Multiple interfaces (CLI, REST API, MCP, Telegram)
✅ Metadata-driven execution with model settings precedence
✅ Smart routing with session tracking
✅ Structured errors across all surfaces
✅ Battle-tested with 153+ tests
Get started:
```bash
# Clone and setup
git clone https://github.com/strato-space/call.git
cd call
python -m venv .venv && .venv\Scripts\Activate.ps1
pip install -r requirements.txt

# Configure
cp .env.example .env
# Edit .env with your API keys

# Initialize
python -m call.cli.main reload --repos agent,prompt

# Execute
python -m call.cli.main call --target MyAgent --input "Hello world"
```

For questions, issues, or contributions, see `AGENTS.md` for guidelines.
Breaking Change: Target resolution priority changed from prompt → agent → project to project → agent → prompt.
Key Changes:
- ✅ Projects now have the highest priority (security improvement)
- ✅ Executable projects (with a PROMPT section) can run directly
- ✅ Database path fixed in `.env` (`.cache/call/repo.db`)
- ✅ New diagnostic tool: `debug_db.py`
- ✅ 6 new tests for target resolution
Details: See CHANGELOG_TARGET_RESOLUTION.md
Migration: Use explicit selectors if you have duplicate target names:
```bash
# Force prompt selection
call exec --prompt MyTarget

# Force project selection (now default)
call exec --project MyTarget
```