A powerful, flexible multi-agent system that enables complex task decomposition and execution through hierarchical agent collaboration. Built on a unified tool abstraction where everything—from simple functions to API calls to other agents—is treated as a tool.
Everything is a tool. Whether it's a function, an API call, an MCP server, or even another agent—the orchestrator treats all capabilities uniformly. This design enables:
- Recursive Agent Calls: Agents can delegate tasks to other specialized agents
- Hierarchical Task Decomposition: Complex queries are broken down into manageable subtasks
- Flexible Tool Integration: Seamlessly mix internal tools, external APIs, and sub-agents
- Cycle Detection: Configurable `max_depth` and call-chain tracking prevent infinite loops
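This unified abstraction can be sketched as a single callable interface. A minimal illustration (the class names here are hypothetical, not the project's actual API):

```python
from typing import Any, Callable, Protocol


class Tool(Protocol):
    """Anything invocable by name with keyword arguments."""
    name: str

    def __call__(self, **kwargs: Any) -> Any: ...


class FunctionTool:
    """Wraps a plain Python function as a tool."""

    def __init__(self, name: str, fn: Callable[..., Any]):
        self.name = name
        self.fn = fn

    def __call__(self, **kwargs: Any) -> Any:
        return self.fn(**kwargs)


class AgentTool:
    """Wraps another agent so delegation looks like any other tool call."""

    def __init__(self, name: str, agent: Any):
        self.name = name
        self.agent = agent

    def __call__(self, **kwargs: Any) -> Any:
        return self.agent.run(**kwargs)
```

Because both wrappers share the same call shape, an orchestrator can hold functions, API clients, and sub-agents in one registry and dispatch to them uniformly.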
The system follows a hierarchical orchestration pattern:
```
User Query → Main Agent → [Tools | Sub-Agents | MCP Servers]
                 ↓                      ↓
          Direct Execution      Sub-Agent → [Their Tools | Other Sub-Agents]
                                        ↓
             Synthesized Results → Main Agent → Final Response
```
- Agent Handoff Protocol: Agents can hand off tasks to specialized agents using `handoff_to_<agent_id>` functions
- Execution Context Tracking: Each call maintains `call_chain` and `depth` to prevent cycles
- Iterative Refinement: Agents can make multiple tool calls in a loop until they have sufficient information
- Configurable Depth: The `max_depth` parameter prevents runaway recursive calls
See the detailed architecture diagram: assets/agent_architecture.svg
```
agents/
├── agent.py              # Core Agent class with handoff logic
├── client.py             # LLM client wrapper (OpenAI-compatible)
├── tools.py              # Tool registry and built-in tools
├── main.py               # Entry point and agent orchestration
├── config.json           # Agent definitions and project configuration
├── requirements.txt      # Python dependencies
├── .env                  # Environment variables (API keys)
├── LICENSE               # GNU GPL v3
└── assets/
    └── agent_architecture.txt  # System architecture diagram
```
1. Agent Class (agent.py)
The Agent class is the heart of the system. Each agent has:
- Identity: `agent_id` and `system_prompt` defining its role
- Capabilities: `tool_names`, the list of tools it can use
- Delegation: `can_call`, the list of agent IDs it can hand off to
- Limits: `max_steps` for iterative loops
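The fields above can be sketched as a dataclass (an illustrative sketch; the real `Agent` class in `agent.py` may differ):

```python
from dataclasses import dataclass, field


@dataclass
class AgentSpec:
    """Illustrative container for the per-agent fields described above."""
    agent_id: str                                     # identity
    system_prompt: str                                # role definition
    tool_names: list = field(default_factory=list)    # capabilities
    can_call: list = field(default_factory=list)      # delegation targets
    max_steps: int = 20                               # iterative-loop limit
```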
Key Methods:
- `_build_tools()`: Dynamically constructs tool schemas, including handoff functions
- `run()`: Main execution loop with depth tracking and cycle prevention
Handoff Mechanism:
When an agent calls `handoff_to_<agent_id>`, the system:
1. Identifies the target agent
2. Executes `target.run()` with incremented depth
3. Returns the result to the calling agent
4. Continues the calling agent's execution
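These steps can be sketched as a single helper (a hypothetical sketch; the function name and `run()` signature are illustrative, not the project's exact API):

```python
def perform_handoff(handoff_name, query, agents_map, depth, call_chain):
    """Illustrative sketch of the four handoff steps described above."""
    # 1. Identify the target agent from the function name
    target_id = handoff_name.removeprefix("handoff_to_")
    target = agents_map[target_id]

    # 2. Execute target.run() with incremented depth and an extended call chain
    result = target.run(
        query=query,
        agents_map=agents_map,
        depth=depth + 1,
        call_chain=call_chain + [target_id],
    )

    # 3./4. The result goes back to the caller, which appends it to its
    # message history and continues its own execution loop
    return result
```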
2. Tool System (tools.py)
Tool Registry Pattern:
- Decorator-based registration: `@tool(name, description, parameters)`
- OpenAI function calling schema generation
- Centralized execution with error handling
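The registry pattern can be sketched as follows (an illustrative sketch, not the actual `tools.py`; the registry and helper names are assumptions):

```python
# Global registry mapping tool name → callable + OpenAI-style schema
TOOL_REGISTRY = {}


def tool(name, description, parameters):
    """Decorator that registers a function with an OpenAI function-calling schema."""
    def decorator(fn):
        TOOL_REGISTRY[name] = {
            "fn": fn,
            "schema": {
                "type": "function",
                "function": {
                    "name": name,
                    "description": description,
                    "parameters": parameters,
                },
            },
        }
        return fn
    return decorator


def execute_tool(name, **kwargs):
    """Centralized execution: errors are returned to the LLM, not raised."""
    try:
        return {"ok": True, "result": TOOL_REGISTRY[name]["fn"](**kwargs)}
    except Exception as exc:
        return {"ok": False, "error": str(exc)}


@tool("echo", "Echo back the given text",
      {"type": "object",
       "properties": {"text": {"type": "string"}},
       "required": ["text"]})
def echo(text: str):
    return text
```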
Built-in Tools:
| Tool | Purpose | Parameters |
|---|---|---|
| `get_weather` | Retrieve weather data for a location | `location: str` |
| `calculate` | Evaluate math expressions | `expression: str` |
| `convert_units` | Convert between units (temp, distance, weight) | `value: float`, `from_unit: str`, `to_unit: str` |
| `search_knowledge` | Search internal knowledge base | `query: str`, `max_results: int` |
| `get_time` | Get current time with timezone offset | `utc_offset: float` |
| `list_files` | List files in a directory | `path: str` |
Adding New Tools:
```python
@tool(
    "my_tool",
    "Description of what the tool does",
    {
        "type": "object",
        "properties": {
            "param": {"type": "string", "description": "Parameter description"}
        },
        "required": ["param"]
    }
)
def my_tool(param: str):
    # Implementation
    return {"result": "value"}
```

3. LLM Client (client.py)
Wrapper around OpenAI's API with:
- Environment variable configuration
- Session management for conversation history
- Agent metadata tracking
- Support for custom base URLs
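A minimal sketch of such a wrapper (illustrative; the real `client.py` may differ, and the history API here is an assumption):

```python
import os


class LLMClient:
    """Env-var configuration plus per-session message history.

    A real implementation would forward chat calls to an
    OpenAI-compatible endpoint at self.base_url."""

    def __init__(self):
        self.base_url = os.getenv("CREATEAI_API_URL", "")
        self.api_key = os.getenv("CREATEAI_API_KEY", "")
        self.sessions = {}  # session_id → list of messages

    def append(self, session_id, message):
        self.sessions.setdefault(session_id, []).append(message)

    def history(self, session_id):
        return self.sessions.get(session_id, [])
```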
Configuration:
```python
base_url = os.getenv("CREATEAI_API_URL")
api_key = os.getenv("CREATEAI_API_KEY")
```

4. Main Orchestrator (main.py)
Entry point that:
- Loads configuration from `config.json`
- Builds agent instances with their capabilities
- Executes the main agent with user query
- Handles session history if enabled
Query Execution Flow:
`read_config()` → `build_agents()` → `main_agent.run()` → print result
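The first two stages of this flow can be sketched as (a simplified sketch; it assumes `config.json` holds agent specs under an `"agents"` key, which may differ from the actual schema):

```python
import json


def read_config(path: str) -> dict:
    """Load agent definitions and settings from config.json."""
    with open(path) as f:
        return json.load(f)


def build_agents(config: dict) -> dict:
    """Index agent specs by agent_id so handoffs can look targets up."""
    return {spec["agent_id"]: spec for spec in config["agents"]}
```

`main.py` would then call `agents["main"].run(query)` and print the result.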
Configuration (config.json)
```json
{
  "agent_id": "main",
  "role": "main",
  "system_prompt": "You are the main agent...",
  "can_call": ["research_agent", "math_agent"],
  "tools": ["get_weather", "get_time"],
  "max_steps": 20
}
```

Fields:
- `agent_id`: Unique identifier
- `role`: `"main"` or `"sub_agent"`
- `system_prompt`: Instructions defining the agent's behavior
- `can_call`: List of agent IDs this agent can hand off to
- `tools`: List of tool names this agent can use
- `max_steps`: Maximum iterations in the execution loop
```json
{
  "session_id": "unique_session",
  "agent_settings": {
    "max_depth": 3,
    "timeout_ms": 30000,
    "cycle_detection": true
  }
}
```

Current Configuration:
```
main_agent (orchestrator)
├── Tools: get_weather, get_time, convert_units, list_files
└── Can call:
    ├── research_agent
    │   └── Tools: search_knowledge
    ├── math_agent
    │   └── Tools: calculate, convert_units
    └── creative_agent
        └── Tools: (uses LLM directly for creative tasks)
```
- Python 3.10+
- OpenAI-compatible API endpoint
1. Clone the repository

   ```bash
   git clone <repository-url>
   cd agents
   ```

2. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

3. Configure environment variables

   Create a `.env` file in the project root:

   ```
   CREATEAI_API_KEY=your_api_key_here
   CREATEAI_API_URL=https://your-api-endpoint.com
   ```

4. Run the system

   ```bash
   python main.py
   ```
```python
query = (
    "I need help with a few things: "
    "1) What's the weather in Tokyo and convert the temperature to Celsius, "
    "2) Calculate the compound interest on $10,000 at 5% annual rate for 3 years, "
    "3) Find me some info about machine learning, "
    "4) Write me a short haiku inspired by the weather in Tokyo."
)

result = agents["main"].run(
    query=query,
    agents_map=agents,
    model="openai/gpt4o_mini",
    max_depth=3,
    session_id="example_session",
    enable_history=True
)
```

Execution Flow:
- Main agent receives complex query
- Calls `get_weather` tool for Tokyo
- Calls `convert_units` to convert F→C
- Hands off to `research_agent` for ML info
- Hands off to `math_agent` for compound interest
- Hands off to `creative_agent` for haiku
- Synthesizes all results into final response
```python
query = "What is 55kg in lb?"

result = agents["main"].run(
    query=query,
    agents_map=agents,
    model="openai/gpt4o_mini"
)
```

Execution Flow:
- Main agent identifies unit conversion task
- Hands off to `math_agent`
- Math agent calls `convert_units(55, "kg", "lb")`
- Returns `121.25 lb`
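The kg→lb result can be checked directly with the standard conversion factor (1 kg ≈ 2.20462 lb); this is a verification sketch, not the project's `convert_units` implementation:

```python
def kg_to_lb(value: float) -> float:
    # 1 kg ≈ 2.20462 lb
    return round(value * 2.20462, 2)


print(kg_to_lb(55))  # → 121.25
```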
Enable detailed logging to see the agent decision-making process:
```python
# In agent.py, the system prints:
# - Agent ID and received query
# - Handoff decisions: "[agent_id] → handoff to 'target_id'"
# - Tool executions: "[agent_id] → tool 'tool_name' args={...}"
# - Results: "[agent_id] ← result from 'target_id': ..."
```

The `run()` loop follows this pattern:

```
for step in range(max_steps):
    1. Call LLM with current messages + available tools
    2. Receive response (may include tool_calls)
    3. If no tool_calls → return final answer
    4. For each tool_call:
       a. If handoff → call sub-agent
       b. If tool → execute tool
       c. Append result to messages
    5. Continue loop
```

- `depth`: Incremented on each agent handoff
- `max_depth`: Hard limit (default: 3)
- `call_chain`: Tracks the sequence of agent IDs
- `cycle_detection`: Prevents an agent from calling its parent
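Combined, these fields amount to a simple guard before every handoff. A minimal sketch (the function name is illustrative):

```python
def allow_handoff(call_chain, target_id, depth, max_depth=3, cycle_detection=True):
    """Sketch of the depth and cycle guards described above."""
    if depth >= max_depth:
        return False  # hard recursion limit reached
    if cycle_detection and target_id in call_chain:
        return False  # target already on the call chain → would loop
    return True
```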
The system maintains OpenAI-style message history:
```python
messages = [
    {"role": "system", "content": "You are..."},
    {"role": "user", "content": "User query"},
    {"role": "assistant", "content": "...", "tool_calls": [...]},
    {"role": "tool", "tool_call_id": "...", "content": "result"}
]
```

Adding a New Agent:

1. Define it in `config.json`:

   ```json
   {
     "agent_id": "data_agent",
     "role": "sub_agent",
     "system_prompt": "You are a data analysis specialist...",
     "can_call": [],
     "tools": ["calculate", "list_files"],
     "max_steps": 10
   }
   ```

2. Add it to the parent's `can_call`:

   ```json
   {
     "agent_id": "main",
     "can_call": ["research_agent", "math_agent", "data_agent"]
   }
   ```
```python
@tool(
    "call_external_api",
    "Call an external REST API",
    {
        "type": "object",
        "properties": {
            "endpoint": {"type": "string"},
            "method": {"type": "string"},
            "data": {"type": "object"}
        }
    }
)
def call_external_api(endpoint: str, method: str = "GET", data: dict = None):
    import requests
    response = requests.request(method, endpoint, json=data)
    return response.json()
```

The system supports Model Context Protocol (MCP) servers. To add one:

1. Update `config.json`:

   ```json
   {
     "tools": [
       {
         "name": "mcp_tool",
         "auth_ref": "mcp_api_key",
         "base_url": "https://mcp-server.com"
       }
     ]
   }
   ```

2. Implement an MCP client in `tools.py`
```
[main] Received query: I need help with a few things: 1) What's the weather...
[main] → executing tool 'get_weather' with args={'location': 'Tokyo'}
[main] → handoff to 'math_agent'
[math_agent] Received query: Calculate compound interest on $10,000 at 5%...
[math_agent] → tool 'calculate' args={'expression': '10000 * (1 + 0.05)**3'}
[main] ← result from 'math_agent': The compound interest is $11,576.25
[main] → handoff to 'research_agent'
[research_agent] Received query: Find info about machine learning
[research_agent] → tool 'search_knowledge' args={'query': 'machine learning'}
[main] ← result from 'research_agent': Machine learning involves...
[main] → handoff to 'creative_agent'
[creative_agent] Received query: Write a haiku about Tokyo weather...
[main] ← result from 'creative_agent': Cloudy Tokyo skies...
```
=== Final Answer ===
Here's everything you requested:
1. **Tokyo Weather**: Currently 58°F (14.4°C), cloudy with 65% humidity
2. **Compound Interest**: $11,576.25 after 3 years
3. **Machine Learning**: Involves supervised and unsupervised learning...
4. **Haiku**:
Cloudy Tokyo skies
Fourteen degrees, misty air
Gentle wind whispers
Run unit tests for individual components:
```bash
# Test tools
python -c "from tools import calculate, convert_units, get_weather; \
print(calculate('2+2')); \
print(convert_units(100, 'F', 'C')); \
print(get_weather('Tokyo'))"

# Test agent initialization
python -c "from main import read_config, build_agents; \
config = read_config('config.json'); \
agents = build_agents(config); \
print(list(agents.keys()))"
```

This project is licensed under the GNU General Public License v3.0. See LICENSE for details.
- Add composio tool support
- Add MCP Support
- Add Knowledge Base Tools
Architecture Diagram: assets/agent_architecture.svg