diff --git a/README.md b/README.md index 7979e145..6aaf5a50 100644 --- a/README.md +++ b/README.md @@ -2,10 +2,10 @@ Rootflo

-

Composable Agentic AI Workflow

+

Flo AI 🌊

-Flo AI is a Python framework for building structured AI agents with support for multiple LLM providers, tool integration, and YAML-based configuration. Create production-ready AI agents with minimal code and maximum flexibility. + Build production-ready AI agents with structured outputs, tool integration, and multi-LLM support

@@ -24,10 +24,7 @@ Flo AI is a Python framework for building structured AI agents with support for


- Check out the docs » -
-
- Github + GitHub • Website • @@ -36,58 +33,11 @@ Flo AI is a Python framework for building structured AI agents with support for


-# Flo AI 🌊
-
-> Build production-ready AI agents with structured outputs, tool integration, and multi-LLM support
+## 🚀 What is Flo AI?

Flo AI is a Python framework that makes building production-ready AI agents and teams as easy as writing YAML. Think "Kubernetes for AI Agents" - compose complex AI architectures using pre-built components while maintaining the flexibility to create your own.

-## 🎨 Flo AI Studio - Visual Workflow Designer
-
-**Create AI workflows visually with our powerful React-based studio!**
-
-

- Flo AI Studio - Visual Workflow Designer -

-
-Flo AI Studio is a modern, intuitive visual editor that allows you to design complex multi-agent workflows through a drag-and-drop interface. Build sophisticated AI systems without writing code, then export them as production-ready YAML configurations.
-
-### 🚀 Studio Features
-
-- **🎯 Visual Design**: Drag-and-drop interface for creating agent workflows
-- **🤖 Agent Management**: Configure AI agents with different roles, models, and tools
-- **🔀 Smart Routing**: Visual router configuration for intelligent workflow decisions
-- **📤 YAML Export**: Export workflows as Flo AI-compatible YAML configurations
-- **📥 YAML Import**: Import existing workflows for further editing
-- **✅ Workflow Validation**: Real-time validation and error checking
-- **🔧 Tool Integration**: Connect agents to external tools and APIs
-- **📋 Template System**: Quick start with pre-built agent and router templates
-
-### 🏃‍♂️ Quick Start with Studio
-
-1. **Start the Studio**:
-   ```bash
-   cd studio
-   pnpm install
-   pnpm dev
-   ```
-
-2. **Design Your Workflow**:
-   - Add agents, routers, and tools to the canvas
-   - Configure their properties and connections
-   - Test with the built-in validation
-
-3. **Export & Run**:
-   ```bash
-   # Export YAML from the studio, then run with Flo AI
-   python -c "
-   from flo_ai.arium import AriumBuilder
-   builder = AriumBuilder.from_yaml(yaml_file='your_workflow.yaml')
-   result = await builder.build_and_run(['Your input here'])
-   "
-   ```
-
-## ✨ Features
+### ✨ Key Features

- 🔌 **Truly Composable**: Build complex AI systems by combining smaller, reusable components
- 🏗️ **Production-Ready**: Built-in best practices and optimizations for production deployments
@@ -96,93 +46,33 @@ Flo AI Studio is a modern, intuitive visual editor that allows you to design com
- 🔧 **Flexible**: Use pre-built components or create your own
- 🤝 **Team-Oriented**: Create and manage teams of AI agents working together
- 🔄 **Langchain Compatible**: Works with all your favorite Langchain tools
-- 📊 **OpenTelemetry Integration**: Built-in observability with automatic instrumentation for LLM calls, agent execution, and workflows
-
-## 📊 OpenTelemetry Integration
-
-Flo AI includes comprehensive OpenTelemetry integration for production observability. Monitor your AI applications with automatic instrumentation for:
-
-- 🔍 **LLM Calls**: Track token usage, latency, and errors across all providers
-- 🤖 **Agent Execution**: Monitor performance, tool calls, and retry attempts
-- 🔄 **Workflows**: Track Arium workflow execution and node traversals
-- 📊 **Metrics**: Export performance data to Jaeger, Prometheus, Grafana, or cloud providers
-
-### Quick Telemetry Setup
-
-```python
-from flo_ai import configure_telemetry, shutdown_telemetry
-
-# Configure at startup
-configure_telemetry(
-    service_name="my_ai_app",
-    service_version="1.0.0",
-    console_export=True  # For debugging
-)
-
-# Your application code here...
-
-# Shutdown to flush data
-shutdown_telemetry()
-```
-
-### Production Monitoring
-
-```python
-# Export to OTLP collector (Jaeger, Prometheus, etc.)
-configure_telemetry(
-    service_name="my_ai_app",
-    otlp_endpoint="http://localhost:4317"
-)
-```
-
-**📖 [Complete Telemetry Guide →](flo_ai/flo_ai/telemetry/README.md)**
+- 📊 **OpenTelemetry Integration**: Built-in observability with automatic instrumentation

## 📖 Table of Contents

- [🚀 Quick Start](#-quick-start)
  - [Installation](#installation)
-  - [Create Your First AI Agent in 30 seconds](#create-your-first-ai-agent-in-30-seconds)
-  - [Create a Tool-Using Agent](#create-a-tool-using-agent)
-  - [Create an Agent with Structured Output](#create-an-agent-with-structured-output)
-- [📊 OpenTelemetry Integration](#-opentelemetry-integration)
-- [📝 YAML Configuration](#-yaml-configuration)
-- [🔧 Variables System](#-variables-system)
-- [📄 Document Processing](#-document-processing)
-- [🛠️ Tools](#️-tools)
-  - [🎯 @flo_tool Decorator](#-flo_tool-decorator)
-- [🧠 Reasoning Patterns](#-reasoning-patterns)
-- [🔧 LLM Providers](#-llm-providers)
-  - [OpenAI](#openai)
-  - [Anthropic Claude](#anthropic-claude)
-  - [Google Gemini](#google-gemini)
-  - [Google VertexAI](#google-vertexai)
-  - [Ollama (Local)](#ollama-local)
-  - [Streaming Support in LLM](#streaming-support)
-- [📊 Output Formatting](#-output-formatting)
-- [🔄 Error Handling](#-error-handling)
-- [📚 Examples](#-examples)
-- [🚀 Advanced Features](#-advanced-features)
-  - [Custom Tool Creation](#custom-tool-creation)
-  - [YAML Parser Integration](#yaml-parser-integration)
+  - [Your First Agent (30 seconds)](#your-first-agent-30-seconds)
+  - [Tool-Using Agent](#tool-using-agent)
+  - [Structured Output Agent](#structured-output-agent)
+- [🎨 Flo AI Studio - Visual Workflow Designer](#-flo-ai-studio---visual-workflow-designer)
+- [🔧 Core Features](#-core-features)
+  - [LLM Providers](#llm-providers)
+  - [Tools & @flo_tool Decorator](#tools--flo_tool-decorator)
+  - [Variables System](#variables-system)
+  - [Document Processing](#document-processing)
+  - [Output Formatting](#output-formatting)
+  - [Error Handling](#error-handling)
- [🔄 Agent Orchestration with Arium](#-agent-orchestration-with-arium)
-  - [🌟 Key Features](#-key-features)
-  - [Quick Start: Simple Agent Chain](#quick-start-simple-agent-chain)
-  - [Advanced: Conditional Routing](#advanced-conditional-routing)
-  - [Agent + Tool Workflows](#agent--tool-workflows)
-  - [Workflow Visualization](#workflow-visualization)
-  - [Memory and Context Sharing](#memory-and-context-sharing)
-  - [📊 Use Cases for Arium](#-use-cases-for-arium)
-  - [Builder Pattern Benefits](#builder-pattern-benefits)
-  - [📄 YAML-Based Arium Workflows](#-yaml-based-arium-workflows)
-  - [🧠 LLM-Powered Routers in YAML (NEW!)](#-llm-powered-routers-in-yaml-new)
-  - [🔄 ReflectionRouter: Structured Reflection Workflows (NEW!)](#-reflectionrouter-structured-reflection-workflows-new)
-  - [🔄 PlanExecuteRouter: Cursor-Style Plan-and-Execute Workflows (NEW!)](#-planexecuterouter-cursor-style-plan-and-execute-workflows-new)
-- [📖 Documentation](#-documentation)
+  - [Simple Agent Chains](#simple-agent-chains)
+  - [Conditional Routing](#conditional-routing)
+  - [YAML-Based Workflows](#yaml-based-workflows)
+  - [LLM-Powered Routers](#llm-powered-routers)
+  - [ReflectionRouter & PlanExecuteRouter](#reflectionrouter--planexecuterouter)
+- [📊 OpenTelemetry Integration](#-opentelemetry-integration)
+- [📚 Examples & Documentation](#-examples--documentation)
- [🌟 Why Flo AI?](#-why-flo-ai)
-- [🎯 Use Cases](#-use-cases)
- [🤝 Contributing](#-contributing)
-- [📜 
License](#-license) -- [πŸ™ Acknowledgments](#-acknowledgments) ## πŸš€ Quick Start @@ -194,18 +84,16 @@ pip install flo-ai poetry add flo-ai ``` -### Create Your First AI Agent in 30 seconds +### Your First Agent (30 seconds) ```python import asyncio -from typing import Any from flo_ai.builder.agent_builder import AgentBuilder from flo_ai.llm import OpenAI -from flo_ai.models.agent import Agent -async def main() -> None: +async def main(): # Create a simple conversational agent - agent: Agent = ( + agent = ( AgentBuilder() .with_name('Math Tutor') .with_prompt('You are a helpful math tutor.') @@ -213,2338 +101,456 @@ async def main() -> None: .build() ) - response: Any = await agent.run('What is the formula for the area of a circle?') + response = await agent.run('What is the formula for the area of a circle?') print(f'Response: {response}') asyncio.run(main()) ``` -### Create a Tool-Using Agent +### Tool-Using Agent ```python import asyncio -from typing import Any, Dict, List, Union from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.tool.base_tool import Tool -from flo_ai.models.base_agent import ReasoningPattern -from flo_ai.models.agent import Agent +from flo_ai.tool import flo_tool from flo_ai.llm import Anthropic +@flo_tool(description="Perform mathematical calculations") async def calculate(operation: str, x: float, y: float) -> float: - if operation == 'add': - return x + y - elif operation == 'multiply': - return x * y - raise ValueError(f'Unknown operation: {operation}') - -# Define a calculator tool -calculator_tool: Tool = Tool( - name='calculate', - description='Perform basic calculations', - function=calculate, - parameters={ - 'operation': { - 'type': 'string', - 'description': 'The operation to perform (add or multiply)', - }, - 'x': {'type': 'number', 'description': 'First number'}, - 'y': {'type': 'number', 'description': 'Second number'}, - }, -) + """Calculate mathematical operations between two numbers.""" + operations = { + 'add': lambda: x + y, + 'subtract': lambda: x - y, + 'multiply': lambda: x * y, + 'divide': lambda: x / y if y != 0 else 0, + } + return operations.get(operation, lambda: 0)() -# Create a tool-using agent with Claude -agent: Agent = ( +async def main(): + agent = ( AgentBuilder() .with_name('Calculator Assistant') .with_prompt('You are a math assistant that can perform calculations.') .with_llm(Anthropic(model='claude-3-5-sonnet-20240620')) - .with_tools([calculator_tool]) - .with_reasoning(ReasoningPattern.REACT) - .with_retries(2) + .with_tools([calculate.tool]) .build() ) -response: Any = await agent.run('Calculate 5 plus 3') -print(f'Response: {response}') + response = await agent.run('Calculate 5 plus 3') + print(f'Response: {response}') + +asyncio.run(main()) ``` -### Create an Agent with Structured Output +### Structured Output Agent ```python import asyncio -from typing import Any, Dict +from pydantic import BaseModel, Field from flo_ai.builder.agent_builder import AgentBuilder from flo_ai.llm import OpenAI -from flo_ai.models.agent import Agent -# Define output schema for structured responses -math_schema: Dict[str, Any] = { - 'type': 'object', - 'properties': { - 'solution': {'type': 'string', 'description': 'The step-by-step solution'}, - 'answer': {'type': 'string', 'description': 'The final answer'}, - }, - 'required': ['solution', 'answer'], -} +class MathSolution(BaseModel): + solution: str = Field(description="Step-by-step solution") + answer: str = Field(description="Final answer") + confidence: float = 
Field(description="Confidence level (0-1)") -# Create an agent with structured output -agent: Agent = ( +async def main(): + agent = ( AgentBuilder() - .with_name('Structured Math Solver') - .with_prompt('You are a math problem solver that provides structured solutions.') + .with_name('Math Solver') .with_llm(OpenAI(model='gpt-4o')) - .with_output_schema(math_schema) + .with_output_schema(MathSolution) .build() ) -response: Any = await agent.run('Solve: 2x + 5 = 15') -print(f'Structured Response: {response}') + response = await agent.run('Solve: 2x + 5 = 15') + print(f'Structured Response: {response}') + +asyncio.run(main()) ``` -## πŸ“ YAML Configuration +## 🎨 Flo AI Studio - Visual Workflow Designer -Define your agents using YAML for easy configuration and deployment: +**Create AI workflows visually with our powerful React-based studio!** -```yaml -metadata: - name: email-summary-flo - version: 1.0.0 - description: "Agent for analyzing email threads" -agent: - name: EmailSummaryAgent - role: Email communication expert - model: - provider: openai - name: gpt-4o-mini - settings: - temperature: 0 - max_retries: 3 - reasoning_pattern: DIRECT - job: > - You are given an email thread between a customer and a support agent. - Your job is to analyze the behavior, sentiment, and communication style. - parser: - name: EmailSummary - fields: - - name: sender_type - type: literal - description: "Who sent the latest email" - values: - - value: customer - description: "Latest email was sent by customer" - - value: agent - description: "Latest email was sent by support agent" - - name: summary - type: str - description: "A comprehensive summary of the email" - - name: resolution_status - type: literal - description: "Issue resolution status" - values: - - value: resolved - description: "Issue appears resolved" - - value: unresolved - description: "Issue requires attention" -``` +

+ Flo AI Studio - Visual Workflow Designer +

-```python -from typing import Any, List -from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.models.agent import Agent +Flo AI Studio is a modern, intuitive visual editor that allows you to design complex multi-agent workflows through a drag-and-drop interface. Build sophisticated AI systems without writing code, then export them as production-ready YAML configurations. + +### πŸš€ Studio Features -# Create agent from YAML -yaml_config: str = """...""" # Your YAML configuration string -email_thread: List[str] = ["Email thread content..."] +- **🎯 Visual Design**: Drag-and-drop interface for creating agent workflows +- **πŸ€– Agent Management**: Configure AI agents with different roles, models, and tools +- **πŸ”€ Smart Routing**: Visual router configuration for intelligent workflow decisions +- **πŸ“€ YAML Export**: Export workflows as Flo AI-compatible YAML configurations +- **πŸ“₯ YAML Import**: Import existing workflows for further editing +- **βœ… Workflow Validation**: Real-time validation and error checking +- **πŸ”§ Tool Integration**: Connect agents to external tools and APIs +- **πŸ“‹ Template System**: Quick start with pre-built agent and router templates -builder: AgentBuilder = AgentBuilder.from_yaml(yaml_str=yaml_config) -agent: Agent = builder.build() +### πŸƒβ€β™‚οΈ Quick Start with Studio -# Use the agent -result: Any = await agent.run(email_thread) -``` +1. **Start the Studio**: + ```bash + cd studio + pnpm install + pnpm dev + ``` -## πŸ”§ Variables System +2. **Design Your Workflow**: + - Add agents, routers, and tools to the canvas + - Configure their properties and connections + - Test with the built-in validation -Flo AI supports dynamic variable resolution in agent prompts and inputs using `` syntax. Variables are automatically discovered, validated at runtime, and can be shared across multi-agent workflows. +3. **Export & Run**: +```python +from flo_ai.arium import AriumBuilder + + builder = AriumBuilder.from_yaml(yaml_file='your_workflow.yaml') + result = await builder.build_and_run(['Your input here']) + ``` -### ✨ Key Features +## πŸ”§ Core Features -- **πŸ” Automatic Discovery**: Variables are extracted from system prompts and inputs at runtime -- **βœ… Runtime Validation**: Missing variables are reported with detailed error messages -- **🀝 Multi-Agent Support**: Variables can be shared across agent workflows -- **πŸ›‘οΈ JSON-Safe Syntax**: `` format avoids conflicts with JSON content +### LLM Providers -### Basic Usage +Flo AI supports multiple LLM providers with consistent interfaces: ```python -import asyncio -from typing import Any, Dict -from flo_ai.builder.agent_builder import AgentBuilder +# OpenAI from flo_ai.llm import OpenAI -from flo_ai.models.agent import Agent +llm = OpenAI(model='gpt-4o', temperature=0.7) -async def main() -> None: - # Create agent with variables in system prompt - agent: Agent = ( - AgentBuilder() - .with_name('Data Analyst') - .with_prompt('Analyze and focus on . 
Generate insights for .') - .with_llm(OpenAI(model='gpt-4o-mini')) - .build() - ) - - # Define variables at runtime - variables: Dict[str, str] = { - 'dataset_path': '/data/sales_q4_2024.csv', - 'key_metric': 'revenue growth', - 'target_audience': 'executive team' - } - - # Run agent with variable resolution - result: Any = await agent.run( - 'Please provide a comprehensive analysis with actionable recommendations.', - variables=variables - ) - - print(f'Analysis: {result}') +# Anthropic Claude +from flo_ai.llm import Anthropic +llm = Anthropic(model='claude-3-5-sonnet-20240620', temperature=0.7) -asyncio.run(main()) +# Google Gemini +from flo_ai.llm import Gemini +llm = Gemini(model='gemini-2.5-flash', temperature=0.7) + +# Google VertexAI +from flo_ai.llm import VertexAI +llm = VertexAI(model='gemini-2.5-flash', project='your-project') + +# Ollama (Local) +from flo_ai.llm import Ollama +llm = Ollama(model='llama2', base_url='http://localhost:11434') ``` -### Variables in User Input +### Tools & @flo_tool Decorator -Variables can also be used in the user input messages: +Create custom tools easily with the `@flo_tool` decorator: ```python -import asyncio -from typing import Any, Dict -from flo_ai.models.agent import Agent -from flo_ai.llm import OpenAI +from flo_ai.tool import flo_tool -async def input_variables_example() -> None: - agent: Agent = Agent( - name='content_creator', - system_prompt='You are a content creator specializing in .', - llm=OpenAI(model='gpt-4o-mini') - ) - - variables: Dict[str, str] = { - 'content_type': 'technical blog posts', - 'topic': 'machine learning fundamentals', - 'word_count': '1500', - 'target_level': 'intermediate' - } - - # Variables in both system prompt and user input - result: Any = await agent.run( - 'Create a -word article about for readers.', - variables=variables - ) - - print(f'Content: {result}') +@flo_tool(description="Get current weather for a city") +async def get_weather(city: str, country: str = None) -> str: + """Get weather information for a specific city.""" + # Your weather API implementation + return f"Weather in {city}: sunny, 25Β°C" -asyncio.run(input_variables_example()) +# Use in agent + agent = ( + AgentBuilder() + .with_name('Weather Assistant') + .with_llm(OpenAI(model='gpt-4o-mini')) + .with_tools([get_weather.tool]) + .build() + ) ``` -### Multi-Agent Variable Sharing +### Variables System -Variables can be shared and passed between agents in workflows: +Dynamic variable resolution in agent prompts using `` syntax: ```python -import asyncio -from typing import Any, Dict, List -from flo_ai.arium import AriumBuilder -from flo_ai.models.agent import Agent -from flo_ai.llm import OpenAI +# Create agent with variables +agent = ( + AgentBuilder() + .with_name('Data Analyst') + .with_prompt('Analyze and focus on . 
Generate insights for .') + .with_llm(OpenAI(model='gpt-4o-mini')) + .build() +) -async def multi_agent_variables() -> List[Any]: - llm: OpenAI = OpenAI(model='gpt-4o-mini') - - # Agent 1: Research phase - researcher: Agent = Agent( - name='researcher', - system_prompt='Research and focus on analysis.', - llm=llm - ) - - # Agent 2: Writing phase - writer: Agent = Agent( - name='writer', - system_prompt='Write a based on the research for .', - llm=llm - ) - - # Agent 3: Review phase - reviewer: Agent = Agent( - name='reviewer', - system_prompt='Review the for and provide feedback.', - llm=llm - ) - - # Shared variables across all agents - shared_variables: Dict[str, str] = { - 'research_topic': 'sustainable energy solutions', - 'research_depth': 'comprehensive', - 'document_type': 'white paper', - 'target_audience': 'policy makers', - 'review_criteria': 'accuracy and policy relevance' - } - - # Run multi-agent workflow with shared variables - result: List[Any] = await ( - AriumBuilder() - .add_agents([researcher, writer, reviewer]) - .start_with(researcher) - .connect(researcher, writer) - .connect(writer, reviewer) - .end_with(reviewer) - .build_and_run( - ['Begin comprehensive research and document creation process'], - variables=shared_variables - ) - ) - - return result +# Define variables at runtime +variables = { + 'dataset_path': '/data/sales_q4_2024.csv', + 'key_metric': 'revenue growth', + 'target_audience': 'executive team' +} -asyncio.run(multi_agent_variables()) +result = await agent.run( + 'Please provide a comprehensive analysis with actionable recommendations.', + variables=variables +) ``` -### YAML Configuration with Variables +### Document Processing -Variables work seamlessly with YAML-based agent configuration: - -```yaml -metadata: - name: personalized-assistant - version: 1.0.0 - description: "Personalized assistant with variable support" -agent: - name: PersonalizedAssistant - kind: llm - role: assistant specialized in - model: - provider: openai - name: gpt-4o-mini - settings: - temperature: 0.3 - max_retries: 2 - reasoning_pattern: DIRECT - job: > - You are a focused on . - Your expertise includes and you should - tailor responses for users. - Always consider in your recommendations. 
-``` +Process PDF and TXT documents with AI agents: ```python -import asyncio -from typing import Any, Dict -from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.models.agent import Agent +from flo_ai.models.document import DocumentMessage, DocumentType -async def yaml_with_variables() -> None: - yaml_config: str = """...""" # Your YAML configuration - - # Variables for YAML agent - variables: Dict[str, str] = { - 'user_role': 'data scientist', - 'domain_expertise': 'machine learning and statistical analysis', - 'primary_objective': 'deriving actionable insights from data', - 'experience_level': 'senior', - 'priority_constraints': 'computational efficiency and model interpretability' - } - - # Create agent from YAML with variables - builder: AgentBuilder = AgentBuilder.from_yaml(yaml_str=yaml_config) - agent: Agent = builder.build() - - result: Any = await agent.run( - 'Help me design an ML pipeline for with ', - variables={ - **variables, - 'use_case': 'customer churn prediction', - 'data_constraints': 'limited labeled data' - } + # Create document message + document = DocumentMessage( + document_type=DocumentType.PDF, + document_file_path='business_report.pdf' ) - print(f'ML Pipeline Advice: {result}') +# Process with agent +agent = ( + AgentBuilder() + .with_name('Document Analyzer') + .with_prompt('Analyze the provided document and extract key insights.') + .with_llm(OpenAI(model='gpt-4o-mini')) + .build() +) -asyncio.run(yaml_with_variables()) + result = await agent.run([document]) ``` -### Error Handling and Validation +### Output Formatting -The variables system provides comprehensive error reporting for missing or invalid variables: +Use Pydantic models for structured outputs: ```python -import asyncio -from typing import Any, Dict -from flo_ai.models.agent import Agent -from flo_ai.llm import OpenAI +from pydantic import BaseModel, Field -async def variable_validation_example() -> None: - agent: Agent = Agent( - name='validator_example', - system_prompt='Process and for analysis.', - llm=OpenAI(model='gpt-4o-mini') - ) - - # Incomplete variables (missing 'another_param') - incomplete_variables: Dict[str, str] = { - 'required_param': 'dataset.csv' - # 'another_param' is missing - } - - try: - result: Any = await agent.run( - 'Analyze the data in ', - variables=incomplete_variables # Missing 'another_param' and 'data_source' - ) - except ValueError as e: - print(f'Variable validation error: {e}') - # Error will list all missing variables with their locations - -asyncio.run(variable_validation_example()) -``` +class AnalysisResult(BaseModel): + summary: str = Field(description="Executive summary") + key_findings: list = Field(description="List of key findings") + recommendations: list = Field(description="Actionable recommendations") -### Best Practices +agent = ( + AgentBuilder() + .with_name('Business Analyst') + .with_llm(OpenAI(model='gpt-4o')) + .with_output_schema(AnalysisResult) + .build() +) +``` -1. **Descriptive Variable Names**: Use clear, descriptive names like `` instead of `` -2. **Consistent Naming**: Use consistent variable names across related agents and workflows -3. **Validation**: Always test your variable resolution before production deployment -4. **Documentation**: Document expected variables in your agent configurations +### Error Handling -The variables system makes Flo AI agents highly reusable and configurable, enabling you to create flexible AI workflows that adapt to different contexts and requirements. 
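To make the resolution flow concrete, here is a minimal end-to-end sketch of the variables API, assuming the angle-bracket `<variable_name>` placeholder form and the `variables=` keyword of `agent.run()` used in this section; the agent, model, and variable names are illustrative.

```python
import asyncio
from flo_ai.builder.agent_builder import AgentBuilder
from flo_ai.llm import OpenAI

async def main():
    # Each <placeholder> in the prompt must have a matching key in the
    # `variables` dict passed to run(); unresolved names are reported
    # via a ValueError, as described in the validation example above.
    agent = (
        AgentBuilder()
        .with_name('Data Analyst')
        .with_prompt(
            'Analyze <dataset_path> and focus on <key_metric>. '
            'Generate insights for <target_audience>.'
        )
        .with_llm(OpenAI(model='gpt-4o-mini'))
        .build()
    )

    result = await agent.run(
        'Please provide a comprehensive analysis with actionable recommendations.',
        variables={
            'dataset_path': '/data/sales_q4_2024.csv',
            'key_metric': 'revenue growth',
            'target_audience': 'executive team',
        },
    )
    print(f'Analysis: {result}')

asyncio.run(main())
```

Passing an incomplete dict fails with the names of the missing variables listed, so prompts and their runtime inputs stay in sync.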
+Built-in retry mechanisms and error recovery: -## πŸ“„ Document Processing +```python +agent = ( + AgentBuilder() + .with_name('Robust Agent') + .with_llm(OpenAI(model='gpt-4o')) + .with_retries(3) # Retry up to 3 times on failure + .build() +) +``` -Flo AI provides powerful document processing capabilities that allow agents to analyze and work with various document formats. The framework supports PDF and TXT documents with an extensible architecture for easy addition of new formats. +## πŸ”„ Agent Orchestration with Arium -### ✨ Key Features +Arium is Flo AI's powerful workflow orchestration engine for creating complex multi-agent workflows. -- **πŸ“„ Multi-Format Support**: Process PDF and TXT documents seamlessly -- **πŸ”„ Multiple Input Methods**: File paths, bytes data, or base64 encoded content -- **🧠 LLM Integration**: Direct document input to AI agents for analysis -- **⚑ Async Processing**: Efficient document handling with async/await support -- **πŸ”§ Extensible Architecture**: Easy to add support for new document types -- **πŸ“Š Rich Metadata**: Extract page counts, processing methods, and document statistics - -### Basic Document Processing - -```python -import asyncio -from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.llm import OpenAI -from flo_ai.models.document import DocumentMessage, DocumentType - -async def basic_document_analysis(): - # Create document message from file path - document = DocumentMessage( - document_type=DocumentType.PDF, - document_file_path='path/to/your/document.pdf' - ) - - # Create document analysis agent - agent = ( - AgentBuilder() - .with_name('Document Analyzer') - .with_prompt('Analyze the provided document and extract key insights, themes, and important information.') - .with_llm(OpenAI(model='gpt-4o-mini')) - .build() - ) - - # Process document with agent - result = await agent.run([document]) - print(f'Analysis: {result}') - -asyncio.run(basic_document_analysis()) -``` - -### Multiple Input Methods - -Flo AI supports three ways to provide document content: - -#### 1. File Path (Recommended) -```python -document = DocumentMessage( - document_type=DocumentType.PDF, - document_file_path='/path/to/document.pdf' -) -``` - -#### 2. Bytes Data -```python -# Read file as bytes -with open('document.pdf', 'rb') as f: - pdf_bytes = f.read() - -document = DocumentMessage( - document_type=DocumentType.PDF, - document_bytes=pdf_bytes, - mime_type='application/pdf' -) -``` - -#### 3. Base64 Encoded -```python -import base64 - -# Encode file to base64 -with open('document.pdf', 'rb') as f: - pdf_base64 = base64.b64encode(f.read()).decode('utf-8') - -document = DocumentMessage( - document_type=DocumentType.PDF, - document_base64=pdf_base64, - mime_type='application/pdf' -) -``` - -### Document Processing in Workflows - -Documents can be seamlessly integrated into Arium workflows: - -```python -import asyncio -from flo_ai.arium import AriumBuilder -from flo_ai.models.document import DocumentMessage, DocumentType - -async def document_workflow(): - # Create document message - document = DocumentMessage( - document_type=DocumentType.PDF, - document_file_path='business_report.pdf' - ) - - # Define workflow YAML - workflow_yaml = """ - metadata: - name: document-analysis-workflow - version: 1.0.0 - description: "Multi-agent document analysis pipeline" - - arium: - agents: - - name: intake_agent - role: "Document Intake Specialist" - job: "Process and assess document content for analysis." 
- model: - provider: openai - name: gpt-4o-mini - - - name: content_analyzer - role: "Content Analyst" - job: "Analyze document content for themes, insights, and key information." - model: - provider: openai - name: gpt-4o-mini - - - name: summary_generator - role: "Summary Writer" - job: "Create comprehensive summaries of analyzed content." - model: - provider: openai - name: gpt-4o-mini - - workflow: - start: intake_agent - edges: - - from: intake_agent - to: [content_analyzer] - - from: content_analyzer - to: [summary_generator] - end: [summary_generator] - """ - - # Run workflow with document - result = await ( - AriumBuilder() - .from_yaml(yaml_str=workflow_yaml) - .build_and_run([document, 'Analyze this business report and provide insights']) - ) - - return result - -asyncio.run(document_workflow()) -``` - -### Advanced Document Processing - -#### Custom Document Metadata -```python -document = DocumentMessage( - document_type=DocumentType.PDF, - document_file_path='report.pdf', - metadata={ - 'source': 'quarterly_reports', - 'department': 'finance', - 'priority': 'high', - 'tags': ['financial', 'q4-2024'] - } -) -``` - -#### Processing Different Document Types -```python -# PDF Document -pdf_doc = DocumentMessage( - document_type=DocumentType.PDF, - document_file_path='presentation.pdf' -) - -# Text Document -txt_doc = DocumentMessage( - document_type=DocumentType.TXT, - document_file_path='notes.txt' -) - -# Process both with the same agent -agent = AgentBuilder().with_name('Multi-Format Analyzer').build() - -pdf_result = await agent.run([pdf_doc]) -txt_result = await agent.run([txt_doc]) -``` - -### Document Processing Tools - -Create custom tools for document operations: - -```python -from flo_ai.tool import flo_tool -from flo_ai.models.document import DocumentMessage, DocumentType - -@flo_tool(description="Extract key information from documents") -async def extract_document_info(document_path: str, doc_type: str) -> str: - """Extract key information from a document.""" - document_type = DocumentType.PDF if doc_type.lower() == 'pdf' else DocumentType.TXT - - document = DocumentMessage( - document_type=document_type, - document_file_path=document_path - ) - - # Use document processing agent - agent = AgentBuilder().with_name('Info Extractor').build() - result = await agent.run([document]) - - return result - -# Use in agent -agent = ( - AgentBuilder() - .with_name('Document Processor') - .with_tools([extract_document_info.tool]) - .build() -) -``` - -### Error Handling - -```python -from flo_ai.utils.document_processor import DocumentProcessingError - -try: - document = DocumentMessage( - document_type=DocumentType.PDF, - document_file_path='nonexistent.pdf' - ) - result = await agent.run([document]) -except DocumentProcessingError as e: - print(f'Document processing failed: {e}') -except FileNotFoundError: - print('Document file not found') -``` - -### Supported Document Types - -| Type | Extension | Description | Processing Method | -|------|-----------|-------------|-------------------| -| PDF | `.pdf` | Portable Document Format | PyMuPDF4LLM (LLM-optimized) | -| TXT | `.txt` | Plain text files | UTF-8 with encoding detection | - -### Best Practices - -1. **File Validation**: Always check if files exist before processing -2. **Memory Management**: Use file paths for large documents to avoid memory issues -3. **Error Handling**: Implement proper error handling for document processing failures -4. **Metadata**: Add relevant metadata to help agents understand document context -5. 
**Format Selection**: Choose the most appropriate input method for your use case - -### Use Cases - -- πŸ“Š **Document Analysis**: Extract insights from reports, papers, and documents -- πŸ“ **Content Summarization**: Create summaries of long documents -- πŸ” **Information Extraction**: Pull specific data from structured documents -- πŸ“‹ **Document Classification**: Categorize documents based on content -- πŸ€– **Multi-Agent Workflows**: Process documents through specialized agent pipelines -- πŸ“ˆ **Business Intelligence**: Analyze business documents for insights and trends - -The document processing system makes Flo AI incredibly powerful for real-world applications that need to work with various document formats, enabling sophisticated AI workflows that can understand and process complex document content. - -## πŸ› οΈ Tools - -Create custom tools easily with async support: - -```python -from typing import List -from flo_ai.tool.base_tool import Tool -from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.llm import OpenAI -from flo_ai.models.agent import Agent - -async def weather_lookup(city: str) -> str: - # Your weather API call here - return f"Weather in {city}: Sunny, 25Β°C" - -weather_tool: Tool = Tool( - name='weather_lookup', - description='Get current weather for a city', - function=weather_lookup, - parameters={ - 'city': { - 'type': 'string', - 'description': 'City name to get weather for' - } - } -) - -# Add to your agent -agent: Agent = ( - AgentBuilder() - .with_name('Weather Assistant') - .with_llm(OpenAI(model='gpt-4o-mini')) - .with_tools([weather_tool]) - .build() -) -``` - -### 🎯 @flo_tool Decorator - -The `@flo_tool` decorator automatically converts any Python function into a `Tool` object with minimal boilerplate: - -```python -from typing import Any, Dict, Union -from flo_ai.tool import flo_tool -from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.llm import OpenAI -from flo_ai.models.agent import Agent - -@flo_tool( - description="Perform mathematical calculations", - parameter_descriptions={ - "operation": "The operation to perform (add, subtract, multiply, divide)", - "x": "First number", - "y": "Second number" - } -) -async def calculate(operation: str, x: float, y: float) -> Union[float, str]: - """Calculate mathematical operations between two numbers.""" - operations: Dict[str, callable] = { - 'add': lambda: x + y, - 'subtract': lambda: x - y, - 'multiply': lambda: x * y, - 'divide': lambda: x / y if y != 0 else 'Cannot divide by zero', - } - if operation not in operations: - raise ValueError(f'Unknown operation: {operation}') - return operations[operation]() - -# Function can be called normally -result: Union[float, str] = await calculate("add", 5, 3) # Returns 8 - -# Tool object is automatically available -agent: Agent = ( - AgentBuilder() - .with_name('Calculator Agent') - .with_llm(OpenAI(model='gpt-4o-mini')) - .with_tools([calculate.tool]) # Access the tool via .tool attribute - .build() -) -``` - -**Key Benefits:** -- βœ… **Automatic parameter extraction** from type hints -- βœ… **Flexible descriptions** via docstrings or custom descriptions -- βœ… **Type conversion** from Python types to JSON schema -- βœ… **Dual functionality** - functions work normally AND as tools -- βœ… **Async support** for both sync and async functions - -**Simple Usage:** -```python -from flo_ai.tool import flo_tool - -@flo_tool() -async def convert_units(value: float, from_unit: str, to_unit: str) -> str: - """Convert between different units 
(km/miles, kg/lbs, celsius/fahrenheit).""" - # Implementation here - result: float = 0.0 # Your conversion logic here - return f"{value} {from_unit} = {result} {to_unit}" - -# Tool is automatically available as convert_units.tool -``` - -**With Custom Metadata:** -```python -from typing import Optional -from flo_ai.tool import flo_tool - -@flo_tool( - name="weather_checker", - description="Get current weather information for a city", - parameter_descriptions={ - "city": "The city to get weather for", - "country": "The country (optional)", - } -) -async def get_weather(city: str, country: Optional[str] = None) -> str: - """Get weather information for a specific city.""" - return f"Weather in {city}: sunny" -``` - -> πŸ“– **For detailed documentation on the `@flo_tool` decorator, see [README_flo_tool.md](TOOLS.md)** - -## 🧠 Reasoning Patterns - -Flo AI supports multiple reasoning patterns: - -- **DIRECT**: Simple question-answer without step-by-step reasoning -- **COT (Chain of Thought)**: Step-by-step reasoning before providing the answer -- **REACT**: Reasoning and action cycles for tool-using agents - -```python -from flo_ai.models.base_agent import ReasoningPattern -from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.llm import OpenAI -from flo_ai.models.agent import Agent - -agent: Agent = ( - AgentBuilder() - .with_name('Reasoning Agent') - .with_llm(OpenAI(model='gpt-4o')) - .with_reasoning(ReasoningPattern.COT) # or REACT, DIRECT - .build() -) -``` - -## πŸ”§ LLM Providers - -### OpenAI -```python -from flo_ai.llm import OpenAI - -llm: OpenAI = OpenAI( - model='gpt-4o', - temperature=0.7, - api_key='your-api-key' # or set OPENAI_API_KEY env var -) -``` - -### Anthropic Claude -```python -from flo_ai.llm import Anthropic - -llm: Anthropic = Anthropic( - model='claude-3-5-sonnet-20240620', - temperature=0.7, - api_key='your-api-key' # or set ANTHROPIC_API_KEY env var -) -``` - -### Google Gemini -```python -from flo_ai.llm import Gemini - -llm: Gemini = Gemini( - model='gemini-2.5-flash', # or gemini-2.5-pro - temperature=0.7, - api_key='your-api-key' # or set GOOGLE_API_KEY env var -) -``` - -### Google VertexAI -```python -from flo_ai.llm import VertexAI - -llm: VertexAI = VertexAI( - model='gemini-2.5-flash', # or gemini-2.5-pro - temperature=0.7, - project='your-gcp-project-id', # or set GOOGLE_CLOUD_PROJECT env var - location='us-central1' # or set GOOGLE_CLOUD_LOCATION env var -) -``` - -**Prerequisites for VertexAI:** -- Set up Google Cloud project with Vertex AI API enabled -- Configure authentication: `gcloud auth application-default login` -- Set environment variables: `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` - -### Ollama (Local) -```python -from flo_ai.llm import Ollama - -llm: Ollama = Ollama( - model='llama2', - base_url='http://localhost:11434' -) -``` - -### Streaming Support in LLM -Streaming helps the llm to generate the output (response) piece-by-piece, or token-by-token, -as it is being computed, instead of waiting until the entire response is complete before sending it to the user - -Steaming Support has been added to all the llm providers. 
Example of streaming function with Gemini is shown below: -```python -from flo_ai.llm import Gemini - -llm: Gemini = Gemini( - model='gemini-2.5-flash', # or gemini-2.5-pro - temperature=0.7, - api_key='your-api-key' # or set GOOGLE_API_KEY env var -) -messages=[{"role": "user", "content": "Stream a short sentence."}] -chunks: List[str] = [] - async for chunk in llm.stream(messages=messages): - text = chunk.get('content', '') - if text: - chunks.append(text) - if len(''.join(chunks)) >= max_chars: - break - return ''.join(chunks) -``` -## πŸ“Š Output Formatting - -Use Pydantic models or JSON schemas for structured outputs: - -```python -from pydantic import BaseModel, Field -from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.llm import OpenAI -from flo_ai.models.agent import Agent - -class MathSolution(BaseModel): - solution: str = Field(description="Step-by-step solution") - answer: str = Field(description="Final answer") - confidence: float = Field(description="Confidence level (0-1)") - -agent: Agent = ( - AgentBuilder() - .with_name('Math Solver') - .with_llm(OpenAI(model='gpt-4o')) - .with_output_schema(MathSolution) - .build() -) -``` - -## πŸ”„ Error Handling - -Built-in retry mechanisms and error recovery: - -```python -from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.llm import OpenAI -from flo_ai.models.agent import Agent - -agent: Agent = ( - AgentBuilder() - .with_name('Robust Agent') - .with_llm(OpenAI(model='gpt-4o')) - .with_retries(3) # Retry up to 3 times on failure - .build() -) -``` - -## πŸ“š Examples - -Check out the `examples/` directory for comprehensive examples: - -- `agent_builder_usage.py` - Basic agent creation patterns -- `yaml_agent_example.py` - YAML-based agent configuration -- `output_formatter.py` - Structured output examples -- `multi_tool_example.py` - Multi-tool agent examples -- `cot_agent_example.py` - Chain of Thought reasoning -- `usage.py` and `usage_claude.py` - Provider-specific examples -- `vertexai_agent_example.py` - Google VertexAI integration examples -- `ollama_agent_example.py` - Local Ollama model examples -- `document_processing_example.py` - Document processing with PDF and TXT files - -## πŸš€ Advanced Features - -### Custom Tool Creation -```python -from typing import Dict, Any -from flo_ai.tool.base_tool import Tool - -async def custom_function(param1: str, param2: int) -> Dict[str, str]: - # Your async logic here - return {"result": f"Processed {param1} with {param2}"} - -custom_tool: Tool = Tool( - name='custom_function', - description='A custom async tool', - function=custom_function, - parameters={ - 'param1': {'type': 'string', 'description': 'First parameter'}, - 'param2': {'type': 'integer', 'description': 'Second parameter'} - } -) -``` - -### YAML Parser Integration -```python -from typing import Dict, Any -from flo_ai.formatter.yaml_format_parser import FloYamlParser -from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.llm import OpenAI -from flo_ai.models.agent import Agent - -# Create parser from YAML definition -yaml_config: Dict[str, Any] = {} # Your YAML configuration dict -parser: FloYamlParser = FloYamlParser.create(yaml_dict=yaml_config) -output_schema: Any = parser.get_format() - -agent: Agent = ( - AgentBuilder() - .with_name('YAML Configured Agent') - .with_llm(OpenAI(model='gpt-4o')) - .with_output_schema(output_schema) - .build() -) -``` - -## πŸ”„ Agent Orchestration with Arium - -Arium is Flo AI's powerful workflow orchestration engine that allows you to create 
complex multi-agent workflows with ease. Think of it as a conductor for your AI agents, coordinating their interactions and data flow. - -### 🌟 Key Features - -- **πŸ”— Multi-Agent Workflows**: Orchestrate multiple agents working together -- **🎯 Flexible Routing**: Route between agents based on context and conditions -- **🧠 LLM Routers**: Intelligent routing powered by LLMs, define routing logic in YAML -- **πŸ’Ύ Shared Memory**: Agents share conversation history and context -- **πŸ“Š Visual Workflows**: Generate flow diagrams of your agent interactions -- **⚑ Builder Pattern**: Fluent API for easy workflow construction -- **πŸ”„ Reusable Workflows**: Build once, run multiple times with different inputs - -### Quick Start: Simple Agent Chain - -```python -import asyncio -from typing import Any, List -from flo_ai.arium import AriumBuilder -from flo_ai.models.agent import Agent -from flo_ai.llm.openai_llm import OpenAI - -async def simple_chain() -> List[Any]: - llm: OpenAI = OpenAI(model='gpt-4o-mini') - - # Create agents - analyst: Agent = Agent( - name='content_analyst', - system_prompt='Analyze the input and extract key insights.', - llm=llm - ) - - summarizer: Agent = Agent( - name='summarizer', - system_prompt='Create a concise summary based on the analysis.', - llm=llm - ) - - # Build and run workflow - result: List[Any] = await ( - AriumBuilder() - .add_agents([analyst, summarizer]) - .start_with(analyst) - .connect(analyst, summarizer) # analyst β†’ summarizer - .end_with(summarizer) - .build_and_run(["Analyze this complex business report..."]) - ) - - return result - -asyncio.run(simple_chain()) -``` - -### Advanced: Conditional Routing - -```python -import asyncio -from typing import Any, List -from flo_ai.arium import AriumBuilder -from flo_ai.models.agent import Agent -from flo_ai.llm.openai_llm import OpenAI -from flo_ai.arium.memory import BaseMemory - -async def conditional_workflow() -> List[Any]: - llm: OpenAI = OpenAI(model='gpt-4o-mini') - - # Create specialized agents - classifier: Agent = Agent( - name='classifier', - system_prompt='Classify the input as either "technical" or "business".', - llm=llm - ) - - tech_specialist: Agent = Agent( - name='tech_specialist', - system_prompt='Provide technical analysis and solutions.', - llm=llm - ) - - business_specialist: Agent = Agent( - name='business_specialist', - system_prompt='Provide business analysis and recommendations.', - llm=llm - ) - - final_agent: Agent = Agent( - name='final_reviewer', - system_prompt='Provide final review and conclusions.', - llm=llm - ) - - # Define routing logic - def route_by_type(memory: BaseMemory) -> str: - """Route based on classification result""" - messages: List[Any] = memory.get() - last_message: str = str(messages[-1]) if messages else "" - - if "technical" in last_message.lower(): - return "tech_specialist" - else: - return "business_specialist" - - # Build workflow with conditional routing - result: List[Any] = await ( - AriumBuilder() - .add_agents([classifier, tech_specialist, business_specialist, final_agent]) - .start_with(classifier) - .add_edge(classifier, [tech_specialist, business_specialist], route_by_type) - .connect(tech_specialist, final_agent) - .connect(business_specialist, final_agent) - .end_with(final_agent) - .build_and_run(["How can we optimize our database performance?"]) - ) - - return result -``` - -### Agent + Tool Workflows - -```python -import asyncio -from typing import Any, List -from flo_ai.tool import flo_tool -from flo_ai.arium import AriumBuilder -from 
flo_ai.models.agent import Agent -from flo_ai.llm.openai_llm import OpenAI - -@flo_tool(description="Search for relevant information") -async def search_tool(query: str) -> str: - # Your search implementation - return f"Search results for: {query}" - -@flo_tool(description="Perform calculations") -async def calculator(expression: str) -> float: - # Your calculation implementation - return eval(expression) # Note: Use safely in production - -async def agent_tool_workflow() -> List[Any]: - llm: OpenAI = OpenAI(model='gpt-4o-mini') - - research_agent: Agent = Agent( - name='researcher', - system_prompt='Research topics and gather information.', - llm=llm - ) - - analyst_agent: Agent = Agent( - name='analyst', - system_prompt='Analyze data and provide insights.', - llm=llm - ) - - # Mix agents and tools in workflow - result: List[Any] = await ( - AriumBuilder() - .add_agent(research_agent) - .add_tools([search_tool.tool, calculator.tool]) - .add_agent(analyst_agent) - .start_with(research_agent) - .connect(research_agent, search_tool.tool) - .connect(search_tool.tool, calculator.tool) - .connect(calculator.tool, analyst_agent) - .end_with(analyst_agent) - .build_and_run(["Research market trends for Q4 2024"]) - ) - - return result -``` - -### Workflow Visualization - -```python -from typing import Any, List, Callable, Optional -from flo_ai.arium import AriumBuilder -from flo_ai.arium.arium import Arium -from flo_ai.models.agent import Agent -from flo_ai.tool.base_tool import Tool - -# Assume these are defined elsewhere -agent1: Agent = ... # Your agent definitions -agent2: Agent = ... -agent3: Agent = ... -tool1: Tool = ... # Your tool definitions -tool2: Tool = ... -router_function: Callable = ... # Your router function - -# Build workflow and generate visual diagram -arium: Arium = ( - AriumBuilder() - .add_agents([agent1, agent2, agent3]) - .add_tools([tool1, tool2]) - .start_with(agent1) - .connect(agent1, tool1) - .add_edge(tool1, [agent2, agent3], router_function) - .end_with(agent2) - .end_with(agent3) - .visualize("my_workflow.png", "Customer Service Workflow") # Generates PNG - .build() -) - -# Run the workflow -result: List[Any] = await arium.run(["Customer complaint about billing"]) -``` - -### Memory and Context Sharing - -All agents in an Arium workflow share the same memory, enabling them to build on each other's work: - -```python -from typing import Any, List -from flo_ai.arium import AriumBuilder -from flo_ai.arium.memory import MessageMemory -from flo_ai.arium.arium import Arium -from flo_ai.models.agent import Agent - -# Assume these agents are defined elsewhere -agent1: Agent = ... -agent2: Agent = ... -agent3: Agent = ... 
- -# Custom memory for persistent context -custom_memory: MessageMemory = MessageMemory() - -result: List[Any] = await ( - AriumBuilder() - .with_memory(custom_memory) # Shared across all agents - .add_agents([agent1, agent2, agent3]) - .start_with(agent1) - .connect(agent1, agent2) - .connect(agent2, agent3) - .end_with(agent3) - .build_and_run(["Initial context and instructions"]) -) - -# Build the arium for reuse -arium: Arium = ( - AriumBuilder() - .with_memory(custom_memory) - .add_agents([agent1, agent2, agent3]) - .start_with(agent1) - .connect(agent1, agent2) - .connect(agent2, agent3) - .end_with(agent3) - .build() -) - -# Memory persists and can be reused -result2: List[Any] = await arium.run(["Follow-up question based on previous context"]) -``` - -### πŸ“Š Use Cases for Arium - -- **πŸ“ Content Pipeline**: Research β†’ Writing β†’ Editing β†’ Publishing -- **πŸ” Analysis Workflows**: Data Collection β†’ Processing β†’ Analysis β†’ Reporting -- **🎯 Decision Trees**: Classification β†’ Specialized Processing β†’ Final Decision -- **🀝 Customer Service**: Intent Detection β†’ Specialist Routing β†’ Resolution -- **πŸ§ͺ Research Workflows**: Question Generation β†’ Investigation β†’ Synthesis β†’ Validation -- **πŸ“‹ Document Processing**: Extraction β†’ Validation β†’ Transformation β†’ Storage - -### Builder Pattern Benefits - -The AriumBuilder provides a fluent, intuitive API: - -```python -from typing import Any, List -from flo_ai.arium import AriumBuilder -from flo_ai.arium.arium import Arium -from flo_ai.models.agent import Agent -from flo_ai.tool.base_tool import Tool - -# Assume these are defined elsewhere -agent1: Agent = ... -agent2: Agent = ... -tool1: Tool = ... -inputs: List[str] = ["Your input messages"] - -# All builder methods return self for chaining -workflow: Arium = ( - AriumBuilder() - .add_agent(agent1) # Add components - .add_tool(tool1) - .start_with(agent1) # Define flow - .connect(agent1, tool1) - .end_with(tool1) - .build() # Create Arium instance -) - -# Or build and run in one step -result: List[Any] = await ( - AriumBuilder() - .add_agents([agent1, agent2]) - .start_with(agent1) - .connect(agent1, agent2) - .end_with(agent2) - .build_and_run(inputs) # Build + run together -) -``` - -**Validation Built-in**: The builder automatically validates your workflow: -- βœ… Ensures at least one agent/tool -- βœ… Requires start and end nodes -- βœ… Validates routing functions -- βœ… Checks for unreachable nodes - -### πŸ“„ YAML-Based Arium Workflows - -One of Flo AI's most powerful features is the ability to define entire multi-agent workflows using YAML configuration. This approach makes workflows reproducible, versionable, and easy to modify without changing code. - -#### Simple YAML Workflow - -```yaml -metadata: - name: "content-analysis-workflow" - version: "1.0.0" - description: "Multi-agent content analysis and summarization pipeline" - -arium: - # Define agents inline - agents: - - name: "analyzer" - role: "Content Analyst" - job: "Analyze the input content and extract key insights, themes, and important information." - model: - provider: "openai" - name: "gpt-4o-mini" - settings: - temperature: 0.2 - max_retries: 3 - reasoning_pattern: "COT" - - - name: "summarizer" - role: "Content Summarizer" - job: "Create a concise, actionable summary based on the analysis provided." 
- model: - provider: "anthropic" - name: "claude-3-5-sonnet-20240620" - settings: - temperature: 0.1 - reasoning_pattern: "DIRECT" - - # Define the workflow - workflow: - start: "analyzer" - edges: - - from: "analyzer" - to: ["summarizer"] - end: ["summarizer"] -``` - -```python -import asyncio -from typing import Any, List -from flo_ai.arium import AriumBuilder - -async def run_yaml_workflow() -> List[Any]: - yaml_config = """...""" # Your YAML configuration - - # Create workflow from YAML - result: List[Any] = await ( - AriumBuilder() - .from_yaml(yaml_config) - .build_and_run(["Analyze this quarterly business report..."]) - ) - - return result - -asyncio.run(run_yaml_workflow()) -``` - -#### Advanced YAML Workflow with Tools and Routing - -```yaml -metadata: - name: "research-workflow" - version: "2.0.0" - description: "Intelligent research workflow with conditional routing" - -arium: - # Define agents with tool references - agents: - - name: "classifier" - role: "Content Classifier" - job: "Classify input as 'research', 'calculation', or 'analysis' task." - model: - provider: "openai" - name: "gpt-4o-mini" - tools: ["web_search"] # Reference tools provided in Python - - - name: "researcher" - role: "Research Specialist" - job: "Conduct thorough research on with analysis." - model: - provider: "anthropic" - name: "claude-3-5-sonnet-20240620" - tools: ["web_search"] - settings: - temperature: 0.3 - reasoning_pattern: "REACT" - - - name: "analyst" - role: "Data Analyst" - job: "Analyze numerical data and provide insights for ." - model: - provider: "openai" - name: "gpt-4o" - tools: ["calculator", "web_search"] - settings: - reasoning_pattern: "COT" - - - name: "synthesizer" - role: "Information Synthesizer" - job: "Combine research and analysis into final recommendations." 
- model: - provider: "gemini" - name: "gemini-2.5-flash" - - # Complex workflow with conditional routing - workflow: - start: "classifier" - edges: - # Conditional routing based on classification - - from: "classifier" - to: ["researcher", "analyst"] - router: "classification_router" # Router function provided in Python - - # Both specialists feed into synthesizer - - from: "researcher" - to: ["synthesizer"] - - - from: "analyst" - to: ["synthesizer"] - - end: ["synthesizer"] -``` - -```python -import asyncio -from typing import Any, Dict, List, Literal -from flo_ai.arium import AriumBuilder -from flo_ai.tool.base_tool import Tool -from flo_ai.arium.memory import BaseMemory - -# Define tools in Python (cannot be defined in YAML) -async def web_search(query: str) -> str: - # Your search implementation - return f"Search results for: {query}" - -async def calculate(expression: str) -> str: - # Your calculation implementation - try: - result = eval(expression) # Note: Use safely in production - return f"Calculation result: {result}" - except: - return "Invalid expression" - -# Create tool objects -tools: Dict[str, Tool] = { - "web_search": Tool( - name="web_search", - description="Search the web for current information", - function=web_search, - parameters={ - "query": { - "type": "string", - "description": "Search query" - } - } - ), - "calculator": Tool( - name="calculator", - description="Perform mathematical calculations", - function=calculate, - parameters={ - "expression": { - "type": "string", - "description": "Mathematical expression to calculate" - } - } - ) -} - -# Define router functions in Python (cannot be defined in YAML) -def classification_router(memory: BaseMemory) -> Literal["researcher", "analyst"]: - """Route based on task classification""" - content = str(memory.get()[-1]).lower() - if 'research' in content or 'investigate' in content: - return 'researcher' - elif 'calculate' in content or 'analyze data' in content: - return 'analyst' - return 'researcher' # default - -routers: Dict[str, callable] = { - "classification_router": classification_router -} - -async def run_workflow() -> List[Any]: - yaml_config = """...""" # Your YAML configuration from above - - # Create workflow with tools and routers provided as Python objects - result: List[Any] = await ( - AriumBuilder() - .from_yaml( - yaml_str=yaml_config, - tools=tools, # Tools must be provided as Python objects - routers=routers # Routers must be provided as Python functions - ) - .build_and_run(["Research the latest trends in renewable energy"]) - ) - - return result -``` - -#### 🧠 LLM-Powered Routers in YAML (NEW!) - -One of the most powerful new features is the ability to define **intelligent LLM routers directly in YAML**. No more writing router functions - just describe your routing logic and let the LLM handle the decisions! 
- -```yaml -metadata: - name: "intelligent-content-workflow" - version: "1.0.0" - description: "Content creation with intelligent LLM-based routing" - -arium: - agents: - - name: "content_creator" - role: "Content Creator" - job: "Create initial content based on the request" - model: - provider: "openai" - name: "gpt-4o-mini" - - - name: "technical_writer" - role: "Technical Writer" - job: "Refine content for technical accuracy and clarity" - model: - provider: "openai" - name: "gpt-4o-mini" - - - name: "creative_writer" - role: "Creative Writer" - job: "Enhance content with creativity and storytelling" - model: - provider: "openai" - name: "gpt-4o-mini" - - - name: "marketing_writer" - role: "Marketing Writer" - job: "Optimize content for engagement and conversion" - model: - provider: "openai" - name: "gpt-4o-mini" - - # ✨ LLM Router definitions - No code required! - routers: - - name: "content_type_router" - type: "smart" # Uses LLM to make intelligent routing decisions - routing_options: - technical_writer: "Technical content, documentation, tutorials, how-to guides" - creative_writer: "Creative writing, storytelling, fiction, brand narratives" - marketing_writer: "Marketing copy, sales content, landing pages, ad campaigns" - model: - provider: "openai" - name: "gpt-4o-mini" - settings: - temperature: 0.3 - fallback_strategy: "first" - - - name: "task_classifier" - type: "task_classifier" # Keyword-based classification - task_categories: - math_solver: - description: "Mathematical calculations and problem solving" - keywords: ["calculate", "solve", "equation", "math", "formula"] - examples: ["Calculate 2+2", "Solve x^2 + 5x + 6 = 0"] - code_helper: - description: "Programming and code assistance" - keywords: ["code", "program", "debug", "function", "algorithm"] - examples: ["Write a Python function", "Debug this code"] - model: - provider: "openai" - name: "gpt-4o-mini" - - workflow: - start: "content_creator" - edges: - - from: "content_creator" - to: ["technical_writer", "creative_writer", "marketing_writer"] - router: "content_type_router" # LLM automatically routes based on content type! - end: ["technical_writer", "creative_writer", "marketing_writer"] -``` - -**🎯 LLM Router Types:** - -1. **Smart Router** (`type: smart`): General-purpose routing based on content analysis -2. **Task Classifier** (`type: task_classifier`): Routes based on keywords and examples -3. **Conversation Analysis** (`type: conversation_analysis`): Context-aware routing -4. **Reflection Router** (`type: reflection`): Structured Aβ†’Bβ†’Aβ†’C patterns for reflection workflows -5. **PlanExecute Router** (`type: plan_execute`): Cursor-style plan-and-execute workflows with step tracking - -**✨ Key Benefits:** -- 🚫 **No Code Required**: Define routing logic purely in YAML -- 🎯 **Intelligent Decisions**: LLMs understand context and make smart routing choices -- πŸ“ **Easy Configuration**: Simple, declarative syntax -- πŸ”„ **Version Control**: Track routing changes in YAML files -- πŸŽ›οΈ **Model Flexibility**: Each router can use different LLM models - -```python -# Using LLM routers is incredibly simple! -async def run_intelligent_workflow(): - # No routers dictionary needed - they're defined in YAML! - result = await ( - AriumBuilder() - .from_yaml(yaml_str=intelligent_workflow_yaml) - .build_and_run(["Write a technical tutorial on Docker containers"]) - ) - # The LLM will automatically route to technical_writer! ✨ - return result -``` - -##### πŸ”„ ReflectionRouter: Structured Reflection Workflows (NEW!) 
- -The **ReflectionRouter** is designed specifically for reflection-based workflows that follow Aβ†’Bβ†’Aβ†’C patterns, commonly used for mainβ†’criticβ†’mainβ†’final agent sequences. This pattern is perfect for iterative improvement workflows where a critic agent provides feedback before final processing. - -**πŸ“‹ Key Features:** -- 🎯 **Pattern Tracking**: Automatically tracks progress through defined reflection sequences -- πŸ”„ **Self-Reference Support**: Allows routing back to the same agent (Aβ†’Bβ†’A patterns) -- πŸ“Š **Visual Progress**: Shows current position with β—‹ pending, βœ“ completed indicators -- πŸ›‘οΈ **Loop Prevention**: Built-in safety mechanisms to prevent infinite loops -- πŸŽ›οΈ **Flexible Patterns**: Supports both 2-agent (Aβ†’Bβ†’A) and 3-agent (Aβ†’Bβ†’Aβ†’C) flows - -**🎯 Supported Patterns:** - -1. **A β†’ B β†’ A** (2 agents): Main β†’ Critic β†’ Main β†’ End -2. **A β†’ B β†’ A β†’ C** (3 agents): Main β†’ Critic β†’ Main β†’ Final - -```yaml -# Simple A β†’ B β†’ A reflection pattern -metadata: - name: "content-reflection-workflow" - version: "1.0.0" - description: "Content creation with critic feedback loop" - -arium: - agents: - - name: "writer" - role: "Content Writer" - job: "Create and improve content based on feedback from critics." - model: - provider: "openai" - name: "gpt-4o-mini" - settings: - temperature: 0.7 - - - name: "critic" - role: "Content Critic" - job: "Review content and provide constructive feedback for improvement." - model: - provider: "openai" - name: "gpt-4o-mini" - settings: - temperature: 0.3 - - # ✨ ReflectionRouter definition - routers: - - name: "reflection_router" - type: "reflection" # Specialized for reflection patterns - flow_pattern: [writer, critic, writer] # A β†’ B β†’ A pattern - model: - provider: "openai" - name: "gpt-4o-mini" - settings: - temperature: 0.2 - allow_early_exit: false # Strict adherence to pattern - - workflow: - start: "writer" - edges: - - from: "writer" - to: [critic, writer] # Can go to critic or self-reference - router: "reflection_router" - - from: "critic" - to: [writer] # Always returns to writer - router: "reflection_router" - end: [writer] # Writer produces final output -``` - -```yaml -# Advanced A β†’ B β†’ A β†’ C reflection pattern -metadata: - name: "advanced-reflection-workflow" - version: "1.0.0" - description: "Full reflection cycle with dedicated final agent" - -arium: - agents: - - name: "researcher" - role: "Research Agent" - job: "Conduct research and gather information on topics." - model: - provider: "openai" - name: "gpt-4o-mini" - - - name: "reviewer" - role: "Research Reviewer" - job: "Review research quality and suggest improvements." - model: - provider: "anthropic" - name: "claude-3-5-sonnet-20240620" - - - name: "synthesizer" - role: "Information Synthesizer" - job: "Create final synthesis and conclusions from research." 
- model: - provider: "openai" - name: "gpt-4o" - - routers: - - name: "research_reflection_router" - type: "reflection" - flow_pattern: [researcher, reviewer, researcher, synthesizer] # A β†’ B β†’ A β†’ C - settings: - allow_early_exit: true # Allow smart early completion - - workflow: - start: "researcher" - edges: - - from: "researcher" - to: [reviewer, researcher, synthesizer] # All possible destinations - router: "research_reflection_router" - - from: "reviewer" - to: [researcher, reviewer, synthesizer] - router: "research_reflection_router" - - from: "synthesizer" - to: [end] - end: [synthesizer] -``` - -**πŸ”§ ReflectionRouter Configuration Options:** - -```yaml -routers: - - name: "my_reflection_router" - type: "reflection" - flow_pattern: [main_agent, critic, main_agent, final_agent] # Define your pattern - model: # Optional: LLM for routing decisions - provider: "openai" - name: "gpt-4o-mini" - settings: # Optional settings - temperature: 0.2 # Router temperature (lower = more deterministic) - allow_early_exit: false # Allow early completion if LLM determines pattern is done - fallback_strategy: "first" # first, last, random - fallback when LLM fails -``` - -**πŸ—οΈ Programmatic Usage:** +### Simple Agent Chains ```python -import asyncio from flo_ai.arium import AriumBuilder from flo_ai.models.agent import Agent from flo_ai.llm import OpenAI -from flo_ai.arium.llm_router import create_main_critic_reflection_router -async def reflection_workflow_example(): - llm = OpenAI(model='gpt-4o-mini', api_key='your-api-key') +async def simple_chain(): + llm = OpenAI(model='gpt-4o-mini') # Create agents - main_agent = Agent( - name='main_agent', - system_prompt='Create solutions and improve them based on feedback.', - llm=llm - ) - - critic = Agent( - name='critic', - system_prompt='Provide constructive feedback for improvement.', - llm=llm - ) - - final_agent = Agent( - name='final_agent', - system_prompt='Polish and finalize the work.', + analyst = Agent( + name='content_analyst', + system_prompt='Analyze the input and extract key insights.', llm=llm ) - # Create reflection router - A β†’ B β†’ A β†’ C pattern - reflection_router = create_main_critic_reflection_router( - main_agent='main_agent', - critic_agent='critic', - final_agent='final_agent', - allow_early_exit=False, # Strict pattern adherence + summarizer = Agent( + name='summarizer', + system_prompt='Create a concise summary based on the analysis.', llm=llm ) - # Build workflow + # Build and run workflow result = await ( AriumBuilder() - .add_agents([main_agent, critic, final_agent]) - .start_with(main_agent) - .add_edge(main_agent, [critic, final_agent], reflection_router) - .add_edge(critic, [main_agent, final_agent], reflection_router) - .end_with(final_agent) - .build_and_run(["Create a comprehensive project proposal"]) + .add_agents([analyst, summarizer]) + .start_with(analyst) + .connect(analyst, summarizer) + .end_with(summarizer) + .build_and_run(["Analyze this complex business report..."]) ) return result - -# Alternative: Direct factory usage -from flo_ai.arium.llm_router import create_llm_router - -reflection_router = create_llm_router( - 'reflection', - flow_pattern=['writer', 'editor', 'writer'], # A β†’ B β†’ A - allow_early_exit=False, - llm=llm -) ``` -**πŸ’‘ ReflectionRouter Intelligence:** - -The ReflectionRouter automatically: -- **Tracks Progress**: Knows which step in the pattern should execute next -- **Prevents Loops**: Uses execution context to avoid infinite cycles -- **Provides Guidance**: Shows LLM the 
suggested next step and current progress
- **Handles Self-Reference**: Properly validates flows that return to the same agent
- **Visual Feedback**: Displays pattern progress: `β—‹ writer β†’ βœ“ critic β†’ β—‹ writer`
-
-**🎯 Perfect Use Cases:**
- πŸ“ **Content Creation**: Writer β†’ Editor β†’ Writer β†’ Publisher
- πŸ”¬ **Research Workflows**: Researcher β†’ Reviewer β†’ Researcher β†’ Synthesizer
- πŸ’Ό **Business Analysis**: Analyst β†’ Critic β†’ Analyst β†’ Decision Maker
- 🎨 **Creative Processes**: Creator β†’ Critic β†’ Creator β†’ Finalizer
- πŸ§ͺ **Iterative Refinement**: Any process requiring feedback and improvement cycles
-
-**⚑ Quick Start Example:**
+### Conditional Routing

```python
-# Minimal A β†’ B β†’ A pattern
-yaml_config = """
-arium:
-  agents:
-    - name: main_agent
-      job: "Main work agent"
-      model: {provider: openai, name: gpt-4o-mini}
-    - name: critic
-      job: "Feedback agent"
-      model: {provider: openai, name: gpt-4o-mini}
+from flo_ai.arium.memory import BaseMemory

-  routers:
-    - name: reflection_router
-      type: reflection
-      flow_pattern: [main_agent, critic, main_agent]
+def route_by_type(memory: BaseMemory) -> str:
+    """Route based on classification result"""
+    messages = memory.get()
+    last_message = str(messages[-1]) if messages else ""
+
+    if "technical" in last_message.lower():
+        return "tech_specialist"
+    else:
+        return "business_specialist"

-  workflow:
-    start: main_agent
-    edges:
-      - from: main_agent
-        to: [critic, main_agent]
-        router: reflection_router
-      - from: critic
-        to: [main_agent]
-        router: reflection_router
-  end: [main_agent]
-"""
-
-result = await AriumBuilder().from_yaml(yaml_str=yaml_config).build_and_run(["Your task"])
+# Build workflow with conditional routing
+# (assumes classifier, tech_specialist, business_specialist, and final_agent
+# are Agent instances created as in the earlier examples)
+result = await (
+    AriumBuilder()
+    .add_agents([classifier, tech_specialist, business_specialist, final_agent])
+    .start_with(classifier)
+    .add_edge(classifier, [tech_specialist, business_specialist], route_by_type)
+    .connect(tech_specialist, final_agent)
+    .connect(business_specialist, final_agent)
+    .end_with(final_agent)
+    .build_and_run(["How can we optimize our database performance?"])
+)
```

-The ReflectionRouter makes implementing sophisticated feedback loops and iterative improvement workflows incredibly simple, whether you need a 2-agent or 3-agent pattern! πŸš€
+### YAML-Based Workflows

-##### πŸ”„ PlanExecuteRouter: Cursor-Style Plan-and-Execute Workflows (NEW!)
-
-The **PlanExecuteRouter** implements sophisticated plan-and-execute patterns similar to how Cursor works. It automatically breaks down complex tasks into detailed execution plans and coordinates step-by-step execution with intelligent progress tracking.
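Before the details, a minimal sketch of the wiring, based on the factory function and memory class shown later in this section (the agent names are placeholders for agents you define in your workflow):

```python
from flo_ai.llm import OpenAI
from flo_ai.arium.memory import PlanAwareMemory
from flo_ai.arium.llm_router import create_plan_execute_router

llm = OpenAI(model='gpt-4o-mini')

# Route between a planner, an executor, and a reviewer agent
plan_router = create_plan_execute_router(
    planner_agent='planner',
    executor_agent='developer',
    reviewer_agent='reviewer',
    llm=llm,
)

# PlanAwareMemory persists the execution plan and per-step status between turns
memory = PlanAwareMemory()
```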
- -**πŸ“‹ Key Features:** -- 🎯 **Automatic Task Breakdown**: Creates detailed execution plans from high-level tasks -- πŸ“Š **Step Tracking**: Real-time progress monitoring with visual indicators (β—‹ ⏳ βœ… ❌) -- πŸ”„ **Phase Coordination**: Intelligent routing between planning, execution, and review phases -- πŸ›‘οΈ **Dependency Management**: Handles step dependencies and execution order automatically -- πŸ’Ύ **Plan Persistence**: Uses PlanAwareMemory for stateful plan storage and updates -- πŸ”§ **Error Recovery**: Built-in retry logic for failed steps - -**🎯 Perfect for Cursor-Style Workflows:** -- πŸ’» **Software Development**: Requirements β†’ Design β†’ Implementation β†’ Testing β†’ Review -- πŸ“ **Content Creation**: Planning β†’ Writing β†’ Editing β†’ Review β†’ Publishing -- πŸ”¬ **Research Projects**: Plan β†’ Investigate β†’ Analyze β†’ Synthesize β†’ Report -- πŸ“Š **Business Processes**: Any multi-step workflow with dependencies - -**πŸ“„ YAML Configuration:** +Define entire workflows in YAML: ```yaml -# Complete Plan-Execute Workflow metadata: - name: "development-plan-execute" + name: "content-analysis-workflow" version: "1.0.0" - description: "Cursor-style development workflow" + description: "Multi-agent content analysis pipeline" arium: agents: - - name: planner - role: Project Planner - job: > - Break down complex development tasks into detailed, sequential execution plans. - Create clear steps with dependencies and agent assignments. - model: - provider: openai - name: gpt-4o-mini - settings: - temperature: 0.3 - - - name: developer - role: Software Developer - job: > - Implement features step by step according to execution plans. - Provide detailed implementation and update step status. - model: - provider: openai - name: gpt-4o-mini - settings: - temperature: 0.5 - - - name: tester - role: QA Engineer - job: > - Test implementations thoroughly and validate functionality. - Create comprehensive test scenarios and report results. + - name: "analyzer" + role: "Content Analyst" + job: "Analyze the input content and extract key insights." model: - provider: openai - name: gpt-4o-mini - settings: - temperature: 0.2 - - - name: reviewer - role: Senior Reviewer - job: > - Provide final quality assessment and approval. - Review completed work for best practices and requirements. + provider: "openai" + name: "gpt-4o-mini" + + - name: "summarizer" + role: "Content Summarizer" + job: "Create a concise summary based on the analysis." 
model: - provider: openai - name: gpt-4o-mini - - # PlanExecuteRouter configuration - routers: - - name: dev_plan_router - type: plan_execute # Router type for plan-execute workflows - agents: # Available agents and their capabilities - planner: "Creates detailed execution plans by breaking down tasks" - developer: "Implements features and code according to plan specifications" - tester: "Tests implementations and validates functionality" - reviewer: "Reviews and approves completed work" - model: # Optional: LLM for routing decisions - provider: openai - name: gpt-4o-mini - settings: # Optional configuration - temperature: 0.2 # Router decision temperature - planner_agent: planner # Agent responsible for creating plans - executor_agent: developer # Default agent for executing steps - reviewer_agent: reviewer # Optional agent for final review - max_retries: 3 # Maximum retries for failed steps + provider: "anthropic" + name: "claude-3-5-sonnet-20240620" workflow: - start: planner + start: "analyzer" edges: - # All agents can route to all others based on plan state - - from: planner - to: [developer, tester, reviewer, planner] - router: dev_plan_router - - from: developer - to: [developer, tester, reviewer, planner] - router: dev_plan_router - - from: tester - to: [developer, tester, reviewer, planner] - router: dev_plan_router - - from: reviewer - to: [end] - end: [reviewer] + - from: "analyzer" + to: ["summarizer"] + end: ["summarizer"] ``` -**πŸ—οΈ Programmatic Usage:** - ```python -import asyncio -from flo_ai.arium import AriumBuilder -from flo_ai.arium.memory import PlanAwareMemory -from flo_ai.models.agent import Agent -from flo_ai.llm import OpenAI -from flo_ai.arium.llm_router import create_plan_execute_router - -async def cursor_style_workflow(): - llm = OpenAI(model='gpt-4o-mini', api_key='your-api-key') - - # Create specialized agents - planner = Agent( - name='planner', - system_prompt='Create detailed execution plans by breaking down tasks into sequential steps.', - llm=llm - ) - - developer = Agent( - name='developer', - system_prompt='Implement features step by step according to execution plans.', - llm=llm - ) - - tester = Agent( - name='tester', - system_prompt='Test implementations and validate functionality thoroughly.', - llm=llm - ) - - reviewer = Agent( - name='reviewer', - system_prompt='Review completed work and provide final approval.', - llm=llm - ) - - # Create plan-execute router - plan_router = create_plan_execute_router( - planner_agent='planner', - executor_agent='developer', - reviewer_agent='reviewer', - additional_agents={'tester': 'Tests implementations and validates quality'}, - llm=llm - ) - - # Use PlanAwareMemory for plan state persistence - memory = PlanAwareMemory() - - # Build and run workflow - result = await ( +# Run YAML workflow +result = await ( AriumBuilder() - .with_memory(memory) - .add_agents([planner, developer, tester, reviewer]) - .start_with(planner) - .add_edge(planner, [developer, tester, reviewer, planner], plan_router) - .add_edge(developer, [developer, tester, reviewer, planner], plan_router) - .add_edge(tester, [developer, tester, reviewer, planner], plan_router) - .add_edge(reviewer, [developer, tester, reviewer, planner], plan_router) - .end_with(reviewer) - .build_and_run(["Create a REST API for user authentication with JWT tokens"]) + .from_yaml(yaml_str=workflow_yaml) + .build_and_run(["Analyze this quarterly business report..."]) ) - - return result - -# Alternative: Factory function -from flo_ai.arium.llm_router import 
create_plan_execute_router - -plan_router = create_plan_execute_router( - planner_agent='planner', - executor_agent='developer', - reviewer_agent='reviewer', - llm=llm -) ``` -**πŸ’‘ How PlanExecuteRouter Works:** - -The router intelligently coordinates workflow phases: +### LLM-Powered Routers -1. **Planning Phase**: - - Detects when no execution plan exists - - Routes to planner agent to create detailed plan - - Plan stored as ExecutionPlan object in PlanAwareMemory +Define intelligent routing logic directly in YAML: -2. **Execution Phase**: - - Analyzes plan state and step dependencies - - Routes to appropriate agents for next ready steps - - Updates step status (pending β†’ in-progress β†’ completed) - - Handles parallel execution of independent steps +```yaml + routers: + - name: "content_type_router" + type: "smart" # Uses LLM for intelligent routing + routing_options: + technical_writer: "Technical content, documentation, tutorials" + creative_writer: "Creative writing, storytelling, fiction" + marketing_writer: "Marketing copy, sales content, campaigns" + model: + provider: "openai" + name: "gpt-4o-mini" +``` -3. **Review Phase**: - - Detects when all steps are completed - - Routes to reviewer agent for final validation - - Manages error recovery for failed steps +### ReflectionRouter & PlanExecuteRouter -**πŸ“Š Plan Progress Visualization:** +**ReflectionRouter** for Aβ†’Bβ†’Aβ†’C feedback patterns: -``` -πŸ“‹ EXECUTION PLAN: User Authentication API -πŸ“Š CURRENT PROGRESS: -βœ… design_schema: Design user database schema β†’ developer -βœ… implement_registration: Create registration endpoint β†’ developer -⏳ implement_login: Add login with JWT β†’ developer (depends: design_schema, implement_registration) -β—‹ add_middleware: Authentication middleware β†’ developer (depends: implement_login) -β—‹ write_tests: Comprehensive testing β†’ tester (depends: add_middleware) -β—‹ final_review: Security and code review β†’ reviewer (depends: write_tests) - -🎯 NEXT ACTION: Execute step 'implement_login' -🎯 SUGGESTED AGENT: developer +```yaml + routers: + - name: "reflection_router" + type: "reflection" + flow_pattern: [writer, critic, writer] # A β†’ B β†’ A pattern + model: + provider: "openai" + name: "gpt-4o-mini" ``` -**πŸ”§ Advanced Configuration Options:** +**PlanExecuteRouter** for Cursor-style plan-and-execute workflows: ```yaml routers: - - name: advanced_plan_router - type: plan_execute - agents: - planner: "Creates execution plans" - frontend_dev: "Frontend implementation" - backend_dev: "Backend implementation" - devops: "Deployment and infrastructure" - qa_tester: "Quality assurance testing" - security_reviewer: "Security review" - product_owner: "Product validation" - model: - provider: openai - name: gpt-4o - settings: - temperature: 0.1 # Lower for more deterministic routing - planner_agent: planner # Plan creation agent - executor_agent: backend_dev # Default execution agent - reviewer_agent: product_owner # Final review agent - max_retries: 5 # Retry attempts for failed steps - allow_parallel_execution: true # Enable parallel step execution - plan_validation: strict # Validate plan completeness -``` - -**⚑ Quick Start Example:** - -```python -# Minimal plan-execute workflow -yaml_config = """ -arium: + - name: "plan_router" + type: "plan_execute" agents: - - name: planner - job: "Create execution plans" - model: {provider: openai, name: gpt-4o-mini} - - name: executor - job: "Execute plan steps" - model: {provider: openai, name: gpt-4o-mini} - - name: reviewer - job: 
"Review final results" - model: {provider: openai, name: gpt-4o-mini} - - routers: - - name: simple_plan_router - type: plan_execute - agents: - planner: "Creates plans" - executor: "Executes steps" - reviewer: "Reviews results" + planner: "Creates detailed execution plans" + developer: "Implements features according to plan" + tester: "Tests implementations and validates functionality" + reviewer: "Reviews and approves completed work" settings: planner_agent: planner - executor_agent: executor + executor_agent: developer reviewer_agent: reviewer - - workflow: - start: planner - edges: - - from: planner - to: [executor, reviewer, planner] - router: simple_plan_router - - from: executor - to: [executor, reviewer, planner] - router: simple_plan_router - - from: reviewer - to: [end] - end: [reviewer] -""" - -result = await AriumBuilder().from_yaml(yaml_str=yaml_config).build_and_run(["Your complex task"]) ``` -**🎯 Use Cases and Examples:** - -- πŸ“± **App Development**: "Build a todo app with React and Node.js" -- πŸ›’ **E-commerce**: "Create a shopping cart system with payment processing" -- πŸ“Š **Data Pipeline**: "Build ETL pipeline for customer analytics" -- πŸ” **Security**: "Implement OAuth2 authentication system" -- πŸ“ˆ **Analytics**: "Create real-time dashboard with user metrics" - -The PlanExecuteRouter brings Cursor-style intelligent task automation to Flo AI, making it incredibly easy to build sophisticated multi-step workflows that adapt and execute complex tasks automatically! πŸš€ - -#### YAML Workflow with Variables - -```yaml -metadata: - name: "personalized-workflow" - version: "1.0.0" - description: "Workflow that adapts based on input variables" - -arium: - agents: - - name: "specialist" - role: "" - job: "You are a specializing in . Provide for ." - model: - provider: "" - name: "" - settings: - temperature: 0.3 - reasoning_pattern: "" - - - name: "reviewer" - role: "Quality Reviewer" - job: "Review the for and provide feedback." 
- model: - provider: "openai" - name: "gpt-4o" +## πŸ“Š OpenTelemetry Integration - workflow: - start: "specialist" - edges: - - from: "specialist" - to: ["reviewer"] - end: ["reviewer"] -``` +Built-in observability for production monitoring: ```python -import asyncio -from typing import Any, Dict, List -from flo_ai.arium import AriumBuilder - -async def run_personalized_workflow() -> List[Any]: - yaml_config = """...""" # Your YAML configuration with variables - - # Define variables for the workflow - variables: Dict[str, str] = { - 'expert_role': 'Data Scientist', - 'domain': 'machine learning and predictive analytics', - 'output_type': 'technical analysis report', - 'target_audience': 'engineering team', - 'preferred_llm_provider': 'anthropic', - 'model_name': 'claude-3-5-sonnet-20240620', - 'reasoning_style': 'COT', - 'quality_criteria': 'technical accuracy and clarity' - } - - result: List[Any] = await ( - AriumBuilder() - .from_yaml(yaml_config) - .build_and_run( - ["Analyze our customer churn prediction model performance"], - variables=variables - ) - ) - - return result -``` - -#### Using Pre-built Agents in YAML Workflows - -```yaml -metadata: - name: "hybrid-workflow" - version: "1.0.0" - description: "Mix of inline agents and pre-built agent references" +from flo_ai import configure_telemetry, shutdown_telemetry -# Import existing agent configurations -imports: - - "agents/content_analyzer.yaml" - - "agents/technical_reviewer.yaml" +# Configure at startup +configure_telemetry( + service_name="my_ai_app", + service_version="1.0.0", + console_export=True # For debugging +) -arium: - # Mix of imported and inline agents - agents: - # Reference imported agent - - import: "content_analyzer" - name: "analyzer" # Override name if needed - - # Define new agent inline - - name: "formatter" - role: "Content Formatter" - job: "Format the analysis into a professional report structure." - model: - provider: "openai" - name: "gpt-4o-mini" - - # Reference another imported agent - - import: "technical_reviewer" - name: "reviewer" +# Your application code here... - workflow: - start: "analyzer" - edges: - - from: "analyzer" - to: ["formatter"] - - from: "formatter" - to: ["reviewer"] - end: ["reviewer"] +# Shutdown to flush data +shutdown_telemetry() ``` -#### YAML Workflow Best Practices - -1. **Modular Design**: Define reusable agents in YAML, create tools in Python separately -2. **Clear Naming**: Use descriptive names for agents and workflows -3. **Variable Usage**: Leverage variables for environment-specific configurations -4. **Version Control**: Track workflow versions in metadata -5. **Documentation**: Include descriptions for complex workflows -6. **Router Functions**: Keep routing logic simple and provide as Python functions -7. 
**Tool Management**: Create tools as Python objects and pass them to the builder
-
-#### What Can Be Defined in YAML vs Python
-
-**βœ… YAML Configuration Supports:**
-- Agent definitions (name, role, job, model settings)
-- Workflow structure (start, edges, end nodes)
-- Agent-to-agent connections
-- Tool and router references (by name)
-- Variables and settings
-- Model configurations
-
-**❌ YAML Configuration Does NOT Support:**
-- Tool function implementations (must be Python objects)
-- Router function code (must be Python functions)
-- Custom logic execution
-- Direct function definitions
+**πŸ“– [Complete Telemetry Guide β†’](flo_ai/flo_ai/telemetry/README.md)**

-**πŸ’‘ Best Practice**: Use YAML for workflow structure and agent configuration, Python for executable logic (tools and routers).
+## πŸ“š Examples & Documentation

-#### Benefits of YAML Workflows
+### Examples Directory

-- **πŸ”„ Reproducible**: Version-controlled workflow definitions
-- **πŸ“ Maintainable**: Easy to modify workflow structure without code changes
-- **πŸ§ͺ Testable**: Different configurations for testing vs. production
-- **πŸ‘₯ Collaborative**: Non-developers can modify workflow structure
-- **πŸš€ Deployable**: Easy CI/CD integration with YAML configurations
-- **πŸ” Auditable**: Clear workflow definitions for compliance
+Check out the `examples/` directory for comprehensive examples:

-> πŸ“– **For detailed Arium documentation and advanced patterns, see [flo_ai/flo_ai/arium/README.md](flo_ai/flo_ai/arium/README.md)**
+- `agent_builder_usage.py` - Basic agent creation patterns
+- `yaml_agent_example.py` - YAML-based agent configuration
+- `output_formatter.py` - Structured output examples
+- `multi_tool_example.py` - Multi-tool agent examples
+- `document_processing_example.py` - Document processing with PDF and TXT files

-## πŸ“– Documentation
+### Documentation

-Visit our [comprehensive documentation](https://flo-ai.rootflo.ai) for:
-- Detailed tutorials
-- API reference
-- Best practices
-- Advanced examples
-- Architecture deep-dives
+Visit our [website](https://www.rootflo.ai) to learn more

**Additional Resources:**
-- [@flo_tool Decorator Guide](flo_ai/README_flo_tool.md) - Complete guide to the `@flo_tool` decorator
-- [Examples Directory](examples/) - Ready-to-run code examples
+- [@flo_tool Decorator Guide](TOOLS.md) - Complete guide to the `@flo_tool` decorator
+- [Examples Directory](flo_ai/examples/) - Ready-to-run code examples
- [Contributing Guide](CONTRIBUTING.md) - How to contribute to Flo AI

## 🌟 Why Flo AI?

@@ -2560,8 +566,7 @@ Visit our [comprehensive documentation](https://flo-ai.rootflo.ai) for:
- **Testable**: Each component can be tested independently
- **Scalable**: From simple agents to complex multi-tool systems

-## 🎯 Use Cases
-
+### Use Cases
- πŸ€– Customer Service Automation
- πŸ“Š Data Analysis and Processing
- πŸ“ Content Generation and Summarization

@@ -2596,4 +601,4 @@ Built with ❀️ using:

Built with ❀️ by the rootflo team
Community β€’ Documentation - + \ No newline at end of file diff --git a/flo_ai/README.md b/flo_ai/README.md index 7979e145..6aaf5a50 100644 --- a/flo_ai/README.md +++ b/flo_ai/README.md @@ -2,10 +2,10 @@ Rootflo

-

Composable Agentic AI Workflow

+

Flo AI 🌊

-Flo AI is a Python framework for building structured AI agents with support for multiple LLM providers, tool integration, and YAML-based configuration. Create production-ready AI agents with minimal code and maximum flexibility. + Build production-ready AI agents with structured outputs, tool integration, and multi-LLM support

@@ -24,10 +24,7 @@ Flo AI is a Python framework for building structured AI agents with support for


- Checkout the docs Β» -
-
- Github + GitHub β€’ Website β€’ @@ -36,58 +33,11 @@ Flo AI is a Python framework for building structured AI agents with support for


-# Flo AI 🌊 - -> Build production-ready AI agents with structured outputs, tool integration, and multi-LLM support +## πŸš€ What is Flo AI? Flo AI is a Python framework that makes building production-ready AI agents and teams as easy as writing YAML. Think "Kubernetes for AI Agents" - compose complex AI architectures using pre-built components while maintaining the flexibility to create your own. -## 🎨 Flo AI Studio - Visual Workflow Designer - -**Create AI workflows visually with our powerful React-based studio!** - -

- Flo AI Studio - Visual Workflow Designer -

- -Flo AI Studio is a modern, intuitive visual editor that allows you to design complex multi-agent workflows through a drag-and-drop interface. Build sophisticated AI systems without writing code, then export them as production-ready YAML configurations. - -### πŸš€ Studio Features - -- **🎯 Visual Design**: Drag-and-drop interface for creating agent workflows -- **πŸ€– Agent Management**: Configure AI agents with different roles, models, and tools -- **πŸ”€ Smart Routing**: Visual router configuration for intelligent workflow decisions -- **πŸ“€ YAML Export**: Export workflows as Flo AI-compatible YAML configurations -- **πŸ“₯ YAML Import**: Import existing workflows for further editing -- **βœ… Workflow Validation**: Real-time validation and error checking -- **πŸ”§ Tool Integration**: Connect agents to external tools and APIs -- **πŸ“‹ Template System**: Quick start with pre-built agent and router templates - -### πŸƒβ€β™‚οΈ Quick Start with Studio - -1. **Start the Studio**: - ```bash - cd studio - pnpm install - pnpm dev - ``` - -2. **Design Your Workflow**: - - Add agents, routers, and tools to the canvas - - Configure their properties and connections - - Test with the built-in validation - -3. **Export & Run**: - ```bash - # Export YAML from the studio, then run with Flo AI - python -c " - from flo_ai.arium import AriumBuilder - builder = AriumBuilder.from_yaml(yaml_file='your_workflow.yaml') - result = await builder.build_and_run(['Your input here']) - " - ``` - -## ✨ Features +### ✨ Key Features - πŸ”Œ **Truly Composable**: Build complex AI systems by combining smaller, reusable components - πŸ—οΈ **Production-Ready**: Built-in best practices and optimizations for production deployments @@ -96,93 +46,33 @@ Flo AI Studio is a modern, intuitive visual editor that allows you to design com - πŸ”§ **Flexible**: Use pre-built components or create your own - 🀝 **Team-Oriented**: Create and manage teams of AI agents working together - πŸ”„ **Langchain Compatible**: Works with all your favorite Langchain tools -- πŸ“Š **OpenTelemetry Integration**: Built-in observability with automatic instrumentation for LLM calls, agent execution, and workflows - -## πŸ“Š OpenTelemetry Integration - -Flo AI includes comprehensive OpenTelemetry integration for production observability. Monitor your AI applications with automatic instrumentation for: - -- πŸ” **LLM Calls**: Track token usage, latency, and errors across all providers -- πŸ€– **Agent Execution**: Monitor performance, tool calls, and retry attempts -- πŸ”„ **Workflows**: Track Arium workflow execution and node traversals -- πŸ“Š **Metrics**: Export performance data to Jaeger, Prometheus, Grafana, or cloud providers - -### Quick Telemetry Setup - -```python -from flo_ai import configure_telemetry, shutdown_telemetry - -# Configure at startup -configure_telemetry( - service_name="my_ai_app", - service_version="1.0.0", - console_export=True # For debugging -) - -# Your application code here... - -# Shutdown to flush data -shutdown_telemetry() -``` - -### Production Monitoring - -```python -# Export to OTLP collector (Jaeger, Prometheus, etc.) 
-configure_telemetry( - service_name="my_ai_app", - otlp_endpoint="http://localhost:4317" -) -``` - -**πŸ“– [Complete Telemetry Guide β†’](flo_ai/flo_ai/telemetry/README.md)** +- πŸ“Š **OpenTelemetry Integration**: Built-in observability with automatic instrumentation ## πŸ“– Table of Contents - [πŸš€ Quick Start](#-quick-start) - [Installation](#installation) - - [Create Your First AI Agent in 30 seconds](#create-your-first-ai-agent-in-30-seconds) - - [Create a Tool-Using Agent](#create-a-tool-using-agent) - - [Create an Agent with Structured Output](#create-an-agent-with-structured-output) -- [πŸ“Š OpenTelemetry Integration](#-opentelemetry-integration) -- [πŸ“ YAML Configuration](#-yaml-configuration) -- [πŸ”§ Variables System](#-variables-system) -- [πŸ“„ Document Processing](#-document-processing) -- [πŸ› οΈ Tools](#️-tools) - - [🎯 @flo_tool Decorator](#-flo_tool-decorator) -- [🧠 Reasoning Patterns](#-reasoning-patterns) -- [πŸ”§ LLM Providers](#-llm-providers) - - [OpenAI](#openai) - - [Anthropic Claude](#anthropic-claude) - - [Google Gemini](#google-gemini) - - [Google VertexAI](#google-vertexai) - - [Ollama (Local)](#ollama-local) - - [Streaming Support in LLM](#streaming-support) -- [πŸ“Š Output Formatting](#-output-formatting) -- [πŸ”„ Error Handling](#-error-handling) -- [πŸ“š Examples](#-examples) -- [πŸš€ Advanced Features](#-advanced-features) - - [Custom Tool Creation](#custom-tool-creation) - - [YAML Parser Integration](#yaml-parser-integration) + - [Your First Agent (30 seconds)](#your-first-agent-30-seconds) + - [Tool-Using Agent](#tool-using-agent) + - [Structured Output Agent](#structured-output-agent) +- [🎨 Flo AI Studio - Visual Workflow Designer](#-flo-ai-studio---visual-workflow-designer) +- [πŸ”§ Core Features](#-core-features) + - [LLM Providers](#llm-providers) + - [Tools & @flo_tool Decorator](#tools--flo_tool-decorator) + - [Variables System](#variables-system) + - [Document Processing](#document-processing) + - [Output Formatting](#output-formatting) + - [Error Handling](#error-handling) - [πŸ”„ Agent Orchestration with Arium](#-agent-orchestration-with-arium) - - [🌟 Key Features](#-key-features) - - [Quick Start: Simple Agent Chain](#quick-start-simple-agent-chain) - - [Advanced: Conditional Routing](#advanced-conditional-routing) - - [Agent + Tool Workflows](#agent--tool-workflows) - - [Workflow Visualization](#workflow-visualization) - - [Memory and Context Sharing](#memory-and-context-sharing) - - [πŸ“Š Use Cases for Arium](#-use-cases-for-arium) - - [Builder Pattern Benefits](#builder-pattern-benefits) - - [πŸ“„ YAML-Based Arium Workflows](#-yaml-based-arium-workflows) - - [🧠 LLM-Powered Routers in YAML (NEW!)](#-llm-powered-routers-in-yaml-new) - - [πŸ”„ ReflectionRouter: Structured Reflection Workflows (NEW!)](#-reflectionrouter-structured-reflection-workflows-new) - - [πŸ”„ PlanExecuteRouter: Cursor-Style Plan-and-Execute Workflows (NEW!)](#-planexecuterouter-cursor-style-plan-and-execute-workflows-new) -- [πŸ“– Documentation](#-documentation) + - [Simple Agent Chains](#simple-agent-chains) + - [Conditional Routing](#conditional-routing) + - [YAML-Based Workflows](#yaml-based-workflows) + - [LLM-Powered Routers](#llm-powered-routers) + - [ReflectionRouter & PlanExecuteRouter](#reflectionrouter--planexecuterouter) +- [πŸ“Š OpenTelemetry Integration](#-opentelemetry-integration) +- [πŸ“š Examples & Documentation](#-examples--documentation) - [🌟 Why Flo AI?](#-why-flo-ai) -- [🎯 Use Cases](#-use-cases) - [🀝 Contributing](#-contributing) -- [πŸ“œ 
License](#-license) -- [πŸ™ Acknowledgments](#-acknowledgments) ## πŸš€ Quick Start @@ -194,18 +84,16 @@ pip install flo-ai poetry add flo-ai ``` -### Create Your First AI Agent in 30 seconds +### Your First Agent (30 seconds) ```python import asyncio -from typing import Any from flo_ai.builder.agent_builder import AgentBuilder from flo_ai.llm import OpenAI -from flo_ai.models.agent import Agent -async def main() -> None: +async def main(): # Create a simple conversational agent - agent: Agent = ( + agent = ( AgentBuilder() .with_name('Math Tutor') .with_prompt('You are a helpful math tutor.') @@ -213,2338 +101,456 @@ async def main() -> None: .build() ) - response: Any = await agent.run('What is the formula for the area of a circle?') + response = await agent.run('What is the formula for the area of a circle?') print(f'Response: {response}') asyncio.run(main()) ``` -### Create a Tool-Using Agent +### Tool-Using Agent ```python import asyncio -from typing import Any, Dict, List, Union from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.tool.base_tool import Tool -from flo_ai.models.base_agent import ReasoningPattern -from flo_ai.models.agent import Agent +from flo_ai.tool import flo_tool from flo_ai.llm import Anthropic +@flo_tool(description="Perform mathematical calculations") async def calculate(operation: str, x: float, y: float) -> float: - if operation == 'add': - return x + y - elif operation == 'multiply': - return x * y - raise ValueError(f'Unknown operation: {operation}') - -# Define a calculator tool -calculator_tool: Tool = Tool( - name='calculate', - description='Perform basic calculations', - function=calculate, - parameters={ - 'operation': { - 'type': 'string', - 'description': 'The operation to perform (add or multiply)', - }, - 'x': {'type': 'number', 'description': 'First number'}, - 'y': {'type': 'number', 'description': 'Second number'}, - }, -) + """Calculate mathematical operations between two numbers.""" + operations = { + 'add': lambda: x + y, + 'subtract': lambda: x - y, + 'multiply': lambda: x * y, + 'divide': lambda: x / y if y != 0 else 0, + } + return operations.get(operation, lambda: 0)() -# Create a tool-using agent with Claude -agent: Agent = ( +async def main(): + agent = ( AgentBuilder() .with_name('Calculator Assistant') .with_prompt('You are a math assistant that can perform calculations.') .with_llm(Anthropic(model='claude-3-5-sonnet-20240620')) - .with_tools([calculator_tool]) - .with_reasoning(ReasoningPattern.REACT) - .with_retries(2) + .with_tools([calculate.tool]) .build() ) -response: Any = await agent.run('Calculate 5 plus 3') -print(f'Response: {response}') + response = await agent.run('Calculate 5 plus 3') + print(f'Response: {response}') + +asyncio.run(main()) ``` -### Create an Agent with Structured Output +### Structured Output Agent ```python import asyncio -from typing import Any, Dict +from pydantic import BaseModel, Field from flo_ai.builder.agent_builder import AgentBuilder from flo_ai.llm import OpenAI -from flo_ai.models.agent import Agent -# Define output schema for structured responses -math_schema: Dict[str, Any] = { - 'type': 'object', - 'properties': { - 'solution': {'type': 'string', 'description': 'The step-by-step solution'}, - 'answer': {'type': 'string', 'description': 'The final answer'}, - }, - 'required': ['solution', 'answer'], -} +class MathSolution(BaseModel): + solution: str = Field(description="Step-by-step solution") + answer: str = Field(description="Final answer") + confidence: float = 
Field(description="Confidence level (0-1)") -# Create an agent with structured output -agent: Agent = ( +async def main(): + agent = ( AgentBuilder() - .with_name('Structured Math Solver') - .with_prompt('You are a math problem solver that provides structured solutions.') + .with_name('Math Solver') .with_llm(OpenAI(model='gpt-4o')) - .with_output_schema(math_schema) + .with_output_schema(MathSolution) .build() ) -response: Any = await agent.run('Solve: 2x + 5 = 15') -print(f'Structured Response: {response}') + response = await agent.run('Solve: 2x + 5 = 15') + print(f'Structured Response: {response}') + +asyncio.run(main()) ``` -## πŸ“ YAML Configuration +## 🎨 Flo AI Studio - Visual Workflow Designer -Define your agents using YAML for easy configuration and deployment: +**Create AI workflows visually with our powerful React-based studio!** -```yaml -metadata: - name: email-summary-flo - version: 1.0.0 - description: "Agent for analyzing email threads" -agent: - name: EmailSummaryAgent - role: Email communication expert - model: - provider: openai - name: gpt-4o-mini - settings: - temperature: 0 - max_retries: 3 - reasoning_pattern: DIRECT - job: > - You are given an email thread between a customer and a support agent. - Your job is to analyze the behavior, sentiment, and communication style. - parser: - name: EmailSummary - fields: - - name: sender_type - type: literal - description: "Who sent the latest email" - values: - - value: customer - description: "Latest email was sent by customer" - - value: agent - description: "Latest email was sent by support agent" - - name: summary - type: str - description: "A comprehensive summary of the email" - - name: resolution_status - type: literal - description: "Issue resolution status" - values: - - value: resolved - description: "Issue appears resolved" - - value: unresolved - description: "Issue requires attention" -``` +

+ Flo AI Studio - Visual Workflow Designer +

-```python -from typing import Any, List -from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.models.agent import Agent +Flo AI Studio is a modern, intuitive visual editor that allows you to design complex multi-agent workflows through a drag-and-drop interface. Build sophisticated AI systems without writing code, then export them as production-ready YAML configurations. + +### πŸš€ Studio Features -# Create agent from YAML -yaml_config: str = """...""" # Your YAML configuration string -email_thread: List[str] = ["Email thread content..."] +- **🎯 Visual Design**: Drag-and-drop interface for creating agent workflows +- **πŸ€– Agent Management**: Configure AI agents with different roles, models, and tools +- **πŸ”€ Smart Routing**: Visual router configuration for intelligent workflow decisions +- **πŸ“€ YAML Export**: Export workflows as Flo AI-compatible YAML configurations +- **πŸ“₯ YAML Import**: Import existing workflows for further editing +- **βœ… Workflow Validation**: Real-time validation and error checking +- **πŸ”§ Tool Integration**: Connect agents to external tools and APIs +- **πŸ“‹ Template System**: Quick start with pre-built agent and router templates -builder: AgentBuilder = AgentBuilder.from_yaml(yaml_str=yaml_config) -agent: Agent = builder.build() +### πŸƒβ€β™‚οΈ Quick Start with Studio -# Use the agent -result: Any = await agent.run(email_thread) -``` +1. **Start the Studio**: + ```bash + cd studio + pnpm install + pnpm dev + ``` -## πŸ”§ Variables System +2. **Design Your Workflow**: + - Add agents, routers, and tools to the canvas + - Configure their properties and connections + - Test with the built-in validation -Flo AI supports dynamic variable resolution in agent prompts and inputs using `` syntax. Variables are automatically discovered, validated at runtime, and can be shared across multi-agent workflows. +3. **Export & Run**: +```python +from flo_ai.arium import AriumBuilder + + builder = AriumBuilder.from_yaml(yaml_file='your_workflow.yaml') + result = await builder.build_and_run(['Your input here']) + ``` -### ✨ Key Features +## πŸ”§ Core Features -- **πŸ” Automatic Discovery**: Variables are extracted from system prompts and inputs at runtime -- **βœ… Runtime Validation**: Missing variables are reported with detailed error messages -- **🀝 Multi-Agent Support**: Variables can be shared across agent workflows -- **πŸ›‘οΈ JSON-Safe Syntax**: `` format avoids conflicts with JSON content +### LLM Providers -### Basic Usage +Flo AI supports multiple LLM providers with consistent interfaces: ```python -import asyncio -from typing import Any, Dict -from flo_ai.builder.agent_builder import AgentBuilder +# OpenAI from flo_ai.llm import OpenAI -from flo_ai.models.agent import Agent +llm = OpenAI(model='gpt-4o', temperature=0.7) -async def main() -> None: - # Create agent with variables in system prompt - agent: Agent = ( - AgentBuilder() - .with_name('Data Analyst') - .with_prompt('Analyze and focus on . 
Generate insights for .') - .with_llm(OpenAI(model='gpt-4o-mini')) - .build() - ) - - # Define variables at runtime - variables: Dict[str, str] = { - 'dataset_path': '/data/sales_q4_2024.csv', - 'key_metric': 'revenue growth', - 'target_audience': 'executive team' - } - - # Run agent with variable resolution - result: Any = await agent.run( - 'Please provide a comprehensive analysis with actionable recommendations.', - variables=variables - ) - - print(f'Analysis: {result}') +# Anthropic Claude +from flo_ai.llm import Anthropic +llm = Anthropic(model='claude-3-5-sonnet-20240620', temperature=0.7) -asyncio.run(main()) +# Google Gemini +from flo_ai.llm import Gemini +llm = Gemini(model='gemini-2.5-flash', temperature=0.7) + +# Google VertexAI +from flo_ai.llm import VertexAI +llm = VertexAI(model='gemini-2.5-flash', project='your-project') + +# Ollama (Local) +from flo_ai.llm import Ollama +llm = Ollama(model='llama2', base_url='http://localhost:11434') ``` -### Variables in User Input +### Tools & @flo_tool Decorator -Variables can also be used in the user input messages: +Create custom tools easily with the `@flo_tool` decorator: ```python -import asyncio -from typing import Any, Dict -from flo_ai.models.agent import Agent -from flo_ai.llm import OpenAI +from flo_ai.tool import flo_tool -async def input_variables_example() -> None: - agent: Agent = Agent( - name='content_creator', - system_prompt='You are a content creator specializing in .', - llm=OpenAI(model='gpt-4o-mini') - ) - - variables: Dict[str, str] = { - 'content_type': 'technical blog posts', - 'topic': 'machine learning fundamentals', - 'word_count': '1500', - 'target_level': 'intermediate' - } - - # Variables in both system prompt and user input - result: Any = await agent.run( - 'Create a -word article about for readers.', - variables=variables - ) - - print(f'Content: {result}') +@flo_tool(description="Get current weather for a city") +async def get_weather(city: str, country: str = None) -> str: + """Get weather information for a specific city.""" + # Your weather API implementation + return f"Weather in {city}: sunny, 25Β°C" -asyncio.run(input_variables_example()) +# Use in agent + agent = ( + AgentBuilder() + .with_name('Weather Assistant') + .with_llm(OpenAI(model='gpt-4o-mini')) + .with_tools([get_weather.tool]) + .build() + ) ``` -### Multi-Agent Variable Sharing +### Variables System -Variables can be shared and passed between agents in workflows: +Dynamic variable resolution in agent prompts using `` syntax: ```python -import asyncio -from typing import Any, Dict, List -from flo_ai.arium import AriumBuilder -from flo_ai.models.agent import Agent -from flo_ai.llm import OpenAI +# Create agent with variables +agent = ( + AgentBuilder() + .with_name('Data Analyst') + .with_prompt('Analyze and focus on . 
Generate insights for .') + .with_llm(OpenAI(model='gpt-4o-mini')) + .build() +) -async def multi_agent_variables() -> List[Any]: - llm: OpenAI = OpenAI(model='gpt-4o-mini') - - # Agent 1: Research phase - researcher: Agent = Agent( - name='researcher', - system_prompt='Research and focus on analysis.', - llm=llm - ) - - # Agent 2: Writing phase - writer: Agent = Agent( - name='writer', - system_prompt='Write a based on the research for .', - llm=llm - ) - - # Agent 3: Review phase - reviewer: Agent = Agent( - name='reviewer', - system_prompt='Review the for and provide feedback.', - llm=llm - ) - - # Shared variables across all agents - shared_variables: Dict[str, str] = { - 'research_topic': 'sustainable energy solutions', - 'research_depth': 'comprehensive', - 'document_type': 'white paper', - 'target_audience': 'policy makers', - 'review_criteria': 'accuracy and policy relevance' - } - - # Run multi-agent workflow with shared variables - result: List[Any] = await ( - AriumBuilder() - .add_agents([researcher, writer, reviewer]) - .start_with(researcher) - .connect(researcher, writer) - .connect(writer, reviewer) - .end_with(reviewer) - .build_and_run( - ['Begin comprehensive research and document creation process'], - variables=shared_variables - ) - ) - - return result +# Define variables at runtime +variables = { + 'dataset_path': '/data/sales_q4_2024.csv', + 'key_metric': 'revenue growth', + 'target_audience': 'executive team' +} -asyncio.run(multi_agent_variables()) +result = await agent.run( + 'Please provide a comprehensive analysis with actionable recommendations.', + variables=variables +) ``` -### YAML Configuration with Variables +### Document Processing -Variables work seamlessly with YAML-based agent configuration: - -```yaml -metadata: - name: personalized-assistant - version: 1.0.0 - description: "Personalized assistant with variable support" -agent: - name: PersonalizedAssistant - kind: llm - role: assistant specialized in - model: - provider: openai - name: gpt-4o-mini - settings: - temperature: 0.3 - max_retries: 2 - reasoning_pattern: DIRECT - job: > - You are a focused on . - Your expertise includes and you should - tailor responses for users. - Always consider in your recommendations. 
-``` +Process PDF and TXT documents with AI agents: ```python -import asyncio -from typing import Any, Dict -from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.models.agent import Agent +from flo_ai.models.document import DocumentMessage, DocumentType -async def yaml_with_variables() -> None: - yaml_config: str = """...""" # Your YAML configuration - - # Variables for YAML agent - variables: Dict[str, str] = { - 'user_role': 'data scientist', - 'domain_expertise': 'machine learning and statistical analysis', - 'primary_objective': 'deriving actionable insights from data', - 'experience_level': 'senior', - 'priority_constraints': 'computational efficiency and model interpretability' - } - - # Create agent from YAML with variables - builder: AgentBuilder = AgentBuilder.from_yaml(yaml_str=yaml_config) - agent: Agent = builder.build() - - result: Any = await agent.run( - 'Help me design an ML pipeline for with ', - variables={ - **variables, - 'use_case': 'customer churn prediction', - 'data_constraints': 'limited labeled data' - } + # Create document message + document = DocumentMessage( + document_type=DocumentType.PDF, + document_file_path='business_report.pdf' ) - print(f'ML Pipeline Advice: {result}') +# Process with agent +agent = ( + AgentBuilder() + .with_name('Document Analyzer') + .with_prompt('Analyze the provided document and extract key insights.') + .with_llm(OpenAI(model='gpt-4o-mini')) + .build() +) -asyncio.run(yaml_with_variables()) + result = await agent.run([document]) ``` -### Error Handling and Validation +### Output Formatting -The variables system provides comprehensive error reporting for missing or invalid variables: +Use Pydantic models for structured outputs: ```python -import asyncio -from typing import Any, Dict -from flo_ai.models.agent import Agent -from flo_ai.llm import OpenAI +from pydantic import BaseModel, Field -async def variable_validation_example() -> None: - agent: Agent = Agent( - name='validator_example', - system_prompt='Process and for analysis.', - llm=OpenAI(model='gpt-4o-mini') - ) - - # Incomplete variables (missing 'another_param') - incomplete_variables: Dict[str, str] = { - 'required_param': 'dataset.csv' - # 'another_param' is missing - } - - try: - result: Any = await agent.run( - 'Analyze the data in ', - variables=incomplete_variables # Missing 'another_param' and 'data_source' - ) - except ValueError as e: - print(f'Variable validation error: {e}') - # Error will list all missing variables with their locations - -asyncio.run(variable_validation_example()) -``` +class AnalysisResult(BaseModel): + summary: str = Field(description="Executive summary") + key_findings: list = Field(description="List of key findings") + recommendations: list = Field(description="Actionable recommendations") -### Best Practices +agent = ( + AgentBuilder() + .with_name('Business Analyst') + .with_llm(OpenAI(model='gpt-4o')) + .with_output_schema(AnalysisResult) + .build() +) +``` -1. **Descriptive Variable Names**: Use clear, descriptive names like `` instead of `` -2. **Consistent Naming**: Use consistent variable names across related agents and workflows -3. **Validation**: Always test your variable resolution before production deployment -4. **Documentation**: Document expected variables in your agent configurations +### Error Handling -The variables system makes Flo AI agents highly reusable and configurable, enabling you to create flexible AI workflows that adapt to different contexts and requirements. 
+Built-in retry mechanisms and error recovery: -## πŸ“„ Document Processing +```python +agent = ( + AgentBuilder() + .with_name('Robust Agent') + .with_llm(OpenAI(model='gpt-4o')) + .with_retries(3) # Retry up to 3 times on failure + .build() +) +``` -Flo AI provides powerful document processing capabilities that allow agents to analyze and work with various document formats. The framework supports PDF and TXT documents with an extensible architecture for easy addition of new formats. +## πŸ”„ Agent Orchestration with Arium -### ✨ Key Features +Arium is Flo AI's powerful workflow orchestration engine for creating complex multi-agent workflows. -- **πŸ“„ Multi-Format Support**: Process PDF and TXT documents seamlessly -- **πŸ”„ Multiple Input Methods**: File paths, bytes data, or base64 encoded content -- **🧠 LLM Integration**: Direct document input to AI agents for analysis -- **⚑ Async Processing**: Efficient document handling with async/await support -- **πŸ”§ Extensible Architecture**: Easy to add support for new document types -- **πŸ“Š Rich Metadata**: Extract page counts, processing methods, and document statistics - -### Basic Document Processing - -```python -import asyncio -from flo_ai.builder.agent_builder import AgentBuilder -from flo_ai.llm import OpenAI -from flo_ai.models.document import DocumentMessage, DocumentType - -async def basic_document_analysis(): - # Create document message from file path - document = DocumentMessage( - document_type=DocumentType.PDF, - document_file_path='path/to/your/document.pdf' - ) - - # Create document analysis agent - agent = ( - AgentBuilder() - .with_name('Document Analyzer') - .with_prompt('Analyze the provided document and extract key insights, themes, and important information.') - .with_llm(OpenAI(model='gpt-4o-mini')) - .build() - ) - - # Process document with agent - result = await agent.run([document]) - print(f'Analysis: {result}') - -asyncio.run(basic_document_analysis()) -``` - -### Multiple Input Methods - -Flo AI supports three ways to provide document content: - -#### 1. File Path (Recommended) -```python -document = DocumentMessage( - document_type=DocumentType.PDF, - document_file_path='/path/to/document.pdf' -) -``` - -#### 2. Bytes Data -```python -# Read file as bytes -with open('document.pdf', 'rb') as f: - pdf_bytes = f.read() - -document = DocumentMessage( - document_type=DocumentType.PDF, - document_bytes=pdf_bytes, - mime_type='application/pdf' -) -``` - -#### 3. Base64 Encoded -```python -import base64 - -# Encode file to base64 -with open('document.pdf', 'rb') as f: - pdf_base64 = base64.b64encode(f.read()).decode('utf-8') - -document = DocumentMessage( - document_type=DocumentType.PDF, - document_base64=pdf_base64, - mime_type='application/pdf' -) -``` - -### Document Processing in Workflows - -Documents can be seamlessly integrated into Arium workflows: - -```python -import asyncio -from flo_ai.arium import AriumBuilder -from flo_ai.models.document import DocumentMessage, DocumentType - -async def document_workflow(): - # Create document message - document = DocumentMessage( - document_type=DocumentType.PDF, - document_file_path='business_report.pdf' - ) - - # Define workflow YAML - workflow_yaml = """ - metadata: - name: document-analysis-workflow - version: 1.0.0 - description: "Multi-agent document analysis pipeline" - - arium: - agents: - - name: intake_agent - role: "Document Intake Specialist" - job: "Process and assess document content for analysis." 
- model: - provider: openai - name: gpt-4o-mini - - - name: content_analyzer - role: "Content Analyst" - job: "Analyze document content for themes, insights, and key information." - model: - provider: openai - name: gpt-4o-mini - - - name: summary_generator - role: "Summary Writer" - job: "Create comprehensive summaries of analyzed content." - model: - provider: openai - name: gpt-4o-mini - - workflow: - start: intake_agent - edges: - - from: intake_agent - to: [content_analyzer] - - from: content_analyzer - to: [summary_generator] - end: [summary_generator] - """ - - # Run workflow with document - result = await ( - AriumBuilder() - .from_yaml(yaml_str=workflow_yaml) - .build_and_run([document, 'Analyze this business report and provide insights']) - ) - - return result - -asyncio.run(document_workflow()) -``` - -### Advanced Document Processing - -#### Custom Document Metadata -```python -document = DocumentMessage( - document_type=DocumentType.PDF, - document_file_path='report.pdf', - metadata={ - 'source': 'quarterly_reports', - 'department': 'finance', - 'priority': 'high', - 'tags': ['financial', 'q4-2024'] - } -) -``` - -#### Processing Different Document Types -```python -# PDF Document -pdf_doc = DocumentMessage( - document_type=DocumentType.PDF, - document_file_path='presentation.pdf' -) - -# Text Document -txt_doc = DocumentMessage( - document_type=DocumentType.TXT, - document_file_path='notes.txt' -) - -# Process both with the same agent -agent = AgentBuilder().with_name('Multi-Format Analyzer').build() - -pdf_result = await agent.run([pdf_doc]) -txt_result = await agent.run([txt_doc]) -``` - -### Document Processing Tools - -Create custom tools for document operations: - -```python -from flo_ai.tool import flo_tool -from flo_ai.models.document import DocumentMessage, DocumentType - -@flo_tool(description="Extract key information from documents") -async def extract_document_info(document_path: str, doc_type: str) -> str: - """Extract key information from a document.""" - document_type = DocumentType.PDF if doc_type.lower() == 'pdf' else DocumentType.TXT - - document = DocumentMessage( - document_type=document_type, - document_file_path=document_path - ) - - # Use document processing agent - agent = AgentBuilder().with_name('Info Extractor').build() - result = await agent.run([document]) - - return result - -# Use in agent -agent = ( - AgentBuilder() - .with_name('Document Processor') - .with_tools([extract_document_info.tool]) - .build() -) -``` - -### Error Handling - -```python -from flo_ai.utils.document_processor import DocumentProcessingError - -try: - document = DocumentMessage( - document_type=DocumentType.PDF, - document_file_path='nonexistent.pdf' - ) - result = await agent.run([document]) -except DocumentProcessingError as e: - print(f'Document processing failed: {e}') -except FileNotFoundError: - print('Document file not found') -``` - -### Supported Document Types - -| Type | Extension | Description | Processing Method | -|------|-----------|-------------|-------------------| -| PDF | `.pdf` | Portable Document Format | PyMuPDF4LLM (LLM-optimized) | -| TXT | `.txt` | Plain text files | UTF-8 with encoding detection | - -### Best Practices - -1. **File Validation**: Always check if files exist before processing -2. **Memory Management**: Use file paths for large documents to avoid memory issues -3. **Error Handling**: Implement proper error handling for document processing failures -4. **Metadata**: Add relevant metadata to help agents understand document context -5. 
-
-### Best Practices
-
-1. **File Validation**: Always check if files exist before processing
-2. **Memory Management**: Use file paths for large documents to avoid memory issues
-3. **Error Handling**: Implement proper error handling for document processing failures
-4. **Metadata**: Add relevant metadata to help agents understand document context
-5. **Format Selection**: Choose the most appropriate input method for your use case
-
-### Use Cases
-
-- πŸ“Š **Document Analysis**: Extract insights from reports, papers, and documents
-- πŸ“ **Content Summarization**: Create summaries of long documents
-- πŸ” **Information Extraction**: Pull specific data from structured documents
-- πŸ“‹ **Document Classification**: Categorize documents based on content
-- πŸ€– **Multi-Agent Workflows**: Process documents through specialized agent pipelines
-- πŸ“ˆ **Business Intelligence**: Analyze business documents for insights and trends
-
-The document processing system makes Flo AI well suited to real-world applications that work with a variety of document formats, enabling AI workflows that understand and process complex document content.
-
-## πŸ› οΈ Tools
-
-Create custom tools easily with async support:
-
-```python
-from flo_ai.tool.base_tool import Tool
-from flo_ai.builder.agent_builder import AgentBuilder
-from flo_ai.llm import OpenAI
-from flo_ai.models.agent import Agent
-
-async def weather_lookup(city: str) -> str:
-    # Your weather API call here
-    return f"Weather in {city}: Sunny, 25Β°C"
-
-weather_tool: Tool = Tool(
-    name='weather_lookup',
-    description='Get current weather for a city',
-    function=weather_lookup,
-    parameters={
-        'city': {
-            'type': 'string',
-            'description': 'City name to get weather for'
-        }
-    }
-)
-
-# Add to your agent
-agent: Agent = (
-    AgentBuilder()
-    .with_name('Weather Assistant')
-    .with_llm(OpenAI(model='gpt-4o-mini'))
-    .with_tools([weather_tool])
-    .build()
-)
-```
-
-### 🎯 @flo_tool Decorator
-
-The `@flo_tool` decorator automatically converts any Python function into a `Tool` object with minimal boilerplate:
-
-```python
-from typing import Callable, Dict, Union
-from flo_ai.tool import flo_tool
-from flo_ai.builder.agent_builder import AgentBuilder
-from flo_ai.llm import OpenAI
-from flo_ai.models.agent import Agent
-
-@flo_tool(
-    description="Perform mathematical calculations",
-    parameter_descriptions={
-        "operation": "The operation to perform (add, subtract, multiply, divide)",
-        "x": "First number",
-        "y": "Second number"
-    }
-)
-async def calculate(operation: str, x: float, y: float) -> Union[float, str]:
-    """Calculate mathematical operations between two numbers."""
-    operations: Dict[str, Callable] = {
-        'add': lambda: x + y,
-        'subtract': lambda: x - y,
-        'multiply': lambda: x * y,
-        'divide': lambda: x / y if y != 0 else 'Cannot divide by zero',
-    }
-    if operation not in operations:
-        raise ValueError(f'Unknown operation: {operation}')
-    return operations[operation]()
-
-# Function can be called normally
-result: Union[float, str] = await calculate("add", 5, 3)  # Returns 8
-
-# Tool object is automatically available
-agent: Agent = (
-    AgentBuilder()
-    .with_name('Calculator Agent')
-    .with_llm(OpenAI(model='gpt-4o-mini'))
-    .with_tools([calculate.tool])  # Access the tool via .tool attribute
-    .build()
-)
-```
-
-**Key Benefits:**
-- βœ… **Automatic parameter extraction** from type hints
-- βœ… **Flexible descriptions** via docstrings or custom descriptions
-- βœ… **Type conversion** from Python types to JSON schema
-- βœ… **Dual functionality** - functions work normally AND as tools (see the sketch below)
-- βœ… **Async support** for both sync and async functions
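-
-The dual nature is easy to verify with the `calculate` function above (a short sketch; that the `Tool` object exposes a `.name` attribute is an assumption here):
-
-```python
-import asyncio
-
-async def demo() -> None:
-    # Called like a normal function:
-    print(await calculate('multiply', 6.0, 7.0))  # 42.0
-
-    # And available as a Tool object for agents:
-    print(calculate.tool.name)  # presumably 'calculate'
-
-asyncio.run(demo())
-```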
-
-**Simple Usage:**
-```python
-from flo_ai.tool import flo_tool
-
-@flo_tool()
-async def convert_units(value: float, from_unit: str, to_unit: str) -> str:
-    """Convert between different units (km/miles, kg/lbs, celsius/fahrenheit)."""
-    # Implementation here
-    result: float = 0.0  # Your conversion logic here
-    return f"{value} {from_unit} = {result} {to_unit}"
-
-# Tool is automatically available as convert_units.tool
-```
-
-**With Custom Metadata:**
-```python
-from typing import Optional
-from flo_ai.tool import flo_tool
-
-@flo_tool(
-    name="weather_checker",
-    description="Get current weather information for a city",
-    parameter_descriptions={
-        "city": "The city to get weather for",
-        "country": "The country (optional)",
-    }
-)
-async def get_weather(city: str, country: Optional[str] = None) -> str:
-    """Get weather information for a specific city."""
-    return f"Weather in {city}: sunny"
-```
-
-> πŸ“– **For detailed documentation on the `@flo_tool` decorator, see [TOOLS.md](TOOLS.md)**
-
-## 🧠 Reasoning Patterns
-
-Flo AI supports multiple reasoning patterns:
-
-- **DIRECT**: Simple question-answer without step-by-step reasoning
-- **COT (Chain of Thought)**: Step-by-step reasoning before providing the answer
-- **REACT**: Reasoning and action cycles for tool-using agents
-
-```python
-from flo_ai.models.base_agent import ReasoningPattern
-from flo_ai.builder.agent_builder import AgentBuilder
-from flo_ai.llm import OpenAI
-from flo_ai.models.agent import Agent
-
-agent: Agent = (
-    AgentBuilder()
-    .with_name('Reasoning Agent')
-    .with_llm(OpenAI(model='gpt-4o'))
-    .with_reasoning(ReasoningPattern.COT)  # or REACT, DIRECT
-    .build()
-)
-```
-
-## πŸ”§ LLM Providers
-
-### OpenAI
-```python
-from flo_ai.llm import OpenAI
-
-llm: OpenAI = OpenAI(
-    model='gpt-4o',
-    temperature=0.7,
-    api_key='your-api-key'  # or set OPENAI_API_KEY env var
-)
-```
-
-### Anthropic Claude
-```python
-from flo_ai.llm import Anthropic
-
-llm: Anthropic = Anthropic(
-    model='claude-3-5-sonnet-20240620',
-    temperature=0.7,
-    api_key='your-api-key'  # or set ANTHROPIC_API_KEY env var
-)
-```
-
-### Google Gemini
-```python
-from flo_ai.llm import Gemini
-
-llm: Gemini = Gemini(
-    model='gemini-2.5-flash',  # or gemini-2.5-pro
-    temperature=0.7,
-    api_key='your-api-key'  # or set GOOGLE_API_KEY env var
-)
-```
-
-### Google VertexAI
-```python
-from flo_ai.llm import VertexAI
-
-llm: VertexAI = VertexAI(
-    model='gemini-2.5-flash',  # or gemini-2.5-pro
-    temperature=0.7,
-    project='your-gcp-project-id',  # or set GOOGLE_CLOUD_PROJECT env var
-    location='us-central1'  # or set GOOGLE_CLOUD_LOCATION env var
-)
-```
-
-**Prerequisites for VertexAI:**
-- Set up a Google Cloud project with the Vertex AI API enabled
-- Configure authentication: `gcloud auth application-default login`
-- Set environment variables: `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION`
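-
-For example, the two environment variables can be set from Python before the client is created (a minimal sketch; the values are placeholders for your own project settings):
-
-```python
-import os
-
-from flo_ai.llm import VertexAI
-
-# Placeholder values - substitute your real project ID and region
-os.environ['GOOGLE_CLOUD_PROJECT'] = 'your-gcp-project-id'
-os.environ['GOOGLE_CLOUD_LOCATION'] = 'us-central1'
-
-llm = VertexAI(model='gemini-2.5-flash', temperature=0.7)
-```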
-
-### Ollama (Local)
-```python
-from flo_ai.llm import Ollama
-
-llm: Ollama = Ollama(
-    model='llama2',
-    base_url='http://localhost:11434'
-)
-```
-
-### Streaming Support in LLM
-Streaming lets the LLM return its output piece by piece, token by token, as it is generated, instead of waiting for the entire response to complete before anything is sent to the user.
-
-Streaming support has been added to all the LLM providers. An example of streaming with Gemini is shown below:
-```python
-import asyncio
-from typing import List
-
-from flo_ai.llm import Gemini
-
-llm: Gemini = Gemini(
-    model='gemini-2.5-flash',  # or gemini-2.5-pro
-    temperature=0.7,
-    api_key='your-api-key'  # or set GOOGLE_API_KEY env var
-)
-
-async def stream_response(max_chars: int = 200) -> str:
-    messages = [{'role': 'user', 'content': 'Stream a short sentence.'}]
-    chunks: List[str] = []
-    async for chunk in llm.stream(messages=messages):
-        text = chunk.get('content', '')
-        if text:
-            chunks.append(text)
-        if len(''.join(chunks)) >= max_chars:
-            break
-    return ''.join(chunks)
-
-print(asyncio.run(stream_response()))
-```
-
-## πŸ“Š Output Formatting
-
-Use Pydantic models or JSON schemas for structured outputs:
-
-```python
-from pydantic import BaseModel, Field
-from flo_ai.builder.agent_builder import AgentBuilder
-from flo_ai.llm import OpenAI
-from flo_ai.models.agent import Agent
-
-class MathSolution(BaseModel):
-    solution: str = Field(description="Step-by-step solution")
-    answer: str = Field(description="Final answer")
-    confidence: float = Field(description="Confidence level (0-1)")
-
-agent: Agent = (
-    AgentBuilder()
-    .with_name('Math Solver')
-    .with_llm(OpenAI(model='gpt-4o'))
-    .with_output_schema(MathSolution)
-    .build()
-)
-```
-
-## πŸ”„ Error Handling
-
-Built-in retry mechanisms and error recovery:
-
-```python
-from flo_ai.builder.agent_builder import AgentBuilder
-from flo_ai.llm import OpenAI
-from flo_ai.models.agent import Agent
-
-agent: Agent = (
-    AgentBuilder()
-    .with_name('Robust Agent')
-    .with_llm(OpenAI(model='gpt-4o'))
-    .with_retries(3)  # Retry up to 3 times on failure
-    .build()
-)
-```
-
-## πŸ“š Examples
-
-Check out the `examples/` directory for comprehensive examples:
-
-- `agent_builder_usage.py` - Basic agent creation patterns
-- `yaml_agent_example.py` - YAML-based agent configuration
-- `output_formatter.py` - Structured output examples
-- `multi_tool_example.py` - Multi-tool agent examples
-- `cot_agent_example.py` - Chain of Thought reasoning
-- `usage.py` and `usage_claude.py` - Provider-specific examples
-- `vertexai_agent_example.py` - Google VertexAI integration examples
-- `ollama_agent_example.py` - Local Ollama model examples
-- `document_processing_example.py` - Document processing with PDF and TXT files
-
-## πŸš€ Advanced Features
-
-### Custom Tool Creation
-```python
-from typing import Dict
-from flo_ai.tool.base_tool import Tool
-
-async def custom_function(param1: str, param2: int) -> Dict[str, str]:
-    # Your async logic here
-    return {"result": f"Processed {param1} with {param2}"}
-
-custom_tool: Tool = Tool(
-    name='custom_function',
-    description='A custom async tool',
-    function=custom_function,
-    parameters={
-        'param1': {'type': 'string', 'description': 'First parameter'},
-        'param2': {'type': 'integer', 'description': 'Second parameter'}
-    }
-)
-```
-
-### YAML Parser Integration
-```python
-from typing import Dict, Any
-from flo_ai.formatter.yaml_format_parser import FloYamlParser
-from flo_ai.builder.agent_builder import AgentBuilder
-from flo_ai.llm import OpenAI
-from flo_ai.models.agent import Agent
-
-# Create parser from YAML definition
-yaml_config: Dict[str, Any] = {}  # Your YAML configuration dict
-parser: FloYamlParser = FloYamlParser.create(yaml_dict=yaml_config)
-output_schema: Any = parser.get_format()
-
-agent: Agent = (
-    AgentBuilder()
-    .with_name('YAML Configured Agent')
-    .with_llm(OpenAI(model='gpt-4o'))
-    .with_output_schema(output_schema)
-    .build()
-)
-```
-
-## πŸ”„ Agent Orchestration with Arium
-
-Arium is Flo AI's powerful workflow orchestration engine that allows you to create
complex multi-agent workflows with ease. Think of it as a conductor for your AI agents, coordinating their interactions and data flow. - -### 🌟 Key Features - -- **πŸ”— Multi-Agent Workflows**: Orchestrate multiple agents working together -- **🎯 Flexible Routing**: Route between agents based on context and conditions -- **🧠 LLM Routers**: Intelligent routing powered by LLMs, define routing logic in YAML -- **πŸ’Ύ Shared Memory**: Agents share conversation history and context -- **πŸ“Š Visual Workflows**: Generate flow diagrams of your agent interactions -- **⚑ Builder Pattern**: Fluent API for easy workflow construction -- **πŸ”„ Reusable Workflows**: Build once, run multiple times with different inputs - -### Quick Start: Simple Agent Chain - -```python -import asyncio -from typing import Any, List -from flo_ai.arium import AriumBuilder -from flo_ai.models.agent import Agent -from flo_ai.llm.openai_llm import OpenAI - -async def simple_chain() -> List[Any]: - llm: OpenAI = OpenAI(model='gpt-4o-mini') - - # Create agents - analyst: Agent = Agent( - name='content_analyst', - system_prompt='Analyze the input and extract key insights.', - llm=llm - ) - - summarizer: Agent = Agent( - name='summarizer', - system_prompt='Create a concise summary based on the analysis.', - llm=llm - ) - - # Build and run workflow - result: List[Any] = await ( - AriumBuilder() - .add_agents([analyst, summarizer]) - .start_with(analyst) - .connect(analyst, summarizer) # analyst β†’ summarizer - .end_with(summarizer) - .build_and_run(["Analyze this complex business report..."]) - ) - - return result - -asyncio.run(simple_chain()) -``` - -### Advanced: Conditional Routing - -```python -import asyncio -from typing import Any, List -from flo_ai.arium import AriumBuilder -from flo_ai.models.agent import Agent -from flo_ai.llm.openai_llm import OpenAI -from flo_ai.arium.memory import BaseMemory - -async def conditional_workflow() -> List[Any]: - llm: OpenAI = OpenAI(model='gpt-4o-mini') - - # Create specialized agents - classifier: Agent = Agent( - name='classifier', - system_prompt='Classify the input as either "technical" or "business".', - llm=llm - ) - - tech_specialist: Agent = Agent( - name='tech_specialist', - system_prompt='Provide technical analysis and solutions.', - llm=llm - ) - - business_specialist: Agent = Agent( - name='business_specialist', - system_prompt='Provide business analysis and recommendations.', - llm=llm - ) - - final_agent: Agent = Agent( - name='final_reviewer', - system_prompt='Provide final review and conclusions.', - llm=llm - ) - - # Define routing logic - def route_by_type(memory: BaseMemory) -> str: - """Route based on classification result""" - messages: List[Any] = memory.get() - last_message: str = str(messages[-1]) if messages else "" - - if "technical" in last_message.lower(): - return "tech_specialist" - else: - return "business_specialist" - - # Build workflow with conditional routing - result: List[Any] = await ( - AriumBuilder() - .add_agents([classifier, tech_specialist, business_specialist, final_agent]) - .start_with(classifier) - .add_edge(classifier, [tech_specialist, business_specialist], route_by_type) - .connect(tech_specialist, final_agent) - .connect(business_specialist, final_agent) - .end_with(final_agent) - .build_and_run(["How can we optimize our database performance?"]) - ) - - return result -``` - -### Agent + Tool Workflows - -```python -import asyncio -from typing import Any, List -from flo_ai.tool import flo_tool -from flo_ai.arium import AriumBuilder -from 
flo_ai.models.agent import Agent -from flo_ai.llm.openai_llm import OpenAI - -@flo_tool(description="Search for relevant information") -async def search_tool(query: str) -> str: - # Your search implementation - return f"Search results for: {query}" - -@flo_tool(description="Perform calculations") -async def calculator(expression: str) -> float: - # Your calculation implementation - return eval(expression) # Note: Use safely in production - -async def agent_tool_workflow() -> List[Any]: - llm: OpenAI = OpenAI(model='gpt-4o-mini') - - research_agent: Agent = Agent( - name='researcher', - system_prompt='Research topics and gather information.', - llm=llm - ) - - analyst_agent: Agent = Agent( - name='analyst', - system_prompt='Analyze data and provide insights.', - llm=llm - ) - - # Mix agents and tools in workflow - result: List[Any] = await ( - AriumBuilder() - .add_agent(research_agent) - .add_tools([search_tool.tool, calculator.tool]) - .add_agent(analyst_agent) - .start_with(research_agent) - .connect(research_agent, search_tool.tool) - .connect(search_tool.tool, calculator.tool) - .connect(calculator.tool, analyst_agent) - .end_with(analyst_agent) - .build_and_run(["Research market trends for Q4 2024"]) - ) - - return result -``` - -### Workflow Visualization - -```python -from typing import Any, List, Callable, Optional -from flo_ai.arium import AriumBuilder -from flo_ai.arium.arium import Arium -from flo_ai.models.agent import Agent -from flo_ai.tool.base_tool import Tool - -# Assume these are defined elsewhere -agent1: Agent = ... # Your agent definitions -agent2: Agent = ... -agent3: Agent = ... -tool1: Tool = ... # Your tool definitions -tool2: Tool = ... -router_function: Callable = ... # Your router function - -# Build workflow and generate visual diagram -arium: Arium = ( - AriumBuilder() - .add_agents([agent1, agent2, agent3]) - .add_tools([tool1, tool2]) - .start_with(agent1) - .connect(agent1, tool1) - .add_edge(tool1, [agent2, agent3], router_function) - .end_with(agent2) - .end_with(agent3) - .visualize("my_workflow.png", "Customer Service Workflow") # Generates PNG - .build() -) - -# Run the workflow -result: List[Any] = await arium.run(["Customer complaint about billing"]) -``` - -### Memory and Context Sharing - -All agents in an Arium workflow share the same memory, enabling them to build on each other's work: - -```python -from typing import Any, List -from flo_ai.arium import AriumBuilder -from flo_ai.arium.memory import MessageMemory -from flo_ai.arium.arium import Arium -from flo_ai.models.agent import Agent - -# Assume these agents are defined elsewhere -agent1: Agent = ... -agent2: Agent = ... -agent3: Agent = ... 
- -# Custom memory for persistent context -custom_memory: MessageMemory = MessageMemory() - -result: List[Any] = await ( - AriumBuilder() - .with_memory(custom_memory) # Shared across all agents - .add_agents([agent1, agent2, agent3]) - .start_with(agent1) - .connect(agent1, agent2) - .connect(agent2, agent3) - .end_with(agent3) - .build_and_run(["Initial context and instructions"]) -) - -# Build the arium for reuse -arium: Arium = ( - AriumBuilder() - .with_memory(custom_memory) - .add_agents([agent1, agent2, agent3]) - .start_with(agent1) - .connect(agent1, agent2) - .connect(agent2, agent3) - .end_with(agent3) - .build() -) - -# Memory persists and can be reused -result2: List[Any] = await arium.run(["Follow-up question based on previous context"]) -``` - -### πŸ“Š Use Cases for Arium - -- **πŸ“ Content Pipeline**: Research β†’ Writing β†’ Editing β†’ Publishing -- **πŸ” Analysis Workflows**: Data Collection β†’ Processing β†’ Analysis β†’ Reporting -- **🎯 Decision Trees**: Classification β†’ Specialized Processing β†’ Final Decision -- **🀝 Customer Service**: Intent Detection β†’ Specialist Routing β†’ Resolution -- **πŸ§ͺ Research Workflows**: Question Generation β†’ Investigation β†’ Synthesis β†’ Validation -- **πŸ“‹ Document Processing**: Extraction β†’ Validation β†’ Transformation β†’ Storage - -### Builder Pattern Benefits - -The AriumBuilder provides a fluent, intuitive API: - -```python -from typing import Any, List -from flo_ai.arium import AriumBuilder -from flo_ai.arium.arium import Arium -from flo_ai.models.agent import Agent -from flo_ai.tool.base_tool import Tool - -# Assume these are defined elsewhere -agent1: Agent = ... -agent2: Agent = ... -tool1: Tool = ... -inputs: List[str] = ["Your input messages"] - -# All builder methods return self for chaining -workflow: Arium = ( - AriumBuilder() - .add_agent(agent1) # Add components - .add_tool(tool1) - .start_with(agent1) # Define flow - .connect(agent1, tool1) - .end_with(tool1) - .build() # Create Arium instance -) - -# Or build and run in one step -result: List[Any] = await ( - AriumBuilder() - .add_agents([agent1, agent2]) - .start_with(agent1) - .connect(agent1, agent2) - .end_with(agent2) - .build_and_run(inputs) # Build + run together -) -``` - -**Validation Built-in**: The builder automatically validates your workflow: -- βœ… Ensures at least one agent/tool -- βœ… Requires start and end nodes -- βœ… Validates routing functions -- βœ… Checks for unreachable nodes - -### πŸ“„ YAML-Based Arium Workflows - -One of Flo AI's most powerful features is the ability to define entire multi-agent workflows using YAML configuration. This approach makes workflows reproducible, versionable, and easy to modify without changing code. - -#### Simple YAML Workflow - -```yaml -metadata: - name: "content-analysis-workflow" - version: "1.0.0" - description: "Multi-agent content analysis and summarization pipeline" - -arium: - # Define agents inline - agents: - - name: "analyzer" - role: "Content Analyst" - job: "Analyze the input content and extract key insights, themes, and important information." - model: - provider: "openai" - name: "gpt-4o-mini" - settings: - temperature: 0.2 - max_retries: 3 - reasoning_pattern: "COT" - - - name: "summarizer" - role: "Content Summarizer" - job: "Create a concise, actionable summary based on the analysis provided." 
-      model:
-        provider: "anthropic"
-        name: "claude-3-5-sonnet-20240620"
-      settings:
-        temperature: 0.1
-        reasoning_pattern: "DIRECT"
-
-  # Define the workflow
-  workflow:
-    start: "analyzer"
-    edges:
-      - from: "analyzer"
-        to: ["summarizer"]
-    end: ["summarizer"]
-```
-
-```python
-import asyncio
-from typing import Any, List
-from flo_ai.arium import AriumBuilder
-
-async def run_yaml_workflow() -> List[Any]:
-    yaml_config = """..."""  # Your YAML configuration
-
-    # Create workflow from YAML
-    result: List[Any] = await (
-        AriumBuilder()
-        .from_yaml(yaml_str=yaml_config)
-        .build_and_run(["Analyze this quarterly business report..."])
-    )
-
-    return result
-
-asyncio.run(run_yaml_workflow())
-```
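-
-In practice the workflow definition usually lives in its own file; the same builder call can then load it from disk (a minimal sketch, assuming a `workflow.yaml` file next to your script):
-
-```python
-from pathlib import Path
-
-from flo_ai.arium import AriumBuilder
-
-# Read the YAML definition from disk and hand it to the builder as a string
-yaml_config = Path('workflow.yaml').read_text()
-builder = AriumBuilder().from_yaml(yaml_str=yaml_config)
-```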
-
-#### Advanced YAML Workflow with Tools and Routing
-
-```yaml
-metadata:
-  name: "research-workflow"
-  version: "2.0.0"
-  description: "Intelligent research workflow with conditional routing"
-
-arium:
-  # Define agents with tool references
-  agents:
-    - name: "classifier"
-      role: "Content Classifier"
-      job: "Classify input as 'research', 'calculation', or 'analysis' task."
-      model:
-        provider: "openai"
-        name: "gpt-4o-mini"
-      tools: ["web_search"]  # Reference tools provided in Python
-
-    - name: "researcher"
-      role: "Research Specialist"
-      job: "Conduct thorough research and in-depth analysis."
-      model:
-        provider: "anthropic"
-        name: "claude-3-5-sonnet-20240620"
-      tools: ["web_search"]
-      settings:
-        temperature: 0.3
-        reasoning_pattern: "REACT"
-
-    - name: "analyst"
-      role: "Data Analyst"
-      job: "Analyze numerical data and provide insights."
-      model:
-        provider: "openai"
-        name: "gpt-4o"
-      tools: ["calculator", "web_search"]
-      settings:
-        reasoning_pattern: "COT"
-
-    - name: "synthesizer"
-      role: "Information Synthesizer"
-      job: "Combine research and analysis into final recommendations."
-      model:
-        provider: "gemini"
-        name: "gemini-2.5-flash"
-
-  # Complex workflow with conditional routing
-  workflow:
-    start: "classifier"
-    edges:
-      # Conditional routing based on classification
-      - from: "classifier"
-        to: ["researcher", "analyst"]
-        router: "classification_router"  # Router function provided in Python
-
-      # Both specialists feed into synthesizer
-      - from: "researcher"
-        to: ["synthesizer"]
-
-      - from: "analyst"
-        to: ["synthesizer"]
-
-    end: ["synthesizer"]
-```
-
-```python
-import asyncio
-from typing import Any, Callable, Dict, List, Literal
-from flo_ai.arium import AriumBuilder
-from flo_ai.tool.base_tool import Tool
-from flo_ai.arium.memory import BaseMemory
-
-# Define tools in Python (cannot be defined in YAML)
-async def web_search(query: str) -> str:
-    # Your search implementation
-    return f"Search results for: {query}"
-
-async def calculate(expression: str) -> str:
-    # Your calculation implementation
-    try:
-        result = eval(expression)  # Note: Use safely in production
-        return f"Calculation result: {result}"
-    except Exception:
-        return "Invalid expression"
-
-# Create tool objects
-tools: Dict[str, Tool] = {
-    "web_search": Tool(
-        name="web_search",
-        description="Search the web for current information",
-        function=web_search,
-        parameters={
-            "query": {
-                "type": "string",
-                "description": "Search query"
-            }
-        }
-    ),
-    "calculator": Tool(
-        name="calculator",
-        description="Perform mathematical calculations",
-        function=calculate,
-        parameters={
-            "expression": {
-                "type": "string",
-                "description": "Mathematical expression to calculate"
-            }
-        }
-    )
-}
-
-# Define router functions in Python (cannot be defined in YAML)
-def classification_router(memory: BaseMemory) -> Literal["researcher", "analyst"]:
-    """Route based on task classification"""
-    content = str(memory.get()[-1]).lower()
-    if 'research' in content or 'investigate' in content:
-        return 'researcher'
-    elif 'calculate' in content or 'analyze data' in content:
-        return 'analyst'
-    return 'researcher'  # default
-
-routers: Dict[str, Callable] = {
-    "classification_router": classification_router
-}
-
-async def run_workflow() -> List[Any]:
-    yaml_config = """..."""  # Your YAML configuration from above
-
-    # Create workflow with tools and routers provided as Python objects
-    result: List[Any] = await (
-        AriumBuilder()
-        .from_yaml(
-            yaml_str=yaml_config,
-            tools=tools,  # Tools must be provided as Python objects
-            routers=routers  # Routers must be provided as Python functions
-        )
-        .build_and_run(["Research the latest trends in renewable energy"])
-    )
-
-    return result
-```
-
-#### 🧠 LLM-Powered Routers in YAML (NEW!)
-
-One of the most powerful new features is the ability to define **intelligent LLM routers directly in YAML**. No more writing router functions - just describe your routing logic and let the LLM handle the decisions!
- -```yaml -metadata: - name: "intelligent-content-workflow" - version: "1.0.0" - description: "Content creation with intelligent LLM-based routing" - -arium: - agents: - - name: "content_creator" - role: "Content Creator" - job: "Create initial content based on the request" - model: - provider: "openai" - name: "gpt-4o-mini" - - - name: "technical_writer" - role: "Technical Writer" - job: "Refine content for technical accuracy and clarity" - model: - provider: "openai" - name: "gpt-4o-mini" - - - name: "creative_writer" - role: "Creative Writer" - job: "Enhance content with creativity and storytelling" - model: - provider: "openai" - name: "gpt-4o-mini" - - - name: "marketing_writer" - role: "Marketing Writer" - job: "Optimize content for engagement and conversion" - model: - provider: "openai" - name: "gpt-4o-mini" - - # ✨ LLM Router definitions - No code required! - routers: - - name: "content_type_router" - type: "smart" # Uses LLM to make intelligent routing decisions - routing_options: - technical_writer: "Technical content, documentation, tutorials, how-to guides" - creative_writer: "Creative writing, storytelling, fiction, brand narratives" - marketing_writer: "Marketing copy, sales content, landing pages, ad campaigns" - model: - provider: "openai" - name: "gpt-4o-mini" - settings: - temperature: 0.3 - fallback_strategy: "first" - - - name: "task_classifier" - type: "task_classifier" # Keyword-based classification - task_categories: - math_solver: - description: "Mathematical calculations and problem solving" - keywords: ["calculate", "solve", "equation", "math", "formula"] - examples: ["Calculate 2+2", "Solve x^2 + 5x + 6 = 0"] - code_helper: - description: "Programming and code assistance" - keywords: ["code", "program", "debug", "function", "algorithm"] - examples: ["Write a Python function", "Debug this code"] - model: - provider: "openai" - name: "gpt-4o-mini" - - workflow: - start: "content_creator" - edges: - - from: "content_creator" - to: ["technical_writer", "creative_writer", "marketing_writer"] - router: "content_type_router" # LLM automatically routes based on content type! - end: ["technical_writer", "creative_writer", "marketing_writer"] -``` - -**🎯 LLM Router Types:** - -1. **Smart Router** (`type: smart`): General-purpose routing based on content analysis -2. **Task Classifier** (`type: task_classifier`): Routes based on keywords and examples -3. **Conversation Analysis** (`type: conversation_analysis`): Context-aware routing -4. **Reflection Router** (`type: reflection`): Structured Aβ†’Bβ†’Aβ†’C patterns for reflection workflows -5. **PlanExecute Router** (`type: plan_execute`): Cursor-style plan-and-execute workflows with step tracking - -**✨ Key Benefits:** -- 🚫 **No Code Required**: Define routing logic purely in YAML -- 🎯 **Intelligent Decisions**: LLMs understand context and make smart routing choices -- πŸ“ **Easy Configuration**: Simple, declarative syntax -- πŸ”„ **Version Control**: Track routing changes in YAML files -- πŸŽ›οΈ **Model Flexibility**: Each router can use different LLM models - -```python -# Using LLM routers is incredibly simple! -async def run_intelligent_workflow(): - # No routers dictionary needed - they're defined in YAML! - result = await ( - AriumBuilder() - .from_yaml(yaml_str=intelligent_workflow_yaml) - .build_and_run(["Write a technical tutorial on Docker containers"]) - ) - # The LLM will automatically route to technical_writer! ✨ - return result -``` - -##### πŸ”„ ReflectionRouter: Structured Reflection Workflows (NEW!) 
- -The **ReflectionRouter** is designed specifically for reflection-based workflows that follow Aβ†’Bβ†’Aβ†’C patterns, commonly used for mainβ†’criticβ†’mainβ†’final agent sequences. This pattern is perfect for iterative improvement workflows where a critic agent provides feedback before final processing. - -**πŸ“‹ Key Features:** -- 🎯 **Pattern Tracking**: Automatically tracks progress through defined reflection sequences -- πŸ”„ **Self-Reference Support**: Allows routing back to the same agent (Aβ†’Bβ†’A patterns) -- πŸ“Š **Visual Progress**: Shows current position with β—‹ pending, βœ“ completed indicators -- πŸ›‘οΈ **Loop Prevention**: Built-in safety mechanisms to prevent infinite loops -- πŸŽ›οΈ **Flexible Patterns**: Supports both 2-agent (Aβ†’Bβ†’A) and 3-agent (Aβ†’Bβ†’Aβ†’C) flows - -**🎯 Supported Patterns:** - -1. **A β†’ B β†’ A** (2 agents): Main β†’ Critic β†’ Main β†’ End -2. **A β†’ B β†’ A β†’ C** (3 agents): Main β†’ Critic β†’ Main β†’ Final - -```yaml -# Simple A β†’ B β†’ A reflection pattern -metadata: - name: "content-reflection-workflow" - version: "1.0.0" - description: "Content creation with critic feedback loop" - -arium: - agents: - - name: "writer" - role: "Content Writer" - job: "Create and improve content based on feedback from critics." - model: - provider: "openai" - name: "gpt-4o-mini" - settings: - temperature: 0.7 - - - name: "critic" - role: "Content Critic" - job: "Review content and provide constructive feedback for improvement." - model: - provider: "openai" - name: "gpt-4o-mini" - settings: - temperature: 0.3 - - # ✨ ReflectionRouter definition - routers: - - name: "reflection_router" - type: "reflection" # Specialized for reflection patterns - flow_pattern: [writer, critic, writer] # A β†’ B β†’ A pattern - model: - provider: "openai" - name: "gpt-4o-mini" - settings: - temperature: 0.2 - allow_early_exit: false # Strict adherence to pattern - - workflow: - start: "writer" - edges: - - from: "writer" - to: [critic, writer] # Can go to critic or self-reference - router: "reflection_router" - - from: "critic" - to: [writer] # Always returns to writer - router: "reflection_router" - end: [writer] # Writer produces final output -``` - -```yaml -# Advanced A β†’ B β†’ A β†’ C reflection pattern -metadata: - name: "advanced-reflection-workflow" - version: "1.0.0" - description: "Full reflection cycle with dedicated final agent" - -arium: - agents: - - name: "researcher" - role: "Research Agent" - job: "Conduct research and gather information on topics." - model: - provider: "openai" - name: "gpt-4o-mini" - - - name: "reviewer" - role: "Research Reviewer" - job: "Review research quality and suggest improvements." - model: - provider: "anthropic" - name: "claude-3-5-sonnet-20240620" - - - name: "synthesizer" - role: "Information Synthesizer" - job: "Create final synthesis and conclusions from research." 
- model: - provider: "openai" - name: "gpt-4o" - - routers: - - name: "research_reflection_router" - type: "reflection" - flow_pattern: [researcher, reviewer, researcher, synthesizer] # A β†’ B β†’ A β†’ C - settings: - allow_early_exit: true # Allow smart early completion - - workflow: - start: "researcher" - edges: - - from: "researcher" - to: [reviewer, researcher, synthesizer] # All possible destinations - router: "research_reflection_router" - - from: "reviewer" - to: [researcher, reviewer, synthesizer] - router: "research_reflection_router" - - from: "synthesizer" - to: [end] - end: [synthesizer] -``` - -**πŸ”§ ReflectionRouter Configuration Options:** - -```yaml -routers: - - name: "my_reflection_router" - type: "reflection" - flow_pattern: [main_agent, critic, main_agent, final_agent] # Define your pattern - model: # Optional: LLM for routing decisions - provider: "openai" - name: "gpt-4o-mini" - settings: # Optional settings - temperature: 0.2 # Router temperature (lower = more deterministic) - allow_early_exit: false # Allow early completion if LLM determines pattern is done - fallback_strategy: "first" # first, last, random - fallback when LLM fails -``` - -**πŸ—οΈ Programmatic Usage:** +### Simple Agent Chains ```python -import asyncio from flo_ai.arium import AriumBuilder from flo_ai.models.agent import Agent from flo_ai.llm import OpenAI -from flo_ai.arium.llm_router import create_main_critic_reflection_router -async def reflection_workflow_example(): - llm = OpenAI(model='gpt-4o-mini', api_key='your-api-key') +async def simple_chain(): + llm = OpenAI(model='gpt-4o-mini') # Create agents - main_agent = Agent( - name='main_agent', - system_prompt='Create solutions and improve them based on feedback.', - llm=llm - ) - - critic = Agent( - name='critic', - system_prompt='Provide constructive feedback for improvement.', - llm=llm - ) - - final_agent = Agent( - name='final_agent', - system_prompt='Polish and finalize the work.', + analyst = Agent( + name='content_analyst', + system_prompt='Analyze the input and extract key insights.', llm=llm ) - # Create reflection router - A β†’ B β†’ A β†’ C pattern - reflection_router = create_main_critic_reflection_router( - main_agent='main_agent', - critic_agent='critic', - final_agent='final_agent', - allow_early_exit=False, # Strict pattern adherence + summarizer = Agent( + name='summarizer', + system_prompt='Create a concise summary based on the analysis.', llm=llm ) - # Build workflow + # Build and run workflow result = await ( AriumBuilder() - .add_agents([main_agent, critic, final_agent]) - .start_with(main_agent) - .add_edge(main_agent, [critic, final_agent], reflection_router) - .add_edge(critic, [main_agent, final_agent], reflection_router) - .end_with(final_agent) - .build_and_run(["Create a comprehensive project proposal"]) + .add_agents([analyst, summarizer]) + .start_with(analyst) + .connect(analyst, summarizer) + .end_with(summarizer) + .build_and_run(["Analyze this complex business report..."]) ) return result - -# Alternative: Direct factory usage -from flo_ai.arium.llm_router import create_llm_router - -reflection_router = create_llm_router( - 'reflection', - flow_pattern=['writer', 'editor', 'writer'], # A β†’ B β†’ A - allow_early_exit=False, - llm=llm -) ``` -**πŸ’‘ ReflectionRouter Intelligence:** - -The ReflectionRouter automatically: -- **Tracks Progress**: Knows which step in the pattern should execute next -- **Prevents Loops**: Uses execution context to avoid infinite cycles -- **Provides Guidance**: Shows LLM the 
suggested next step and current progress
-- **Handles Self-Reference**: Properly validates flows that return to the same agent
-- **Visual Feedback**: Displays pattern progress: `β—‹ writer β†’ βœ“ critic β†’ β—‹ writer`
-
-**🎯 Perfect Use Cases:**
-- πŸ“ **Content Creation**: Writer β†’ Editor β†’ Writer β†’ Publisher
-- πŸ”¬ **Research Workflows**: Researcher β†’ Reviewer β†’ Researcher β†’ Synthesizer
-- πŸ’Ό **Business Analysis**: Analyst β†’ Critic β†’ Analyst β†’ Decision Maker
-- 🎨 **Creative Processes**: Creator β†’ Critic β†’ Creator β†’ Finalizer
-- πŸ§ͺ **Iterative Refinement**: Any process requiring feedback and improvement cycles
-
-**⚑ Quick Start Example:**
+### Conditional Routing
 
 ```python
-# Minimal A β†’ B β†’ A pattern
-yaml_config = """
-arium:
-  agents:
-    - name: main_agent
-      job: "Main work agent"
-      model: {provider: openai, name: gpt-4o-mini}
-    - name: critic
-      job: "Feedback agent"
-      model: {provider: openai, name: gpt-4o-mini}
-
-  routers:
-    - name: reflection_router
-      type: reflection
-      flow_pattern: [main_agent, critic, main_agent]
+from flo_ai.arium.memory import BaseMemory
 
-  workflow:
-    start: main_agent
-    edges:
-      - from: main_agent
-        to: [critic, main_agent]
-        router: reflection_router
-      - from: critic
-        to: [main_agent]
-        router: reflection_router
-    end: [main_agent]
-"""
-
-result = await AriumBuilder().from_yaml(yaml_str=yaml_config).build_and_run(["Your task"])
+def route_by_type(memory: BaseMemory) -> str:
+    """Route based on classification result"""
+    messages = memory.get()
+    last_message = str(messages[-1]) if messages else ""
+
+    if "technical" in last_message.lower():
+        return "tech_specialist"
+    else:
+        return "business_specialist"
+
+# Build workflow with conditional routing
+# (classifier, tech_specialist, business_specialist and final_agent are Agent instances)
+result = await (
+    AriumBuilder()
+    .add_agents([classifier, tech_specialist, business_specialist, final_agent])
+    .start_with(classifier)
+    .add_edge(classifier, [tech_specialist, business_specialist], route_by_type)
+    .connect(tech_specialist, final_agent)
+    .connect(business_specialist, final_agent)
+    .end_with(final_agent)
+    .build_and_run(["How can we optimize our database performance?"])
+)
 ```
 
-The ReflectionRouter makes implementing sophisticated feedback loops and iterative improvement workflows incredibly simple, whether you need a 2-agent or 3-agent pattern! πŸš€
+### YAML-Based Workflows
 
-##### πŸ”„ PlanExecuteRouter: Cursor-Style Plan-and-Execute Workflows (NEW!)
-
-The **PlanExecuteRouter** implements sophisticated plan-and-execute patterns similar to how Cursor works. It automatically breaks down complex tasks into detailed execution plans and coordinates step-by-step execution with intelligent progress tracking.
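-
-Conceptually, an execution plan is just an ordered set of steps with statuses and dependencies. A rough illustration of the idea (the names below are illustrative, not Flo AI's internal API):
-
-```python
-from dataclasses import dataclass, field
-from typing import List
-
-@dataclass
-class PlanStep:
-    id: str
-    description: str
-    agent: str                                   # which agent executes this step
-    depends_on: List[str] = field(default_factory=list)
-    status: str = 'pending'                      # pending -> in_progress -> completed/failed
-
-@dataclass
-class ExecutionPlan:
-    title: str
-    steps: List[PlanStep]
-
-    def next_ready_steps(self) -> List[PlanStep]:
-        # A step is ready once all of its dependencies are completed
-        done = {s.id for s in self.steps if s.status == 'completed'}
-        return [
-            s for s in self.steps
-            if s.status == 'pending' and all(d in done for d in s.depends_on)
-        ]
-```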
- -**πŸ“‹ Key Features:** -- 🎯 **Automatic Task Breakdown**: Creates detailed execution plans from high-level tasks -- πŸ“Š **Step Tracking**: Real-time progress monitoring with visual indicators (β—‹ ⏳ βœ… ❌) -- πŸ”„ **Phase Coordination**: Intelligent routing between planning, execution, and review phases -- πŸ›‘οΈ **Dependency Management**: Handles step dependencies and execution order automatically -- πŸ’Ύ **Plan Persistence**: Uses PlanAwareMemory for stateful plan storage and updates -- πŸ”§ **Error Recovery**: Built-in retry logic for failed steps - -**🎯 Perfect for Cursor-Style Workflows:** -- πŸ’» **Software Development**: Requirements β†’ Design β†’ Implementation β†’ Testing β†’ Review -- πŸ“ **Content Creation**: Planning β†’ Writing β†’ Editing β†’ Review β†’ Publishing -- πŸ”¬ **Research Projects**: Plan β†’ Investigate β†’ Analyze β†’ Synthesize β†’ Report -- πŸ“Š **Business Processes**: Any multi-step workflow with dependencies - -**πŸ“„ YAML Configuration:** +Define entire workflows in YAML: ```yaml -# Complete Plan-Execute Workflow metadata: - name: "development-plan-execute" + name: "content-analysis-workflow" version: "1.0.0" - description: "Cursor-style development workflow" + description: "Multi-agent content analysis pipeline" arium: agents: - - name: planner - role: Project Planner - job: > - Break down complex development tasks into detailed, sequential execution plans. - Create clear steps with dependencies and agent assignments. - model: - provider: openai - name: gpt-4o-mini - settings: - temperature: 0.3 - - - name: developer - role: Software Developer - job: > - Implement features step by step according to execution plans. - Provide detailed implementation and update step status. - model: - provider: openai - name: gpt-4o-mini - settings: - temperature: 0.5 - - - name: tester - role: QA Engineer - job: > - Test implementations thoroughly and validate functionality. - Create comprehensive test scenarios and report results. + - name: "analyzer" + role: "Content Analyst" + job: "Analyze the input content and extract key insights." model: - provider: openai - name: gpt-4o-mini - settings: - temperature: 0.2 - - - name: reviewer - role: Senior Reviewer - job: > - Provide final quality assessment and approval. - Review completed work for best practices and requirements. + provider: "openai" + name: "gpt-4o-mini" + + - name: "summarizer" + role: "Content Summarizer" + job: "Create a concise summary based on the analysis." 
model: - provider: openai - name: gpt-4o-mini - - # PlanExecuteRouter configuration - routers: - - name: dev_plan_router - type: plan_execute # Router type for plan-execute workflows - agents: # Available agents and their capabilities - planner: "Creates detailed execution plans by breaking down tasks" - developer: "Implements features and code according to plan specifications" - tester: "Tests implementations and validates functionality" - reviewer: "Reviews and approves completed work" - model: # Optional: LLM for routing decisions - provider: openai - name: gpt-4o-mini - settings: # Optional configuration - temperature: 0.2 # Router decision temperature - planner_agent: planner # Agent responsible for creating plans - executor_agent: developer # Default agent for executing steps - reviewer_agent: reviewer # Optional agent for final review - max_retries: 3 # Maximum retries for failed steps + provider: "anthropic" + name: "claude-3-5-sonnet-20240620" workflow: - start: planner + start: "analyzer" edges: - # All agents can route to all others based on plan state - - from: planner - to: [developer, tester, reviewer, planner] - router: dev_plan_router - - from: developer - to: [developer, tester, reviewer, planner] - router: dev_plan_router - - from: tester - to: [developer, tester, reviewer, planner] - router: dev_plan_router - - from: reviewer - to: [end] - end: [reviewer] + - from: "analyzer" + to: ["summarizer"] + end: ["summarizer"] ``` -**πŸ—οΈ Programmatic Usage:** - ```python -import asyncio -from flo_ai.arium import AriumBuilder -from flo_ai.arium.memory import PlanAwareMemory -from flo_ai.models.agent import Agent -from flo_ai.llm import OpenAI -from flo_ai.arium.llm_router import create_plan_execute_router - -async def cursor_style_workflow(): - llm = OpenAI(model='gpt-4o-mini', api_key='your-api-key') - - # Create specialized agents - planner = Agent( - name='planner', - system_prompt='Create detailed execution plans by breaking down tasks into sequential steps.', - llm=llm - ) - - developer = Agent( - name='developer', - system_prompt='Implement features step by step according to execution plans.', - llm=llm - ) - - tester = Agent( - name='tester', - system_prompt='Test implementations and validate functionality thoroughly.', - llm=llm - ) - - reviewer = Agent( - name='reviewer', - system_prompt='Review completed work and provide final approval.', - llm=llm - ) - - # Create plan-execute router - plan_router = create_plan_execute_router( - planner_agent='planner', - executor_agent='developer', - reviewer_agent='reviewer', - additional_agents={'tester': 'Tests implementations and validates quality'}, - llm=llm - ) - - # Use PlanAwareMemory for plan state persistence - memory = PlanAwareMemory() - - # Build and run workflow - result = await ( +# Run YAML workflow +result = await ( AriumBuilder() - .with_memory(memory) - .add_agents([planner, developer, tester, reviewer]) - .start_with(planner) - .add_edge(planner, [developer, tester, reviewer, planner], plan_router) - .add_edge(developer, [developer, tester, reviewer, planner], plan_router) - .add_edge(tester, [developer, tester, reviewer, planner], plan_router) - .add_edge(reviewer, [developer, tester, reviewer, planner], plan_router) - .end_with(reviewer) - .build_and_run(["Create a REST API for user authentication with JWT tokens"]) + .from_yaml(yaml_str=workflow_yaml) + .build_and_run(["Analyze this quarterly business report..."]) ) - - return result - -# Alternative: Factory function -from flo_ai.arium.llm_router import 
create_plan_execute_router - -plan_router = create_plan_execute_router( - planner_agent='planner', - executor_agent='developer', - reviewer_agent='reviewer', - llm=llm -) ``` -**πŸ’‘ How PlanExecuteRouter Works:** - -The router intelligently coordinates workflow phases: +### LLM-Powered Routers -1. **Planning Phase**: - - Detects when no execution plan exists - - Routes to planner agent to create detailed plan - - Plan stored as ExecutionPlan object in PlanAwareMemory +Define intelligent routing logic directly in YAML: -2. **Execution Phase**: - - Analyzes plan state and step dependencies - - Routes to appropriate agents for next ready steps - - Updates step status (pending β†’ in-progress β†’ completed) - - Handles parallel execution of independent steps +```yaml + routers: + - name: "content_type_router" + type: "smart" # Uses LLM for intelligent routing + routing_options: + technical_writer: "Technical content, documentation, tutorials" + creative_writer: "Creative writing, storytelling, fiction" + marketing_writer: "Marketing copy, sales content, campaigns" + model: + provider: "openai" + name: "gpt-4o-mini" +``` -3. **Review Phase**: - - Detects when all steps are completed - - Routes to reviewer agent for final validation - - Manages error recovery for failed steps +### ReflectionRouter & PlanExecuteRouter -**πŸ“Š Plan Progress Visualization:** +**ReflectionRouter** for Aβ†’Bβ†’Aβ†’C feedback patterns: -``` -πŸ“‹ EXECUTION PLAN: User Authentication API -πŸ“Š CURRENT PROGRESS: -βœ… design_schema: Design user database schema β†’ developer -βœ… implement_registration: Create registration endpoint β†’ developer -⏳ implement_login: Add login with JWT β†’ developer (depends: design_schema, implement_registration) -β—‹ add_middleware: Authentication middleware β†’ developer (depends: implement_login) -β—‹ write_tests: Comprehensive testing β†’ tester (depends: add_middleware) -β—‹ final_review: Security and code review β†’ reviewer (depends: write_tests) - -🎯 NEXT ACTION: Execute step 'implement_login' -🎯 SUGGESTED AGENT: developer +```yaml + routers: + - name: "reflection_router" + type: "reflection" + flow_pattern: [writer, critic, writer] # A β†’ B β†’ A pattern + model: + provider: "openai" + name: "gpt-4o-mini" ``` -**πŸ”§ Advanced Configuration Options:** +**PlanExecuteRouter** for Cursor-style plan-and-execute workflows: ```yaml routers: - - name: advanced_plan_router - type: plan_execute - agents: - planner: "Creates execution plans" - frontend_dev: "Frontend implementation" - backend_dev: "Backend implementation" - devops: "Deployment and infrastructure" - qa_tester: "Quality assurance testing" - security_reviewer: "Security review" - product_owner: "Product validation" - model: - provider: openai - name: gpt-4o - settings: - temperature: 0.1 # Lower for more deterministic routing - planner_agent: planner # Plan creation agent - executor_agent: backend_dev # Default execution agent - reviewer_agent: product_owner # Final review agent - max_retries: 5 # Retry attempts for failed steps - allow_parallel_execution: true # Enable parallel step execution - plan_validation: strict # Validate plan completeness -``` - -**⚑ Quick Start Example:** - -```python -# Minimal plan-execute workflow -yaml_config = """ -arium: + - name: "plan_router" + type: "plan_execute" agents: - - name: planner - job: "Create execution plans" - model: {provider: openai, name: gpt-4o-mini} - - name: executor - job: "Execute plan steps" - model: {provider: openai, name: gpt-4o-mini} - - name: reviewer - job: 
"Review final results" - model: {provider: openai, name: gpt-4o-mini} - - routers: - - name: simple_plan_router - type: plan_execute - agents: - planner: "Creates plans" - executor: "Executes steps" - reviewer: "Reviews results" + planner: "Creates detailed execution plans" + developer: "Implements features according to plan" + tester: "Tests implementations and validates functionality" + reviewer: "Reviews and approves completed work" settings: planner_agent: planner - executor_agent: executor + executor_agent: developer reviewer_agent: reviewer - - workflow: - start: planner - edges: - - from: planner - to: [executor, reviewer, planner] - router: simple_plan_router - - from: executor - to: [executor, reviewer, planner] - router: simple_plan_router - - from: reviewer - to: [end] - end: [reviewer] -""" - -result = await AriumBuilder().from_yaml(yaml_str=yaml_config).build_and_run(["Your complex task"]) ``` -**🎯 Use Cases and Examples:** - -- πŸ“± **App Development**: "Build a todo app with React and Node.js" -- πŸ›’ **E-commerce**: "Create a shopping cart system with payment processing" -- πŸ“Š **Data Pipeline**: "Build ETL pipeline for customer analytics" -- πŸ” **Security**: "Implement OAuth2 authentication system" -- πŸ“ˆ **Analytics**: "Create real-time dashboard with user metrics" - -The PlanExecuteRouter brings Cursor-style intelligent task automation to Flo AI, making it incredibly easy to build sophisticated multi-step workflows that adapt and execute complex tasks automatically! πŸš€ - -#### YAML Workflow with Variables - -```yaml -metadata: - name: "personalized-workflow" - version: "1.0.0" - description: "Workflow that adapts based on input variables" - -arium: - agents: - - name: "specialist" - role: "" - job: "You are a specializing in . Provide for ." - model: - provider: "" - name: "" - settings: - temperature: 0.3 - reasoning_pattern: "" - - - name: "reviewer" - role: "Quality Reviewer" - job: "Review the for and provide feedback." 
-      model:
-        provider: "openai"
-        name: "gpt-4o"
+## πŸ“Š OpenTelemetry Integration
 
-  workflow:
-    start: "specialist"
-    edges:
-      - from: "specialist"
-        to: ["reviewer"]
-    end: ["reviewer"]
-```
+Built-in observability for production monitoring:
 
 ```python
-import asyncio
-from typing import Any, Dict, List
-from flo_ai.arium import AriumBuilder
-
-async def run_personalized_workflow() -> List[Any]:
-    yaml_config = """..."""  # Your YAML configuration with variables
-
-    # Define variables for the workflow
-    variables: Dict[str, str] = {
-        'expert_role': 'Data Scientist',
-        'domain': 'machine learning and predictive analytics',
-        'output_type': 'technical analysis report',
-        'target_audience': 'engineering team',
-        'preferred_llm_provider': 'anthropic',
-        'model_name': 'claude-3-5-sonnet-20240620',
-        'reasoning_style': 'COT',
-        'quality_criteria': 'technical accuracy and clarity'
-    }
-
-    result: List[Any] = await (
-        AriumBuilder()
-        .from_yaml(yaml_str=yaml_config)
-        .build_and_run(
-            ["Analyze our customer churn prediction model performance"],
-            variables=variables
-        )
-    )
-
-    return result
+from flo_ai import configure_telemetry, shutdown_telemetry
+
+# Configure at startup
+configure_telemetry(
+    service_name="my_ai_app",
+    service_version="1.0.0",
+    console_export=True  # For debugging
+)
+
+# Your application code here...
+
+# Shutdown to flush data
+shutdown_telemetry()
 ```
 
-#### Using Pre-built Agents in YAML Workflows
-
-```yaml
-metadata:
-  name: "hybrid-workflow"
-  version: "1.0.0"
-  description: "Mix of inline agents and pre-built agent references"
-
-# Import existing agent configurations
-imports:
-  - "agents/content_analyzer.yaml"
-  - "agents/technical_reviewer.yaml"
-
-arium:
-  # Mix of imported and inline agents
-  agents:
-    # Reference imported agent
-    - import: "content_analyzer"
-      name: "analyzer"  # Override name if needed
-
-    # Define new agent inline
-    - name: "formatter"
-      role: "Content Formatter"
-      job: "Format the analysis into a professional report structure."
-      model:
-        provider: "openai"
-        name: "gpt-4o-mini"
-
-    # Reference another imported agent
-    - import: "technical_reviewer"
-      name: "reviewer"
-
-  workflow:
-    start: "analyzer"
-    edges:
-      - from: "analyzer"
-        to: ["formatter"]
-      - from: "formatter"
-        to: ["reviewer"]
-    end: ["reviewer"]
-```
-
-#### YAML Workflow Best Practices
-
-1. **Modular Design**: Define reusable agents in YAML, create tools in Python separately
-2. **Clear Naming**: Use descriptive names for agents and workflows
-3. **Variable Usage**: Leverage variables for environment-specific configurations
-4. **Version Control**: Track workflow versions in metadata
-5. **Documentation**: Include descriptions for complex workflows
-6. **Router Functions**: Keep routing logic simple and provide as Python functions
-7. **Tool Management**: Create tools as Python objects and pass them to the builder
-
-#### What Can Be Defined in YAML vs Python
-
-**βœ… YAML Configuration Supports:**
-- Agent definitions (name, role, job, model settings)
-- Workflow structure (start, edges, end nodes)
-- Agent-to-agent connections
-- Tool and router references (by name)
-- Variables and settings
-- Model configurations
-
-**❌ YAML Configuration Does NOT Support:**
-- Tool function implementations (must be Python objects)
-- Router function code (must be Python functions)
-- Custom logic execution
-- Direct function definitions
+**πŸ“– [Complete Telemetry Guide β†’](flo_ai/flo_ai/telemetry/README.md)**
 
-**πŸ’‘ Best Practice**: Use YAML for workflow structure and agent configuration, Python for executable logic (tools and routers).
+## πŸ“š Examples & Documentation
 
-#### Benefits of YAML Workflows
+### Examples Directory
 
-- **πŸ”„ Reproducible**: Version-controlled workflow definitions
-- **πŸ“ Maintainable**: Easy to modify workflow structure without code changes
-- **πŸ§ͺ Testable**: Different configurations for testing vs. production
-- **πŸ‘₯ Collaborative**: Non-developers can modify workflow structure
-- **πŸš€ Deployable**: Easy CI/CD integration with YAML configurations
-- **πŸ” Auditable**: Clear workflow definitions for compliance
+Check out the `examples/` directory for comprehensive examples:
 
-> πŸ“– **For detailed Arium documentation and advanced patterns, see [flo_ai/flo_ai/arium/README.md](flo_ai/flo_ai/arium/README.md)**
+- `agent_builder_usage.py` - Basic agent creation patterns
+- `yaml_agent_example.py` - YAML-based agent configuration
+- `output_formatter.py` - Structured output examples
+- `multi_tool_example.py` - Multi-tool agent examples
+- `document_processing_example.py` - Document processing with PDF and TXT files
 
-## πŸ“– Documentation
+### Documentation
 
-Visit our [comprehensive documentation](https://flo-ai.rootflo.ai) for:
-- Detailed tutorials
-- API reference
-- Best practices
-- Advanced examples
-- Architecture deep-dives
+Visit our [website](https://www.rootflo.ai) to learn more.
 
 **Additional Resources:**
-- [@flo_tool Decorator Guide](flo_ai/README_flo_tool.md) - Complete guide to the `@flo_tool` decorator
-- [Examples Directory](examples/) - Ready-to-run code examples
+- [@flo_tool Decorator Guide](TOOLS.md) - Complete guide to the `@flo_tool` decorator
+- [Examples Directory](flo_ai/examples/) - Ready-to-run code examples
 - [Contributing Guide](CONTRIBUTING.md) - How to contribute to Flo AI
 
 ## 🌟 Why Flo AI?
 
@@ -2560,8 +566,7 @@ Visit our [comprehensive documentation](https://flo-ai.rootflo.ai) for:
 - **Testable**: Each component can be tested independently
 - **Scalable**: From simple agents to complex multi-tool systems
 
-## 🎯 Use Cases
-
+### Use Cases
 - πŸ€– Customer Service Automation
 - πŸ“Š Data Analysis and Processing
 - πŸ“ Content Generation and Summarization
@@ -2596,4 +601,4 @@ Built with ❀️ using:
 
 Built with ❀️ by the rootflo team
Community β€’ Documentation - + \ No newline at end of file diff --git a/flo_ai/flo_ai/llm/anthropic_llm.py b/flo_ai/flo_ai/llm/anthropic_llm.py index 39247f67..d9fb19e9 100644 --- a/flo_ai/flo_ai/llm/anthropic_llm.py +++ b/flo_ai/flo_ai/llm/anthropic_llm.py @@ -58,6 +58,7 @@ async def generate( 'model': self.model, 'messages': conversation, 'temperature': self.temperature, + 'max_tokens': self.kwargs.get('max_tokens', 1024), **self.kwargs, } diff --git a/flo_ai/poetry.lock b/flo_ai/poetry.lock index 7ee51314..c42e6fa5 100644 --- a/flo_ai/poetry.lock +++ b/flo_ai/poetry.lock @@ -3684,65 +3684,85 @@ files = [ [[package]] name = "pyyaml" -version = "6.0.2" +version = "6.0.3" description = "YAML parser and emitter for Python" optional = false python-versions = ">=3.8" -groups = ["dev"] +groups = ["main", "dev"] files = [ - {file = "PyYAML-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0a9a2848a5b7feac301353437eb7d5957887edbf81d56e903999a75a3d743086"}, - {file = "PyYAML-6.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:29717114e51c84ddfba879543fb232a6ed60086602313ca38cce623c1d62cfbf"}, - {file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8824b5a04a04a047e72eea5cec3bc266db09e35de6bdfe34c9436ac5ee27d237"}, - {file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7c36280e6fb8385e520936c3cb3b8042851904eba0e58d277dca80a5cfed590b"}, - {file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec031d5d2feb36d1d1a24380e4db6d43695f3748343d99434e6f5f9156aaa2ed"}, - {file = "PyYAML-6.0.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:936d68689298c36b53b29f23c6dbb74de12b4ac12ca6cfe0e047bedceea56180"}, - {file = "PyYAML-6.0.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:23502f431948090f597378482b4812b0caae32c22213aecf3b55325e049a6c68"}, - {file = "PyYAML-6.0.2-cp310-cp310-win32.whl", hash = "sha256:2e99c6826ffa974fe6e27cdb5ed0021786b03fc98e5ee3c5bfe1fd5015f42b99"}, - {file = "PyYAML-6.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:a4d3091415f010369ae4ed1fc6b79def9416358877534caf6a0fdd2146c87a3e"}, - {file = "PyYAML-6.0.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:cc1c1159b3d456576af7a3e4d1ba7e6924cb39de8f67111c735f6fc832082774"}, - {file = "PyYAML-6.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1e2120ef853f59c7419231f3bf4e7021f1b936f6ebd222406c3b60212205d2ee"}, - {file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5d225db5a45f21e78dd9358e58a98702a0302f2659a3c6cd320564b75b86f47c"}, - {file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5ac9328ec4831237bec75defaf839f7d4564be1e6b25ac710bd1a96321cc8317"}, - {file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ad2a3decf9aaba3d29c8f537ac4b243e36bef957511b4766cb0057d32b0be85"}, - {file = "PyYAML-6.0.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ff3824dc5261f50c9b0dfb3be22b4567a6f938ccce4587b38952d85fd9e9afe4"}, - {file = "PyYAML-6.0.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:797b4f722ffa07cc8d62053e4cff1486fa6dc094105d13fea7b1de7d8bf71c9e"}, - {file = "PyYAML-6.0.2-cp311-cp311-win32.whl", hash = "sha256:11d8f3dd2b9c1207dcaf2ee0bbbfd5991f571186ec9cc78427ba5bd32afae4b5"}, - {file = "PyYAML-6.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:e10ce637b18caea04431ce14fabcf5c64a1c61ec9c56b071a4b7ca131ca52d44"}, - {file = 
"PyYAML-6.0.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:c70c95198c015b85feafc136515252a261a84561b7b1d51e3384e0655ddf25ab"}, - {file = "PyYAML-6.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ce826d6ef20b1bc864f0a68340c8b3287705cae2f8b4b1d932177dcc76721725"}, - {file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1f71ea527786de97d1a0cc0eacd1defc0985dcf6b3f17bb77dcfc8c34bec4dc5"}, - {file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9b22676e8097e9e22e36d6b7bda33190d0d400f345f23d4065d48f4ca7ae0425"}, - {file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80bab7bfc629882493af4aa31a4cfa43a4c57c83813253626916b8c7ada83476"}, - {file = "PyYAML-6.0.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:0833f8694549e586547b576dcfaba4a6b55b9e96098b36cdc7ebefe667dfed48"}, - {file = "PyYAML-6.0.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8b9c7197f7cb2738065c481a0461e50ad02f18c78cd75775628afb4d7137fb3b"}, - {file = "PyYAML-6.0.2-cp312-cp312-win32.whl", hash = "sha256:ef6107725bd54b262d6dedcc2af448a266975032bc85ef0172c5f059da6325b4"}, - {file = "PyYAML-6.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:7e7401d0de89a9a855c839bc697c079a4af81cf878373abd7dc625847d25cbd8"}, - {file = "PyYAML-6.0.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba"}, - {file = "PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1"}, - {file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133"}, - {file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484"}, - {file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5"}, - {file = "PyYAML-6.0.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc"}, - {file = "PyYAML-6.0.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652"}, - {file = "PyYAML-6.0.2-cp313-cp313-win32.whl", hash = "sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183"}, - {file = "PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563"}, - {file = "PyYAML-6.0.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:24471b829b3bf607e04e88d79542a9d48bb037c2267d7927a874e6c205ca7e9a"}, - {file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7fded462629cfa4b685c5416b949ebad6cec74af5e2d42905d41e257e0869f5"}, - {file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d84a1718ee396f54f3a086ea0a66d8e552b2ab2017ef8b420e92edbc841c352d"}, - {file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9056c1ecd25795207ad294bcf39f2db3d845767be0ea6e6a34d856f006006083"}, - {file = "PyYAML-6.0.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:82d09873e40955485746739bcb8b4586983670466c23382c19cffecbf1fd8706"}, - {file = "PyYAML-6.0.2-cp38-cp38-win32.whl", hash = 
"sha256:43fa96a3ca0d6b1812e01ced1044a003533c47f6ee8aca31724f78e93ccc089a"}, - {file = "PyYAML-6.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:01179a4a8559ab5de078078f37e5c1a30d76bb88519906844fd7bdea1b7729ff"}, - {file = "PyYAML-6.0.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:688ba32a1cffef67fd2e9398a2efebaea461578b0923624778664cc1c914db5d"}, - {file = "PyYAML-6.0.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a8786accb172bd8afb8be14490a16625cbc387036876ab6ba70912730faf8e1f"}, - {file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8e03406cac8513435335dbab54c0d385e4a49e4945d2909a581c83647ca0290"}, - {file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f753120cb8181e736c57ef7636e83f31b9c0d1722c516f7e86cf15b7aa57ff12"}, - {file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b1fdb9dc17f5a7677423d508ab4f243a726dea51fa5e70992e59a7411c89d19"}, - {file = "PyYAML-6.0.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0b69e4ce7a131fe56b7e4d770c67429700908fc0752af059838b1cfb41960e4e"}, - {file = "PyYAML-6.0.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a9f8c2e67970f13b16084e04f134610fd1d374bf477b17ec1599185cf611d725"}, - {file = "PyYAML-6.0.2-cp39-cp39-win32.whl", hash = "sha256:6395c297d42274772abc367baaa79683958044e5d3835486c16da75d2a694631"}, - {file = "PyYAML-6.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:39693e1f8320ae4f43943590b49779ffb98acb81f788220ea932a6b6c51004d8"}, - {file = "pyyaml-6.0.2.tar.gz", hash = "sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e"}, + {file = "PyYAML-6.0.3-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:c2514fceb77bc5e7a2f7adfaa1feb2fb311607c9cb518dbc378688ec73d8292f"}, + {file = "PyYAML-6.0.3-cp38-cp38-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9c57bb8c96f6d1808c030b1687b9b5fb476abaa47f0db9c0101f5e9f394e97f4"}, + {file = "PyYAML-6.0.3-cp38-cp38-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:efd7b85f94a6f21e4932043973a7ba2613b059c4a000551892ac9f1d11f5baf3"}, + {file = "PyYAML-6.0.3-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:22ba7cfcad58ef3ecddc7ed1db3409af68d023b7f940da23c6c2a1890976eda6"}, + {file = "PyYAML-6.0.3-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:6344df0d5755a2c9a276d4473ae6b90647e216ab4757f8426893b5dd2ac3f369"}, + {file = "PyYAML-6.0.3-cp38-cp38-win32.whl", hash = "sha256:3ff07ec89bae51176c0549bc4c63aa6202991da2d9a6129d7aef7f1407d3f295"}, + {file = "PyYAML-6.0.3-cp38-cp38-win_amd64.whl", hash = "sha256:5cf4e27da7e3fbed4d6c3d8e797387aaad68102272f8f9752883bc32d61cb87b"}, + {file = "pyyaml-6.0.3-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:214ed4befebe12df36bcc8bc2b64b396ca31be9304b8f59e25c11cf94a4c033b"}, + {file = "pyyaml-6.0.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:02ea2dfa234451bbb8772601d7b8e426c2bfa197136796224e50e35a78777956"}, + {file = "pyyaml-6.0.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b30236e45cf30d2b8e7b3e85881719e98507abed1011bf463a8fa23e9c3e98a8"}, + {file = "pyyaml-6.0.3-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:66291b10affd76d76f54fad28e22e51719ef9ba22b29e1d7d03d6777a9174198"}, + {file = "pyyaml-6.0.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = 
"sha256:9c7708761fccb9397fe64bbc0395abcae8c4bf7b0eac081e12b809bf47700d0b"}, + {file = "pyyaml-6.0.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:418cf3f2111bc80e0933b2cd8cd04f286338bb88bdc7bc8e6dd775ebde60b5e0"}, + {file = "pyyaml-6.0.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:5e0b74767e5f8c593e8c9b5912019159ed0533c70051e9cce3e8b6aa699fcd69"}, + {file = "pyyaml-6.0.3-cp310-cp310-win32.whl", hash = "sha256:28c8d926f98f432f88adc23edf2e6d4921ac26fb084b028c733d01868d19007e"}, + {file = "pyyaml-6.0.3-cp310-cp310-win_amd64.whl", hash = "sha256:bdb2c67c6c1390b63c6ff89f210c8fd09d9a1217a465701eac7316313c915e4c"}, + {file = "pyyaml-6.0.3-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:44edc647873928551a01e7a563d7452ccdebee747728c1080d881d68af7b997e"}, + {file = "pyyaml-6.0.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:652cb6edd41e718550aad172851962662ff2681490a8a711af6a4d288dd96824"}, + {file = "pyyaml-6.0.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:10892704fc220243f5305762e276552a0395f7beb4dbf9b14ec8fd43b57f126c"}, + {file = "pyyaml-6.0.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:850774a7879607d3a6f50d36d04f00ee69e7fc816450e5f7e58d7f17f1ae5c00"}, + {file = "pyyaml-6.0.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b8bb0864c5a28024fac8a632c443c87c5aa6f215c0b126c449ae1a150412f31d"}, + {file = "pyyaml-6.0.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1d37d57ad971609cf3c53ba6a7e365e40660e3be0e5175fa9f2365a379d6095a"}, + {file = "pyyaml-6.0.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:37503bfbfc9d2c40b344d06b2199cf0e96e97957ab1c1b546fd4f87e53e5d3e4"}, + {file = "pyyaml-6.0.3-cp311-cp311-win32.whl", hash = "sha256:8098f252adfa6c80ab48096053f512f2321f0b998f98150cea9bd23d83e1467b"}, + {file = "pyyaml-6.0.3-cp311-cp311-win_amd64.whl", hash = "sha256:9f3bfb4965eb874431221a3ff3fdcddc7e74e3b07799e0e84ca4a0f867d449bf"}, + {file = "pyyaml-6.0.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:7f047e29dcae44602496db43be01ad42fc6f1cc0d8cd6c83d342306c32270196"}, + {file = "pyyaml-6.0.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:fc09d0aa354569bc501d4e787133afc08552722d3ab34836a80547331bb5d4a0"}, + {file = "pyyaml-6.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9149cad251584d5fb4981be1ecde53a1ca46c891a79788c0df828d2f166bda28"}, + {file = "pyyaml-6.0.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5fdec68f91a0c6739b380c83b951e2c72ac0197ace422360e6d5a959d8d97b2c"}, + {file = "pyyaml-6.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ba1cc08a7ccde2d2ec775841541641e4548226580ab850948cbfda66a1befcdc"}, + {file = "pyyaml-6.0.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8dc52c23056b9ddd46818a57b78404882310fb473d63f17b07d5c40421e47f8e"}, + {file = "pyyaml-6.0.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:41715c910c881bc081f1e8872880d3c650acf13dfa8214bad49ed4cede7c34ea"}, + {file = "pyyaml-6.0.3-cp312-cp312-win32.whl", hash = "sha256:96b533f0e99f6579b3d4d4995707cf36df9100d67e0c8303a0c55b27b5f99bc5"}, + {file = "pyyaml-6.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:5fcd34e47f6e0b794d17de1b4ff496c00986e1c83f7ab2fb8fcfe9616ff7477b"}, + {file = "pyyaml-6.0.3-cp312-cp312-win_arm64.whl", hash = 
"sha256:64386e5e707d03a7e172c0701abfb7e10f0fb753ee1d773128192742712a98fd"}, + {file = "pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8"}, + {file = "pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1"}, + {file = "pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c"}, + {file = "pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5"}, + {file = "pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6"}, + {file = "pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6"}, + {file = "pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be"}, + {file = "pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26"}, + {file = "pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c"}, + {file = "pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb"}, + {file = "pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac"}, + {file = "pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310"}, + {file = "pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7"}, + {file = "pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788"}, + {file = "pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5"}, + {file = "pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764"}, + {file = "pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35"}, + {file = "pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac"}, + {file = "pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3"}, + {file = "pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3"}, + {file = "pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba"}, + {file = "pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c"}, + {file = 
"pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702"}, + {file = "pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c"}, + {file = "pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065"}, + {file = "pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65"}, + {file = "pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9"}, + {file = "pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b"}, + {file = "pyyaml-6.0.3-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:b865addae83924361678b652338317d1bd7e79b1f4596f96b96c77a5a34b34da"}, + {file = "pyyaml-6.0.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c3355370a2c156cffb25e876646f149d5d68f5e0a3ce86a5084dd0b64a994917"}, + {file = "pyyaml-6.0.3-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3c5677e12444c15717b902a5798264fa7909e41153cdf9ef7ad571b704a63dd9"}, + {file = "pyyaml-6.0.3-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5ed875a24292240029e4483f9d4a4b8a1ae08843b9c54f43fcc11e404532a8a5"}, + {file = "pyyaml-6.0.3-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0150219816b6a1fa26fb4699fb7daa9caf09eb1999f3b70fb6e786805e80375a"}, + {file = "pyyaml-6.0.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:fa160448684b4e94d80416c0fa4aac48967a969efe22931448d853ada8baf926"}, + {file = "pyyaml-6.0.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:27c0abcb4a5dac13684a37f76e701e054692a9b2d3064b70f5e4eb54810553d7"}, + {file = "pyyaml-6.0.3-cp39-cp39-win32.whl", hash = "sha256:1ebe39cb5fc479422b83de611d14e2c0d3bb2a18bbcb01f229ab3cfbd8fee7a0"}, + {file = "pyyaml-6.0.3-cp39-cp39-win_amd64.whl", hash = "sha256:2e71d11abed7344e42a8849600193d15b6def118602c4c176f748e4583246007"}, + {file = "pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f"}, ] [[package]] @@ -4842,4 +4862,4 @@ vizualize = ["matplotlib", "networkx"] [metadata] lock-version = "2.1" python-versions = ">=3.10,<4.0" -content-hash = "fb38f83474b0037eff56fe33bca274303f8b0e96e528cba3776bc74ec8181d48" +content-hash = "3b3531a4ef08c1f1f474ab7b296ba84404183f772fe66052b71addb7680b6592" diff --git a/flo_ai/pyproject.toml b/flo_ai/pyproject.toml index 5bc38a9d..f5307b67 100644 --- a/flo_ai/pyproject.toml +++ b/flo_ai/pyproject.toml @@ -28,6 +28,7 @@ opentelemetry-api = "^1.28.2" opentelemetry-sdk = "^1.28.2" opentelemetry-exporter-otlp = "^1.28.2" opentelemetry-instrumentation = "^0.49b2" +pyyaml = "^6.0.3" [tool.poetry.extras] vizualize = ["matplotlib", "networkx"]