diff --git a/documentation/README.md b/documentation/README.md
new file mode 100644
index 00000000..055c983a
--- /dev/null
+++ b/documentation/README.md
@@ -0,0 +1,43 @@
+# Mintlify Starter Kit
+
+Use the starter kit to get your docs deployed and ready to customize.
+
+Click the green **Use this template** button at the top of this repo to copy the Mintlify starter kit. The starter kit contains examples of:
+
+- Guide pages
+- Navigation
+- Customizations
+- API reference pages
+- Use of popular components
+
+**[Follow the full quickstart guide](https://starter.mintlify.com/quickstart)**
+
+## Development
+
+Install the [Mintlify CLI](https://www.npmjs.com/package/mint) to preview your documentation changes locally. To install, use the following command:
+
+```bash
+npm i -g mint
+```
+
+Run the following command at the root of your documentation, where your `docs.json` is located:
+
+```bash
+mint dev
+```
+
+View your local preview at `http://localhost:3000`.
+
+## Publishing changes
+
+Install our GitHub app from your [dashboard](https://dashboard.mintlify.com/settings/organization/github-app) to propagate changes from your repo to your deployment. Changes are deployed to production automatically after pushing to the default branch.
+
+## Need help?
+
+### Troubleshooting
+
+- If your dev environment isn't running: Run `mint update` to ensure you have the most recent version of the CLI.
+- If a page loads as a 404: Make sure you are running in a folder with a valid `docs.json`.
+
+### Resources
+
+- [Mintlify documentation](https://mintlify.com/docs)
diff --git a/documentation/ai-tools/claude-code.mdx b/documentation/ai-tools/claude-code.mdx
new file mode 100644
index 00000000..bdc4e04b
--- /dev/null
+++ b/documentation/ai-tools/claude-code.mdx
@@ -0,0 +1,76 @@
+---
+title: "Claude Code setup"
+description: "Configure Claude Code for your documentation workflow"
+icon: "asterisk"
+---
+
+Claude Code is Anthropic's official CLI tool. This guide shows you how to set up Claude Code to write and maintain your documentation.
+
+## Prerequisites
+
+- Active Claude subscription (Pro, Max, or API access)
+
+## Setup
+
+1. Install Claude Code globally:
+
+ ```bash
+ npm install -g @anthropic-ai/claude-code
+   ```
+
+2. Navigate to your docs directory.
+3. (Optional) Add the `CLAUDE.md` file below to your project.
+4. Run `claude` to start.
+
+## Create `CLAUDE.md`
+
+Create a `CLAUDE.md` file at the root of your documentation repository to train Claude Code on your specific documentation standards:
+
+````markdown
+# Mintlify documentation
+
+## Working relationship
+- You can push back on ideas; this can lead to better documentation. Cite sources and explain your reasoning when you do so
+- ALWAYS ask for clarification rather than making assumptions
+- NEVER lie, guess, or make up information
+
+## Project context
+- Format: MDX files with YAML frontmatter
+- Config: docs.json for navigation, theme, settings
+- Components: Mintlify components
+
+## Content strategy
+- Document just enough for user success: not too much, not too little
+- Prioritize accuracy and usability of information
+- Make content evergreen when possible
+- Search for existing information before adding new content. Avoid duplication unless it is done for a strategic reason
+- Check existing patterns for consistency
+- Start by making the smallest reasonable changes
+
+## Frontmatter requirements for pages
+- title: Clear, descriptive page title
+- description: Concise summary for SEO/navigation
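+
+For example, a minimal page header (hypothetical titles) might look like:
+
+```yaml
+---
+title: "Quickstart"
+description: "Set up and preview your documentation site locally"
+---
+```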
+
+## Writing standards
+- Second-person voice ("you")
+- Prerequisites at start of procedural content
+- Test all code examples before publishing
+- Match style and formatting of existing pages
+- Include both basic and advanced use cases
+- Language tags on all code blocks
+- Alt text on all images
+- Relative paths for internal links
+
+## Git workflow
+- NEVER use --no-verify when committing
+- Ask how to handle uncommitted changes before starting
+- Create a new branch when no clear branch exists for changes
+- Commit frequently throughout development
+- NEVER skip or disable pre-commit hooks
+
+## Do not
+- Skip frontmatter on any MDX file
+- Use absolute URLs for internal links
+- Include untested code examples
+- Make assumptions; always ask for clarification instead
+````
diff --git a/documentation/ai-tools/cursor.mdx b/documentation/ai-tools/cursor.mdx
new file mode 100644
index 00000000..fbb77616
--- /dev/null
+++ b/documentation/ai-tools/cursor.mdx
@@ -0,0 +1,420 @@
+---
+title: "Cursor setup"
+description: "Configure Cursor for your documentation workflow"
+icon: "arrow-pointer"
+---
+
+Use Cursor to help write and maintain your documentation. This guide shows how to configure Cursor for better results on technical writing tasks with Mintlify components.
+
+## Prerequisites
+
+- Cursor editor installed
+- Access to your documentation repository
+
+## Project rules
+
+Create project rules that all team members can use. In your documentation repository root:
+
+```bash
+mkdir -p .cursor
+```
+
+Create `.cursor/rules.md`:
+
+````markdown
+# Mintlify technical writing rule
+
+You are an AI writing assistant specialized in creating exceptional technical documentation using Mintlify components and following industry-leading technical writing practices.
+
+## Core writing principles
+
+### Language and style requirements
+
+- Use clear, direct language appropriate for technical audiences
+- Write in second person ("you") for instructions and procedures
+- Use active voice over passive voice
+- Employ present tense for current states, future tense for outcomes
+- Avoid jargon unless necessary and define terms when first used
+- Maintain consistent terminology throughout all documentation
+- Keep sentences concise while providing necessary context
+- Use parallel structure in lists, headings, and procedures
+
+### Content organization standards
+
+- Lead with the most important information (inverted pyramid structure)
+- Use progressive disclosure: basic concepts before advanced ones
+- Break complex procedures into numbered steps
+- Include prerequisites and context before instructions
+- Provide expected outcomes for each major step
+- Use descriptive, keyword-rich headings for navigation and SEO
+- Group related information logically with clear section breaks
+
+### User-centered approach
+
+- Focus on user goals and outcomes rather than system features
+- Anticipate common questions and address them proactively
+- Include troubleshooting for likely failure points
+- Write for scannability with clear headings, lists, and white space
+- Include verification steps to confirm success
+
+## Mintlify component reference
+
+### Callout components
+
+#### Note - Additional helpful information
+
+<Note>
+Supplementary information that supports the main content without interrupting flow
+</Note>
+
+#### Tip - Best practices and pro tips
+
+<Tip>
+Expert advice, shortcuts, or best practices that enhance user success
+</Tip>
+
+#### Warning - Important cautions
+
+<Warning>
+Critical information about potential issues, breaking changes, or destructive actions
+</Warning>
+
+#### Info - Neutral contextual information
+
+<Info>
+Background information, context, or neutral announcements
+</Info>
+
+#### Check - Success confirmations
+
+<Check>
+Positive confirmations, successful completions, or achievement indicators
+</Check>
+
+### Code components
+
+#### Single code block
+
+Example of a single code block:
+
+```javascript config.js
+const apiConfig = {
+ baseURL: 'https://api.example.com',
+ timeout: 5000,
+ headers: {
+ 'Authorization': `Bearer ${process.env.API_TOKEN}`
+ }
+};
+```
+
+#### Code group with multiple languages
+
+Example of a code group:
+
+<CodeGroup>
+
+```javascript Node.js
+const response = await fetch('/api/endpoint', {
+  headers: { Authorization: `Bearer ${apiKey}` }
+});
+```
+
+```python Python
+import requests
+response = requests.get('/api/endpoint',
+    headers={'Authorization': f'Bearer {api_key}'})
+```
+
+```bash cURL
+curl -X GET '/api/endpoint' \
+  -H 'Authorization: Bearer YOUR_API_KEY'
+```
+
+</CodeGroup>
+
+#### Request/response examples
+
+Example of request/response documentation:
+
+<RequestExample>
+
+```bash cURL
+curl -X POST 'https://api.example.com/users' \
+  -H 'Content-Type: application/json' \
+  -d '{"name": "John Doe", "email": "john@example.com"}'
+```
+
+</RequestExample>
+
+<ResponseExample>
+
+```json Success
+{
+  "id": "user_123",
+  "name": "John Doe",
+  "email": "john@example.com",
+  "created_at": "2024-01-15T10:30:00Z"
+}
+```
+
+</ResponseExample>
+
+### Structural components
+
+#### Steps for procedures
+
+Example of step-by-step instructions:
+
+<Steps>
+  <Step title="Install dependencies">
+    Run `npm install` to install required packages.
+
+    <Check>
+    Verify installation by running `npm list`.
+    </Check>
+  </Step>
+
+  <Step title="Configure environment">
+    Create a `.env` file with your API credentials.
+
+    ```bash
+    API_KEY=your_api_key_here
+    ```
+
+    <Warning>
+    Never commit API keys to version control.
+    </Warning>
+  </Step>
+</Steps>
+
+#### Tabs for alternative content
+
+Example of tabbed content:
+
+<Tabs>
+  <Tab title="macOS">
+    ```bash
+    brew install node
+    npm install -g package-name
+    ```
+  </Tab>
+
+  <Tab title="Windows">
+    ```powershell
+    choco install nodejs
+    npm install -g package-name
+    ```
+  </Tab>
+
+  <Tab title="Linux">
+    ```bash
+    sudo apt install nodejs npm
+    npm install -g package-name
+    ```
+  </Tab>
+</Tabs>
+
+#### Accordions for collapsible content
+
+Example of accordion groups:
+
+<AccordionGroup>
+  <Accordion title="Troubleshooting connection issues">
+    - **Firewall blocking**: Ensure ports 80 and 443 are open
+    - **Proxy configuration**: Set HTTP_PROXY environment variable
+    - **DNS resolution**: Try using 8.8.8.8 as DNS server
+  </Accordion>
+
+  <Accordion title="Advanced configuration options">
+    ```javascript
+    const config = {
+      performance: { cache: true, timeout: 30000 },
+      security: { encryption: 'AES-256' }
+    };
+    ```
+  </Accordion>
+</AccordionGroup>
+
+### Cards and columns for emphasizing information
+
+Example of cards and card groups:
+
+<Card title="Getting started guide">
+Complete walkthrough from installation to your first API call in under 10 minutes.
+</Card>
+
+<CardGroup cols={2}>
+  <Card title="Authentication">
+    Learn how to authenticate requests using API keys or JWT tokens.
+  </Card>
+
+  <Card title="Rate limiting">
+    Understand rate limits and best practices for high-volume usage.
+  </Card>
+</CardGroup>
+
+### API documentation components
+
+#### Parameter fields
+
+Example of parameter documentation:
+
+<ParamField path="user_id" type="string" required>
+Unique identifier for the user. Must be a valid UUID v4 format.
+</ParamField>
+
+<ParamField body="email" type="string" required>
+User's email address. Must be valid and unique within the system.
+</ParamField>
+
+<ParamField query="limit" type="integer" default="10">
+Maximum number of results to return. Range: 1-100.
+</ParamField>
+
+<ParamField header="Authorization" type="string" required>
+Bearer token for API authentication. Format: `Bearer YOUR_API_KEY`
+</ParamField>
+
+#### Response fields
+
+Example of response field documentation:
+
+<ResponseField name="user_id" type="string">
+Unique identifier assigned to the newly created user.
+</ResponseField>
+
+<ResponseField name="created_at" type="timestamp">
+ISO 8601 formatted timestamp of when the user was created.
+</ResponseField>
+
+<ResponseField name="permissions" type="array">
+List of permission strings assigned to this user.
+</ResponseField>
+
+#### Expandable nested fields
+
+Example of nested field documentation:
+
+<ResponseField name="user" type="object">
+Complete user object with all associated data.
+
+  <Expandable title="user properties">
+    <ResponseField name="profile" type="object">
+    User profile information including personal details.
+
+      <Expandable title="profile details">
+        <ResponseField name="first_name" type="string">
+        User's first name as entered during registration.
+        </ResponseField>
+
+        <ResponseField name="avatar_url" type="string">
+        URL to user's profile picture. Returns null if no avatar is set.
+        </ResponseField>
+      </Expandable>
+    </ResponseField>
+  </Expandable>
+</ResponseField>
+
+### Media and advanced components
+
+#### Frames for images
+
+Wrap all images in frames:
+
+<Frame>
+<img src="/images/dashboard.png" alt="Main dashboard showing analytics overview" />
+</Frame>
+
+<Frame caption="The analytics dashboard provides real-time insights">
+<img src="/images/analytics.png" alt="Analytics dashboard with charts and metrics" />
+</Frame>
+
+#### Videos
+
+Use the HTML video element for self-hosted video content:
+
+<video
+  controls
+  className="w-full aspect-video"
+  src="link-to-your-video.mp4"
+></video>
+
+Embed YouTube videos using iframe elements:
+
+<iframe
+  className="w-full aspect-video"
+  src="https://www.youtube.com/embed/VIDEO_ID"
+  title="YouTube video player"
+  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
+  allowFullScreen
+></iframe>
+
+#### Tooltips
+
+Example of tooltip usage:
+
+<Tooltip tip="Application Programming Interface">
+API
+</Tooltip>
+
+#### Updates
+
+Use updates for changelogs:
+
+<Update label="Version 2.1.0">
+
+## New features
+- Added bulk user import functionality
+- Improved error messages with actionable suggestions
+
+## Bug fixes
+- Fixed pagination issue with large datasets
+- Resolved authentication timeout problems
+
+</Update>
+
+## Required page structure
+
+Every documentation page must begin with YAML frontmatter:
+
+```yaml
+---
+title: "Clear, specific, keyword-rich title"
+description: "Concise description explaining page purpose and value"
+---
+```
+
+## Content quality standards
+
+### Code examples requirements
+
+- Always include complete, runnable examples that users can copy and execute
+- Show proper error handling and edge case management
+- Use realistic data instead of placeholder values
+- Include expected outputs and results for verification
+- Test all code examples thoroughly before publishing
+- Specify language and include filename when relevant
+- Add explanatory comments for complex logic
+- Never include real API keys or secrets in code examples
+
+### API documentation requirements
+
+- Document all parameters including optional ones with clear descriptions
+- Show both success and error response examples with realistic data
+- Include rate limiting information with specific limits
+- Provide authentication examples showing proper format
+- Explain all HTTP status codes and error handling
+- Cover complete request/response cycles
+
+### Accessibility requirements
+
+- Include descriptive alt text for all images and diagrams
+- Use specific, actionable link text instead of "click here"
+- Ensure proper heading hierarchy starting with H2
+- Provide keyboard navigation considerations
+- Use sufficient color contrast in examples and visuals
+- Structure content for easy scanning with headers and lists
+
+## Component selection logic
+
+- Use **Steps** for procedures and sequential instructions
+- Use **Tabs** for platform-specific content or alternative approaches
+- Use **CodeGroup** when showing the same concept in multiple programming languages
+- Use **Accordions** for progressive disclosure of information
+- Use **RequestExample/ResponseExample** specifically for API endpoint documentation
+- Use **ParamField** for API parameters, **ResponseField** for API responses
+- Use **Expandable** for nested object properties or hierarchical information
+````
diff --git a/documentation/ai-tools/windsurf.mdx b/documentation/ai-tools/windsurf.mdx
new file mode 100644
index 00000000..fce12bfd
--- /dev/null
+++ b/documentation/ai-tools/windsurf.mdx
@@ -0,0 +1,96 @@
+---
+title: "Windsurf setup"
+description: "Configure Windsurf for your documentation workflow"
+icon: "water"
+---
+
+Configure Windsurf's Cascade AI assistant to help you write and maintain documentation. This guide shows how to set up Windsurf specifically for your Mintlify documentation workflow.
+
+## Prerequisites
+
+- Windsurf editor installed
+- Access to your documentation repository
+
+## Workspace rules
+
+Create workspace rules that provide Windsurf with context about your documentation project and standards.
+
+Create `.windsurf/rules.md` in your project root:
+
+````markdown
+# Mintlify technical writing rule
+
+## Project context
+
+- This is a documentation project on the Mintlify platform
+- We use MDX files with YAML frontmatter
+- Navigation is configured in `docs.json`
+- We follow technical writing best practices
+
+## Writing standards
+
+- Use second person ("you") for instructions
+- Write in active voice and present tense
+- Start procedures with prerequisites
+- Include expected outcomes for major steps
+- Use descriptive, keyword-rich headings
+- Keep sentences concise but informative
+
+## Required page structure
+
+Every page must start with frontmatter:
+
+```yaml
+---
+title: "Clear, specific title"
+description: "Concise description for SEO and navigation"
+---
+```
+
+## Mintlify components
+
+### Callouts
+
+- `<Note>` for helpful supplementary information
+- `<Warning>` for important cautions and breaking changes
+- `<Tip>` for best practices and expert advice
+- `<Info>` for neutral contextual information
+- `<Check>` for success confirmations
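+
+For example, a callout in a page body (hypothetical content):
+
+```mdx
+<Tip>
+Run `mint dev` from the directory that contains `docs.json` to preview changes locally.
+</Tip>
+```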
+
+### Code examples
+
+- When appropriate, include complete, runnable examples
+- Use `<CodeGroup>` for multiple language examples
+- Specify language tags on all code blocks
+- Include realistic data, not placeholders
+- Use `<RequestExample>` and `<ResponseExample>` for API docs
+
+### Procedures
+
+- Use the `<Steps>` component for sequential instructions
+- Include verification steps with `<Check>` components when relevant
+- Break complex procedures into smaller steps
+
+### Content organization
+
+- Use `<Tabs>` for platform-specific content
+- Use `<Accordion>` for progressive disclosure
+- Use `<Card>` and `<CardGroup>` for highlighting content
+- Wrap images in `<Frame>` components with descriptive alt text
+
+## API documentation requirements
+
+- Document all parameters with `<ParamField>`
+- Show response structure with `<ResponseField>`
+- Include both success and error examples
+- Use `<Expandable>` for nested object properties
+- Always include authentication examples
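+
+For example, a documented query parameter (hypothetical field):
+
+```mdx
+<ParamField query="limit" type="integer" default="10">
+Maximum number of results to return per page. Range: 1-100.
+</ParamField>
+```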
+
+## Quality standards
+
+- Test all code examples before publishing
+- Use relative paths for internal links
+- Include alt text for all images
+- Ensure proper heading hierarchy (start with h2)
+- Check existing patterns for consistency
+````
diff --git a/documentation/development.mdx b/documentation/development.mdx
new file mode 100644
index 00000000..59218287
--- /dev/null
+++ b/documentation/development.mdx
@@ -0,0 +1,223 @@
+---
+title: 'Development'
+description: 'Set up your development environment for Flo AI'
+---
+
+<Note>
+  **Prerequisites**:
+  - Python 3.10 or higher
+  - pip or poetry package manager
+  - API keys for your chosen LLM providers
+</Note>
+
+Follow these steps to set up your development environment for Flo AI.
+
+<Steps>
+
+<Step title="Install Flo AI">
+
+Install Flo AI using pip or poetry:
+
+```bash
+# Using pip
+pip install flo-ai
+
+# Using poetry
+poetry add flo-ai
+```
+
+</Step>
+
+<Step title="Set up API keys">
+
+Configure your API keys for LLM providers:
+
+```bash
+# OpenAI
+export OPENAI_API_KEY="your-openai-key"
+
+# Anthropic
+export ANTHROPIC_API_KEY="your-anthropic-key"
+
+# Google Gemini
+export GOOGLE_API_KEY="your-google-key"
+
+# For Google Vertex AI
+export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
+export GOOGLE_CLOUD_PROJECT="your-project-id"
+```
+
+</Step>
+
+<Step title="Verify installation">
+
+Test your installation with a simple agent:
+
+```python
+import asyncio
+from flo_ai.builder.agent_builder import AgentBuilder
+from flo_ai.llm import OpenAI
+
+async def test_installation():
+    agent = (
+        AgentBuilder()
+        .with_name('Test Agent')
+        .with_prompt('You are a helpful assistant.')
+        .with_llm(OpenAI(model='gpt-4o-mini'))
+        .build()
+    )
+
+    response = await agent.run('Hello, world!')
+    print(f'Agent response: {response}')
+
+asyncio.run(test_installation())
+```
+
+</Step>
+
+</Steps>
+
+## Development Tools
+
+### Flo AI Studio
+
+The Flo AI Studio is a visual workflow designer for creating AI agent workflows:
+
+<Steps>
+
+<Step title="Install dependencies">
+
+```bash
+cd studio
+pnpm install
+```
+
+</Step>
+
+<Step title="Start the studio">
+
+```bash
+pnpm dev
+```
+
+The studio will be available at `http://localhost:5173`.
+
+</Step>
+
+</Steps>
+
+### Testing
+
+Run the test suite to ensure everything is working correctly:
+
+```bash
+# Run all tests
+pytest
+
+# Run specific test files
+pytest tests/unit-tests/test_agent.py
+
+# Run with coverage
+pytest --cov=flo_ai
+```
+
+## Project Structure
+
+Understanding the Flo AI project structure:
+
+```
+flo_ai/
+├── flo_ai/ # Core package
+│ ├── builder/ # Agent builder components
+│ ├── llm/ # LLM provider integrations
+│ ├── tool/ # Tool framework
+│ ├── arium/ # Workflow orchestration
+│ ├── models/ # Data models
+│ └── telemetry/ # Observability
+├── examples/ # Example implementations
+├── tests/ # Test suite
+└── studio/ # Visual workflow designer
+```
+
+## Contributing
+
+<AccordionGroup>
+
+<Accordion title="Development setup">
+  1. Fork the repository
+  2. Clone your fork: `git clone https://github.com/your-username/flo-ai.git`
+  3. Install in development mode: `pip install -e .`
+  4. Install development dependencies: `pip install -e ".[dev]"`
+  5. Run tests: `pytest`
+</Accordion>
+
+<Accordion title="Code quality">
+  Flo AI uses pre-commit hooks for code formatting:
+
+  ```bash
+  # Install pre-commit
+  pip install pre-commit
+
+  # Install hooks
+  pre-commit install
+
+  # Run on all files
+  pre-commit run --all-files
+  ```
+</Accordion>
+
+<Accordion title="Submitting changes">
+  1. Create a feature branch: `git checkout -b feature/your-feature`
+  2. Make your changes and add tests
+  3. Run tests: `pytest`
+  4. Commit with conventional commits: `git commit -m "feat: add new feature"`
+  5. Push and create a pull request
+</Accordion>
+
+</AccordionGroup>
+
+## Troubleshooting
+
+<AccordionGroup>
+
+<Accordion title="Import errors">
+  If you encounter import errors, ensure you're using Python 3.10+ and have installed all dependencies:
+
+  ```bash
+  pip install -r requirements.txt
+  # or
+  poetry install
+  ```
+</Accordion>
+
+<Accordion title="API key issues">
+  Verify your API keys are correctly set:
+
+  ```bash
+  echo $OPENAI_API_KEY
+  echo $ANTHROPIC_API_KEY
+  ```
+</Accordion>
+
+<Accordion title="Studio won't load">
+  If the studio doesn't load, try:
+
+  ```bash
+  cd studio
+  rm -rf node_modules
+  pnpm install
+  pnpm dev
+  ```
+</Accordion>
+
+</AccordionGroup>
+
+## Need Help?
+
+- Check out our [examples](https://github.com/rootflo/flo-ai/tree/main/flo_ai/examples)
+- Join our [community discussions](https://github.com/rootflo/flo-ai/discussions)
+- Read the [contributing guide](https://github.com/rootflo/flo-ai/blob/main/CONTRIBUTING.md)
diff --git a/documentation/docs.json b/documentation/docs.json
new file mode 100644
index 00000000..a705f779
--- /dev/null
+++ b/documentation/docs.json
@@ -0,0 +1,98 @@
+{
+ "$schema": "https://mintlify.com/docs.json",
+ "theme": "mint",
+ "name": "Flo AI Documentation",
+ "colors": {
+ "primary": "#3B82F6",
+ "light": "#60A5FA",
+ "dark": "#1E40AF"
+ },
+ "favicon": "/favicon.png",
+ "navigation": {
+ "tabs": [
+ {
+ "tab": "Guides",
+ "groups": [
+ {
+ "group": "Getting started",
+ "pages": [
+ "index",
+ "quickstart",
+ "development"
+ ]
+ },
+ {
+ "group": "Core Features",
+ "pages": [
+ "essentials/agents",
+ "essentials/arium",
+ "essentials/studio",
+ "essentials/yaml-agents",
+ "essentials/code"
+ ]
+ },
+ {
+ "group": "Advanced",
+ "pages": [
+ "essentials/llm-providers",
+ "essentials/tools",
+ "essentials/yaml-workflows",
+ "essentials/routing",
+ "essentials/telemetry"
+ ]
+ }
+ ]
+ }
+ ],
+ "global": {
+ "anchors": [
+ {
+ "anchor": "GitHub",
+ "href": "https://github.com/rootflo/flo-ai",
+ "icon": "github"
+ },
+ {
+ "anchor": "Community",
+ "href": "https://github.com/rootflo/flo-ai/discussions",
+ "icon": "comments"
+ }
+ ]
+ }
+ },
+ "logo": {
+ "light": "/logo/light.png",
+ "dark": "/logo/dark.png"
+ },
+ "navbar": {
+ "links": [
+ {
+ "label": "Examples",
+ "href": "https://github.com/rootflo/flo-ai/tree/main/flo_ai/examples"
+ }
+ ],
+ "primary": {
+ "type": "button",
+ "label": "Get Started",
+ "href": "https://github.com/rootflo/flo-ai"
+ }
+ },
+ "contextual": {
+ "options": [
+ "copy",
+ "view",
+ "chatgpt",
+ "claude",
+ "perplexity",
+ "mcp",
+ "cursor",
+ "vscode"
+ ]
+ },
+ "footer": {
+ "socials": {
+ "x": "https://x.com/rootflo",
+ "github": "https://github.com/rootflo/flo-ai",
+ "linkedin": "https://linkedin.com/company/rootflo"
+ }
+ }
+}
diff --git a/documentation/essentials/agents.mdx b/documentation/essentials/agents.mdx
new file mode 100644
index 00000000..1cdd79c7
--- /dev/null
+++ b/documentation/essentials/agents.mdx
@@ -0,0 +1,301 @@
+---
+title: 'Agents'
+description: 'Learn how to create and configure AI agents with Flo AI'
+icon: 'robot'
+---
+
+## Creating Agents
+
+Agents are the core building blocks of Flo AI. They represent AI-powered entities that can process inputs, use tools, and generate responses.
+
+### Basic Agent Creation
+
+Create a simple conversational agent:
+
+```python
+from flo_ai.builder.agent_builder import AgentBuilder
+from flo_ai.llm import OpenAI
+
+agent = (
+ AgentBuilder()
+ .with_name('Customer Support')
+ .with_prompt('You are a helpful customer support agent.')
+ .with_llm(OpenAI(model='gpt-4o-mini'))
+ .build()
+)
+
+response = await agent.run('How can I reset my password?')
+```
+
+### Agent Configuration
+
+Configure agents with various options:
+
+```python
+agent = (
+ AgentBuilder()
+ .with_name('Data Analyst')
+ .with_prompt('You are an expert data analyst.')
+ .with_llm(OpenAI(model='gpt-4o', temperature=0.3))
+ .with_retries(3) # Retry on failure
+ .with_max_tokens(1000)
+ .build()
+)
+```
+
+## Agent Types
+
+### Conversational Agents
+
+Basic agents for chat and Q&A:
+
+```python
+conversational_agent = (
+ AgentBuilder()
+ .with_name('Chat Assistant')
+ .with_prompt('You are a friendly conversational assistant.')
+ .with_llm(OpenAI(model='gpt-4o-mini'))
+ .build()
+)
+```
+
+### Tool-Using Agents
+
+Agents that can use external tools:
+
+```python
+from flo_ai.tool import flo_tool
+
+@flo_tool(description="Get weather information")
+async def get_weather(city: str) -> str:
+ return f"Weather in {city}: sunny, 25°C"
+
+tool_agent = (
+ AgentBuilder()
+ .with_name('Weather Assistant')
+ .with_prompt('You help users get weather information.')
+ .with_llm(OpenAI(model='gpt-4o-mini'))
+ .with_tools([get_weather.tool])
+ .build()
+)
+```
+
+### Structured Output Agents
+
+Agents that return structured data:
+
+```python
+from pydantic import BaseModel, Field
+
+class AnalysisResult(BaseModel):
+ summary: str = Field(description="Executive summary")
+ key_findings: list = Field(description="List of key findings")
+ recommendations: list = Field(description="Actionable recommendations")
+
+structured_agent = (
+ AgentBuilder()
+ .with_name('Business Analyst')
+ .with_prompt('Analyze business data and provide insights.')
+ .with_llm(OpenAI(model='gpt-4o'))
+ .with_output_schema(AnalysisResult)
+ .build()
+)
+```
+
+## Agent Capabilities
+
+### Variable Resolution
+
+Use dynamic variables in agent prompts:
+
+```python
+agent = (
+ AgentBuilder()
+ .with_name('Personalized Assistant')
+    .with_prompt('Hello <user_name>! You are a <user_role> at <company>.')
+ .with_llm(OpenAI(model='gpt-4o-mini'))
+ .build()
+)
+
+# Use variables at runtime
+variables = {
+ 'user_name': 'John',
+ 'user_role': 'Data Scientist',
+ 'company': 'TechCorp'
+}
+
+response = await agent.run(
+ 'What should I focus on today?',
+ variables=variables
+)
+```
+
+### Document Processing
+
+Process PDF and text documents:
+
+```python
+from flo_ai.models.document import DocumentMessage, DocumentType
+
+# Create document message
+document = DocumentMessage(
+ document_type=DocumentType.PDF,
+ document_file_path='report.pdf'
+)
+
+# Process with agent
+response = await agent.run([document])
+```
+
+### Error Handling
+
+Built-in retry mechanisms and error recovery:
+
+```python
+robust_agent = (
+ AgentBuilder()
+ .with_name('Reliable Agent')
+ .with_prompt('You are a reliable assistant.')
+ .with_llm(OpenAI(model='gpt-4o'))
+ .with_retries(3) # Retry up to 3 times
+ .with_timeout(30) # 30 second timeout
+ .build()
+)
+```
+
+## Best Practices
+
+### Prompt Engineering
+
+- **Be specific**: Clearly define the agent's role and capabilities
+- **Use examples**: Provide examples of expected inputs and outputs
+- **Set boundaries**: Define what the agent should and shouldn't do
+
+```python
+well_prompted_agent = (
+ AgentBuilder()
+ .with_name('Code Reviewer')
+ .with_prompt('''
+ You are an expert code reviewer. Your role is to:
+ 1. Review code for bugs, security issues, and best practices
+ 2. Suggest improvements and optimizations
+ 3. Provide constructive feedback
+
+ Always be specific about issues and provide actionable suggestions.
+ Focus on code quality, performance, and maintainability.
+ ''')
+ .with_llm(OpenAI(model='gpt-4o'))
+ .build()
+)
+```
+
+### Model Selection
+
+Choose the right model for your use case:
+
+- **GPT-4o**: Best for complex reasoning and analysis
+- **GPT-4o-mini**: Good balance of performance and cost
+- **Claude-3.5-Sonnet**: Excellent for creative tasks
+- **Gemini**: Good for multilingual applications
+
+### Performance Optimization
+
+```python
+# Use streaming for long responses
+streaming_agent = (
+ AgentBuilder()
+ .with_name('Content Generator')
+ .with_prompt('Generate detailed content.')
+ .with_llm(OpenAI(model='gpt-4o', stream=True))
+ .build()
+)
+
+# Use caching for repeated queries
+cached_agent = (
+ AgentBuilder()
+ .with_name('Cached Agent')
+ .with_prompt('You provide consistent responses.')
+ .with_llm(OpenAI(model='gpt-4o-mini'))
+ .with_cache(ttl=3600) # Cache for 1 hour
+ .build()
+)
+```
+
+## Agent Lifecycle
+
+### Initialization
+
+```python
+# Create agent
+agent = AgentBuilder().with_name('My Agent').build()
+
+# Initialize with configuration
+await agent.initialize()
+```
+
+### Execution
+
+```python
+# Simple execution
+response = await agent.run('Hello!')
+
+# With context
+response = await agent.run('Hello!', context={'user_id': '123'})
+
+# With variables
+response = await agent.run('Hello!', variables={'name': 'John'})
+```
+
+### Cleanup
+
+```python
+# Clean up resources
+await agent.cleanup()
+```
+
+## Advanced Features
+
+### Custom Memory
+
+```python
+from flo_ai.arium.memory import BaseMemory
+
+class CustomMemory(BaseMemory):
+ def __init__(self):
+ self.messages = []
+
+ def add(self, message):
+ self.messages.append(message)
+
+ def get(self):
+ return self.messages
+
+agent = (
+ AgentBuilder()
+ .with_name('Memory Agent')
+ .with_prompt('You remember previous conversations.')
+ .with_llm(OpenAI(model='gpt-4o'))
+ .with_memory(CustomMemory())
+ .build()
+)
+```
+
+### Custom Event Handlers
+
+```python
+async def on_agent_start(agent, input_data):
+ print(f"Agent {agent.name} started processing")
+
+async def on_agent_complete(agent, result):
+ print(f"Agent {agent.name} completed with result: {result}")
+
+agent = (
+ AgentBuilder()
+ .with_name('Event Agent')
+ .with_prompt('You are an event-driven agent.')
+ .with_llm(OpenAI(model='gpt-4o'))
+ .with_event_handler('start', on_agent_start)
+ .with_event_handler('complete', on_agent_complete)
+ .build()
+)
+```
diff --git a/documentation/essentials/arium.mdx b/documentation/essentials/arium.mdx
new file mode 100644
index 00000000..44388d06
--- /dev/null
+++ b/documentation/essentials/arium.mdx
@@ -0,0 +1,377 @@
+---
+title: 'Arium Workflows'
+description: 'Create complex multi-agent workflows with Arium orchestration'
+icon: 'sitemap'
+---
+
+## What is Arium?
+
+Arium is Flo AI's powerful workflow orchestration engine for creating complex multi-agent workflows. It allows you to chain agents together, implement conditional routing, and build sophisticated AI systems.
+
+## Basic Workflow Creation
+
+### Simple Agent Chain
+
+Create a linear workflow with multiple agents:
+
+```python
+from flo_ai.arium import AriumBuilder
+from flo_ai.models.agent import Agent
+from flo_ai.llm import OpenAI
+
+async def simple_chain():
+ llm = OpenAI(model='gpt-4o-mini')
+
+ # Create agents
+ analyst = Agent(
+ name='content_analyst',
+ system_prompt='Analyze the input and extract key insights.',
+ llm=llm
+ )
+
+ summarizer = Agent(
+ name='summarizer',
+ system_prompt='Create a concise summary based on the analysis.',
+ llm=llm
+ )
+
+ # Build and run workflow
+ result = await (
+ AriumBuilder()
+ .add_agents([analyst, summarizer])
+ .start_with(analyst)
+ .connect(analyst, summarizer)
+ .end_with(summarizer)
+ .build_and_run(["Analyze this complex business report..."])
+ )
+
+ return result
+```
+
+### Conditional Routing
+
+Route to different agents based on conditions:
+
+```python
+from flo_ai.arium.memory import BaseMemory
+
+def route_by_type(memory: BaseMemory) -> str:
+ """Route based on classification result"""
+ messages = memory.get()
+ last_message = str(messages[-1]) if messages else ""
+
+ if "technical" in last_message.lower():
+ return "tech_specialist"
+ else:
+ return "business_specialist"
+
+# Build workflow with conditional routing
+result = await (
+ AriumBuilder()
+ .add_agents([classifier, tech_specialist, business_specialist, final_agent])
+ .start_with(classifier)
+ .add_edge(classifier, [tech_specialist, business_specialist], route_by_type)
+ .connect(tech_specialist, final_agent)
+ .connect(business_specialist, final_agent)
+ .end_with(final_agent)
+ .build_and_run(["How can we optimize our database performance?"])
+)
+```
+
+## YAML-Based Workflows
+
+Define entire workflows in YAML for easy management:
+
+```yaml
+metadata:
+ name: "content-analysis-workflow"
+ version: "1.0.0"
+ description: "Multi-agent content analysis pipeline"
+
+arium:
+ agents:
+ - name: "analyzer"
+ role: "Content Analyst"
+ job: "Analyze the input content and extract key insights."
+ model:
+ provider: "openai"
+ name: "gpt-4o-mini"
+
+ - name: "summarizer"
+ role: "Content Summarizer"
+ job: "Create a concise summary based on the analysis."
+ model:
+ provider: "anthropic"
+ name: "claude-3-5-sonnet-20240620"
+
+ workflow:
+ start: "analyzer"
+ edges:
+ - from: "analyzer"
+ to: ["summarizer"]
+ end: ["summarizer"]
+```
+
+```python
+# Run YAML workflow
+result = await (
+ AriumBuilder()
+ .from_yaml(yaml_file='workflow.yaml')
+ .build_and_run(["Analyze this quarterly business report..."])
+)
+```
+
+## Advanced Routing
+
+### LLM-Powered Routers
+
+Use LLMs for intelligent routing decisions:
+
+```yaml
+routers:
+ - name: "content_type_router"
+ type: "smart" # Uses LLM for intelligent routing
+ routing_options:
+ technical_writer: "Technical content, documentation, tutorials"
+ creative_writer: "Creative writing, storytelling, fiction"
+ marketing_writer: "Marketing copy, sales content, campaigns"
+ model:
+ provider: "openai"
+ name: "gpt-4o-mini"
+```
+
+### ReflectionRouter
+
+For iterative feedback patterns such as A→B→A:
+
+```yaml
+routers:
+ - name: "reflection_router"
+ type: "reflection"
+ flow_pattern: [writer, critic, writer] # A → B → A pattern
+ model:
+ provider: "openai"
+ name: "gpt-4o-mini"
+```
+
+### PlanExecuteRouter
+
+For Cursor-style plan-and-execute workflows:
+
+```yaml
+routers:
+ - name: "plan_router"
+ type: "plan_execute"
+ agents:
+ planner: "Creates detailed execution plans"
+ developer: "Implements features according to plan"
+ tester: "Tests implementations and validates functionality"
+ reviewer: "Reviews and approves completed work"
+ settings:
+ planner_agent: planner
+ executor_agent: developer
+ reviewer_agent: reviewer
+```
+
+## Workflow Patterns
+
+### Sequential Processing
+
+```python
+# A → B → C
+workflow = (
+ AriumBuilder()
+ .add_agents([agent_a, agent_b, agent_c])
+ .start_with(agent_a)
+ .connect(agent_a, agent_b)
+ .connect(agent_b, agent_c)
+ .end_with(agent_c)
+)
+```
+
+### Parallel Processing
+
+```python
+# A → [B, C] → D
+workflow = (
+ AriumBuilder()
+ .add_agents([agent_a, agent_b, agent_c, agent_d])
+ .start_with(agent_a)
+ .connect(agent_a, [agent_b, agent_c])
+ .connect(agent_b, agent_d)
+ .connect(agent_c, agent_d)
+ .end_with(agent_d)
+)
+```
+
+### Fan-out/Fan-in
+
+```python
+# A → [B, C, D] → E
+workflow = (
+ AriumBuilder()
+ .add_agents([agent_a, agent_b, agent_c, agent_d, agent_e])
+ .start_with(agent_a)
+ .connect(agent_a, [agent_b, agent_c, agent_d])
+ .connect(agent_b, agent_e)
+ .connect(agent_c, agent_e)
+ .connect(agent_d, agent_e)
+ .end_with(agent_e)
+)
+```
+
+## Memory Management
+
+### Shared Memory
+
+```python
+from flo_ai.arium.memory import MessageMemory
+
+# Create shared memory
+shared_memory = MessageMemory()
+
+workflow = (
+ AriumBuilder()
+ .add_agents([agent_a, agent_b])
+ .with_memory(shared_memory)
+ .start_with(agent_a)
+ .connect(agent_a, agent_b)
+ .end_with(agent_b)
+)
+```
+
+### Custom Memory
+
+```python
+from flo_ai.arium.memory import BaseMemory
+
+class CustomMemory(BaseMemory):
+ def __init__(self):
+ self.data = {}
+
+ def add(self, key, value):
+ self.data[key] = value
+
+    def get(self, key=None):
+        # Routers call memory.get() with no arguments, so make key optional
+        if key is None:
+            return list(self.data.values())
+        return self.data.get(key)
+
+custom_memory = CustomMemory()
+```
+
+## Event Handling
+
+### Workflow Events
+
+```python
+async def on_workflow_start(workflow, input_data):
+ print(f"Workflow started with input: {input_data}")
+
+async def on_workflow_complete(workflow, result):
+ print(f"Workflow completed with result: {result}")
+
+async def on_agent_start(agent, input_data):
+ print(f"Agent {agent.name} started")
+
+workflow = (
+ AriumBuilder()
+ .add_agents([agent_a, agent_b])
+ .with_event_handler('workflow_start', on_workflow_start)
+ .with_event_handler('workflow_complete', on_workflow_complete)
+ .with_event_handler('agent_start', on_agent_start)
+ .start_with(agent_a)
+ .connect(agent_a, agent_b)
+ .end_with(agent_b)
+)
+```
+
+## Error Handling
+
+### Retry Logic
+
+```python
+workflow = (
+ AriumBuilder()
+ .add_agents([agent_a, agent_b])
+ .with_retries(3) # Retry failed agents up to 3 times
+ .with_timeout(60) # 60 second timeout
+ .start_with(agent_a)
+ .connect(agent_a, agent_b)
+ .end_with(agent_b)
+)
+```
+
+### Error Recovery
+
+```python
+async def error_handler(agent, error):
+ print(f"Agent {agent.name} failed: {error}")
+ # Implement custom error recovery logic
+ return "fallback_response"
+
+workflow = (
+ AriumBuilder()
+ .add_agents([agent_a, agent_b])
+ .with_error_handler(error_handler)
+ .start_with(agent_a)
+ .connect(agent_a, agent_b)
+ .end_with(agent_b)
+)
+```
+
+## Performance Optimization
+
+### Parallel Execution
+
+```python
+# Execute multiple agents in parallel
+workflow = (
+ AriumBuilder()
+ .add_agents([agent_a, agent_b, agent_c])
+ .start_with(agent_a)
+ .connect_parallel(agent_a, [agent_b, agent_c])
+ .end_with([agent_b, agent_c])
+)
+```
+
+### Caching
+
+```python
+workflow = (
+ AriumBuilder()
+ .add_agents([agent_a, agent_b])
+ .with_cache(ttl=3600) # Cache results for 1 hour
+ .start_with(agent_a)
+ .connect(agent_a, agent_b)
+ .end_with(agent_b)
+)
+```
+
+## Best Practices
+
+### Workflow Design
+
+1. **Keep it simple**: Start with linear workflows before adding complexity
+2. **Use meaningful names**: Name agents and workflows descriptively
+3. **Handle errors**: Always implement error handling and recovery
+4. **Test thoroughly**: Test workflows with various inputs
+
+### Performance Tips
+
+1. **Use appropriate models**: Choose models based on task complexity
+2. **Implement caching**: Cache expensive operations
+3. **Optimize routing**: Use efficient routing logic
+4. **Monitor performance**: Use telemetry to track workflow performance
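A tiny timing wrapper is often enough to start monitoring workflow performance. This is a sketch, not Flo AI API: the `builder` argument is assumed to be any AriumBuilder-style object exposing an async `build_and_run(inputs)` method.

```python
import time

async def run_with_timing(builder, inputs):
    """Time a workflow run for basic performance monitoring.

    `builder` is assumed to expose an async `build_and_run(inputs)` method.
    """
    start = time.perf_counter()
    result = await builder.build_and_run(inputs)
    elapsed = time.perf_counter() - start
    print(f"Workflow completed in {elapsed:.2f}s")
    return result
```

From here you can swap the `print` for your telemetry backend of choice.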
+
+### Debugging
+
+```python
+# Enable debug mode
+workflow = (
+ AriumBuilder()
+ .add_agents([agent_a, agent_b])
+ .with_debug(True) # Enable debug logging
+ .start_with(agent_a)
+ .connect(agent_a, agent_b)
+ .end_with(agent_b)
+)
+```
diff --git a/documentation/essentials/code.mdx b/documentation/essentials/code.mdx
new file mode 100644
index 00000000..503c253c
--- /dev/null
+++ b/documentation/essentials/code.mdx
@@ -0,0 +1,182 @@
+---
+title: 'Code Examples'
+description: 'Flo AI code examples and syntax highlighting'
+icon: 'code'
+---
+
+## Basic Agent Creation
+
+Here's how to create a simple conversational agent with Flo AI:
+
+```python Simple Agent
+import asyncio
+from flo_ai.builder.agent_builder import AgentBuilder
+from flo_ai.llm import OpenAI
+
+async def main():
+ agent = (
+ AgentBuilder()
+ .with_name('Math Tutor')
+ .with_prompt('You are a helpful math tutor.')
+ .with_llm(OpenAI(model='gpt-4o-mini'))
+ .build()
+ )
+
+ response = await agent.run('What is the formula for the area of a circle?')
+ print(f'Response: {response}')
+
+asyncio.run(main())
+```
+
+## Tool Integration
+
+Create agents that can use custom tools:
+
+```python Tool-Using Agent
+import asyncio
+from flo_ai.builder.agent_builder import AgentBuilder
+from flo_ai.tool import flo_tool
+from flo_ai.llm import Anthropic
+
+@flo_tool(description="Perform mathematical calculations")
+async def calculate(operation: str, x: float, y: float) -> float:
+ """Calculate mathematical operations between two numbers."""
+ operations = {
+ 'add': lambda: x + y,
+ 'subtract': lambda: x - y,
+ 'multiply': lambda: x * y,
+ 'divide': lambda: x / y if y != 0 else 0,
+ }
+ return operations.get(operation, lambda: 0)()
+
+async def main():
+ agent = (
+ AgentBuilder()
+ .with_name('Calculator Assistant')
+ .with_prompt('You are a math assistant that can perform calculations.')
+ .with_llm(Anthropic(model='claude-3-5-sonnet-20240620'))
+ .with_tools([calculate.tool])
+ .build()
+ )
+
+ response = await agent.run('Calculate 5 plus 3')
+ print(f'Response: {response}')
+
+asyncio.run(main())
+```
+
+## Structured Outputs
+
+Use Pydantic models for structured agent responses:
+
+```python Structured Output
+import asyncio
+from pydantic import BaseModel, Field
+from flo_ai.builder.agent_builder import AgentBuilder
+from flo_ai.llm import OpenAI
+
+class MathSolution(BaseModel):
+ solution: str = Field(description="Step-by-step solution")
+ answer: str = Field(description="Final answer")
+ confidence: float = Field(description="Confidence level (0-1)")
+
+async def main():
+ agent = (
+ AgentBuilder()
+ .with_name('Math Solver')
+ .with_llm(OpenAI(model='gpt-4o'))
+ .with_output_schema(MathSolution)
+ .build()
+ )
+
+ response = await agent.run('Solve: 2x + 5 = 15')
+ print(f'Structured Response: {response}')
+
+asyncio.run(main())
+```
+
+## Multi-Agent Workflows
+
+Create complex workflows with multiple agents:
+
+```python Multi-Agent Workflow
+import asyncio
+from flo_ai.arium import AriumBuilder
+from flo_ai.models.agent import Agent
+from flo_ai.llm import OpenAI
+
+async def content_analysis_workflow():
+ llm = OpenAI(model='gpt-4o-mini')
+
+ # Create specialized agents
+ analyst = Agent(
+ name='content_analyst',
+ system_prompt='Analyze the input and extract key insights.',
+ llm=llm
+ )
+
+ summarizer = Agent(
+ name='summarizer',
+ system_prompt='Create a concise summary based on the analysis.',
+ llm=llm
+ )
+
+ # Build and run workflow
+ result = await (
+ AriumBuilder()
+ .add_agents([analyst, summarizer])
+ .start_with(analyst)
+ .connect(analyst, summarizer)
+ .end_with(summarizer)
+ .build_and_run(["Analyze this complex business report..."])
+ )
+
+ return result
+
+asyncio.run(content_analysis_workflow())
+```
+
+## YAML Configuration
+
+Define entire workflows in YAML:
+
+```yaml workflow.yaml
+metadata:
+ name: "content-analysis-workflow"
+ version: "1.0.0"
+ description: "Multi-agent content analysis pipeline"
+
+arium:
+ agents:
+ - name: "analyzer"
+ role: "Content Analyst"
+ job: "Analyze the input content and extract key insights."
+ model:
+ provider: "openai"
+ name: "gpt-4o-mini"
+
+ - name: "summarizer"
+ role: "Content Summarizer"
+ job: "Create a concise summary based on the analysis."
+ model:
+ provider: "anthropic"
+ name: "claude-3-5-sonnet-20240620"
+
+ workflow:
+ start: "analyzer"
+ edges:
+ - from: "analyzer"
+ to: ["summarizer"]
+ end: ["summarizer"]
+```
+
+```python Run YAML Workflow
+from flo_ai.arium import AriumBuilder
+
+# Run YAML workflow
+result = await (
+ AriumBuilder()
+ .from_yaml(yaml_file='workflow.yaml')
+ .build_and_run(["Analyze this quarterly business report..."])
+)
+```
diff --git a/documentation/essentials/images.mdx b/documentation/essentials/images.mdx
new file mode 100644
index 00000000..1144eb2c
--- /dev/null
+++ b/documentation/essentials/images.mdx
@@ -0,0 +1,59 @@
+---
+title: 'Images and embeds'
+description: 'Add image, video, and other HTML elements'
+icon: 'image'
+---
+
+## Image
+
+### Using Markdown
+
+The [markdown syntax](https://www.markdownguide.org/basic-syntax/#images) lets you add images using the following code
+
+```md
+![Alt text describing the image](/path/to/image.png)
+```
+
+Note that the image file size must be less than 5MB. Otherwise, we recommend hosting on a service like [Cloudinary](https://cloudinary.com/) or [S3](https://aws.amazon.com/s3/). You can then embed the image using the hosted URL.
+
+### Using embeds
+
+For more customization, you can also use [embeds](/writing-content/embed) to add images:
+
+```html
+<img height="200" src="/path/to/image.png" />
+```
+
+## Embeds and HTML elements
+
+Mintlify supports [HTML tags in Markdown](https://www.markdownguide.org/basic-syntax/#html). This is helpful if you prefer HTML tags to Markdown syntax, and lets you create documentation with infinite flexibility.
+
+### iFrames
+
+Loads another HTML page within the document. Most commonly used for embedding videos.
+
+```html
+<iframe width="560" height="315" src="https://www.youtube.com/embed/VIDEO_ID" title="Embedded video" frameborder="0" allowfullscreen></iframe>
+```
diff --git a/documentation/essentials/llm-providers.mdx b/documentation/essentials/llm-providers.mdx
new file mode 100644
index 00000000..f29eac2b
--- /dev/null
+++ b/documentation/essentials/llm-providers.mdx
@@ -0,0 +1,439 @@
+---
+title: 'LLM Providers'
+description: 'Supported language model providers and configuration'
+icon: 'brain'
+---
+
+## Supported Providers
+
+Flo AI supports multiple LLM providers with consistent interfaces, allowing you to easily switch between different models and providers.
+
+## OpenAI
+
+### Basic Configuration
+
+```python
+from flo_ai.llm import OpenAI
+
+# Basic OpenAI configuration
+llm = OpenAI(
+ model='gpt-4o',
+ temperature=0.7,
+ max_tokens=1000
+)
+
+# With additional parameters
+llm = OpenAI(
+ model='gpt-4o-mini',
+ temperature=0.3,
+ max_tokens=500,
+ timeout=30,
+ api_key='your-api-key' # Optional, can use environment variable
+)
+```
+
+### Available Models
+
+```python
+# GPT-4 models
+gpt4 = OpenAI(model='gpt-4o')
+gpt4_mini = OpenAI(model='gpt-4o-mini')
+
+# GPT-3.5 models
+gpt35 = OpenAI(model='gpt-3.5-turbo')
+gpt35_16k = OpenAI(model='gpt-3.5-turbo-16k')
+```
+
+### Streaming Support
+
+```python
+# Enable streaming for real-time responses
+streaming_llm = OpenAI(
+ model='gpt-4o',
+ stream=True
+)
+
+# Use with agent
+agent = (
+ AgentBuilder()
+ .with_name('Streaming Agent')
+ .with_prompt('You are a helpful assistant.')
+ .with_llm(streaming_llm)
+ .build()
+)
+```
+
+## Anthropic Claude
+
+### Basic Configuration
+
+```python
+from flo_ai.llm import Anthropic
+
+# Basic Claude configuration
+claude = Anthropic(
+ model='claude-3-5-sonnet-20240620',
+ temperature=0.7,
+ max_tokens=1000
+)
+
+# With additional parameters
+claude = Anthropic(
+ model='claude-3-5-haiku-20241022',
+ temperature=0.3,
+ max_tokens=500,
+ timeout=30
+)
+```
+
+### Available Models
+
+```python
+# Claude 3.5 models
+claude_sonnet = Anthropic(model='claude-3-5-sonnet-20240620')
+claude_haiku = Anthropic(model='claude-3-5-haiku-20241022')
+
+# Claude 3 models
+claude_3_sonnet = Anthropic(model='claude-3-sonnet-20240229')
+claude_3_haiku = Anthropic(model='claude-3-haiku-20240307')
+```
+
+## Google Gemini
+
+### Basic Configuration
+
+```python
+from flo_ai.llm import Gemini
+
+# Basic Gemini configuration
+gemini = Gemini(
+ model='gemini-2.5-flash',
+ temperature=0.7,
+ max_tokens=1000
+)
+
+# With additional parameters
+gemini = Gemini(
+ model='gemini-2.5-pro',
+ temperature=0.3,
+ max_tokens=500,
+ timeout=30
+)
+```
+
+### Available Models
+
+```python
+# Gemini 2.5 models
+gemini_flash = Gemini(model='gemini-2.5-flash')
+gemini_pro = Gemini(model='gemini-2.5-pro')
+
+# Gemini 1.5 models
+gemini_15_flash = Gemini(model='gemini-1.5-flash')
+gemini_15_pro = Gemini(model='gemini-1.5-pro')
+```
+
+## Google Vertex AI
+
+### Configuration
+
+```python
+from flo_ai.llm import VertexAI
+
+# Vertex AI configuration
+vertex_llm = VertexAI(
+ model='gemini-2.5-flash',
+ project='your-project-id',
+ location='us-central1',
+ temperature=0.7
+)
+
+# With service account
+vertex_llm = VertexAI(
+ model='gemini-2.5-pro',
+ project='your-project-id',
+ credentials_path='path/to/service-account.json',
+ location='us-central1'
+)
+```
+
+## Ollama (Local)
+
+### Configuration
+
+```python
+from flo_ai.llm import OllamaLLM
+
+# Local Ollama configuration
+ollama = OllamaLLM(
+ model='llama2',
+ base_url='http://localhost:11434',
+ temperature=0.7
+)
+
+# With custom parameters
+ollama = OllamaLLM(
+ model='codellama',
+ base_url='http://localhost:11434',
+ temperature=0.3,
+ timeout=60
+)
+```
+
+### Popular Local Models
+
+```python
+# Code generation
+codellama = OllamaLLM(model='codellama')
+
+# General purpose
+llama2 = OllamaLLM(model='llama2')
+llama3 = OllamaLLM(model='llama3')
+
+# Specialized models
+mistral = OllamaLLM(model='mistral')
+phi = OllamaLLM(model='phi')
+```
+
+## Provider Comparison
+
+| Provider | Best For | Cost | Speed | Quality |
+|----------|----------|------|-------|---------|
+| GPT-4o | Complex reasoning | High | Medium | Excellent |
+| GPT-4o-mini | Balanced tasks | Medium | Fast | Good |
+| Claude-3.5-Sonnet | Creative writing | High | Medium | Excellent |
+| Claude-3.5-Haiku | Simple tasks | Low | Fast | Good |
+| Gemini-2.5-Pro | Multimodal tasks | Medium | Medium | Good |
+| Gemini-2.5-Flash | Fast responses | Low | Very Fast | Good |
+| Ollama | Privacy/Offline | Free | Variable | Variable |
+
+## Model Selection Guide
+
+### For Different Use Cases
+
+```python
+# Code generation and analysis
+code_llm = OpenAI(model='gpt-4o', temperature=0.1)
+
+# Creative writing
+creative_llm = Anthropic(model='claude-3-5-sonnet-20240620', temperature=0.8)
+
+# Data analysis
+analysis_llm = OpenAI(model='gpt-4o', temperature=0.2)
+
+# Customer support
+support_llm = OpenAI(model='gpt-4o-mini', temperature=0.3)
+
+# Fast responses
+fast_llm = Gemini(model='gemini-2.5-flash', temperature=0.3)
+```
+
+### Performance Optimization
+
+```python
+# For high-volume, simple tasks
+efficient_llm = OpenAI(
+ model='gpt-4o-mini',
+ temperature=0.1,
+ max_tokens=200
+)
+
+# For complex reasoning
+powerful_llm = OpenAI(
+ model='gpt-4o',
+ temperature=0.2,
+ max_tokens=2000
+)
+```
+
+## Environment Configuration
+
+### API Keys
+
+```bash
+# OpenAI
+export OPENAI_API_KEY="your-openai-key"
+
+# Anthropic
+export ANTHROPIC_API_KEY="your-anthropic-key"
+
+# Google
+export GOOGLE_API_KEY="your-google-key"
+
+# Vertex AI
+export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
+export GOOGLE_CLOUD_PROJECT="your-project-id"
+```
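A quick startup check helps your application fail fast when a key is missing. This is a generic sketch, not part of Flo AI:

```python
import os

def check_api_keys(required=('OPENAI_API_KEY',)):
    """Raise early if any required API key is missing from the environment."""
    missing = [key for key in required if not os.getenv(key)]
    if missing:
        raise EnvironmentError(f'Missing API keys: {", ".join(missing)}')
```

Call it once at startup, passing the variable names for the providers you actually use.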
+
+### Python Configuration
+
+```python
+import os
+from flo_ai.llm import OpenAI
+
+# Configure with environment variables
+llm = OpenAI(
+ model='gpt-4o',
+ api_key=os.getenv('OPENAI_API_KEY')
+)
+
+# Or use default environment variable names
+llm = OpenAI(model='gpt-4o') # Automatically uses OPENAI_API_KEY
+```
+
+## Advanced Configuration
+
+### Custom Headers
+
+```python
+# Add custom headers for API requests
+llm = OpenAI(
+ model='gpt-4o',
+ headers={
+ 'X-Custom-Header': 'value',
+ 'User-Agent': 'MyApp/1.0'
+ }
+)
+```
+
+### Retry Configuration
+
+```python
+# Configure retry behavior
+llm = OpenAI(
+ model='gpt-4o',
+ max_retries=3,
+ retry_delay=1.0,
+ timeout=30
+)
+```
+
+### Rate Limiting
+
+```python
+# Configure rate limiting
+llm = OpenAI(
+ model='gpt-4o',
+ requests_per_minute=60,
+ tokens_per_minute=150000
+)
+```
+
+## Model Switching
+
+### Dynamic Model Selection
+
+```python
+def get_llm_for_task(task_type: str):
+ if task_type == 'creative':
+ return Anthropic(model='claude-3-5-sonnet-20240620')
+ elif task_type == 'analytical':
+ return OpenAI(model='gpt-4o')
+ elif task_type == 'fast':
+ return Gemini(model='gemini-2.5-flash')
+ else:
+ return OpenAI(model='gpt-4o-mini')
+
+# Use in agent
+task_type = 'creative'
+llm = get_llm_for_task(task_type)
+agent = AgentBuilder().with_llm(llm).build()
+```
+
+### A/B Testing
+
+```python
+# Test different models
+models = [
+ OpenAI(model='gpt-4o'),
+ Anthropic(model='claude-3-5-sonnet-20240620'),
+ Gemini(model='gemini-2.5-pro')
+]
+
+for i, llm in enumerate(models):
+ agent = AgentBuilder().with_llm(llm).build()
+    response = await agent.run('Test prompt')  # run inside an async function
+ print(f"Model {i+1}: {response}")
+```
+
+## Troubleshooting
+
+### Common Issues
+
+<AccordionGroup>
+
+<Accordion title="API key not found">
+  Ensure your API keys are correctly set:
+
+  ```bash
+  echo $OPENAI_API_KEY
+  echo $ANTHROPIC_API_KEY
+  echo $GOOGLE_API_KEY
+  ```
+</Accordion>
+
+<Accordion title="Rate limit errors">
+  If you hit rate limits, implement backoff:
+
+  ```python
+  import asyncio
+  import random
+
+  async def with_backoff(func, max_retries=3):
+      for attempt in range(max_retries):
+          try:
+              return await func()
+          except RateLimitError:  # use your provider's rate-limit exception
+              wait_time = (2 ** attempt) + random.uniform(0, 1)
+              await asyncio.sleep(wait_time)  # don't block the event loop
+      raise Exception("Max retries exceeded")
+  ```
+</Accordion>
+
+<Accordion title="Model not available">
+  Check that the model name is correct and available in your region:
+
+  ```python
+  from flo_ai.llm import OpenAI
+
+  # This will raise an error if the model is not available
+  try:
+      llm = OpenAI(model='gpt-4o')
+      print("Model is available")
+  except Exception as e:
+      print(f"Model error: {e}")
+  ```
+</Accordion>
+
+</AccordionGroup>
+
+## Best Practices
+
+### Model Selection
+
+1. **Start with GPT-4o-mini** for most tasks
+2. **Use GPT-4o** for complex reasoning
+3. **Try Claude** for creative tasks
+4. **Use Gemini** for multimodal or fast responses
+5. **Use Ollama** for privacy-sensitive applications
+
+### Cost Optimization
+
+1. **Use appropriate models** for task complexity
+2. **Implement caching** for repeated queries
+3. **Set reasonable limits** on max_tokens
+4. **Monitor usage** and costs
+5. **Use streaming** for long responses
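A minimal in-memory cache over identical prompts can look like the sketch below. This is not Flo AI API; the only interface assumed is an agent with an async `run(prompt)` method.

```python
# Hypothetical response cache keyed on (model, prompt); in production,
# bound its size or use an LRU/TTL cache instead of a plain dict.
_response_cache: dict = {}

async def cached_run(agent, prompt: str, model_name: str = 'gpt-4o-mini'):
    """Return a cached response for identical (model, prompt) pairs."""
    key = (model_name, prompt)
    if key not in _response_cache:
        _response_cache[key] = await agent.run(prompt)
    return _response_cache[key]
```

Repeated calls with the same prompt then cost nothing beyond the first request.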
+
+### Performance Tips
+
+1. **Batch requests** when possible
+2. **Use connection pooling** for high-volume applications
+3. **Implement retry logic** with exponential backoff
+4. **Cache responses** for identical inputs
+5. **Monitor latency** and optimize accordingly
diff --git a/documentation/essentials/markdown.mdx b/documentation/essentials/markdown.mdx
new file mode 100644
index 00000000..a45c1d56
--- /dev/null
+++ b/documentation/essentials/markdown.mdx
@@ -0,0 +1,88 @@
+---
+title: 'Markdown syntax'
+description: 'Text, title, and styling in standard markdown'
+icon: 'text-size'
+---
+
+## Titles
+
+Best used for section headers.
+
+```md
+## Titles
+```
+
+### Subtitles
+
+Best used for subsection headers.
+
+```md
+### Subtitles
+```
+
+<Note>
+  Each **title** and **subtitle** creates an anchor and also shows up on the table of contents on the right.
+</Note>
+
+## Text formatting
+
+We support most markdown formatting. Simply add `**`, `_`, or `~` around text to format it.
+
+| Style | How to write it | Result |
+| ------------- | ----------------- | --------------- |
+| Bold | `**bold**` | **bold** |
+| Italic | `_italic_` | _italic_ |
+| Strikethrough | `~strikethrough~` | ~strikethrough~ |
+
+You can combine these. For example, write `**_bold and italic_**` to get **_bold and italic_** text.
+
+You need to use HTML to write superscript and subscript text. That is, add `<sup>` or `<sub>` around your text.
+
+| Text Size | How to write it | Result |
+| ----------- | ------------------------ | ---------------------- |
+| Superscript | `<sup>superscript</sup>` | <sup>superscript</sup> |
+| Subscript   | `<sub>subscript</sub>`   | <sub>subscript</sub>   |
+
+## Linking to pages
+
+You can add a link by wrapping text in `[]()`. You would write `[link to google](https://google.com)` to [link to google](https://google.com).
+
+Links to pages in your docs need to be root-relative. Basically, you should include the entire folder path. For example, `[link to text](/writing-content/text)` links to the page "Text" in our components section.
+
+Relative links like `[link to text](../text)` will open slower because we cannot optimize them as easily.
+
+## Blockquotes
+
+### Singleline
+
+To create a blockquote, add a `>` in front of a paragraph.
+
+> Dorothy followed her through many of the beautiful rooms in her castle.
+
+```md
+> Dorothy followed her through many of the beautiful rooms in her castle.
+```
+
+### Multiline
+
+> Dorothy followed her through many of the beautiful rooms in her castle.
+>
+> The Witch bade her clean the pots and kettles and sweep the floor and keep the fire fed with wood.
+
+```md
+> Dorothy followed her through many of the beautiful rooms in her castle.
+>
+> The Witch bade her clean the pots and kettles and sweep the floor and keep the fire fed with wood.
+```
+
+### LaTeX
+
+Mintlify supports [LaTeX](https://www.latex-project.org) through the `Latex` component.
+
+<Latex>8 x (vk x H1 - H2) = (0,1)</Latex>
+
+```md
+<Latex>8 x (vk x H1 - H2) = (0,1)</Latex>
+```
diff --git a/documentation/essentials/navigation.mdx b/documentation/essentials/navigation.mdx
new file mode 100644
index 00000000..60adeff2
--- /dev/null
+++ b/documentation/essentials/navigation.mdx
@@ -0,0 +1,87 @@
+---
+title: 'Navigation'
+description: 'The navigation field in docs.json defines the pages that go in the navigation menu'
+icon: 'map'
+---
+
+The navigation menu is the list of links that appears on every page of your site.
+
+You will likely update `docs.json` every time you add a new page. Pages do not show up automatically.
+
+## Navigation syntax
+
+Our navigation syntax is recursive, which means you can make nested navigation groups. You don't need to include `.mdx` in page names.
+
+
+
+```json Regular Navigation
+"navigation": {
+ "tabs": [
+ {
+ "tab": "Docs",
+ "groups": [
+ {
+ "group": "Getting Started",
+ "pages": ["quickstart"]
+ }
+ ]
+ }
+ ]
+}
+```
+
+```json Nested Navigation
+"navigation": {
+ "tabs": [
+ {
+ "tab": "Docs",
+ "groups": [
+ {
+ "group": "Getting Started",
+ "pages": [
+ "quickstart",
+ {
+ "group": "Nested Reference Pages",
+ "pages": ["nested-reference-page"]
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+
+
+## Folders
+
+Simply put your MDX files in folders and update the paths in `docs.json`.
+
+For example, to have a page at `https://yoursite.com/your-folder/your-page` you would make a folder called `your-folder` containing an MDX file called `your-page.mdx`.
+
+
+
+You cannot use `api` for the name of a folder unless you nest it inside another folder. Mintlify uses Next.js which reserves the top-level `api` folder for internal server calls. A folder name such as `api-reference` would be accepted.
+
+
+
+```json Navigation With Folder
+"navigation": {
+ "tabs": [
+ {
+ "tab": "Docs",
+ "groups": [
+ {
+ "group": "Group Name",
+ "pages": ["your-folder/your-page"]
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Hidden pages
+
+MDX files not included in `docs.json` will not show up in the sidebar but are accessible through the search bar and by linking directly to them.
diff --git a/documentation/essentials/reusable-snippets.mdx b/documentation/essentials/reusable-snippets.mdx
new file mode 100644
index 00000000..376e27bd
--- /dev/null
+++ b/documentation/essentials/reusable-snippets.mdx
@@ -0,0 +1,110 @@
+---
+title: "Reusable snippets"
+description: "Reusable, custom snippets to keep content in sync"
+icon: "recycle"
+---
+
+import SnippetIntro from '/snippets/snippet-intro.mdx';
+
+<SnippetIntro />
+
+## Creating a custom snippet
+
+**Pre-condition**: You must create your snippet file in the `snippets` directory.
+
+<Note>
+  Any page in the `snippets` directory will be treated as a snippet and will not
+  be rendered into a standalone page. If you want to create a standalone page
+  from the snippet, import the snippet into another file and call it as a
+  component.
+</Note>
+
+### Default export
+
+1. Add content to your snippet file that you want to re-use across multiple
+ locations. Optionally, you can add variables that can be filled in via props
+ when you import the snippet.
+
+```mdx snippets/my-snippet.mdx
+Hello world! This is my content I want to reuse across pages. My keyword of the
+day is {word}.
+```
+
+<Warning>
+  The content that you want to reuse must be inside the `snippets` directory in
+  order for the import to work.
+</Warning>
+
+2. Import the snippet into your destination file.
+
+```mdx destination-file.mdx
+---
+title: My title
+description: My Description
+---
+
+import MySnippet from '/snippets/path/to/my-snippet.mdx';
+
+## Header
+
+Lorem ipsum dolor sit amet.
+
+<MySnippet word="bananas" />
+```
+
+### Reusable variables
+
+1. Export a variable from your snippet file:
+
+```mdx snippets/path/to/custom-variables.mdx
+export const myName = 'my name';
+
+export const myObject = { fruit: 'strawberries' };
+```
+
+2. Import the snippet from your destination file and use the variable:
+
+```mdx destination-file.mdx
+---
+title: My title
+description: My Description
+---
+
+import { myName, myObject } from '/snippets/path/to/custom-variables.mdx';
+
+Hello, my name is {myName} and I like {myObject.fruit}.
+```
+
+### Reusable components
+
+1. Inside your snippet file, create a component that takes in props by exporting
+ your component in the form of an arrow function.
+
+```mdx snippets/custom-component.mdx
+export const MyComponent = ({ title }) => (
+  <div>
+    <h1>{title}</h1>
+    <p>... snippet content ...</p>
+  </div>
+);
+```
+
+<Warning>
+  MDX does not compile inside the body of an arrow function. Stick to HTML
+  syntax when you can or use a default export if you need to use MDX.
+</Warning>
+
+2. Import the snippet into your destination file and pass in the props
+
+```mdx destination-file.mdx
+---
+title: My title
+description: My Description
+---
+
+import { MyComponent } from '/snippets/custom-component.mdx';
+
+Lorem ipsum dolor sit amet.
+
+<MyComponent title={'Custom title'} />
+```
diff --git a/documentation/essentials/routing.mdx b/documentation/essentials/routing.mdx
new file mode 100644
index 00000000..be2f4afc
--- /dev/null
+++ b/documentation/essentials/routing.mdx
@@ -0,0 +1,472 @@
+---
+title: 'Intelligent Routing'
+description: 'Implement smart routing logic for multi-agent workflows'
+icon: 'route'
+---
+
+## Routing Overview
+
+Flo AI provides powerful routing capabilities that allow you to create intelligent workflows where requests are dynamically routed to the most appropriate agents based on content, context, or custom logic.
+
+## Routing Types
+
+### Conditional Routing
+
+Route based on simple conditions or content analysis:
+
+```python
+from flo_ai.arium.memory import BaseMemory
+
+def route_by_type(memory: BaseMemory) -> str:
+ """Route based on classification result"""
+ messages = memory.get()
+ last_message = str(messages[-1]) if messages else ""
+
+ if "technical" in last_message.lower():
+ return "tech_specialist"
+ elif "billing" in last_message.lower():
+ return "billing_specialist"
+ else:
+ return "general_specialist"
+
+# Use in workflow
+workflow = (
+ AriumBuilder()
+ .add_agents([classifier, tech_specialist, billing_specialist, general_specialist])
+ .start_with(classifier)
+ .add_edge(classifier, [tech_specialist, billing_specialist, general_specialist], route_by_type)
+ .end_with([tech_specialist, billing_specialist, general_specialist])
+)
+```
+
+### LLM-Powered Routing
+
+Use AI to make intelligent routing decisions:
+
+```yaml smart-routing.yaml
+routers:
+ - name: "content_router"
+ type: "smart"
+ routing_options:
+ technical_writer: "Technical content, documentation, tutorials, code examples"
+ creative_writer: "Creative writing, storytelling, fiction, poetry"
+ marketing_writer: "Marketing copy, sales content, campaigns, advertisements"
+ model:
+ provider: "openai"
+ name: "gpt-4o-mini"
+ temperature: 0.1
+```
+
+### Reflection Routing
+
+Implement A→B→A→C feedback patterns:
+
+```yaml reflection-routing.yaml
+routers:
+ - name: "reflection_router"
+ type: "reflection"
+ flow_pattern: ["writer", "critic", "writer"]
+ model:
+ provider: "openai"
+ name: "gpt-4o-mini"
+ settings:
+ max_iterations: 3
+ convergence_threshold: 0.8
+```
+
+### Plan-Execute Routing
+
+Cursor-style development workflows:
+
+```yaml plan-execute-routing.yaml
+routers:
+ - name: "plan_execute_router"
+ type: "plan_execute"
+ settings:
+ planner_agent: "planner"
+ executor_agent: "developer"
+ reviewer_agent: "reviewer"
+ max_iterations: 5
+ quality_threshold: 0.9
+ model:
+ provider: "openai"
+ name: "gpt-4o-mini"
+```
+
+## Advanced Routing Patterns
+
+### Multi-Criteria Routing
+
+```python
+def advanced_router(memory: BaseMemory) -> str:
+ """Route based on multiple criteria"""
+ messages = memory.get()
+ last_message = str(messages[-1]) if messages else ""
+
+ # Extract criteria
+ urgency = "urgent" in last_message.lower()
+ technical = "technical" in last_message.lower()
+ billing = "billing" in last_message.lower()
+
+ # Route based on combination of criteria
+ if urgency and technical:
+ return "senior_tech_specialist"
+ elif urgency and billing:
+ return "billing_manager"
+ elif technical:
+ return "tech_specialist"
+ elif billing:
+ return "billing_specialist"
+ else:
+ return "general_support"
+```
+
+### Context-Aware Routing
+
+```python
+def context_aware_router(memory: BaseMemory) -> str:
+ """Route based on conversation context"""
+ messages = memory.get()
+
+ # Analyze conversation history
+ conversation_context = []
+ for msg in messages[-5:]: # Last 5 messages
+ conversation_context.append(str(msg))
+
+ context_text = " ".join(conversation_context)
+
+ # Route based on context
+ if "previous issue" in context_text.lower():
+ return "follow_up_specialist"
+ elif "new customer" in context_text.lower():
+ return "onboarding_specialist"
+ else:
+ return "general_specialist"
+```
+
+### Load Balancing
+
+```python
+import random
+from collections import defaultdict
+
+class LoadBalancer:
+    def __init__(self):
+        self.agent_loads = defaultdict(int)
+
+    def route_with_load_balancing(self, memory: BaseMemory) -> str:
+        """Route to agent with least load"""
+        available_agents = ["agent1", "agent2", "agent3"]
+
+        # Find agent with minimum load
+        min_load = min(self.agent_loads[agent] for agent in available_agents)
+        least_loaded = [agent for agent in available_agents
+                        if self.agent_loads[agent] == min_load]
+
+        # Random selection among least loaded
+        selected = random.choice(least_loaded)
+        self.agent_loads[selected] += 1
+
+        return selected
+
+## YAML Router Configuration
+
+### Smart Router
+
+```yaml
+routers:
+  - name: "intelligent_router"
+    type: "smart"
+    description: "Route requests based on content analysis"
+    routing_options:
+      technical_support:
+        description: "Technical issues and troubleshooting"
+        keywords: ["error", "bug", "technical", "code", "system"]
+        priority: "high"
+
+      billing_support:
+        description: "Billing and payment issues"
+        keywords: ["billing", "payment", "invoice", "charge", "refund"]
+        priority: "medium"
+
+      general_support:
+        description: "General questions and information"
+        keywords: ["question", "help", "information", "general"]
+        priority: "low"
+
+    model:
+      provider: "openai"
+      name: "gpt-4o-mini"
+      temperature: 0.1
+      max_tokens: 100
+
+    settings:
+      confidence_threshold: 0.8
+      fallback_route: "general_support"
+      timeout: 10
+```
+
+### Conditional Router
+
+```yaml
+routers:
+  - name: "conditional_router"
+    type: "conditional"
+    description: "Route based on predefined conditions"
+    conditions:
+      - condition: "urgency == 'high' and type == 'technical'"
+        route: "senior_tech_specialist"
+      - condition: "urgency == 'high' and type == 'billing'"
+        route: "billing_manager"
+      - condition: "type == 'technical'"
+        route: "tech_specialist"
+      - condition: "type == 'billing'"
+        route: "billing_specialist"
+      - condition: "default"
+        route: "general_support"
+```
+
+### Reflection Router
+
+```yaml
+routers:
+  - name: "reflection_router"
+    type: "reflection"
+    description: "Implement iterative improvement pattern"
+    flow_pattern: ["writer", "critic", "writer"]
+    model:
+      provider: "openai"
+      name: "gpt-4o-mini"
+    settings:
+      max_iterations: 3
+      convergence_threshold: 0.85
+      improvement_required: true
+    quality_metrics:
+      - "clarity"
+      - "accuracy"
+      - "completeness"
+```
+
+## Custom Router Implementation
+
+### Creating Custom Routers
+
+```python
+from flo_ai.arium.routers import BaseRouter
+from flo_ai.arium.memory import BaseMemory
+
+class CustomRouter(BaseRouter):
+    def __init__(self, config: dict):
+        self.config = config
+        self.routing_rules = config.get('rules', [])
+
+    async def route(self, memory: BaseMemory) -> str:
+        """Custom routing logic"""
+        messages = memory.get()
+        last_message = str(messages[-1]) if messages else ""
+
+        # Apply custom routing rules
+        for rule in self.routing_rules:
+            if self._matches_rule(last_message, rule):
+                return rule['target']
+
+        # Default route
+        return self.config.get('default_route', 'general_agent')
+
+    def _matches_rule(self, message: str, rule: dict) -> bool:
+        """Check if message matches routing rule"""
+        keywords = rule.get('keywords', [])
+        return any(keyword.lower() in message.lower() for keyword in keywords)
+```
+
+### Using Custom Routers
+
+```python
+# Configure custom router
+router_config = {
+    'rules': [
+        {
+            'keywords': ['urgent', 'critical', 'emergency'],
+            'target': 'priority_support'
+        },
+        {
+            'keywords': ['technical', 'bug', 'error'],
+            'target': 'tech_support'
+        }
+    ],
+    'default_route': 'general_support'
+}
+
+custom_router = CustomRouter(router_config)
+
+# Use in workflow
+workflow = (
+    AriumBuilder()
+    .add_agents([classifier, priority_support, tech_support, general_support])
+    .start_with(classifier)
+    .add_edge(classifier, [priority_support, tech_support, general_support], custom_router)
+    .end_with([priority_support, tech_support, general_support])
+)
+```
+
+## Router Performance
+
+### Caching Routes
+
+```python
+class CachedRouter:
+    def __init__(self):
+        self.route_cache = {}
+
+    def route(self, memory: BaseMemory) -> str:
+        messages = memory.get()
+        last_message = str(messages[-1]) if messages else ""
+
+        # Reuse the cached decision for repeated messages
+        if last_message in self.route_cache:
+            return self.route_cache[last_message]
+
+        route = self._determine_route(last_message)
+        self.route_cache[last_message] = route
+        return route
+
+    def _determine_route(self, message: str) -> str:
+        # Routing logic here
+        return "general_agent"
+```
+
+### Performance Monitoring
+
+```python
+import time
+from typing import Dict, List
+
+class MonitoredRouter:
+    def __init__(self):
+        self.routing_times: List[float] = []
+        self.route_counts: Dict[str, int] = {}
+
+    def route(self, memory: BaseMemory) -> str:
+        start_time = time.time()
+
+        # Routing logic
+        route_decision = self._determine_route(memory)
+
+        # Record metrics
+        routing_time = time.time() - start_time
+        self.routing_times.append(routing_time)
+        self.route_counts[route_decision] = self.route_counts.get(route_decision, 0) + 1
+
+        return route_decision
+
+    def get_metrics(self) -> dict:
+        return {
+            'avg_routing_time': sum(self.routing_times) / len(self.routing_times) if self.routing_times else 0.0,
+            'route_distribution': self.route_counts,
+            'total_routes': len(self.routing_times)
+        }
+```
+
+## Error Handling in Routing
+
+### Fallback Routes
+
+```python
+def robust_router(memory: BaseMemory) -> str:
+    """Router with fallback handling"""
+    try:
+        # Primary routing logic
+        return primary_routing_logic(memory)
+    except Exception as e:
+        print(f"Routing error: {e}")
+        # Fallback to default route
+        return "fallback_agent"
+```
+
+### Retry Logic
+
+```python
+import asyncio
+
+class RetryRouter:
+    def __init__(self, max_retries: int = 3):
+        self.max_retries = max_retries
+
+    async def route_with_retry(self, memory: BaseMemory) -> str:
+        """Route with retry logic"""
+        for attempt in range(self.max_retries):
+            try:
+                return await self._route(memory)
+            except Exception:
+                if attempt == self.max_retries - 1:
+                    return "fallback_agent"
+                await asyncio.sleep(2 ** attempt)  # Exponential backoff
+```
+
+## Best Practices
+
+### Router Design
+
+1. **Keep it simple**: Start with basic conditional routing
+2. **Use LLM routing**: For complex content analysis
+3. **Implement fallbacks**: Always have a default route
+4. **Monitor performance**: Track routing metrics
+5. **Test thoroughly**: Validate routing logic with various inputs
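
These practices can be combined in a small, self-contained sketch: a plain keyword router with a guaranteed default route, checked against a few representative inputs. The agent names and keywords here are illustrative, not part of the flo_ai API.

```python
def simple_router(message: str) -> str:
    """Keyword routing with a guaranteed fallback route."""
    rules = {
        'tech_support': ['bug', 'error', 'crash'],
        'billing_support': ['invoice', 'refund', 'charge'],
    }
    text = message.lower()
    for target, keywords in rules.items():
        if any(keyword in text for keyword in keywords):
            return target
    return 'general_support'  # always have a default route

# Validate routing logic with various inputs
assert simple_router("I found a bug in the export") == 'tech_support'
assert simple_router("Please refund my last charge") == 'billing_support'
assert simple_router("What are your opening hours?") == 'general_support'
```

The same assertions can run in a unit test suite, so routing regressions surface before deployment.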
+
+### Performance Optimization
+
+```python
+# Optimize router performance
+class OptimizedRouter:
+    def __init__(self):
+        self.keyword_index = self._build_keyword_index()
+
+    def _build_keyword_index(self) -> dict:
+        """Pre-build keyword index for fast lookup"""
+        return {
+            'technical': ['bug', 'error', 'code', 'system'],
+            'billing': ['payment', 'invoice', 'charge', 'refund'],
+            'urgent': ['urgent', 'critical', 'emergency', 'asap']
+        }
+
+    def route(self, memory: BaseMemory) -> str:
+        """Fast routing using pre-built index"""
+        messages = memory.get()
+        last_message = str(messages[-1]) if messages else ""
+
+        # Fast keyword matching
+        for category, keywords in self.keyword_index.items():
+            if any(keyword in last_message.lower() for keyword in keywords):
+                return f"{category}_agent"
+
+        return "general_agent"
+```
+
+### Security Considerations
+
+```python
+def secure_router(memory: BaseMemory) -> str:
+    """Router with security validation"""
+    messages = memory.get()
+    last_message = str(messages[-1]) if messages else ""
+
+    # Validate input
+    if len(last_message) > 10000:  # Prevent very long inputs
+        return "error_handler"
+
+    # Check for malicious patterns
+    malicious_patterns = ['