
feat(flo-ai): add field level validation for agent/arium yaml #178

Merged
vishnurk6247 merged 3 commits into develop from feat/yaml-validation on Dec 12, 2025

Conversation


@vishnurk6247 vishnurk6247 commented Dec 12, 2025

Summary by CodeRabbit

  • New Features
    • Unified agent-related imports under a single public module; improved agent runtime with enhanced tool orchestration and reasoning options.
  • Refactor
    • Public API export surface reorganized to use the new module locations (import paths updated across examples and packages).
  • Documentation
    • All docs and quickstart examples updated to reflect new import paths and clearer YAML examples.
  • Tests
    • Added extensive YAML/config validation unit tests and improved validation error formatting.

✏️ Tip: You can customize this high-level summary in your review settings.


coderabbitai Bot commented Dec 12, 2025

Walkthrough

This PR relocates and re-exports agent-related public APIs into a new flo_ai.agent module, replaces the runtime Agent in models with Pydantic configuration models, adds typed YAML validation and builder flows, updates imports across docs/examples/tests, and introduces a new runtime Agent implementation plus helper changes.
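
A minimal sketch of what the relocated surface looks like from a caller's perspective, using the from_yaml(yaml_str=...) and build() flow referenced later in this review (the YAML content below is illustrative, not taken from the PR):

```python
from flo_ai.agent import AgentBuilder  # new unified public module

yaml_content = """
agent:
  name: "Support Agent"
  prompt: "You are a helpful support agent."
  model:
    provider: "openai"
    name: "gpt-4o-mini"
"""

try:
    # from_yaml() returns a builder; build() produces the runtime Agent
    agent = AgentBuilder.from_yaml(yaml_str=yaml_content).build()
except ValueError as e:
    # Field-level validation errors now include the offending field path
    print(f'Invalid agent YAML: {e}')
```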

Changes

Cohort / File(s) Summary
Public API & Core Agent
flo_ai/flo_ai/__init__.py, flo_ai/flo_ai/agent/__init__.py, flo_ai/flo_ai/agent/agent.py, flo_ai/flo_ai/agent/builder.py, flo_ai/flo_ai/agent/plan_agents.py
New flo_ai.agent module: re-exports public types, adds a runtime Agent class with conversational and tool-using flows, and updates AgentBuilder to validate/consume Pydantic YAML models (new _validate_yaml_config).
Models / Typed Configs
flo_ai/flo_ai/models/agent.py, flo_ai/flo_ai/models/arium.py, flo_ai/flo_ai/models/__init__.py
Replaced runtime Agent with declarative Pydantic models: AgentYamlModel, LLMConfigModel, parser/tool/arium models, and removed several agent runtime exports from models (moved to agent).
Arium & Builder Integration
flo_ai/flo_ai/arium/arium.py, flo_ai/flo_ai/arium/base.py, flo_ai/flo_ai/arium/builder.py
Arium builder switched to model-driven parsing using AriumYamlModel, validation via _validate_yaml_config, and model-based construction of agents/routers/LLMs.
LLM Factory & Helpers
flo_ai/flo_ai/helpers/llm_factory.py, flo_ai/flo_ai/helpers/yaml_validation.py, flo_ai/flo_ai/formatter/yaml_format_parser.py
LLMFactory now accepts LLMConfigModel (typed config); added format_validation_error_path helper; changed dynamic Literal construction to use __getitem__.
Docs & Guides
TOOLS.md, documentation/*.mdx, flo_ai/README.md, flo_ai/docs/arium_yaml_guide.md
Updated example import paths to flo_ai.agent and adjusted formatting/inline examples to reflect API relocation.
Examples
flo_ai/examples/* (many files, see diff)
Updated imports to use flo_ai.agent for Agent, AgentBuilder, ReasoningPattern; minor runtime tweaks in some examples (MockLLM signatures, output formatting, YAML parser details).
Tests
flo_ai/tests/unit-tests/* (multiple)
Updated imports for new module paths; added extensive YAML/model validation tests; adjusted MockLLM signatures to accept **kwargs in stream methods.
Telephony Module
wavefront/server/modules/voice_agents_module/.../telephony_schemas.py
Replaced @model_validator usage with model_post_init for SIP-connection validation in Pydantic model.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Agent
    participant LLM
    participant Tool
    participant Telemetry
    Note over Client,Agent: Run sequence (inputs / variables)
    Client->>Agent: run(inputs, variables)
    Agent->>Agent: resolve variables & build prompt
    Agent->>LLM: send prompt (with functions if tools)
    alt LLM function-call response
        LLM-->>Agent: function_call(name, args)
        Agent->>Telemetry: trace tool invocation
        Agent->>Tool: execute(name, args)
        Tool-->>Agent: tool result
        Agent->>Agent: incorporate result as FunctionMessage
        Agent->>LLM: continue conversation with updated messages
    else LLM final response
        LLM-->>Agent: final assistant message
    end
    Agent->>Telemetry: report execution metrics
    Agent-->>Client: return final messages
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

  • Review focus:
    • flo_ai/flo_ai/agent/agent.py — complex control flow for tool execution, retries, and final-answer detection.
    • flo_ai/flo_ai/agent/builder.py & flo_ai/flo_ai/arium/builder.py — Pydantic validation integration and config->runtime mapping.
    • flo_ai/flo_ai/helpers/llm_factory.py — signature and config access changes to LLM creation.
    • Tests & examples — ensure import updates and MockLLM signature changes are consistent.

Possibly related PRs

Suggested reviewers

  • vizsatiz

Poem

🐰 I hopped through imports, neat and spry,

Swapped builder paths and gave models a try.
YAML now validated, prompts in a row,
Agents call tools, and telemetry glows.
A rabbit cheers for cleaner API flow! 🎩✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately summarizes the main change: adding field-level validation for agent/arium YAML configurations through new Pydantic models and validation logic.
Docstring Coverage ✅ Passed Docstring coverage is 94.29% which is sufficient. The required threshold is 80.00%.
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment
    • Commit unit tests in branch feat/yaml-validation

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Comment @coderabbitai help to get the list of available commands and usage tips.

Comment thread flo_ai/flo_ai/agent/agent.py Fixed
Comment thread flo_ai/flo_ai/arium/builder.py Fixed

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (6)
flo_ai/flo_ai/formatter/yaml_format_parser.py (1)

85-95: Use Literal.__class_getitem__() instead of Literal.__getitem__().

For Python 3.10+, __class_getitem__ is the correct hook per PEP 560 for subscripting generic classes. __getitem__ is non-standard and intended only for legacy runtimes. Replace Literal.__getitem__(literals) with Literal.__class_getitem__(literals) on line 94.

flo_ai/docs/arium_yaml_guide.md (1)

742-786: Add languages to fenced code blocks (MD040).

- ```
+ ```text
  ValueError: YAML must contain an "arium" section
- ```
+ ```

- ```
+ ```text
  ValueError: Agent {name} must have either yaml_config or yaml_file
- ```
+ ```

- ```
+ ```text
  ValueError: Tool {name} not found in provided tools dictionary
- ```
+ ```

- ```
+ ```text
  ValueError: Router {name} not found in provided routers dictionary
- ```
+ ```

- ```
+ ```text
  ValueError: Workflow must specify a start node
  ValueError: Workflow must specify end nodes
- ```
+ ```
documentation/essentials/yaml-agents.mdx (2)

32-38: Docs likely call AgentBuilder.from_yaml incorrectly (positional arg ambiguity).
If from_yaml is from_yaml(yaml_str=None, yaml_file=None, ...), then from_yaml('agent.yaml') will be treated as YAML content, not a file path.

-from flo_ai.agent import AgentBuilder
-
-# Load agent from YAML
-agent = AgentBuilder.from_yaml('agent.yaml')
+from flo_ai.agent import AgentBuilder
+
+# Load agent from YAML file
+builder = AgentBuilder.from_yaml(yaml_file='agent.yaml')
+agent = builder.build()
 response = await agent.run('How can I reset my password?')

197-214: from_yaml_string() does not exist; use AgentBuilder.from_yaml(yaml_str=...) and call .build() to get the agent.

The from_yaml_string method is not implemented in the codebase. The correct API is from_yaml() which accepts a yaml_str parameter and returns an AgentBuilder instance. Call .build() on the builder to get the final Agent object.

-# Load from string
-yaml_content = """
+yaml_content = """
 agent:
   name: "Test Agent"
   prompt: "You are a test agent."
   model:
     provider: "openai"
     name: "gpt-4o-mini"
 """
-
-agent = AgentBuilder.from_yaml_string(yaml_content)
+agent = AgentBuilder.from_yaml(yaml_str=yaml_content).build()
flo_ai/flo_ai/__init__.py (1)

64-122: Remove HumanMessage and AIMessage from __all__, or import them from the models package.

The symbols HumanMessage and AIMessage are listed in __all__ but are not imported into the module. Either these symbols need to be imported from .models, or they should be removed from __all__ if they are not part of the public API.

flo_ai/flo_ai/helpers/llm_factory.py (1)

20-28: Provider alias mismatch: LLMConfigModel allows aliases that SUPPORTED_PROVIDERS rejects.

LLMConfigModel (lines 85-150 in agent.py) accepts 'claude' as an alias for 'anthropic' and 'google' as an alias for 'gemini', but SUPPORTED_PROVIDERS doesn't include these aliases. A config using provider: claude will pass Pydantic validation but fail at runtime with "Unsupported model provider".

Consider either:

  1. Adding aliases to SUPPORTED_PROVIDERS and mapping them in create_llm, or
  2. Normalizing aliases in LLMConfigModel.model_post_init before they reach the factory.
 SUPPORTED_PROVIDERS = {
     'openai',
     'anthropic',
+    'claude',  # Alias for anthropic
     'gemini',
+    'google',  # Alias for gemini
     'ollama',
     'vertexai',
     'rootflo',
     'openai_vllm',
 }

And update routing logic:

-        if provider not in LLMFactory.SUPPORTED_PROVIDERS:
+        # Normalize aliases
+        provider_map = {'claude': 'anthropic', 'google': 'gemini'}
+        provider = provider_map.get(provider, provider)
+
+        if provider not in LLMFactory.SUPPORTED_PROVIDERS:

Also applies to: 46-52

🧹 Nitpick comments (13)
flo_ai/flo_ai/models/agent.py (1)

206-216: Normalize job vs prompt to avoid downstream ambiguity.

If job wins, set prompt = None (or vice versa) so builder/runtime mapping doesn’t have to re-encode precedence rules.

 def model_post_init(self, __context):
     """Ensure either job or prompt is provided."""
     if not self.job and not self.prompt:
         raise ValueError(
             "Agent configuration must have either 'job' or 'prompt' field"
         )
     # If both are provided, prefer 'job' and ignore 'prompt'
     if self.job and self.prompt:
-        # Keep job, prompt will be ignored in favor of job
-        pass
+        self.prompt = None
flo_ai/flo_ai/arium/arium.py (1)

5-5: Prefer importing from the public surface for consistency (from flo_ai.agent import Agent).
Directly importing flo_ai.agent.agent couples internal modules to file layout; using the package export keeps refactors cheaper.

flo_ai/examples/simple_working_demo.py (1)

12-18: Consider standardizing UserMessage import path across examples. If UserMessage is meant to be part of the “public surface”, prefer a single canonical import in docs/examples to reduce churn.

flo_ai/flo_ai/helpers/yaml_validation.py (1)

1-53: Nice usability win for YAML errors; consider tightening types + name string coercion.
Optional, but it’ll make intent clearer and avoid surprising non-str name values.

-from typing import Dict, Any, Tuple
+from typing import Any, Mapping, Sequence

-def format_validation_error_path(loc: Tuple, config: Dict[str, Any]) -> str:
+def format_validation_error_path(loc: Sequence[Any], config: Mapping[str, Any]) -> str:
@@
-                if isinstance(item, dict) and 'name' in item:
-                    path_parts.append(f"{item['name']}")
+                if isinstance(item, dict) and 'name' in item:
+                    path_parts.append(str(item['name']))
flo_ai/tests/unit-tests/test_arium_yaml_validation.py (4)

441-442: Minor: Redundant assertion check.

Line 441 already asserts config.iterators is not None, making the first part of line 442's assertion redundant.

         # foreach_nodes should be merged into iterators
         assert config.iterators is not None
-        assert config.iterators is not None and len(config.iterators) == 1
+        assert len(config.iterators) == 1

479-480: Minor: Same redundant assertion pattern.

Same issue as above - the None check is already performed on line 479.

         assert config.arium.agents is not None
-        assert config.arium.agents is not None and len(config.arium.agents) == 1
+        assert len(config.arium.agents) == 1

558-559: Minor: Redundant None check pattern repeated.

         assert config.arium.agents is not None
-        assert config.arium.agents is not None and len(config.arium.agents) == 2
+        assert len(config.arium.agents) == 2

595-596: Minor: Redundant None check pattern repeated.

         assert config.arium.agents is not None
-        assert config.arium.agents is not None and len(config.arium.agents) == 1
+        assert len(config.arium.agents) == 1
flo_ai/flo_ai/agent/builder.py (1)

320-324: Type hint mismatch with actual usage.

The type hint List[Dict[str, Any]] for tools_config doesn't reflect that the list can contain strings (as seen in lines 290-298 where strings are appended). The implementation at lines 337-343 handles strings correctly, but the type hint is misleading.

     @classmethod
     def _process_yaml_tools(
         cls,
-        tools_config: List[Dict[str, Any]],
+        tools_config: List[Union[str, Dict[str, Any]]],
         tool_registry: Optional[Dict[str, Tool]] = None,
     ) -> List[Tool]:
flo_ai/flo_ai/helpers/llm_factory.py (1)

70-74: Redundant validation: name is already validated by LLMConfigModel.model_post_init.

For providers openai, anthropic, claude, gemini, google, and ollama, the Pydantic model already raises a ValueError if name is missing. This factory check is defensive but redundant.

Not blocking—defense in depth is acceptable, but you could simplify by trusting the model validation.

flo_ai/flo_ai/arium/builder.py (1)

802-819: Unused parameter: base_llm is accepted but never used.

The base_llm parameter is documented and accepted but not passed to create_llm_from_config. If it's intended as a fallback, consider implementing that logic; otherwise, remove it to avoid confusion.
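
If the fallback is the intended behavior, one possible shape is sketched below. It assumes create_llm_from_config is importable from flo_ai.helpers.llm_factory and is not the actual builder code:

```python
from typing import Any, Optional

from flo_ai.helpers.llm_factory import create_llm_from_config  # assumed import path

def create_llm_or_fallback(llm_config: Optional[Any], base_llm: Optional[Any] = None) -> Any:
    """Use the typed config when present; otherwise fall back to the caller-supplied LLM."""
    if llm_config is None:
        if base_llm is None:
            raise ValueError('Either an LLM config or base_llm must be provided')
        return base_llm
    return create_llm_from_config(llm_config)
```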

flo_ai/flo_ai/agent/agent.py (1)

289-291: Control flow after continue is unreachable.

Lines 290-291 (continue and break) can never both execute. After line 290's continue, execution jumps back to the while loop, making line 291's break unreachable in that branch. The break is only reached when not assistant_message (line 246 being falsy after line 244 check), which is fine, but the indentation suggests it's part of the same logical block.

This works correctly, but the structure is confusing. Consider restructuring for clarity.
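
One way such a loop can be flattened so each branch has a single exit; this is a generic sketch of the pattern, not the actual agent.py code:

```python
def run_tool_loop(llm, messages, execute_tool, max_tool_calls=5):
    """Generic tool-call loop: every branch either returns, raises, or loops once."""
    for _ in range(max_tool_calls):
        assistant_message = llm.generate(messages)
        if not assistant_message:
            return None  # nothing usable came back from the model
        if not getattr(assistant_message, 'function_call', None):
            return assistant_message  # final answer reached
        # Tool branch: execute, append the result, and iterate again
        messages.append(execute_tool(assistant_message.function_call))
    raise RuntimeError(f'No final answer after {max_tool_calls} tool calls')
```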

flo_ai/flo_ai/models/arium.py (1)

320-324: No-op validator can be removed.

This validator just returns v without any transformation or validation. The merging logic in model_post_init already handles the iterators/foreach_nodes aliasing.

-    @field_validator('iterators', 'foreach_nodes', mode='before')
-    @classmethod
-    def validate_foreach_nodes(cls, v):
-        """Handle both 'iterators' and 'foreach_nodes' aliases."""
-        return v
-
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7b3012c and 0cedc1d.

⛔ Files ignored due to path filters (1)
  • flo_ai/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (64)
  • TOOLS.md (4 hunks)
  • documentation/development.mdx (1 hunks)
  • documentation/essentials/agents.mdx (4 hunks)
  • documentation/essentials/arium.mdx (8 hunks)
  • documentation/essentials/code.mdx (7 hunks)
  • documentation/essentials/telemetry.mdx (12 hunks)
  • documentation/essentials/yaml-agents.mdx (12 hunks)
  • documentation/quickstart.mdx (6 hunks)
  • flo_ai/README.md (5 hunks)
  • flo_ai/docs/arium_yaml_guide.md (33 hunks)
  • flo_ai/examples/agent_builder_usage.py (1 hunks)
  • flo_ai/examples/arium_examples.py (1 hunks)
  • flo_ai/examples/arium_linear_usage.py (1 hunks)
  • flo_ai/examples/arium_yaml_example.py (2 hunks)
  • flo_ai/examples/chat_history.py (1 hunks)
  • flo_ai/examples/cot_agent_example.py (1 hunks)
  • flo_ai/examples/cot_conversational_example.py (1 hunks)
  • flo_ai/examples/custom_plan_execute_demo.py (1 hunks)
  • flo_ai/examples/document_processing_example.py (1 hunks)
  • flo_ai/examples/example_graph_visualization.py (2 hunks)
  • flo_ai/examples/flo_tool_example.py (1 hunks)
  • flo_ai/examples/llm_router_example.py (1 hunks)
  • flo_ai/examples/multi_tool_example.py (1 hunks)
  • flo_ai/examples/ollama_agent_example.py (1 hunks)
  • flo_ai/examples/output_formatter.py (1 hunks)
  • flo_ai/examples/partial_tool_example.py (1 hunks)
  • flo_ai/examples/simple_flow_router_demo.py (1 hunks)
  • flo_ai/examples/simple_plan_execute_demo.py (1 hunks)
  • flo_ai/examples/simple_reflection_router_demo.py (1 hunks)
  • flo_ai/examples/simple_working_demo.py (1 hunks)
  • flo_ai/examples/simple_yaml_workflow.py (1 hunks)
  • flo_ai/examples/tool_usage.py (1 hunks)
  • flo_ai/examples/tool_using_agent.py (1 hunks)
  • flo_ai/examples/tools_quickstart.py (1 hunks)
  • flo_ai/examples/usage_claude.py (1 hunks)
  • flo_ai/examples/variables_workflow_example.py (1 hunks)
  • flo_ai/examples/variables_workflow_yaml_example.py (1 hunks)
  • flo_ai/examples/vertexai_agent_example.py (1 hunks)
  • flo_ai/examples/vllm_agent_usage.py (1 hunks)
  • flo_ai/examples/yaml_agent_example.py (2 hunks)
  • flo_ai/examples/yaml_tool_config_example.py (1 hunks)
  • flo_ai/flo_ai/__init__.py (2 hunks)
  • flo_ai/flo_ai/agent/__init__.py (1 hunks)
  • flo_ai/flo_ai/agent/agent.py (1 hunks)
  • flo_ai/flo_ai/agent/builder.py (4 hunks)
  • flo_ai/flo_ai/agent/plan_agents.py (1 hunks)
  • flo_ai/flo_ai/arium/arium.py (1 hunks)
  • flo_ai/flo_ai/arium/base.py (1 hunks)
  • flo_ai/flo_ai/arium/builder.py (22 hunks)
  • flo_ai/flo_ai/formatter/yaml_format_parser.py (2 hunks)
  • flo_ai/flo_ai/helpers/llm_factory.py (9 hunks)
  • flo_ai/flo_ai/helpers/yaml_validation.py (1 hunks)
  • flo_ai/flo_ai/models/__init__.py (2 hunks)
  • flo_ai/flo_ai/models/agent.py (1 hunks)
  • flo_ai/flo_ai/models/arium.py (1 hunks)
  • flo_ai/tests/unit-tests/test_agent_builder_tools.py (1 hunks)
  • flo_ai/tests/unit-tests/test_agent_yaml_validation.py (1 hunks)
  • flo_ai/tests/unit-tests/test_arium_builder.py (1 hunks)
  • flo_ai/tests/unit-tests/test_arium_yaml.py (3 hunks)
  • flo_ai/tests/unit-tests/test_arium_yaml_validation.py (1 hunks)
  • flo_ai/tests/unit-tests/test_base_llm.py (1 hunks)
  • flo_ai/tests/unit-tests/test_llm_router.py (1 hunks)
  • flo_ai/tests/unit-tests/test_yaml_tool_config.py (1 hunks)
  • wavefront/server/modules/voice_agents_module/voice_agents_module/models/telephony_schemas.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (40)
flo_ai/examples/output_formatter.py (2)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/tests/unit-tests/test_agent_yaml_validation.py (1)
flo_ai/flo_ai/models/agent.py (8)
  • LiteralValueModel (22-29)
  • MetadataModel (12-19)
  • LLMConfigModel (86-151)
  • SettingsModel (154-165)
  • ParserModel (70-76)
  • ParserFieldModel (32-67)
  • ExampleModel (79-83)
  • ToolConfigModel (168-180)
flo_ai/flo_ai/agent/__init__.py (4)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/flo_ai/agent/base_agent.py (3)
  • BaseAgent (25-131)
  • AgentType (14-16)
  • ReasoningPattern (19-22)
flo_ai/flo_ai/agent/plan_agents.py (2)
  • PlannerAgent (15-71)
  • ExecutorAgent (74-122)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/tests/unit-tests/test_arium_yaml_validation.py (3)
flo_ai/flo_ai/models/arium.py (8)
  • AriumYamlModel (351-359)
  • AriumConfigModel (297-348)
  • AriumAgentConfigModel (19-87)
  • FunctionNodeConfigModel (90-101)
  • RouterConfigModel (141-203)
  • WorkflowConfigModel (216-229)
  • AriumNodeConfigModel (232-287)
  • ForEachNodeConfigModel (290-294)
flo_ai/flo_ai/arium/builder.py (1)
  • _validate_yaml_config (292-319)
flo_ai/flo_ai/agent/builder.py (1)
  • _validate_yaml_config (189-216)
flo_ai/examples/custom_plan_execute_demo.py (1)
flo_ai/flo_ai/agent/plan_agents.py (2)
  • PlannerAgent (15-71)
  • ExecutorAgent (74-122)
flo_ai/examples/ollama_agent_example.py (2)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/flo_ai/agent/base_agent.py (1)
  • ReasoningPattern (19-22)
flo_ai/tests/unit-tests/test_arium_builder.py (1)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/examples/vertexai_agent_example.py (4)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/flo_ai/llm/vertexai_llm.py (1)
  • VertexAI (7-26)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/flo_ai/agent/base_agent.py (1)
  • ReasoningPattern (19-22)
flo_ai/examples/variables_workflow_yaml_example.py (1)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/flo_ai/arium/base.py (1)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/tests/unit-tests/test_yaml_tool_config.py (1)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/examples/document_processing_example.py (1)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/examples/agent_builder_usage.py (3)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/flo_ai/tool/base_tool.py (1)
  • Tool (12-50)
flo_ai/flo_ai/agent/base_agent.py (1)
  • ReasoningPattern (19-22)
flo_ai/examples/tool_usage.py (5)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/flo_ai/llm/openai_llm.py (1)
  • OpenAI (17-210)
flo_ai/flo_ai/tool/base_tool.py (1)
  • Tool (12-50)
flo_ai/flo_ai/models/agent_error.py (1)
  • AgentError (4-9)
flo_ai/flo_ai/agent/base_agent.py (1)
  • ReasoningPattern (19-22)
flo_ai/examples/tool_using_agent.py (4)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/flo_ai/tool/base_tool.py (1)
  • Tool (12-50)
flo_ai/flo_ai/agent/base_agent.py (1)
  • ReasoningPattern (19-22)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/examples/example_graph_visualization.py (3)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/flo_ai/tool/flo_tool.py (1)
  • flo_tool (8-68)
flo_ai/flo_ai/llm/base_llm.py (2)
  • BaseLLM (9-144)
  • stream (33-41)
flo_ai/examples/tools_quickstart.py (1)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/examples/yaml_tool_config_example.py (1)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/examples/arium_linear_usage.py (1)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/flo_ai/arium/arium.py (1)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/examples/vllm_agent_usage.py (3)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/flo_ai/tool/base_tool.py (1)
  • Tool (12-50)
flo_ai/flo_ai/agent/base_agent.py (1)
  • ReasoningPattern (19-22)
flo_ai/examples/simple_reflection_router_demo.py (1)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/examples/simple_plan_execute_demo.py (1)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/flo_ai/models/__init__.py (1)
flo_ai/flo_ai/models/chat_message.py (1)
  • MessageType (5-9)
flo_ai/examples/arium_examples.py (1)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/examples/multi_tool_example.py (2)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/flo_ai/agent/base_agent.py (1)
  • ReasoningPattern (19-22)
flo_ai/flo_ai/agent/plan_agents.py (1)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/examples/simple_flow_router_demo.py (1)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/examples/partial_tool_example.py (1)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/examples/usage_claude.py (2)
flo_ai/flo_ai/agent/base_agent.py (1)
  • ReasoningPattern (19-22)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/flo_ai/helpers/llm_factory.py (1)
flo_ai/flo_ai/models/agent.py (1)
  • LLMConfigModel (86-151)
flo_ai/examples/variables_workflow_example.py (2)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/examples/cot_conversational_example.py (3)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/flo_ai/models/chat_message.py (1)
  • UserMessage (65-70)
flo_ai/flo_ai/agent/base_agent.py (1)
  • ReasoningPattern (19-22)
flo_ai/examples/chat_history.py (2)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/examples/arium_yaml_example.py (2)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
flo_ai/flo_ai/agent/base_agent.py (1)
  • ReasoningPattern (19-22)
flo_ai/flo_ai/models/arium.py (1)
flo_ai/flo_ai/models/agent.py (7)
  • MetadataModel (12-19)
  • LLMConfigModel (86-151)
  • AgentConfigModel (183-239)
  • model_post_init (54-67)
  • model_post_init (117-151)
  • model_post_init (206-215)
  • model_post_init (248-252)
flo_ai/flo_ai/models/agent.py (2)
flo_ai/flo_ai/models/arium.py (5)
  • model_post_init (35-87)
  • model_post_init (177-203)
  • model_post_init (262-287)
  • model_post_init (326-348)
  • model_post_init (357-359)
wavefront/server/modules/voice_agents_module/voice_agents_module/models/telephony_schemas.py (1)
  • model_post_init (94-97)
wavefront/server/modules/voice_agents_module/voice_agents_module/models/telephony_schemas.py (2)
flo_ai/flo_ai/models/agent.py (4)
  • model_post_init (54-67)
  • model_post_init (117-151)
  • model_post_init (206-215)
  • model_post_init (248-252)
flo_ai/flo_ai/models/arium.py (5)
  • model_post_init (35-87)
  • model_post_init (177-203)
  • model_post_init (262-287)
  • model_post_init (326-348)
  • model_post_init (357-359)
flo_ai/tests/unit-tests/test_arium_yaml.py (2)
flo_ai/flo_ai/arium/builder.py (2)
  • AriumBuilder (20-936)
  • from_yaml (322-800)
flo_ai/flo_ai/agent/builder.py (1)
  • from_yaml (219-318)
flo_ai/flo_ai/__init__.py (4)
flo_ai/flo_ai/models/chat_message.py (1)
  • MessageType (5-9)
flo_ai/flo_ai/agent/agent.py (1)
  • Agent (30-643)
flo_ai/flo_ai/agent/base_agent.py (3)
  • BaseAgent (25-131)
  • AgentType (14-16)
  • ReasoningPattern (19-22)
flo_ai/flo_ai/agent/builder.py (1)
  • AgentBuilder (15-384)
🪛 LanguageTool
flo_ai/docs/arium_yaml_guide.md

[style] ~267-~267: The double modal “Requires nested” is nonstandard (only accepted in certain dialects). Consider “to be nested”.
Context: ...ne YAML Configuration:** - ⚠️ Requires nested YAML string - ⚠️ Limited IDE support fo...

(NEEDS_FIXED)

🪛 markdownlint-cli2 (0.18.1)
flo_ai/docs/arium_yaml_guide.md

748-748: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


756-756: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


764-764: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


772-772: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


780-780: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🔇 Additional comments (90)
flo_ai/examples/custom_plan_execute_demo.py (1)

10-15: Import-path update looks correct for the new public API surface.

Please sanity-check this example still runs end-to-end after packaging: flo_ai.agent must export PlannerAgent and ExecutorAgent (and any transitive deps like PlanAwareMemory tools).

flo_ai/tests/unit-tests/test_base_llm.py (1)

49-59: MockLLM.stream signature widened appropriately.

flo_ai/flo_ai/formatter/yaml_format_parser.py (1)

186-195: Type annotation on FloYamlParser.Builder.yaml_dict is a nice clarity improvement.

flo_ai/tests/unit-tests/test_llm_router.py (1)

38-48: MockLLM.stream accepting **kwargs matches the updated interface.

flo_ai/flo_ai/models/agent.py (1)

32-68: ParserFieldModel post-init validation is solid for literal/array/object invariants.

flo_ai/examples/simple_flow_router_demo.py (1)

8-14: Agent import-path update is consistent with the new module layout.

Please confirm this example still runs with the new Agent implementation (constructor args + return type of run(...)) and updated exports from flo_ai.agent.

flo_ai/examples/document_processing_example.py (1)

16-23: AgentBuilder import-path update aligns with the public API move.

Just verify packaging/exports: flo_ai.agent.AgentBuilder should be available for end users (and flo_ai.builder.agent_builder either removed or left as a compatibility shim).

flo_ai/examples/simple_yaml_workflow.py (1)

124-131: No issues found with attribute access.

The build_and_run() method has an explicit return type annotation of List[MessageMemoryItem], and MessageMemoryItem always has a result: BaseMessage attribute (required in __init__). BaseMessage has a content attribute. The code message.result.content is correct and safe.

Note: The isinstance(result, list) check on line 127 is redundant since build_and_run() always returns a list, not a union type.

flo_ai/tests/unit-tests/test_agent_builder_tools.py (1)

5-5: Import path update looks correct (AgentBuilder via flo_ai.agent).
This aligns the test with the new public API surface.

flo_ai/examples/tools_quickstart.py (1)

13-13: Import path update looks correct (AgentBuilder via flo_ai.agent).

flo_ai/examples/simple_reflection_router_demo.py (1)

11-11: Import path update looks correct (Agent via flo_ai.agent).

flo_ai/examples/multi_tool_example.py (1)

2-5: Import path updates look correct (AgentBuilder, ReasoningPattern via flo_ai.agent).

documentation/development.mdx (1)

56-60: Docs snippet updated correctly to the new import path.

flo_ai/flo_ai/agent/plan_agents.py (1)

8-10: Import path update looks correct (Agent via flo_ai.agent).

flo_ai/flo_ai/arium/arium.py (1)

5-5: No legacy Agent imports detected—isinstance checks are safe.

Repository-wide search confirms no imports from flo_ai.models.agent.Agent or other legacy locations exist. All isinstance(node, Agent) checks in this file use the correct class identity from flo_ai.agent.agent.

flo_ai/examples/ollama_agent_example.py (1)

2-2: Import path updates align with the new public API surface.

Also applies to: 4-4

flo_ai/examples/variables_workflow_yaml_example.py (1)

13-13: LGTM: example updated to use flo_ai.agent.AgentBuilder.

flo_ai/tests/unit-tests/test_arium_builder.py (1)

9-9: LGTM: test updated to import Agent from flo_ai.agent.

flo_ai/examples/yaml_agent_example.py (2)

1-1: LGTM: import updated to flo_ai.agent.AgentBuilder.


139-140: Output printing is now aligned with agent.run() returning a message list.

flo_ai/examples/arium_linear_usage.py (1)

3-3: LGTM: import updated to flo_ai.agent.Agent.

flo_ai/examples/flo_tool_example.py (1)

2-2: LGTM: imports updated to flo_ai.agent for the relocated public symbols.

Also applies to: 4-4

flo_ai/examples/yaml_tool_config_example.py (1)

10-10: LGTM: import updated to flo_ai.agent.AgentBuilder.

flo_ai/examples/chat_history.py (1)

3-3: LGTM: examples now use flo_ai.agent for AgentBuilder / Agent.

Also applies to: 5-5

flo_ai/tests/unit-tests/test_yaml_tool_config.py (1)

4-4: Import path update aligns with new public API surface.

flo_ai/examples/cot_agent_example.py (1)

8-9: Import path updates are consistent with flo_ai.agent re-exports.

flo_ai/README.md (1)

88-89: README import snippets updated to the new flo_ai.agent public API.

Also applies to: 111-112, 147-148, 362-363

flo_ai/flo_ai/arium/base.py (1)

5-5: Updated import to flo_ai.agent.Agent is consistent with module relocation.

flo_ai/examples/arium_examples.py (1)

12-13: Example import updated to flo_ai.agent.Agent (no functional change).

flo_ai/examples/tool_usage.py (1)

3-8: Imports updated to flo_ai.agent as intended.

flo_ai/examples/usage_claude.py (1)

3-4: LGTM - Clean import path update.

The imports have been correctly updated to use the new flo_ai.agent module, aligning with the public API reorganization.

TOOLS.md (1)

68-68: LGTM - Documentation imports updated consistently.

All code examples in the documentation now correctly reference flo_ai.agent.AgentBuilder, maintaining consistency across the documentation.

Also applies to: 98-98, 307-307, 334-334

flo_ai/examples/tool_using_agent.py (1)

3-6: LGTM - Import paths aligned with new API structure.

All imports correctly updated to use flo_ai.agent as the unified source for Agent-related public entities.

flo_ai/examples/simple_plan_execute_demo.py (1)

13-13: LGTM - Agent import path updated.

Import correctly updated to reflect the new public API structure.

documentation/essentials/telemetry.mdx (1)

51-51: LGTM - Documentation code example updated.

The telemetry documentation now correctly references the new import path for AgentBuilder.

flo_ai/examples/partial_tool_example.py (1)

11-11: LGTM - Import path updated correctly.

AgentBuilder import now uses the unified flo_ai.agent module.

flo_ai/examples/llm_router_example.py (1)

14-16: LGTM - Imports consolidated to new API module.

Both AgentBuilder and ReasoningPattern now correctly imported from flo_ai.agent.

flo_ai/examples/output_formatter.py (1)

6-7: LGTM - Import paths updated with alias preserved.

Both Agent and AgentBuilder now correctly imported from the new flo_ai.agent module, with the ToolAgent alias properly maintained.

flo_ai/examples/cot_conversational_example.py (1)

7-9: LGTM - Import path updates are correct.

The import consolidation to flo_ai.agent for Agent and ReasoningPattern, while keeping UserMessage in flo_ai.models, aligns with the public API reorganization. The example code remains functionally identical.

flo_ai/examples/vertexai_agent_example.py (1)

10-13: LGTM - Import updates align with API reorganization.

The consolidation of AgentBuilder, Agent, and ReasoningPattern to flo_ai.agent is consistent with the broader refactoring. The example maintains its functionality while using the updated import paths.

documentation/essentials/arium.mdx (1)

19-19: LGTM - Documentation updated for new API.

The import path update in the code example correctly reflects the Agent relocation to flo_ai.agent.

flo_ai/examples/variables_workflow_example.py (1)

11-13: LGTM - Import consolidation is correct.

AgentBuilder and Agent now correctly imported from flo_ai.agent, maintaining the example's functionality.

flo_ai/examples/agent_builder_usage.py (1)

3-5: LGTM - Import paths correctly updated.

AgentBuilder and ReasoningPattern are now properly sourced from flo_ai.agent, consistent with the public API reorganization.

documentation/essentials/code.mdx (1)

13-13: LGTM - Documentation examples updated consistently.

All code examples throughout the documentation now correctly reference flo_ai.agent for AgentBuilder and Agent imports, ensuring users follow the updated public API structure.

Also applies to: 37-37, 75-75, 105-105

flo_ai/examples/vllm_agent_usage.py (1)

3-5: LGTM - Import updates are consistent.

AgentBuilder and ReasoningPattern correctly imported from flo_ai.agent, maintaining consistency across all examples.

documentation/essentials/agents.mdx (1)

16-16: LGTM - Core documentation updated correctly.

The import example in the agents documentation now correctly shows AgentBuilder sourced from flo_ai.agent, guiding users to the proper import path.

documentation/quickstart.mdx (1)

33-34: AgentBuilder import update is consistent with the new public surface.
from flo_ai.agent import AgentBuilder matches the new flo_ai/flo_ai/agent/__init__.py re-exports.

Also applies to: 62-65

flo_ai/flo_ai/models/__init__.py (1)

7-18: LGTM: MessageType re-export via flo_ai.models is clean and consistent.
No obvious circular import risk here since it sources from .chat_message.

Also applies to: 20-33

flo_ai/examples/example_graph_visualization.py (1)

8-13: Good update: example now imports Agent from the new public module.
Keeps examples aligned with the PR’s API move.

flo_ai/flo_ai/agent/__init__.py (1)

1-14: LGTM: flo_ai.agent provides a clean, discoverable public surface.
Re-export list matches what examples/docs are using.

flo_ai/flo_ai/__init__.py (1)

6-19: LGTM: top-level flo_ai now re-exports Agent API from .agent (consistent with the move).
This keeps from flo_ai import Agent, AgentBuilder, ReasoningPattern stable.

Also applies to: 44-50

flo_ai/examples/arium_yaml_example.py (2)

740-742: LGTM: example imports updated to flo_ai.agent for AgentBuilder/ReasoningPattern.
Consistent with the new public surface.


332-344: YAML example enhancement is correct and follows the validated parser schema.

The items structure with name, type, and description fields is the correct format expected by ParserFieldModel (defined in flo_ai/models/agent.py). Array fields are validated to require items as a nested field model, and the example at lines 332-344 matches this validated schema exactly.

wavefront/server/modules/voice_agents_module/voice_agents_module/models/telephony_schemas.py (2)

1-1: LGTM - Import cleanup aligns with Pydantic v2 pattern.

The removal of model_validator import is consistent with the migration to model_post_init for post-initialization validation.


94-97: LGTM - Correct migration to model_post_init.

The validation logic is properly implemented using Pydantic v2's model_post_init hook. This pattern is consistent with other models in the codebase (e.g., flo_ai/flo_ai/models/agent.py and flo_ai/flo_ai/models/arium.py).
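
For reference, the two Pydantic v2 patterns side by side; the field names below are illustrative, not the actual telephony schema:

```python
from typing import Optional
from pydantic import BaseModel, model_validator

class SipConnectionWithValidator(BaseModel):
    """Cross-field check via the decorator-based validator."""
    host: Optional[str] = None
    port: Optional[int] = None

    @model_validator(mode='after')
    def check_pair(self):
        if (self.host is None) != (self.port is None):
            raise ValueError('host and port must be provided together')
        return self

class SipConnectionWithPostInit(BaseModel):
    """Same check via the model_post_init hook adopted in this PR."""
    host: Optional[str] = None
    port: Optional[int] = None

    def model_post_init(self, __context):
        if (self.host is None) != (self.port is None):
            raise ValueError('host and port must be provided together')
```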

flo_ai/tests/unit-tests/test_arium_yaml_validation.py (2)

1-26: LGTM - Well-structured test module with appropriate imports.

The test file properly imports all necessary models and sets up comprehensive test coverage for Arium YAML validation.


599-837: LGTM - Comprehensive builder validation integration tests.

The TestAriumBuilderValidation class thoroughly tests the AriumBuilder._validate_yaml_config method, covering various error scenarios including missing workflow sections, invalid edges, missing router configurations, and multi-config method errors.

flo_ai/tests/unit-tests/test_arium_yaml.py (3)

2-8: LGTM - Clear separation of concerns documented.

The updated docstring properly clarifies that this module focuses on integration and runtime behavior, while YAML structure validation is tested in test_arium_yaml_validation.py.


16-16: LGTM - Import path updated to reflect API reorganization.

The import of Agent from flo_ai.agent aligns with the PR's broader API surface reorganization.


863-878: LGTM - Good error case coverage.

This test properly verifies that a ValueError is raised when an agent configuration is missing both the model field and the base_llm parameter.

flo_ai/tests/unit-tests/test_agent_yaml_validation.py (4)

1-26: LGTM - Well-organized imports covering all model types.

The imports are comprehensive and properly organized, covering all validation models defined in flo_ai.models.agent.


272-485: Excellent provider-specific validation coverage.

The TestLLMConfigModel class thoroughly tests all supported providers with their specific requirements:

  • OpenAI/Anthropic/Gemini/Ollama: name required
  • VertexAI: name, project, and base_url required
  • RootFlo: model_id required
  • OpenAI vLLM: name, base_url, and api_key required

Boundary testing for temperature (0.0-2.0), max_tokens (>0), and timeout (>0) is also well covered.
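
An illustrative parametrized check for the temperature bound (not taken from the PR's test file; it assumes LLMConfigModel is importable from flo_ai.models.agent):

```python
import pytest
from pydantic import ValidationError
from flo_ai.models.agent import LLMConfigModel  # assumed import path

@pytest.mark.parametrize(
    'temperature,valid',
    [(0.0, True), (2.0, True), (-0.1, False), (2.1, False)],
)
def test_temperature_bounds(temperature, valid):
    kwargs = {'provider': 'openai', 'name': 'gpt-4o-mini', 'temperature': temperature}
    if valid:
        assert LLMConfigModel(**kwargs).temperature == temperature
    else:
        with pytest.raises(ValidationError):
            LLMConfigModel(**kwargs)
```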


686-702: Good negative test cases for tool validation.

These tests properly verify that invalid tool configurations (missing name, invalid types) raise appropriate errors with clear messages.


932-970: Comprehensive provider coverage test.

The test_yaml_all_providers test efficiently validates that all supported LLM providers can be configured through YAML with their required parameters. This is a good pattern for ensuring no provider is accidentally broken.

flo_ai/flo_ai/agent/builder.py (4)

4-12: LGTM - Import updates align with API reorganization.

The imports correctly reflect the new module structure with Agent and ReasoningPattern from flo_ai.agent, and the new validation models from flo_ai.models.agent.


188-216: LGTM - Well-implemented validation method.

The _validate_yaml_config static method properly:

  1. Validates input against AgentYamlModel
  2. Formats validation errors with field paths for readability
  3. Preserves the original exception chain with from e
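
A sketch of that validate-then-format pattern as a standalone function (module paths are inferred from the file layout in this PR; the real code is a static method on AgentBuilder):

```python
from pydantic import ValidationError
from flo_ai.models.agent import AgentYamlModel  # assumed import path
from flo_ai.helpers.yaml_validation import format_validation_error_path  # assumed import path

def validate_yaml_config(config: dict) -> AgentYamlModel:
    """Validate a raw YAML dict and surface readable, path-qualified errors."""
    try:
        return AgentYamlModel(**config)
    except ValidationError as e:
        details = [
            f"{format_validation_error_path(err['loc'], config)}: {err['msg']}"
            for err in e.errors()
        ]
        # 'from e' keeps the original Pydantic error as the cause
        raise ValueError('Invalid agent YAML:\n' + '\n'.join(details)) from e
```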

309-316: Potential AttributeError if _llm is None.

If both agent.model is None and base_llm is None, a ValueError is raised at line 285-287. However, if validation passes and _llm is set, this code path is safe. Consider adding an explicit guard for defensive coding.

The current flow guarantees _llm is set before reaching this block (either from create_llm_from_config at line 281-282 or from base_llm at line 288). However, if future refactoring changes the order, this could become an issue.

         if agent.settings is not None:
             settings = agent.settings
-            if settings.temperature is not None:
+            if settings.temperature is not None and builder._llm is not None:
                 builder._llm.temperature = settings.temperature

255-267: LGTM - Clean validation-first pattern.

The approach of validating first, then accessing the typed model attributes provides better type safety and clearer code compared to dictionary access.

flo_ai/flo_ai/helpers/llm_factory.py (5)

14-14: LGTM: Clean import of the typed configuration model.

The migration to LLMConfigModel provides type safety and centralized validation.


89-122: LGTM: VertexAI creation logic is correct.

The method properly handles parameter priority (kwargs > model_config) and provides sensible defaults. The duplicate validation with LLMConfigModel is acceptable as defense in depth.


124-160: LGTM: OpenAI vLLM creation logic is correct.

Parameter priority and temperature defaulting are handled appropriately.


162-200: LGTM: RootFlo creation logic correctly handles multiple auth sources.

The priority chain (kwargs > model_config > env vars) is well-implemented, and the comment about access_token being from kwargs only is helpful.


203-219: LGTM: Clean convenience wrapper.

flo_ai/flo_ai/arium/builder.py (6)

1-17: LGTM: Imports updated for typed configuration models.

Clean import structure with Pydantic validation support.


291-319: LGTM: Well-structured validation with user-friendly error messages.

The error formatting provides clear field paths and context, which will significantly improve debugging YAML configuration issues.


365-369: LGTM: Validated config handling is clean.

Using model_dump(exclude_none=True, by_alias=True) ensures proper serialization while the validated model provides typed access.


382-445: LGTM: Agent creation logic covers all configuration methods.

The branching correctly handles name-only references, direct configuration, inline YAML, and external file references.


477-574: LGTM: Router processing correctly handles all router types.

The conversion of typed models to dicts for the router factory is handled appropriately.


919-936: LGTM: Agent builder chain is clean and complete.

flo_ai/flo_ai/agent/agent.py (4)

1-27: LGTM: Clean imports for agent functionality.

Well-organized imports covering messages, tools, telemetry, and variable handling.


72-127: LGTM: Run method properly handles variable resolution and routing.

The dual-path approach (resolved vs unresolved variables) and routing based on tools presence is clean.


476-541: LGTM: Well-structured ReACT and CoT prompts.

Clear instructions with proper tool enumeration and explicit "Final Answer:" guidance.


543-643: LGTM: Robust final answer detection with appropriate fallbacks.

The two-tier approach (token detection + LLM classification) with conservative defaults is well-designed for reliability.
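
A generic sketch of such a two-tier check (not the actual agent.py implementation; the classifier callable and the 'final' convention are assumptions):

```python
def looks_like_final_answer(text: str, classify_with_llm) -> bool:
    """Cheap token check first, LLM classification only for ambiguous responses."""
    # Tier 1: explicit marker emitted by the ReACT/CoT prompts
    if 'Final Answer:' in text:
        return True
    # Tier 2: ask a classifier; fall back to a fixed default if it fails
    try:
        return classify_with_llm(text).strip().lower() == 'final'
    except Exception:
        return False
```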

flo_ai/flo_ai/models/arium.py (7)

1-17: LGTM: Clean module setup with appropriate imports.

Shared models imported from agent.py to avoid duplication.


19-88: LGTM: AriumAgentConfigModel correctly extends parent with arium-specific validation.

The override of model_post_init properly allows name-only references while enforcing configuration method exclusivity.


90-101: LGTM: Clean function node configuration model.


141-203: LGTM: Router configuration with comprehensive type-specific validation.

The model_post_init correctly enforces required fields per router type.


206-229: LGTM: Edge and workflow models handle YAML keys correctly.

The from alias with populate_by_name=True properly handles the Python reserved word.
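
A minimal illustration of that aliasing pattern (field names here are illustrative, not the exact edge-model schema):

```python
from typing import List
from pydantic import BaseModel, ConfigDict, Field

class EdgeSketch(BaseModel):
    model_config = ConfigDict(populate_by_name=True)

    # YAML uses the reserved word 'from'; the Python attribute is from_node
    from_node: str = Field(alias='from')
    to: List[str]

# Accepted via the YAML alias...
edge = EdgeSketch(**{'from': 'planner', 'to': ['executor']})
# ...and, thanks to populate_by_name=True, via the Python name as well
edge_by_name = EdgeSketch(from_node='planner', to=['executor'])
```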


232-288: LGTM: AriumNodeConfigModel correctly validates configuration methods.

The mutual exclusivity check between yaml_file and inline configuration, plus requiring workflow for inline configs, is well-implemented.


351-365: LGTM: Root model and forward reference handling is correct.

The forward reference rebuilding is necessary for recursive type definitions.

Comment thread documentation/quickstart.mdx Outdated
Comment thread flo_ai/docs/arium_yaml_guide.md
Comment thread flo_ai/examples/example_graph_visualization.py Outdated
Comment thread flo_ai/flo_ai/agent/agent.py
Comment thread flo_ai/flo_ai/agent/agent.py Outdated
Comment thread flo_ai/flo_ai/agent/agent.py Outdated
Comment thread flo_ai/flo_ai/arium/builder.py
Comment thread flo_ai/flo_ai/arium/builder.py Outdated
Comment on lines +86 to +152
```python
class LLMConfigModel(BaseModel):
    """LLM model configuration."""

    provider: Literal[
        'openai',
        'anthropic',
        'claude',  # Alias for anthropic
        'gemini',
        'google',  # Alias for gemini
        'ollama',
        'vertexai',
        'rootflo',
        'openai_vllm',
    ] = Field(..., description='LLM provider')
    name: Optional[str] = Field(
        None, description='Model name (required for most providers)'
    )
    base_url: Optional[str] = Field(None, description='Custom base URL')
    temperature: Optional[float] = Field(
        None, ge=0.0, le=2.0, description='Temperature setting'
    )
    max_tokens: Optional[int] = Field(None, gt=0, description='Maximum tokens')
    timeout: Optional[int] = Field(None, gt=0, description='Request timeout in seconds')
    # VertexAI specific
    project: Optional[str] = Field(None, description='GCP project ID (for VertexAI)')
    location: Optional[str] = Field(None, description='GCP location (for VertexAI)')
    # RootFlo specific
    model_id: Optional[str] = Field(None, description='Model ID (for RootFlo)')
    # OpenAI vLLM specific
    api_key: Optional[str] = Field(None, description='API key (for openai_vllm)')

    def model_post_init(self, __context):
        """Validate provider-specific requirements."""
        provider = self.provider.lower()

        # Most providers require 'name'
        if provider in ['openai', 'anthropic', 'claude', 'gemini', 'google', 'ollama']:
            if not self.name:
                raise ValueError(
                    f'{provider.title()} provider requires "name" parameter in model configuration'
                )

        # VertexAI requires name, project, and base_url
        if provider == 'vertexai':
            if not self.name:
                raise ValueError('VertexAI provider requires "name" parameter')
            if not self.project:
                raise ValueError('VertexAI provider requires "project" parameter')
            if not self.base_url:
                raise ValueError('VertexAI provider requires "base_url" parameter')

        # RootFlo requires model_id
        if provider == 'rootflo':
            if not self.model_id:
                raise ValueError(
                    'RootFlo provider requires "model_id" in model configuration'
                )

        # OpenAI vLLM requires name, base_url, and api_key
        if provider == 'openai_vllm':
            if not self.name:
                raise ValueError('openai_vllm provider requires "name" parameter')
            if not self.base_url:
                raise ValueError('openai_vllm provider requires "base_url" parameter')
            if not self.api_key:
                raise ValueError('openai_vllm provider requires "api_key" parameter')
```



⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let me find where LLMConfigModel is used and locate the factory function
rg "create_llm|from_config" --type py -i

Repository: rootflo/wavefront

Length of output: 6044


🏁 Script executed:

# Also search for the main factory or initialization logic
rg "def.*llm.*config" --type py -i -A 3

Repository: rootflo/wavefront

Length of output: 10615


🏁 Script executed:

# Let's also check the file structure to understand the codebase better
fd -t f -e py | grep -E "(llm|config|agent|factory)" | head -20

Repository: rootflo/wavefront

Length of output: 780


🏁 Script executed:

cat -n flo_ai/flo_ai/helpers/llm_factory.py

Repository: rootflo/wavefront

Length of output: 9237


🏁 Script executed:

# Also check the test to see what configurations work
cat -n flo_ai/tests/unit-tests/test_agent_yaml_validation.py | head -100

Repository: rootflo/wavefront

Length of output: 3889


🏁 Script executed:

# Let me check the agent.py file to see the exact validation rules
sed -n '86,152p' flo_ai/flo_ai/models/agent.py

Repository: rootflo/wavefront

Length of output: 2861


🏁 Script executed:

# Check if there are any examples or tests showing rootflo usage
grep -r "rootflo" flo_ai/tests --type py -A 5 -B 2 | head -80

Repository: rootflo/wavefront

Length of output: 178


🏁 Script executed:

# Try with rg instead
rg "rootflo" flo_ai/tests --type py -A 5 -B 2 | head -100

Repository: rootflo/wavefront

Length of output: 2516


🏁 Script executed:

# Also check examples
rg "rootflo" flo_ai/examples --type py -A 5 -B 2

Repository: rootflo/wavefront

Length of output: 43


🏁 Script executed:

# Let me check if there are actual usage examples that show how rootflo base_url is typically provided
rg "ROOTFLO_BASE_URL|rootflo.*base_url" -i

Repository: rootflo/wavefront

Length of output: 411


🏁 Script executed:

# Also check the actual test to see the full parametrized test
cat -n flo_ai/tests/unit-tests/test_agent_yaml_validation.py | grep -A 50 "test_valid_provider_configs"

Repository: rootflo/wavefront

Length of output: 43


🏁 Script executed:

# Let me check the full test parametrization to see all config requirements
sed -n '200,350p' flo_ai/tests/unit-tests/test_agent_yaml_validation.py

Repository: rootflo/wavefront

Length of output: 5039


🏁 Script executed:

# Continue reading to find the parametrized test
sed -n '350,450p' flo_ai/tests/unit-tests/test_agent_yaml_validation.py

Repository: rootflo/wavefront

Length of output: 3942


Provider-specific validation in LLMConfigModel has misaligned requirements for RootFlo; VertexAI and OpenAI vLLM validation rules correctly match factory behavior.

The vertexai provider validation requiring base_url is correct—the factory enforces this requirement. However, the rootflo validation is incomplete: while it requires model_id, it does not validate base_url, which the factory requires (via model_config.base_url, kwargs, or ROOTFLO_BASE_URL environment variable). Update model_post_init to validate that RootFlo has base_url available from at least one source, or document that it must be provided via environment variable.
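
A sketch of the extra check, expressed as a standalone helper so the sources of base_url are explicit (ROOTFLO_BASE_URL is the environment fallback mentioned above; the helper name is illustrative):

```python
import os
from typing import Optional

def validate_rootflo_config(model_id: Optional[str], base_url: Optional[str]) -> None:
    """Mirror the factory's RootFlo requirements at validation time for clearer YAML errors."""
    if not model_id:
        raise ValueError('RootFlo provider requires "model_id" in model configuration')
    if not base_url and not os.getenv('ROOTFLO_BASE_URL'):
        raise ValueError(
            'RootFlo provider requires "base_url" in model configuration '
            'or the ROOTFLO_BASE_URL environment variable'
        )
```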

Comment thread flo_ai/flo_ai/models/agent.py
@vishnurk6247 vishnurk6247 merged commit 982f777 into develop Dec 12, 2025
7 of 8 checks passed
@vishnurk6247 vishnurk6247 deleted the feat/yaml-validation branch December 12, 2025 17:34

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
flo_ai/docs/arium_yaml_guide.md (1)

742-784: Fix markdownlint MD040: add languages to fenced code blocks in Error Handling.

@@
-```
+```text
 ValueError: YAML must contain an "arium" section

@@
-```
+```text
ValueError: Agent {name} must have either yaml_config or yaml_file

@@
-```
+```text
ValueError: Tool {name} not found in provided tools dictionary

@@
-```
+```text
ValueError: Router {name} not found in provided routers dictionary

@@
-```
+```text
ValueError: Workflow must specify a start node
ValueError: Workflow must specify end nodes

flo_ai/flo_ai/arium/builder.py (2)

654-696: Bug risk: ForEach execute_node can become a plain dict (order-dependent) and break at runtime.
During resolution, foreach_nodes_dict.get(execute_node_name) may return the unresolved config dict for forward references. At minimum, reject a non-node execute_node and require referencing agents/function_nodes/ariums (or do a dependency-aware resolve).


```diff
@@
         for foreach_name, foreach_config in foreach_nodes_dict.items():
@@
             execute_node = (
@@
                 or foreach_nodes_dict.get(execute_node_name)
             )
 
+            if isinstance(execute_node, dict):
+                raise ValueError(
+                    f"ForEachNode '{foreach_name}': execute_node '{execute_node_name}' "
+                    "refers to another iterator that hasn't been resolved yet (or is invalid). "
+                    "Reorder iterators or reference a non-iterator node."
+                )
+
             if not execute_node:
@@
             foreach_node = ForEachNode(name=foreach_name, execute_node=execute_node)
```

889-908: Agent config: tools silently ignored and output_schema defaults to {} causing behavior inconsistency.

Tools specified in YAML are silently ignored if tool_registry is not provided to from_yaml() — the condition at line 892 (if tool_configs and available_tools:) skips the entire block when available_tools is falsy, leaving agent_tools as an empty list with no error raised.

At line 928, output_schema is forced to {} when None, which differs semantically from the default behavior and could break LLM implementations that check for is not None rather than truthiness.

Add validation to catch the missing tool_registry case, and avoid forcing {} when no schema is provided.
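
A sketch of the suggested guard for the tools case (names are illustrative; the real check would live in the agent-config path of AriumBuilder.from_yaml):

```python
from typing import Any, Dict, List, Optional, Union

def ensure_tools_resolvable(
    tool_configs: List[Union[str, Dict[str, Any]]],
    tool_registry: Optional[Dict[str, Any]],
) -> None:
    """Fail loudly instead of silently dropping tools declared in YAML."""
    if tool_configs and not tool_registry:
        names = [t if isinstance(t, str) else t.get('name', '<unnamed>') for t in tool_configs]
        raise ValueError(
            f'YAML declares tools {names} but no tool_registry was provided to from_yaml(); '
            'pass a registry or remove the tools section'
        )
```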

♻️ Duplicate comments (3)
flo_ai/examples/example_graph_visualization.py (1)

48-56: MockLLM now matches BaseLLM call shape (avoids runtime kwargs TypeError).
Adding **kwargs: Any to both generate() and stream() is the right fix; this addresses the earlier review note.

flo_ai/flo_ai/models/agent.py (2)

217-249: Tools singleton normalization is a solid YAML UX improvement.
This avoids iterating chars/keys and yields clearer validation outcomes.


117-152: RootFlo provider validation still misses base_url (factory mismatch).
RootFlo currently only enforces model_id, but if runtime creation requires base_url (or env fallback), validation should ensure it’s available from model.base_url or documented env.

@@
         # RootFlo requires model_id
         if provider == 'rootflo':
             if not self.model_id:
                 raise ValueError(
                     'RootFlo provider requires "model_id" in model configuration'
                 )
+            # If base_url is mandatory at runtime, validate it here (or clearly document env fallback).
+            # Prefer validating config-level presence for clearer YAML errors.
+            # if not self.base_url:
+            #     raise ValueError('RootFlo provider requires "base_url" in model configuration')
🧹 Nitpick comments (5)
flo_ai/flo_ai/models/agent.py (2)

54-67: ParserFieldModel: good structural validation; consider enforcing required default.
Current required: Optional[bool] allows null to slip through; if downstream expects strict boolean, default to False (or True based on schema semantics) to avoid tri-state logic.


206-216: AgentConfigModel: “prefer job” comment doesn’t match behavior—normalize or drop comment.
Right now both job and prompt remain set; builder will “prefer job”, but the model doesn’t “ignore prompt”. Consider setting self.prompt = None when job is present to keep config canonical.

flo_ai/flo_ai/agent/builder.py (2)

287-297: Type hygiene: _process_yaml_tools accepts str entries; fix the annotation to match.
The parameter is annotated as `List[Dict[str, Any]]`, but callers pass `List[Union[str, Dict[str, Any]]]` (see the sketch below).

Also applies to: 318-323
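
A signature-only sketch of the widened annotation; `YamlToolEntry` is an invented alias and the real method's resolution logic (an instance method on AgentBuilder) stays as-is:

```python
from typing import Any, Dict, List, Union

# YAML tools may be bare names ("web_search") or inline mappings with pre-filled params.
YamlToolEntry = Union[str, Dict[str, Any]]


def _process_yaml_tools(
    tool_configs: List[YamlToolEntry],
    available_tools: Dict[str, Any],
) -> List[Any]:
    # Annotation-only stub: the actual implementation is unchanged.
    return []
```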


307-315: Avoid mutating builder._llm directly; prefer an explicit builder method.
Direct attribute writes couple AgentBuilder to BaseLLM internals (and can break if temperature becomes computed/immutable).
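
A sketch of the explicit-setter alternative; `with_temperature` is a hypothetical method name and the deferred application in build() is an assumption:

```python
from typing import Optional


class AgentBuilder:
    """Illustrative fragment: record overrides on the builder, apply them at build time."""

    def __init__(self) -> None:
        self._llm = None
        self._temperature: Optional[float] = None

    def with_temperature(self, temperature: float) -> 'AgentBuilder':
        # Store the override instead of writing BaseLLM attributes from from_yaml().
        self._temperature = temperature
        return self

    def build(self):
        # Apply the override once, at the single point where the LLM is finalized.
        if self._llm is not None and self._temperature is not None:
            self._llm.temperature = self._temperature
        return self._llm
```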

flo_ai/flo_ai/arium/builder.py (1)

799-817: _create_llm_from_config takes base_llm but ignores it; drop the parameter or implement the fallback.
As written, the signature implies fallback behavior that doesn't exist.
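
If the fallback is the intended behavior, a small sketch of what the signature currently promises; `create_llm_or_fallback` and `build_from_config` are hypothetical names standing in for the construction the method already performs:

```python
from typing import Any, Callable, Optional


def create_llm_or_fallback(
    llm_config: Optional[dict],
    base_llm: Optional[Any],
    build_from_config: Callable[[dict], Any],
) -> Any:
    """Use the explicit config when given, otherwise fall back to the shared base_llm."""
    if llm_config is not None:
        return build_from_config(llm_config)
    if base_llm is not None:
        return base_llm
    raise ValueError('Either an LLM config or a base_llm must be provided')
```

Alternatively, dropping base_llm from the signature keeps the current behavior honest.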

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0cedc1d and 453eb20.

📒 Files selected for processing (7)
  • documentation/quickstart.mdx (6 hunks)
  • flo_ai/docs/arium_yaml_guide.md (33 hunks)
  • flo_ai/examples/example_graph_visualization.py (2 hunks)
  • flo_ai/flo_ai/agent/agent.py (1 hunks)
  • flo_ai/flo_ai/agent/builder.py (4 hunks)
  • flo_ai/flo_ai/arium/builder.py (22 hunks)
  • flo_ai/flo_ai/models/agent.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • documentation/quickstart.mdx
🧰 Additional context used
🧬 Code graph analysis (3)
flo_ai/flo_ai/models/agent.py (2)
flo_ai/flo_ai/models/arium.py (5)
  • model_post_init (35-87)
  • model_post_init (177-203)
  • model_post_init (262-287)
  • model_post_init (326-348)
  • model_post_init (357-359)
wavefront/server/modules/voice_agents_module/voice_agents_module/models/telephony_schemas.py (1)
  • model_post_init (94-97)
flo_ai/flo_ai/arium/builder.py (2)
flo_ai/flo_ai/agent/builder.py (4)
  • AgentBuilder (15-382)
  • _validate_yaml_config (189-216)
  • from_yaml (219-316)
  • build (171-186)
flo_ai/flo_ai/formatter/yaml_format_parser.py (3)
  • build (150-162)
  • build (196-236)
  • FloYamlParser (165-236)
flo_ai/flo_ai/agent/builder.py (4)
flo_ai/flo_ai/tool/tool_config.py (2)
  • ToolConfig (5-56)
  • create_tool_config (59-70)
flo_ai/flo_ai/models/agent.py (2)
  • AgentYamlModel (251-261)
  • LLMConfigModel (86-151)
flo_ai/flo_ai/helpers/yaml_validation.py (1)
  • format_validation_error_path (6-53)
flo_ai/flo_ai/helpers/llm_factory.py (1)
  • create_llm_from_config (204-219)
🪛 markdownlint-cli2 (0.18.1)
flo_ai/docs/arium_yaml_guide.md

748-748: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


756-756: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


764-764: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


772-772: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


780-780: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🔇 Additional comments (2)
flo_ai/examples/example_graph_visualization.py (1)

7-12: Import relocation + typing update look correct.
Switching to from flo_ai.agent import Agent matches the new public API surface, and adding Any supports the updated mock signatures.

flo_ai/flo_ai/arium/builder.py (1)

291-320: [rewritten comment]
[classification tag]

Comment on lines +31 to +63
```python
def __init__(
    self,
    name: str,
    system_prompt: str | AssistantMessage,
    llm: BaseLLM,
    tools: Optional[List[Tool]] = None,
    max_retries: int = 0,
    max_tool_calls: int = 5,
    reasoning_pattern: ReasoningPattern = ReasoningPattern.DIRECT,
    output_schema: Optional[Dict[str, Any]] = None,
    role: Optional[str] = None,
    act_as: Optional[str] = MessageType.ASSISTANT,
    input_filter: Optional[List[str]] = None,
):
    # Determine agent type based on tools
    agent_type = AgentType.TOOL_USING if tools else AgentType.CONVERSATIONAL

    # Enhance system prompt with role if provided
    enhanced_prompt = system_prompt
    if role:
        if isinstance(system_prompt, str):
            enhanced_prompt = f'You are {role}. {system_prompt}'
        elif isinstance(system_prompt, AssistantMessage):
            enhanced_prompt = f'You are {role}. {system_prompt.content}'

    super().__init__(
        name=name,
        system_prompt=str(enhanced_prompt),
        agent_type=agent_type,
        llm=llm,
        max_retries=max_retries,
        max_tool_calls=max_tool_calls,
    )
```

⚠️ Potential issue | 🔴 Critical

Bug: system_prompt as AssistantMessage becomes str(AssistantMessage) (likely wrong prompt content).
If role is not provided and system_prompt is an AssistantMessage, you currently pass str(enhanced_prompt) to BaseAgent—this can end up as a repr instead of the message text.

```diff
@@
-        enhanced_prompt = system_prompt
+        enhanced_prompt: str
         if role:
             if isinstance(system_prompt, str):
                 enhanced_prompt = f'You are {role}. {system_prompt}'
             elif isinstance(system_prompt, AssistantMessage):
                 enhanced_prompt = f'You are {role}. {system_prompt.content}'
+            else:
+                enhanced_prompt = str(system_prompt)
+        else:
+            enhanced_prompt = (
+                system_prompt
+                if isinstance(system_prompt, str)
+                else system_prompt.content
+                if isinstance(system_prompt, AssistantMessage)
+                else str(system_prompt)
+            )
@@
         super().__init__(
             name=name,
-            system_prompt=str(enhanced_prompt),
+            system_prompt=enhanced_prompt,
             agent_type=agent_type,
             llm=llm,
             max_retries=max_retries,
             max_tool_calls=max_tool_calls,
         )
```

🤖 Prompt for AI Agents
In flo_ai/flo_ai/agent/agent.py around lines 31 to 63, when system_prompt is an
AssistantMessage and role is not provided the code currently passes
str(enhanced_prompt) to the BaseAgent which yields a repr instead of the actual
message text; change the logic so enhanced_prompt is always a plain string: if
system_prompt is an AssistantMessage set enhanced_prompt = system_prompt.content
(and if role is provided prepend "You are {role}. " to that content), and
finally pass enhanced_prompt (not str(enhanced_prompt)) into super().__init__.

Comment on lines +142 to +147
```python
logger.debug(f'Sending messages to LLM: {messages}')
response = await self.llm.generate(
    messages, output_schema=self.output_schema
)
logger.debug(f'Raw LLM Response: {response}')
```


⚠️ Potential issue | 🟠 Major

Logging likely leaks sensitive content (full prompts, tool outputs, user data).
logger.debug(f'Sending messages to LLM: {messages}') and raw responses can spill secrets/PII into logs. Consider redaction/truncation or logging only metadata (counts, roles, ids).

Also applies to: 653-661
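
A possible shape for metadata-only logging; the helper names are illustrative and not part of the current code:

```python
import logging
from typing import Any, List

logger = logging.getLogger(__name__)


def log_llm_request(messages: List[Any]) -> None:
    """Log message counts and roles, never the prompt text itself."""
    roles = [getattr(m, 'role', type(m).__name__) for m in messages]
    logger.debug('Sending %d message(s) to LLM, roles=%s', len(messages), roles)


def log_llm_response(response: Any, max_chars: int = 80) -> None:
    """Log a bounded preview so full responses never land in the log sink."""
    text = str(response)
    logger.debug('LLM response: %d chars, preview=%r', len(text), text[:max_chars])
```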

Comment on lines +216 to +279
# Keep executing tools until we get a final answer
tool_call_count = 0
function_response = None
function_name = None
while tool_call_count < self.max_tool_calls:
formatted_tools = self.llm.format_tools_for_llm(self.tools)
response = await self.llm.generate(
messages,
functions=formatted_tools,
output_schema=self.output_schema,
)

# Handle ReACT and CoT patterns
function_call = await self.llm.get_function_call(response)

# If no function call, check if this is truly a final answer
if not function_call:
assistant_message = self.llm.get_message_content(response)
if assistant_message:
# Check if this is a final answer or just intermediate reasoning
is_final = await self._is_final_answer(
assistant_message, tool_call_count, messages
)
if is_final:
# Ensure act_as is not None (default to 'assistant' if missing)
role = (
self.act_as
if self.act_as is not None
else MessageType.ASSISTANT
)
self.add_to_history(
AssistantMessage(
role=role, content=assistant_message
)
)
return self.conversation_history
else:
# This is intermediate reasoning, add to context and continue
msg_preview = (
assistant_message[:100]
if len(assistant_message) > 100
else assistant_message
)
logger.debug(
f'Detected intermediate reasoning (not final answer): {msg_preview}...'
)
# Ensure act_as is not None (default to 'assistant' if missing)
role = (
self.act_as
if self.act_as is not None
else MessageType.ASSISTANT
)
self.add_to_history(
AssistantMessage(
role=role, content=assistant_message
)
)
self.add_to_history(
UserMessage(
content='Based on your reasoning, please proceed with the necessary tool calls to complete the task.',
)
)
continue
break

⚠️ Potential issue | 🔴 Critical

Critical: tool loop writes to conversation_history but doesn’t update messages (LLM never sees the new turns).
In the “intermediate reasoning” path (and tool error retry path), you append to history then continue, but the next llm.generate() uses the stale messages list from before those additions.

```diff
@@
                             else:
                                 # This is intermediate reasoning, add to context and continue
@@
                                 self.add_to_history(
                                     AssistantMessage(
                                         role=role, content=assistant_message
                                     )
                                 )
                                 self.add_to_history(
                                     UserMessage(
                                         content='Based on your reasoning, please proceed with the necessary tool calls to complete the task.',
                                     )
                                 )
+                                # Keep the actual LLM input in sync with history
+                                messages = await self._get_message_history(variables)
                                 continue
@@
                         if should_retry and retry_count <= self.max_retries:
@@
                             self.add_to_history(
                                 AssistantMessage(
                                     content=f'Tool execution error: {analysis}'
                                 )
                             )
+                            messages = await self._get_message_history(variables)
                             continue
```

Also applies to: 386-392

Comment on lines +328 to +337
function_response = await tool.run(
inputs=[], variables=None, **function_args
)
tool_span.set_attribute(
'tool.result.length', len(str(function_response))
)
else:
function_response = await tool.run(
inputs=[], variables=None, **function_args
)

⚠️ Potential issue | 🟠 Major

Tools run without runtime variables (variables=None)—breaks templating/tool behavior.
You already have variables; pass them through (and consider whether inputs=[] is intended).

```diff
@@
-                                function_response = await tool.run(
-                                    inputs=[], variables=None, **function_args
-                                )
+                                function_response = await tool.run(
+                                    inputs=[], variables=variables, **function_args
+                                )
@@
-                            function_response = await tool.run(
-                                inputs=[], variables=None, **function_args
-                            )
+                            function_response = await tool.run(
+                                inputs=[], variables=variables, **function_args
+                            )
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
function_response = await tool.run(
inputs=[], variables=None, **function_args
)
tool_span.set_attribute(
'tool.result.length', len(str(function_response))
)
else:
function_response = await tool.run(
inputs=[], variables=None, **function_args
)
function_response = await tool.run(
inputs=[], variables=variables, **function_args
)
tool_span.set_attribute(
'tool.result.length', len(str(function_response))
)
else:
function_response = await tool.run(
inputs=[], variables=variables, **function_args
)
🤖 Prompt for AI Agents
In flo_ai/flo_ai/agent/agent.py around lines 328 to 337, the code calls tool.run
with variables=None which breaks templating and tool behavior; replace
variables=None with the actual variables variable (e.g., variables=variables) so
runtime variables are passed through, and review whether inputs=[] is
intended—if the tool expects no inputs omit the parameter or pass the actual
inputs variable instead; keep the tool_span handling the same.

Comment on lines +188 to +217
@staticmethod
def _validate_yaml_config(config: Dict[str, Any]) -> AgentYamlModel:
"""Validate YAML configuration using Pydantic models.

Args:
config: Dictionary containing YAML configuration

Returns:
AgentYamlModel: Validated configuration model

Raises:
ValueError: If validation fails with formatted error messages
"""
try:
validated_config = AgentYamlModel(**config)
except ValidationError as e:
# Format validation errors for better readability
error_messages = []
for error in e.errors():
field_path = format_validation_error_path(error['loc'], config)
error_msg = f"{field_path}: {error['msg']}"
if 'ctx' in error:
error_msg += f" (context: {error['ctx']})"
error_messages.append(error_msg)
raise ValueError(
'YAML validation failed:\n'
+ '\n'.join(f' - {msg}' for msg in error_messages)
) from e
return validated_config


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```shell
wc -l flo_ai/flo_ai/agent/builder.py
```

Repository: rootflo/wavefront

Length of output: 95


🏁 Script executed:

```shell
cat -n flo_ai/flo_ai/agent/builder.py | sed -n '180,270p'
```

Repository: rootflo/wavefront

Length of output: 4257


🏁 Script executed:

```shell
# Search for yaml.safe_load usage and format_validation_error_path
rg "yaml.safe_load|format_validation_error_path" --context 5
```

Repository: rootflo/wavefront

Length of output: 35181


🏁 Script executed:

```shell
cat -n flo_ai/flo_ai/helpers/yaml_validation.py
```

Repository: rootflo/wavefront

Length of output: 2506


🏁 Script executed:

```shell
# Check if there are any tests for None/non-dict YAML handling
rg "yaml.safe_load" flo_ai/tests --context 3 -A 5
```

Repository: rootflo/wavefront

Length of output: 2712


🏁 Script executed:

rg "_validate_yaml_config" --context 2

Repository: rootflo/wavefront

Length of output: 6436


🏁 Script executed:

```shell
# Check if AriumBuilder has identical structure
cat -n flo_ai/flo_ai/arium/builder.py | sed -n '1,50p'
```

Repository: rootflo/wavefront

Length of output: 2469


🏁 Script executed:

```shell
# Search for similar yaml.safe_load patterns to understand broader impact
rg "yaml.safe_load.*\)" -A 3 flo_ai/flo_ai/agent/builder.py flo_ai/flo_ai/arium/builder.py
```

Repository: rootflo/wavefront

Length of output: 1213


Guard YAML root type before AgentYamlModel(**config) to prevent confusing TypeErrors.

If YAML parses to None or a list, the code raises TypeError before reaching your error formatting logic. The format_validation_error_path function also expects a dict, so this check is necessary for proper error reporting.

```diff
    @staticmethod
    def _validate_yaml_config(config: Dict[str, Any]) -> AgentYamlModel:
        """Validate YAML configuration using Pydantic models.

        Args:
            config: Dictionary containing YAML configuration

        Returns:
            AgentYamlModel: Validated configuration model

        Raises:
            ValueError: If validation fails with formatted error messages
        """
        try:
+            if not isinstance(config, dict):
+                raise ValueError(
+                    f"YAML root must be a mapping/object, got: {type(config).__name__}"
+                )
            validated_config = AgentYamlModel(**config)
        except ValidationError as e:
```

This applies to both AgentBuilder._validate_yaml_config (lines 188–216) and the identical pattern in AriumBuilder.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
@staticmethod
def _validate_yaml_config(config: Dict[str, Any]) -> AgentYamlModel:
"""Validate YAML configuration using Pydantic models.
Args:
config: Dictionary containing YAML configuration
Returns:
AgentYamlModel: Validated configuration model
Raises:
ValueError: If validation fails with formatted error messages
"""
try:
validated_config = AgentYamlModel(**config)
except ValidationError as e:
# Format validation errors for better readability
error_messages = []
for error in e.errors():
field_path = format_validation_error_path(error['loc'], config)
error_msg = f"{field_path}: {error['msg']}"
if 'ctx' in error:
error_msg += f" (context: {error['ctx']})"
error_messages.append(error_msg)
raise ValueError(
'YAML validation failed:\n'
+ '\n'.join(f' - {msg}' for msg in error_messages)
) from e
return validated_config
@staticmethod
def _validate_yaml_config(config: Dict[str, Any]) -> AgentYamlModel:
"""Validate YAML configuration using Pydantic models.
Args:
config: Dictionary containing YAML configuration
Returns:
AgentYamlModel: Validated configuration model
Raises:
ValueError: If validation fails with formatted error messages
"""
try:
if not isinstance(config, dict):
raise ValueError(
f"YAML root must be a mapping/object, got: {type(config).__name__}"
)
validated_config = AgentYamlModel(**config)
except ValidationError as e:
# Format validation errors for better readability
error_messages = []
for error in e.errors():
field_path = format_validation_error_path(error['loc'], config)
error_msg = f"{field_path}: {error['msg']}"
if 'ctx' in error:
error_msg += f" (context: {error['ctx']})"
error_messages.append(error_msg)
raise ValueError(
'YAML validation failed:\n'
+ '\n'.join(f' - {msg}' for msg in error_messages)
) from e
return validated_config
🤖 Prompt for AI Agents
In flo_ai/flo_ai/agent/builder.py around lines 188 to 217, guard that the parsed
YAML root is a dict before calling AgentYamlModel(**config) and before calling
format_validation_error_path: if config is None or not isinstance(config, dict)
(e.g., list), raise a ValueError with a clear message like "YAML root must be a
mapping/dictionary, got {type}" so the code doesn't raise a TypeError and your
validation formatting can run; apply the same guard and error message in the
identical section for AriumBuilder as well.

thomastomy5 pushed a commit that referenced this pull request Apr 27, 2026
* refactor(flo-ai): follow arium directory structure for agents

* feat(flo-ai): add field level validation for agent/arium yaml

* fix: review comments