feat(flo-ai): add field level validation for agent/arium yaml#178
Conversation
WalkthroughThis PR relocates and re-exports agent-related public APIs into a new Changes
Sequence Diagram(s)mermaid Estimated code review effort🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches✅ Passed checks (3 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
There was a problem hiding this comment.
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
flo_ai/flo_ai/formatter/yaml_format_parser.py (1)
85-95: UseLiteral.__class_getitem__()instead ofLiteral.__getitem__().For Python 3.10+,
__class_getitem__is the correct hook per PEP 560 for subscripting generic classes.__getitem__is non-standard and intended only for legacy runtimes. ReplaceLiteral.__getitem__(literals)withLiteral.__class_getitem__(literals)on line 94.flo_ai/docs/arium_yaml_guide.md (1)
742-786: Add languages to fenced code blocks (MD040).- ``` + ```text ValueError: YAML must contain an "arium" section - ``` + ``` - ``` + ```text ValueError: Agent {name} must have either yaml_config or yaml_file - ``` + ``` - ``` + ```text ValueError: Tool {name} not found in provided tools dictionary - ``` + ``` - ``` + ```text ValueError: Router {name} not found in provided routers dictionary - ``` + ``` - ``` + ```text ValueError: Workflow must specify a start node ValueError: Workflow must specify end nodes - ``` + ```documentation/essentials/yaml-agents.mdx (2)
32-38: Docs likely callAgentBuilder.from_yamlincorrectly (positional arg ambiguity).
Iffrom_yamlisfrom_yaml(yaml_str=None, yaml_file=None, ...), thenfrom_yaml('agent.yaml')will be treated as YAML content, not a file path.-from flo_ai.agent import AgentBuilder - -# Load agent from YAML -agent = AgentBuilder.from_yaml('agent.yaml') +from flo_ai.agent import AgentBuilder + +# Load agent from YAML file +builder = AgentBuilder.from_yaml(yaml_file='agent.yaml') +agent = builder.build() response = await agent.run('How can I reset my password?')
197-214:from_yaml_string()does not exist; useAgentBuilder.from_yaml(yaml_str=...)and call.build()to get the agent.The
from_yaml_stringmethod is not implemented in the codebase. The correct API isfrom_yaml()which accepts ayaml_strparameter and returns anAgentBuilderinstance. Call.build()on the builder to get the finalAgentobject.-# Load from string -yaml_content = """ +yaml_content = """ agent: name: "Test Agent" prompt: "You are a test agent." model: provider: "openai" name: "gpt-4o-mini" """ - -agent = AgentBuilder.from_yaml_string(yaml_content) +agent = AgentBuilder.from_yaml(yaml_str=yaml_content).build()flo_ai/flo_ai/__init__.py (1)
64-122: RemoveHumanMessageandAIMessagefrom__all__, or import them from the models package.The symbols
HumanMessageandAIMessageare listed in__all__but are not imported into the module. Either these symbols need to be imported from.models, or they should be removed from__all__if they are not part of the public API.flo_ai/flo_ai/helpers/llm_factory.py (1)
20-28: Provider alias mismatch:LLMConfigModelallows aliases thatSUPPORTED_PROVIDERSrejects.
LLMConfigModel(lines 85-150 inagent.py) accepts'claude'as an alias for'anthropic'and'google'as an alias for'gemini', butSUPPORTED_PROVIDERSdoesn't include these aliases. A config usingprovider: claudewill pass Pydantic validation but fail at runtime with "Unsupported model provider".Consider either:
- Adding aliases to
SUPPORTED_PROVIDERSand mapping them increate_llm, or- Normalizing aliases in
LLMConfigModel.model_post_initbefore they reach the factory.SUPPORTED_PROVIDERS = { 'openai', 'anthropic', + 'claude', # Alias for anthropic 'gemini', + 'google', # Alias for gemini 'ollama', 'vertexai', 'rootflo', 'openai_vllm', }And update routing logic:
- if provider not in LLMFactory.SUPPORTED_PROVIDERS: + # Normalize aliases + provider_map = {'claude': 'anthropic', 'google': 'gemini'} + provider = provider_map.get(provider, provider) + + if provider not in LLMFactory.SUPPORTED_PROVIDERS:Also applies to: 46-52
🧹 Nitpick comments (13)
flo_ai/flo_ai/models/agent.py (1)
206-216: Normalizejobvspromptto avoid downstream ambiguity.If
jobwins, setprompt = None(or vice versa) so builder/runtime mapping doesn’t have to re-encode precedence rules.def model_post_init(self, __context): """Ensure either job or prompt is provided.""" if not self.job and not self.prompt: raise ValueError( "Agent configuration must have either 'job' or 'prompt' field" ) # If both are provided, prefer 'job' and ignore 'prompt' if self.job and self.prompt: - # Keep job, prompt will be ignored in favor of job - pass + self.prompt = Noneflo_ai/flo_ai/arium/arium.py (1)
5-5: Prefer importing from the public surface for consistency (from flo_ai.agent import Agent).
Directly importingflo_ai.agent.agentcouples internal modules to file layout; using the package export keeps refactors cheaper.flo_ai/examples/simple_working_demo.py (1)
12-18: Consider standardizingUserMessageimport path across examples. IfUserMessageis meant to be part of the “public surface”, prefer a single canonical import in docs/examples to reduce churn.flo_ai/flo_ai/helpers/yaml_validation.py (1)
1-53: Nice usability win for YAML errors; consider tightening types +namestring coercion.
Optional, but it’ll make intent clearer and avoid surprising non-strnamevalues.-from typing import Dict, Any, Tuple +from typing import Any, Mapping, Sequence -def format_validation_error_path(loc: Tuple, config: Dict[str, Any]) -> str: +def format_validation_error_path(loc: Sequence[Any], config: Mapping[str, Any]) -> str: @@ - if isinstance(item, dict) and 'name' in item: - path_parts.append(f"{item['name']}") + if isinstance(item, dict) and 'name' in item: + path_parts.append(str(item['name']))flo_ai/tests/unit-tests/test_arium_yaml_validation.py (4)
441-442: Minor: Redundant assertion check.Line 441 already asserts
config.iterators is not None, making the first part of line 442's assertion redundant.# foreach_nodes should be merged into iterators assert config.iterators is not None - assert config.iterators is not None and len(config.iterators) == 1 + assert len(config.iterators) == 1
479-480: Minor: Same redundant assertion pattern.Same issue as above - the None check is already performed on line 479.
assert config.arium.agents is not None - assert config.arium.agents is not None and len(config.arium.agents) == 1 + assert len(config.arium.agents) == 1
558-559: Minor: Redundant None check pattern repeated.assert config.arium.agents is not None - assert config.arium.agents is not None and len(config.arium.agents) == 2 + assert len(config.arium.agents) == 2
595-596: Minor: Redundant None check pattern repeated.assert config.arium.agents is not None - assert config.arium.agents is not None and len(config.arium.agents) == 1 + assert len(config.arium.agents) == 1flo_ai/flo_ai/agent/builder.py (1)
320-324: Type hint mismatch with actual usage.The type hint
List[Dict[str, Any]]fortools_configdoesn't reflect that the list can contain strings (as seen in lines 290-298 where strings are appended). The implementation at lines 337-343 handles strings correctly, but the type hint is misleading.@classmethod def _process_yaml_tools( cls, - tools_config: List[Dict[str, Any]], + tools_config: List[Union[str, Dict[str, Any]]], tool_registry: Optional[Dict[str, Tool]] = None, ) -> List[Tool]:flo_ai/flo_ai/helpers/llm_factory.py (1)
70-74: Redundant validation:nameis already validated byLLMConfigModel.model_post_init.For providers
openai,anthropic,claude,gemini,ollama, the Pydantic model already raises aValueErrorifnameis missing. This factory check is defensive but redundant.Not blocking—defense in depth is acceptable, but you could simplify by trusting the model validation.
flo_ai/flo_ai/arium/builder.py (1)
802-819: Unused parameter:base_llmis accepted but never used.The
base_llmparameter is documented and accepted but not passed tocreate_llm_from_config. If it's intended as a fallback, consider implementing that logic; otherwise, remove it to avoid confusion.flo_ai/flo_ai/agent/agent.py (1)
289-291: Control flow aftercontinueis unreachable.Lines 290-291 (
continueandbreak) can never both execute. After line 290'scontinue, execution jumps back to thewhileloop, making line 291'sbreakunreachable in that branch. Thebreakis only reached whennot assistant_message(line 246 being falsy after line 244 check), which is fine, but the indentation suggests it's part of the same logical block.This works correctly, but the structure is confusing. Consider restructuring for clarity.
flo_ai/flo_ai/models/arium.py (1)
320-324: No-op validator can be removed.This validator just returns
vwithout any transformation or validation. The merging logic inmodel_post_initalready handles theiterators/foreach_nodesaliasing.- @field_validator('iterators', 'foreach_nodes', mode='before') - @classmethod - def validate_foreach_nodes(cls, v): - """Handle both 'iterators' and 'foreach_nodes' aliases.""" - return v -
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
flo_ai/uv.lockis excluded by!**/*.lock
📒 Files selected for processing (64)
TOOLS.md(4 hunks)documentation/development.mdx(1 hunks)documentation/essentials/agents.mdx(4 hunks)documentation/essentials/arium.mdx(8 hunks)documentation/essentials/code.mdx(7 hunks)documentation/essentials/telemetry.mdx(12 hunks)documentation/essentials/yaml-agents.mdx(12 hunks)documentation/quickstart.mdx(6 hunks)flo_ai/README.md(5 hunks)flo_ai/docs/arium_yaml_guide.md(33 hunks)flo_ai/examples/agent_builder_usage.py(1 hunks)flo_ai/examples/arium_examples.py(1 hunks)flo_ai/examples/arium_linear_usage.py(1 hunks)flo_ai/examples/arium_yaml_example.py(2 hunks)flo_ai/examples/chat_history.py(1 hunks)flo_ai/examples/cot_agent_example.py(1 hunks)flo_ai/examples/cot_conversational_example.py(1 hunks)flo_ai/examples/custom_plan_execute_demo.py(1 hunks)flo_ai/examples/document_processing_example.py(1 hunks)flo_ai/examples/example_graph_visualization.py(2 hunks)flo_ai/examples/flo_tool_example.py(1 hunks)flo_ai/examples/llm_router_example.py(1 hunks)flo_ai/examples/multi_tool_example.py(1 hunks)flo_ai/examples/ollama_agent_example.py(1 hunks)flo_ai/examples/output_formatter.py(1 hunks)flo_ai/examples/partial_tool_example.py(1 hunks)flo_ai/examples/simple_flow_router_demo.py(1 hunks)flo_ai/examples/simple_plan_execute_demo.py(1 hunks)flo_ai/examples/simple_reflection_router_demo.py(1 hunks)flo_ai/examples/simple_working_demo.py(1 hunks)flo_ai/examples/simple_yaml_workflow.py(1 hunks)flo_ai/examples/tool_usage.py(1 hunks)flo_ai/examples/tool_using_agent.py(1 hunks)flo_ai/examples/tools_quickstart.py(1 hunks)flo_ai/examples/usage_claude.py(1 hunks)flo_ai/examples/variables_workflow_example.py(1 hunks)flo_ai/examples/variables_workflow_yaml_example.py(1 hunks)flo_ai/examples/vertexai_agent_example.py(1 hunks)flo_ai/examples/vllm_agent_usage.py(1 hunks)flo_ai/examples/yaml_agent_example.py(2 hunks)flo_ai/examples/yaml_tool_config_example.py(1 hunks)flo_ai/flo_ai/__init__.py(2 hunks)flo_ai/flo_ai/agent/__init__.py(1 hunks)flo_ai/flo_ai/agent/agent.py(1 
hunks)flo_ai/flo_ai/agent/builder.py(4 hunks)flo_ai/flo_ai/agent/plan_agents.py(1 hunks)flo_ai/flo_ai/arium/arium.py(1 hunks)flo_ai/flo_ai/arium/base.py(1 hunks)flo_ai/flo_ai/arium/builder.py(22 hunks)flo_ai/flo_ai/formatter/yaml_format_parser.py(2 hunks)flo_ai/flo_ai/helpers/llm_factory.py(9 hunks)flo_ai/flo_ai/helpers/yaml_validation.py(1 hunks)flo_ai/flo_ai/models/__init__.py(2 hunks)flo_ai/flo_ai/models/agent.py(1 hunks)flo_ai/flo_ai/models/arium.py(1 hunks)flo_ai/tests/unit-tests/test_agent_builder_tools.py(1 hunks)flo_ai/tests/unit-tests/test_agent_yaml_validation.py(1 hunks)flo_ai/tests/unit-tests/test_arium_builder.py(1 hunks)flo_ai/tests/unit-tests/test_arium_yaml.py(3 hunks)flo_ai/tests/unit-tests/test_arium_yaml_validation.py(1 hunks)flo_ai/tests/unit-tests/test_base_llm.py(1 hunks)flo_ai/tests/unit-tests/test_llm_router.py(1 hunks)flo_ai/tests/unit-tests/test_yaml_tool_config.py(1 hunks)wavefront/server/modules/voice_agents_module/voice_agents_module/models/telephony_schemas.py(2 hunks)
🧰 Additional context used
🧬 Code graph analysis (40)
flo_ai/examples/output_formatter.py (2)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)
flo_ai/tests/unit-tests/test_agent_yaml_validation.py (1)
flo_ai/flo_ai/models/agent.py (8)
LiteralValueModel(22-29)MetadataModel(12-19)LLMConfigModel(86-151)SettingsModel(154-165)ParserModel(70-76)ParserFieldModel(32-67)ExampleModel(79-83)ToolConfigModel(168-180)
flo_ai/flo_ai/agent/__init__.py (4)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)flo_ai/flo_ai/agent/base_agent.py (3)
BaseAgent(25-131)AgentType(14-16)ReasoningPattern(19-22)flo_ai/flo_ai/agent/plan_agents.py (2)
PlannerAgent(15-71)ExecutorAgent(74-122)flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)
flo_ai/tests/unit-tests/test_arium_yaml_validation.py (3)
flo_ai/flo_ai/models/arium.py (8)
AriumYamlModel(351-359)AriumConfigModel(297-348)AriumAgentConfigModel(19-87)FunctionNodeConfigModel(90-101)RouterConfigModel(141-203)WorkflowConfigModel(216-229)AriumNodeConfigModel(232-287)ForEachNodeConfigModel(290-294)flo_ai/flo_ai/arium/builder.py (1)
_validate_yaml_config(292-319)flo_ai/flo_ai/agent/builder.py (1)
_validate_yaml_config(189-216)
flo_ai/examples/custom_plan_execute_demo.py (1)
flo_ai/flo_ai/agent/plan_agents.py (2)
PlannerAgent(15-71)ExecutorAgent(74-122)
flo_ai/examples/ollama_agent_example.py (2)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)flo_ai/flo_ai/agent/base_agent.py (1)
ReasoningPattern(19-22)
flo_ai/tests/unit-tests/test_arium_builder.py (1)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/examples/vertexai_agent_example.py (4)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)flo_ai/flo_ai/llm/vertexai_llm.py (1)
VertexAI(7-26)flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)flo_ai/flo_ai/agent/base_agent.py (1)
ReasoningPattern(19-22)
flo_ai/examples/variables_workflow_yaml_example.py (1)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)
flo_ai/flo_ai/arium/base.py (1)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/tests/unit-tests/test_yaml_tool_config.py (1)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)
flo_ai/examples/document_processing_example.py (1)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)
flo_ai/examples/agent_builder_usage.py (3)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)flo_ai/flo_ai/tool/base_tool.py (1)
Tool(12-50)flo_ai/flo_ai/agent/base_agent.py (1)
ReasoningPattern(19-22)
flo_ai/examples/tool_usage.py (5)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)flo_ai/flo_ai/llm/openai_llm.py (1)
OpenAI(17-210)flo_ai/flo_ai/tool/base_tool.py (1)
Tool(12-50)flo_ai/flo_ai/models/agent_error.py (1)
AgentError(4-9)flo_ai/flo_ai/agent/base_agent.py (1)
ReasoningPattern(19-22)
flo_ai/examples/tool_using_agent.py (4)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)flo_ai/flo_ai/tool/base_tool.py (1)
Tool(12-50)flo_ai/flo_ai/agent/base_agent.py (1)
ReasoningPattern(19-22)flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/examples/example_graph_visualization.py (3)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)flo_ai/flo_ai/tool/flo_tool.py (1)
flo_tool(8-68)flo_ai/flo_ai/llm/base_llm.py (2)
BaseLLM(9-144)stream(33-41)
flo_ai/examples/tools_quickstart.py (1)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)
flo_ai/examples/yaml_tool_config_example.py (1)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)
flo_ai/examples/arium_linear_usage.py (1)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/flo_ai/arium/arium.py (1)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/examples/vllm_agent_usage.py (3)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)flo_ai/flo_ai/tool/base_tool.py (1)
Tool(12-50)flo_ai/flo_ai/agent/base_agent.py (1)
ReasoningPattern(19-22)
flo_ai/examples/simple_reflection_router_demo.py (1)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/examples/simple_plan_execute_demo.py (1)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/flo_ai/models/__init__.py (1)
flo_ai/flo_ai/models/chat_message.py (1)
MessageType(5-9)
flo_ai/examples/arium_examples.py (1)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/examples/multi_tool_example.py (2)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)flo_ai/flo_ai/agent/base_agent.py (1)
ReasoningPattern(19-22)
flo_ai/flo_ai/agent/plan_agents.py (1)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/examples/simple_flow_router_demo.py (1)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/examples/partial_tool_example.py (1)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)
flo_ai/examples/usage_claude.py (2)
flo_ai/flo_ai/agent/base_agent.py (1)
ReasoningPattern(19-22)flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/flo_ai/helpers/llm_factory.py (1)
flo_ai/flo_ai/models/agent.py (1)
LLMConfigModel(86-151)
flo_ai/examples/variables_workflow_example.py (2)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/examples/cot_conversational_example.py (3)
flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)flo_ai/flo_ai/models/chat_message.py (1)
UserMessage(65-70)flo_ai/flo_ai/agent/base_agent.py (1)
ReasoningPattern(19-22)
flo_ai/examples/chat_history.py (2)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)
flo_ai/examples/arium_yaml_example.py (2)
flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)flo_ai/flo_ai/agent/base_agent.py (1)
ReasoningPattern(19-22)
flo_ai/flo_ai/models/arium.py (1)
flo_ai/flo_ai/models/agent.py (7)
MetadataModel(12-19)LLMConfigModel(86-151)AgentConfigModel(183-239)model_post_init(54-67)model_post_init(117-151)model_post_init(206-215)model_post_init(248-252)
flo_ai/flo_ai/models/agent.py (2)
flo_ai/flo_ai/models/arium.py (5)
model_post_init(35-87)model_post_init(177-203)model_post_init(262-287)model_post_init(326-348)model_post_init(357-359)wavefront/server/modules/voice_agents_module/voice_agents_module/models/telephony_schemas.py (1)
model_post_init(94-97)
wavefront/server/modules/voice_agents_module/voice_agents_module/models/telephony_schemas.py (2)
flo_ai/flo_ai/models/agent.py (4)
model_post_init(54-67)model_post_init(117-151)model_post_init(206-215)model_post_init(248-252)flo_ai/flo_ai/models/arium.py (5)
model_post_init(35-87)model_post_init(177-203)model_post_init(262-287)model_post_init(326-348)model_post_init(357-359)
flo_ai/tests/unit-tests/test_arium_yaml.py (2)
flo_ai/flo_ai/arium/builder.py (2)
AriumBuilder(20-936)from_yaml(322-800)flo_ai/flo_ai/agent/builder.py (1)
from_yaml(219-318)
flo_ai/flo_ai/__init__.py (4)
flo_ai/flo_ai/models/chat_message.py (1)
MessageType(5-9)flo_ai/flo_ai/agent/agent.py (1)
Agent(30-643)flo_ai/flo_ai/agent/base_agent.py (3)
BaseAgent(25-131)AgentType(14-16)ReasoningPattern(19-22)flo_ai/flo_ai/agent/builder.py (1)
AgentBuilder(15-384)
🪛 LanguageTool
flo_ai/docs/arium_yaml_guide.md
[style] ~267-~267: The double modal “Requires nested” is nonstandard (only accepted in certain dialects). Consider “to be nested”.
Context: ...ne YAML Configuration:** -
(NEEDS_FIXED)
🪛 markdownlint-cli2 (0.18.1)
flo_ai/docs/arium_yaml_guide.md
748-748: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
756-756: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
764-764: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
772-772: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
780-780: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (90)
flo_ai/examples/custom_plan_execute_demo.py (1)
10-15: Import-path update looks correct for the new public API surface.Please sanity-check this example still runs end-to-end after packaging:
flo_ai.agentmust exportPlannerAgentandExecutorAgent(and any transitive deps likePlanAwareMemorytools).flo_ai/tests/unit-tests/test_base_llm.py (1)
49-59:MockLLM.streamsignature widened appropriately.flo_ai/flo_ai/formatter/yaml_format_parser.py (1)
186-195: Type annotation onFloYamlParser.Builder.yaml_dictis a nice clarity improvement.flo_ai/tests/unit-tests/test_llm_router.py (1)
38-48:MockLLM.streamaccepting**kwargsmatches the updated interface.flo_ai/flo_ai/models/agent.py (1)
32-68:ParserFieldModelpost-init validation is solid for literal/array/object invariants.flo_ai/examples/simple_flow_router_demo.py (1)
8-14: Agent import-path update is consistent with the new module layout.Please confirm this example still runs with the new
Agentimplementation (constructor args + return type ofrun(...)) and updated exports fromflo_ai.agent.flo_ai/examples/document_processing_example.py (1)
16-23: AgentBuilder import-path update aligns with the public API move.Just verify packaging/exports:
flo_ai.agent.AgentBuildershould be available for end users (andflo_ai.builder.agent_buildereither removed or left as a compatibility shim).flo_ai/examples/simple_yaml_workflow.py (1)
124-131: No issues found with attribute access.The
build_and_run()method has an explicit return type annotation ofList[MessageMemoryItem], andMessageMemoryItemalways has aresult: BaseMessageattribute (required in__init__).BaseMessagehas acontentattribute. The codemessage.result.contentis correct and safe.Note: The
isinstance(result, list)check on line 127 is redundant sincebuild_and_run()always returns a list, not a union type.flo_ai/tests/unit-tests/test_agent_builder_tools.py (1)
5-5: Import path update looks correct (AgentBuilder viaflo_ai.agent).
This aligns the test with the new public API surface.flo_ai/examples/tools_quickstart.py (1)
13-13: Import path update looks correct (AgentBuilder viaflo_ai.agent).flo_ai/examples/simple_reflection_router_demo.py (1)
11-11: Import path update looks correct (Agent viaflo_ai.agent).flo_ai/examples/multi_tool_example.py (1)
2-5: Import path updates look correct (AgentBuilder,ReasoningPatternviaflo_ai.agent).documentation/development.mdx (1)
56-60: Docs snippet updated correctly to the new import path.flo_ai/flo_ai/agent/plan_agents.py (1)
8-10: Import path update looks correct (Agent viaflo_ai.agent).flo_ai/flo_ai/arium/arium.py (1)
5-5: No legacy Agent imports detected—isinstance checks are safe.Repository-wide search confirms no imports from
flo_ai.models.agent.Agentor other legacy locations exist. Allisinstance(node, Agent)checks in this file use the correct class identity fromflo_ai.agent.agent.flo_ai/examples/ollama_agent_example.py (1)
2-2: Import path updates align with the new public API surface.Also applies to: 4-4
flo_ai/examples/variables_workflow_yaml_example.py (1)
13-13: LGTM: example updated to useflo_ai.agent.AgentBuilder.flo_ai/tests/unit-tests/test_arium_builder.py (1)
9-9: LGTM: test updated to importAgentfromflo_ai.agent.flo_ai/examples/yaml_agent_example.py (2)
1-1: LGTM: import updated toflo_ai.agent.AgentBuilder.
139-140: Output printing is now aligned withagent.run()returning a message list.flo_ai/examples/arium_linear_usage.py (1)
3-3: LGTM: import updated toflo_ai.agent.Agent.flo_ai/examples/flo_tool_example.py (1)
2-2: LGTM: imports updated toflo_ai.agentfor the relocated public symbols.Also applies to: 4-4
flo_ai/examples/yaml_tool_config_example.py (1)
10-10: LGTM: import updated toflo_ai.agent.AgentBuilder.flo_ai/examples/chat_history.py (1)
3-3: LGTM: examples now useflo_ai.agentforAgentBuilder/Agent.Also applies to: 5-5
flo_ai/tests/unit-tests/test_yaml_tool_config.py (1)
4-4: Import path update aligns with new public API surface.flo_ai/examples/cot_agent_example.py (1)
8-9: Import path updates are consistent withflo_ai.agentre-exports.flo_ai/README.md (1)
88-89: README import snippets updated to the newflo_ai.agentpublic API.Also applies to: 111-112, 147-148, 362-363
flo_ai/flo_ai/arium/base.py (1)
5-5: Updated import toflo_ai.agent.Agentis consistent with module relocation.flo_ai/examples/arium_examples.py (1)
12-13: Example import updated toflo_ai.agent.Agent(no functional change).flo_ai/examples/tool_usage.py (1)
3-8: Imports updated toflo_ai.agentas intended.flo_ai/examples/usage_claude.py (1)
3-4: LGTM - Clean import path update.The imports have been correctly updated to use the new
flo_ai.agentmodule, aligning with the public API reorganization.TOOLS.md (1)
68-68: LGTM - Documentation imports updated consistently.All code examples in the documentation now correctly reference
flo_ai.agent.AgentBuilder, maintaining consistency across the documentation.Also applies to: 98-98, 307-307, 334-334
flo_ai/examples/tool_using_agent.py (1)
3-6: LGTM - Import paths aligned with new API structure.All imports correctly updated to use
flo_ai.agentas the unified source for Agent-related public entities.flo_ai/examples/simple_plan_execute_demo.py (1)
13-13: LGTM - Agent import path updated.Import correctly updated to reflect the new public API structure.
documentation/essentials/telemetry.mdx (1)
51-51: LGTM - Documentation code example updated.The telemetry documentation now correctly references the new import path for
AgentBuilder.flo_ai/examples/partial_tool_example.py (1)
11-11: LGTM - Import path updated correctly.
AgentBuilderimport now uses the unifiedflo_ai.agentmodule.flo_ai/examples/llm_router_example.py (1)
14-16: LGTM - Imports consolidated to new API module.Both
AgentBuilderandReasoningPatternnow correctly imported fromflo_ai.agent.flo_ai/examples/output_formatter.py (1)
6-7: LGTM - Import paths updated with alias preserved.Both
AgentandAgentBuildernow correctly imported from the newflo_ai.agentmodule, with theToolAgentalias properly maintained.flo_ai/examples/cot_conversational_example.py (1)
7-9: LGTM - Import path updates are correct.The import consolidation to
flo_ai.agentfor Agent and ReasoningPattern, while keeping UserMessage inflo_ai.models, aligns with the public API reorganization. The example code remains functionally identical.flo_ai/examples/vertexai_agent_example.py (1)
10-13: LGTM - Import updates align with API reorganization.The consolidation of AgentBuilder, Agent, and ReasoningPattern to
flo_ai.agentis consistent with the broader refactoring. The example maintains its functionality while using the updated import paths.documentation/essentials/arium.mdx (1)
19-19: LGTM - Documentation updated for new API.The import path update in the code example correctly reflects the Agent relocation to
flo_ai.agent.flo_ai/examples/variables_workflow_example.py (1)
11-13: LGTM - Import consolidation is correct.AgentBuilder and Agent now correctly imported from
flo_ai.agent, maintaining the example's functionality.flo_ai/examples/agent_builder_usage.py (1)
3-5: LGTM - Import paths correctly updated.AgentBuilder and ReasoningPattern are now properly sourced from
flo_ai.agent, consistent with the public API reorganization.documentation/essentials/code.mdx (1)
13-13: LGTM - Documentation examples updated consistently.All code examples throughout the documentation now correctly reference
flo_ai.agentfor AgentBuilder and Agent imports, ensuring users follow the updated public API structure.Also applies to: 37-37, 75-75, 105-105
flo_ai/examples/vllm_agent_usage.py (1)
3-5: LGTM - Import updates are consistent.AgentBuilder and ReasoningPattern correctly imported from
flo_ai.agent, maintaining consistency across all examples.documentation/essentials/agents.mdx (1)
16-16: LGTM - Core documentation updated correctly.The import example in the agents documentation now correctly shows AgentBuilder sourced from
flo_ai.agent, guiding users to the proper import path.documentation/quickstart.mdx (1)
33-34: AgentBuilder import update is consistent with the new public surface.
from flo_ai.agent import AgentBuildermatches the newflo_ai/flo_ai/agent/__init__.pyre-exports.Also applies to: 62-65
flo_ai/flo_ai/models/__init__.py (1)
7-18: LGTM:MessageTypere-export viaflo_ai.modelsis clean and consistent.
No obvious circular import risk here since it sources from.chat_message.Also applies to: 20-33
flo_ai/examples/example_graph_visualization.py (1)
8-13: Good update: example now importsAgentfrom the new public module.
Keeps examples aligned with the PR’s API move.flo_ai/flo_ai/agent/__init__.py (1)
1-14: LGTM:flo_ai.agentprovides a clean, discoverable public surface.
Re-export list matches what examples/docs are using.flo_ai/flo_ai/__init__.py (1)
6-19: LGTM: top-levelflo_ainow re-exports Agent API from.agent(consistent with the move).
This keepsfrom flo_ai import Agent, AgentBuilder, ReasoningPatternstable.Also applies to: 44-50
flo_ai/examples/arium_yaml_example.py (2)
740-742: LGTM: example imports updated toflo_ai.agentfor AgentBuilder/ReasoningPattern.
Consistent with the new public surface.
332-344: YAML example enhancement is correct and follows the validated parser schema.The
itemsstructure withname,type, anddescriptionfields is the correct format expected byParserFieldModel(defined inflo_ai/models/agent.py). Array fields are validated to requireitemsas a nested field model, and the example at lines 332-344 matches this validated schema exactly.wavefront/server/modules/voice_agents_module/voice_agents_module/models/telephony_schemas.py (2)
1-1: LGTM - Import cleanup aligns with Pydantic v2 pattern.The removal of
model_validatorimport is consistent with the migration tomodel_post_initfor post-initialization validation.
94-97: LGTM - Correct migration tomodel_post_init.The validation logic is properly implemented using Pydantic v2's
`model_post_init` hook. This pattern is consistent with other models in the codebase (e.g., `flo_ai/flo_ai/models/agent.py` and `flo_ai/flo_ai/models/arium.py`).

flo_ai/tests/unit-tests/test_arium_yaml_validation.py (1)

1-26: LGTM - Well-structured test module with appropriate imports.
The test file properly imports all necessary models and sets up comprehensive test coverage for Arium YAML validation.

599-837: LGTM - Comprehensive builder validation integration tests.
The `TestAriumBuilderValidation` class thoroughly tests the `AriumBuilder._validate_yaml_config` method, covering various error scenarios including missing workflow sections, invalid edges, missing router configurations, and multi-config method errors.

flo_ai/tests/unit-tests/test_arium_yaml.py (3)

2-8: LGTM - Clear separation of concerns documented.
The updated docstring properly clarifies that this module focuses on integration and runtime behavior, while YAML structure validation is tested in `test_arium_yaml_validation.py`.

16-16: LGTM - Import path updated to reflect API reorganization.
The import of `Agent` from `flo_ai.agent` aligns with the PR's broader API surface reorganization.

863-878: LGTM - Good error case coverage.
This test properly verifies that a `ValueError` is raised when an agent configuration is missing both the `model` field and the `base_llm` parameter.

flo_ai/tests/unit-tests/test_agent_yaml_validation.py (4)

1-26: LGTM - Well-organized imports covering all model types.
The imports are comprehensive and properly organized, covering all validation models defined in `flo_ai.models.agent`.

272-485: Excellent provider-specific validation coverage.
The `TestLLMConfigModel` class thoroughly tests all supported providers with their specific requirements:
- OpenAI/Anthropic/Gemini/Ollama: name required
- VertexAI: name, project, and base_url required
- RootFlo: model_id required
- OpenAI vLLM: name, base_url, and api_key required

Boundary testing for temperature (0.0-2.0), max_tokens (>0), and timeout (>0) is also well covered.

686-702: Good negative test cases for tool validation.
These tests properly verify that invalid tool configurations (missing name, invalid types) raise appropriate errors with clear messages.

932-970: Comprehensive provider coverage test.
The `test_yaml_all_providers` test efficiently validates that all supported LLM providers can be configured through YAML with their required parameters. This is a good pattern for ensuring no provider is accidentally broken.

flo_ai/flo_ai/agent/builder.py (4)
4-12: LGTM - Import updates align with API reorganization.
The imports correctly reflect the new module structure with `Agent` and `ReasoningPattern` from `flo_ai.agent`, and the new validation models from `flo_ai.models.agent`.

188-216: LGTM - Well-implemented validation method.
The `_validate_yaml_config` static method properly:
- Validates input against `AgentYamlModel`
- Formats validation errors with field paths for readability
- Preserves the original exception chain with `from e`

309-316: Potential `AttributeError` if `_llm` is None.
If both `agent.model` is `None` and `base_llm` is `None`, a `ValueError` is raised at lines 285-287. However, if validation passes and `_llm` is set, this code path is safe. Consider adding an explicit guard for defensive coding.

The current flow guarantees `_llm` is set before reaching this block (either from `create_llm_from_config` at lines 281-282 or from `base_llm` at line 288). However, if future refactoring changes the order, this could become an issue.

```diff
 if agent.settings is not None:
     settings = agent.settings
-    if settings.temperature is not None:
+    if settings.temperature is not None and builder._llm is not None:
         builder._llm.temperature = settings.temperature
```

255-267: LGTM - Clean validation-first pattern.
The approach of validating first, then accessing the typed model attributes provides better type safety and clearer code compared to dictionary access.
flo_ai/flo_ai/helpers/llm_factory.py (5)

14-14: LGTM: Clean import of the typed configuration model.
The migration to `LLMConfigModel` provides type safety and centralized validation.

89-122: LGTM: VertexAI creation logic is correct.
The method properly handles parameter priority (kwargs > model_config) and provides sensible defaults. The duplicate validation with `LLMConfigModel` is acceptable as defense in depth.

124-160: LGTM: OpenAI vLLM creation logic is correct.
Parameter priority and temperature defaulting are handled appropriately.

162-200: LGTM: RootFlo creation logic correctly handles multiple auth sources.
The priority chain (kwargs > model_config > env vars) is well-implemented, and the comment about `access_token` being from kwargs only is helpful.

203-219: LGTM: Clean convenience wrapper.

flo_ai/flo_ai/arium/builder.py (6)
1-17: LGTM: Imports updated for typed configuration models.
Clean import structure with Pydantic validation support.

291-319: LGTM: Well-structured validation with user-friendly error messages.
The error formatting provides clear field paths and context, which will significantly improve debugging YAML configuration issues.

365-369: LGTM: Validated config handling is clean.
Using `model_dump(exclude_none=True, by_alias=True)` ensures proper serialization while the validated model provides typed access.

382-445: LGTM: Agent creation logic covers all configuration methods.
The branching correctly handles name-only references, direct configuration, inline YAML, and external file references.

477-574: LGTM: Router processing correctly handles all router types.
The conversion of typed models to dicts for the router factory is handled appropriately.

919-936: LGTM: Agent builder chain is clean and complete.

flo_ai/flo_ai/agent/agent.py (4)
1-27: LGTM: Clean imports for agent functionality.
Well-organized imports covering messages, tools, telemetry, and variable handling.

72-127: LGTM: Run method properly handles variable resolution and routing.
The dual-path approach (resolved vs unresolved variables) and routing based on tools presence is clean.

476-541: LGTM: Well-structured ReACT and CoT prompts.
Clear instructions with proper tool enumeration and explicit "Final Answer:" guidance.

543-643: LGTM: Robust final answer detection with appropriate fallbacks.
The two-tier approach (token detection + LLM classification) with conservative defaults is well-designed for reliability.
flo_ai/flo_ai/models/arium.py (7)

1-17: LGTM: Clean module setup with appropriate imports.
Shared models imported from `agent.py` to avoid duplication.

19-88: LGTM: AriumAgentConfigModel correctly extends parent with arium-specific validation.
The override of `model_post_init` properly allows name-only references while enforcing configuration method exclusivity.

90-101: LGTM: Clean function node configuration model.

141-203: LGTM: Router configuration with comprehensive type-specific validation.
The `model_post_init` correctly enforces required fields per router type.

206-229: LGTM: Edge and workflow models handle YAML keys correctly.
The `from` alias with `populate_by_name=True` properly handles the Python reserved word.

232-288: LGTM: AriumNodeConfigModel correctly validates configuration methods.
The mutual exclusivity check between `yaml_file` and inline configuration, plus requiring `workflow` for inline configs, is well-implemented.

351-365: LGTM: Root model and forward reference handling is correct.
The forward reference rebuilding is necessary for recursive type definitions.
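The exclusivity rule praised above (a node is configured by `yaml_file` OR by an inline config, and an inline config must carry a `workflow` section) can be sketched as a plain function. Names follow the review text; the real model enforces this inside Pydantic's `model_post_init`, so this is an illustrative stand-in, not the library's API.

```python
# Illustrative check mirroring AriumNodeConfigModel's exclusivity rule.
# `yaml_file` and `inline` are hypothetical parameter names for the sketch.
def check_node_config(yaml_file=None, inline=None):
    if yaml_file and inline:
        raise ValueError('Specify either yaml_file or an inline configuration, not both')
    if inline is not None and 'workflow' not in inline:
        raise ValueError('Inline configuration must include a "workflow" section')
    return True
```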
```python
class LLMConfigModel(BaseModel):
    """LLM model configuration."""

    provider: Literal[
        'openai',
        'anthropic',
        'claude',  # Alias for anthropic
        'gemini',
        'google',  # Alias for gemini
        'ollama',
        'vertexai',
        'rootflo',
        'openai_vllm',
    ] = Field(..., description='LLM provider')
    name: Optional[str] = Field(
        None, description='Model name (required for most providers)'
    )
    base_url: Optional[str] = Field(None, description='Custom base URL')
    temperature: Optional[float] = Field(
        None, ge=0.0, le=2.0, description='Temperature setting'
    )
    max_tokens: Optional[int] = Field(None, gt=0, description='Maximum tokens')
    timeout: Optional[int] = Field(None, gt=0, description='Request timeout in seconds')
    # VertexAI specific
    project: Optional[str] = Field(None, description='GCP project ID (for VertexAI)')
    location: Optional[str] = Field(None, description='GCP location (for VertexAI)')
    # RootFlo specific
    model_id: Optional[str] = Field(None, description='Model ID (for RootFlo)')
    # OpenAI vLLM specific
    api_key: Optional[str] = Field(None, description='API key (for openai_vllm)')

    def model_post_init(self, __context):
        """Validate provider-specific requirements."""
        provider = self.provider.lower()

        # Most providers require 'name'
        if provider in ['openai', 'anthropic', 'claude', 'gemini', 'google', 'ollama']:
            if not self.name:
                raise ValueError(
                    f'{provider.title()} provider requires "name" parameter in model configuration'
                )

        # VertexAI requires name, project, and base_url
        if provider == 'vertexai':
            if not self.name:
                raise ValueError('VertexAI provider requires "name" parameter')
            if not self.project:
                raise ValueError('VertexAI provider requires "project" parameter')
            if not self.base_url:
                raise ValueError('VertexAI provider requires "base_url" parameter')

        # RootFlo requires model_id
        if provider == 'rootflo':
            if not self.model_id:
                raise ValueError(
                    'RootFlo provider requires "model_id" in model configuration'
                )

        # OpenAI vLLM requires name, base_url, and api_key
        if provider == 'openai_vllm':
            if not self.name:
                raise ValueError('openai_vllm provider requires "name" parameter')
            if not self.base_url:
                raise ValueError('openai_vllm provider requires "base_url" parameter')
            if not self.api_key:
                raise ValueError('openai_vllm provider requires "api_key" parameter')
```

Context lines from `flo_ai/flo_ai/agent/agent.py` in the same review:

```python
        raise AgentError(f'Failed after maximum {self.max_retries} attempts.')

    def _get_react_prompt(self, variables: Optional[Dict[str, Any]] = None) -> str:
        """Get system prompt modified for ReACT pattern"""
        variables = variables or {}
        tools_desc = '\n'.join(
            [f'- {tool.name}: {tool.description}' for tool in self.tools]
        )
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let me find where LLMConfigModel is used and locate the factory function
rg "create_llm|from_config" --type py -i
```

Repository: rootflo/wavefront
Length of output: 6044

🏁 Script executed:

```shell
# Also search for the main factory or initialization logic
rg "def.*llm.*config" --type py -i -A 3
```

Repository: rootflo/wavefront
Length of output: 10615

🏁 Script executed:

```shell
# Let's also check the file structure to understand the codebase better
fd -t f -e py | grep -E "(llm|config|agent|factory)" | head -20
```

Repository: rootflo/wavefront
Length of output: 780

🏁 Script executed:

```shell
cat -n flo_ai/flo_ai/helpers/llm_factory.py
```

Repository: rootflo/wavefront
Length of output: 9237

🏁 Script executed:

```shell
# Also check the test to see what configurations work
cat -n flo_ai/tests/unit-tests/test_agent_yaml_validation.py | head -100
```

Repository: rootflo/wavefront
Length of output: 3889

🏁 Script executed:

```shell
# Let me check the agent.py file to see the exact validation rules
sed -n '86,152p' flo_ai/flo_ai/models/agent.py
```

Repository: rootflo/wavefront
Length of output: 2861

🏁 Script executed:

```shell
# Check if there are any examples or tests showing rootflo usage
grep -r "rootflo" flo_ai/tests --type py -A 5 -B 2 | head -80
```

Repository: rootflo/wavefront
Length of output: 178

🏁 Script executed:

```shell
# Try with rg instead
rg "rootflo" flo_ai/tests --type py -A 5 -B 2 | head -100
```

Repository: rootflo/wavefront
Length of output: 2516

🏁 Script executed:

```shell
# Also check examples
rg "rootflo" flo_ai/examples --type py -A 5 -B 2
```

Repository: rootflo/wavefront
Length of output: 43

🏁 Script executed:

```shell
# Let me check if there are actual usage examples that show how rootflo base_url is typically provided
rg "ROOTFLO_BASE_URL|rootflo.*base_url" -i
```

Repository: rootflo/wavefront
Length of output: 411

🏁 Script executed:

```shell
# Also check the actual test to see the full parametrized test
cat -n flo_ai/tests/unit-tests/test_agent_yaml_validation.py | grep -A 50 "test_valid_provider_configs"
```

Repository: rootflo/wavefront
Length of output: 43

🏁 Script executed:

```shell
# Let me check the full test parametrization to see all config requirements
sed -n '200,350p' flo_ai/tests/unit-tests/test_agent_yaml_validation.py
```

Repository: rootflo/wavefront
Length of output: 5039

🏁 Script executed:

```shell
# Continue reading to find the parametrized test
sed -n '350,450p' flo_ai/tests/unit-tests/test_agent_yaml_validation.py
```

Repository: rootflo/wavefront
Length of output: 3942
Provider-specific validation in `LLMConfigModel` has misaligned requirements for RootFlo; VertexAI and OpenAI vLLM validation rules correctly match factory behavior.

The `vertexai` provider validation requiring `base_url` is correct—the factory enforces this requirement. However, the `rootflo` validation is incomplete: while it requires `model_id`, it does not validate `base_url`, which the factory requires (via `model_config.base_url`, kwargs, or the `ROOTFLO_BASE_URL` environment variable). Update `model_post_init` to validate that RootFlo has `base_url` available from at least one source, or document that it must be provided via environment variable.
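The missing check can be sketched without Pydantic. This is an illustrative stand-in for the real model's `model_post_init` (class and field names are the sketch's own), showing the fallback chain the review asks for: accept `base_url` from the config or from the `ROOTFLO_BASE_URL` environment variable.

```python
import os

class LLMConfigSketch:
    """Minimal stand-in for the RootFlo branch of LLMConfigModel validation."""

    def __init__(self, provider, name=None, base_url=None, model_id=None):
        self.provider = provider.lower()
        self.name = name
        self.base_url = base_url
        self.model_id = model_id
        self._validate()

    def _validate(self):
        if self.provider == 'rootflo':
            if not self.model_id:
                raise ValueError(
                    'RootFlo provider requires "model_id" in model configuration'
                )
            # The check the review says is missing: base_url must come from
            # the config or the environment, mirroring the factory's fallback.
            if not self.base_url and not os.environ.get('ROOTFLO_BASE_URL'):
                raise ValueError(
                    'RootFlo provider requires "base_url" in model configuration '
                    'or the ROOTFLO_BASE_URL environment variable'
                )
```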
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
flo_ai/docs/arium_yaml_guide.md (1)
742-784: Fix markdownlint MD040: add languages to fenced code blocks in Error Handling.

````diff
@@
-```
+```text
 ValueError: YAML must contain an "arium" section
 ```
@@
-```
+```text
 ValueError: Agent {name} must have either yaml_config or yaml_file
 ```
@@
-```
+```text
 ValueError: Tool {name} not found in provided tools dictionary
 ```
@@
-```
+```text
 ValueError: Router {name} not found in provided routers dictionary
 ```
@@
-```
+```text
 ValueError: Workflow must specify a start node
 ValueError: Workflow must specify end nodes
 ```
````

flo_ai/flo_ai/arium/builder.py (2)

654-696: Bug risk: ForEach execute_node can become a plain `dict` (order-dependent) and break at runtime.

During resolution, `foreach_nodes_dict.get(execute_node_name)` may return the *unresolved config dict* for forward references. At minimum, reject non-node execute_node and require referencing agents/function_nodes/ariums (or do a dependency-aware resolve).

```diff
@@
 for foreach_name, foreach_config in foreach_nodes_dict.items():
@@
     execute_node = (
@@
         or foreach_nodes_dict.get(execute_node_name)
     )
+    if isinstance(execute_node, dict):
+        raise ValueError(
+            f"ForEachNode '{foreach_name}': execute_node '{execute_node_name}' "
+            "refers to another iterator that hasn't been resolved yet (or is invalid). "
+            "Reorder iterators or reference a non-iterator node."
+        )
+
     if not execute_node:
@@
     foreach_node = ForEachNode(name=foreach_name, execute_node=execute_node)
```
889-908: Agent config: tools silently ignored and `output_schema` defaults to `{}`, causing behavior inconsistency.

Tools specified in YAML are silently ignored if `tool_registry` is not provided to `from_yaml()` — the condition at line 892 (`if tool_configs and available_tools:`) skips the entire block when `available_tools` is falsy, leaving `agent_tools` as an empty list with no error raised.

At line 928, `output_schema` is forced to `{}` when `None`, which differs semantically from the default behavior and could break LLM implementations that check for `is not None` rather than truthiness.

Add validation to catch the missing `tool_registry` case, and avoid forcing `{}` when no schema is provided.
♻️ Duplicate comments (3)
flo_ai/examples/example_graph_visualization.py (1)
48-56: MockLLM now matches BaseLLM call shape (avoids runtime kwargs TypeError).
Adding `**kwargs: Any` to both `generate()` and `stream()` is the right fix; this addresses the earlier review note.

flo_ai/flo_ai/models/agent.py (2)
217-249: Tools singleton normalization is a solid YAML UX improvement.
This avoids iterating chars/keys and yields clearer validation outcomes.
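The normalization being praised can be sketched as a small helper: a YAML `tools` value may arrive as a single string, a single mapping, or a list, and should always become a list before per-item validation. The function name and exact error text are illustrative, not the model's actual code.

```python
def normalize_tools(value):
    """Normalize a YAML `tools` value (str | dict | list | None) to a list."""
    if value is None:
        return []
    if isinstance(value, (str, dict)):
        # A lone string or mapping is a singleton; without this branch,
        # naive iteration would walk characters or dict keys instead.
        return [value]
    if isinstance(value, list):
        return value
    raise ValueError(
        f'tools must be a string, mapping, or list, got {type(value).__name__}'
    )
```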
117-152: RootFlo provider validation still misses `base_url` (factory mismatch).
RootFlo currently only enforces `model_id`, but if runtime creation requires `base_url` (or env fallback), validation should ensure it's available from `model.base_url` or documented env.

```diff
@@
 # RootFlo requires model_id
 if provider == 'rootflo':
     if not self.model_id:
         raise ValueError(
             'RootFlo provider requires "model_id" in model configuration'
         )
+    # If base_url is mandatory at runtime, validate it here (or clearly document env fallback).
+    # Prefer validating config-level presence for clearer YAML errors.
+    # if not self.base_url:
+    #     raise ValueError('RootFlo provider requires "base_url" in model configuration')
```
🧹 Nitpick comments (5)
flo_ai/flo_ai/models/agent.py (2)
54-67: ParserFieldModel: good structural validation; consider enforcing `required` default.
Current `required: Optional[bool]` allows `null` to slip through; if downstream expects strict boolean, default to `False` (or `True` based on schema semantics) to avoid tri-state logic.

206-216: AgentConfigModel: "prefer job" comment doesn't match behavior—normalize or drop comment.
Right now both `job` and `prompt` remain set; builder will "prefer job", but the model doesn't "ignore prompt". Consider setting `self.prompt = None` when `job` is present to keep config canonical.

flo_ai/flo_ai/agent/builder.py (2)
287-297: Type hygiene: `_process_yaml_tools` accepts `str` entries—fix the annotation to match.
This reads as `List[Dict[str, Any]]` but you pass `List[Union[str, Dict[str, Any]]]`.

Also applies to: 318-323

307-315: Avoid mutating `builder._llm` directly; prefer an explicit builder method.
Direct attribute writes couple AgentBuilder to BaseLLM internals (and can break if temperature becomes computed/immutable).

flo_ai/flo_ai/arium/builder.py (1)
799-817: `_create_llm_from_config` takes `base_llm` but ignores it—drop the param or implement fallback.
This signature implies fallback behavior that doesn’t exist.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- `documentation/quickstart.mdx` (6 hunks)
- `flo_ai/docs/arium_yaml_guide.md` (33 hunks)
- `flo_ai/examples/example_graph_visualization.py` (2 hunks)
- `flo_ai/flo_ai/agent/agent.py` (1 hunks)
- `flo_ai/flo_ai/agent/builder.py` (4 hunks)
- `flo_ai/flo_ai/arium/builder.py` (22 hunks)
- `flo_ai/flo_ai/models/agent.py` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- documentation/quickstart.mdx
🧰 Additional context used
🧬 Code graph analysis (3)
flo_ai/flo_ai/models/agent.py (2)
flo_ai/flo_ai/models/arium.py (5)
- `model_post_init` (35-87)
- `model_post_init` (177-203)
- `model_post_init` (262-287)
- `model_post_init` (326-348)
- `model_post_init` (357-359)

wavefront/server/modules/voice_agents_module/voice_agents_module/models/telephony_schemas.py (1)

- `model_post_init` (94-97)

flo_ai/flo_ai/arium/builder.py (2)

flo_ai/flo_ai/agent/builder.py (4)

- `AgentBuilder` (15-382)
- `_validate_yaml_config` (189-216)
- `from_yaml` (219-316)
- `build` (171-186)

flo_ai/flo_ai/formatter/yaml_format_parser.py (3)

- `build` (150-162)
- `build` (196-236)
- `FloYamlParser` (165-236)

flo_ai/flo_ai/agent/builder.py (4)

flo_ai/flo_ai/tool/tool_config.py (2)

- `ToolConfig` (5-56)
- `create_tool_config` (59-70)

flo_ai/flo_ai/models/agent.py (2)

- `AgentYamlModel` (251-261)
- `LLMConfigModel` (86-151)

flo_ai/flo_ai/helpers/yaml_validation.py (1)

- `format_validation_error_path` (6-53)

flo_ai/flo_ai/helpers/llm_factory.py (1)

- `create_llm_from_config` (204-219)
🪛 markdownlint-cli2 (0.18.1)
flo_ai/docs/arium_yaml_guide.md
748-748: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
756-756: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
764-764: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
772-772: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
780-780: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (2)
flo_ai/examples/example_graph_visualization.py (1)
7-12: Import relocation + typing update look correct.
Switching to `from flo_ai.agent import Agent` matches the new public API surface, and adding `Any` supports the updated mock signatures.

flo_ai/flo_ai/arium/builder.py (1)
291-320: [rewritten comment]
[classification tag]
```python
def __init__(
    self,
    name: str,
    system_prompt: str | AssistantMessage,
    llm: BaseLLM,
    tools: Optional[List[Tool]] = None,
    max_retries: int = 0,
    max_tool_calls: int = 5,
    reasoning_pattern: ReasoningPattern = ReasoningPattern.DIRECT,
    output_schema: Optional[Dict[str, Any]] = None,
    role: Optional[str] = None,
    act_as: Optional[str] = MessageType.ASSISTANT,
    input_filter: Optional[List[str]] = None,
):
    # Determine agent type based on tools
    agent_type = AgentType.TOOL_USING if tools else AgentType.CONVERSATIONAL

    # Enhance system prompt with role if provided
    enhanced_prompt = system_prompt
    if role:
        if isinstance(system_prompt, str):
            enhanced_prompt = f'You are {role}. {system_prompt}'
        elif isinstance(system_prompt, AssistantMessage):
            enhanced_prompt = f'You are {role}. {system_prompt.content}'

    super().__init__(
        name=name,
        system_prompt=str(enhanced_prompt),
        agent_type=agent_type,
        llm=llm,
        max_retries=max_retries,
        max_tool_calls=max_tool_calls,
    )
```
Bug: system_prompt as AssistantMessage becomes str(AssistantMessage) (likely wrong prompt content).
If role is not provided and system_prompt is an AssistantMessage, you currently pass str(enhanced_prompt) to BaseAgent—this can end up as a repr instead of the message text.
```diff
@@
-    enhanced_prompt = system_prompt
+    enhanced_prompt: str
     if role:
         if isinstance(system_prompt, str):
             enhanced_prompt = f'You are {role}. {system_prompt}'
         elif isinstance(system_prompt, AssistantMessage):
             enhanced_prompt = f'You are {role}. {system_prompt.content}'
+        else:
+            enhanced_prompt = str(system_prompt)
+    else:
+        enhanced_prompt = (
+            system_prompt
+            if isinstance(system_prompt, str)
+            else system_prompt.content
+            if isinstance(system_prompt, AssistantMessage)
+            else str(system_prompt)
+        )
@@
     super().__init__(
         name=name,
-        system_prompt=str(enhanced_prompt),
+        system_prompt=enhanced_prompt,
         agent_type=agent_type,
         llm=llm,
         max_retries=max_retries,
         max_tool_calls=max_tool_calls,
     )
```

🤖 Prompt for AI Agents
In flo_ai/flo_ai/agent/agent.py around lines 31 to 63, when system_prompt is an
AssistantMessage and role is not provided the code currently passes
str(enhanced_prompt) to the BaseAgent which yields a repr instead of the actual
message text; change the logic so enhanced_prompt is always a plain string: if
system_prompt is an AssistantMessage set enhanced_prompt = system_prompt.content
(and if role is provided prepend "You are {role}. " to that content), and
finally pass enhanced_prompt (not str(enhanced_prompt)) into super().__init__.
```python
logger.debug(f'Sending messages to LLM: {messages}')
response = await self.llm.generate(
    messages, output_schema=self.output_schema
)
logger.debug(f'Raw LLM Response: {response}')
```
Logging likely leaks sensitive content (full prompts, tool outputs, user data).
logger.debug(f'Sending messages to LLM: {messages}') and raw responses can spill secrets/PII into logs. Consider redaction/truncation or logging only metadata (counts, roles, ids).
Also applies to: 653-661
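One way to act on this is a small summarizer that logs message *metadata* plus a bounded preview instead of full payloads. The 200-character limit and field names below are arbitrary choices for the sketch, not anything the library prescribes.

```python
def describe_messages(messages, preview_chars=200):
    """Summarize a list of {'role': ..., 'content': ...} dicts for safe logging."""
    summary = {
        'count': len(messages),
        'roles': [m.get('role') for m in messages],
    }
    if messages:
        content = str(messages[-1].get('content', ''))
        # Bounded preview of only the most recent message, never the full payload.
        summary['last_preview'] = (
            content[:preview_chars] + '…' if len(content) > preview_chars else content
        )
    return summary
```

A `logger.debug('LLM request: %s', describe_messages(messages))` call then exposes shape and size without spilling prompt contents.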
```python
# Keep executing tools until we get a final answer
tool_call_count = 0
function_response = None
function_name = None
while tool_call_count < self.max_tool_calls:
    formatted_tools = self.llm.format_tools_for_llm(self.tools)
    response = await self.llm.generate(
        messages,
        functions=formatted_tools,
        output_schema=self.output_schema,
    )

    # Handle ReACT and CoT patterns
    function_call = await self.llm.get_function_call(response)

    # If no function call, check if this is truly a final answer
    if not function_call:
        assistant_message = self.llm.get_message_content(response)
        if assistant_message:
            # Check if this is a final answer or just intermediate reasoning
            is_final = await self._is_final_answer(
                assistant_message, tool_call_count, messages
            )
            if is_final:
                # Ensure act_as is not None (default to 'assistant' if missing)
                role = (
                    self.act_as
                    if self.act_as is not None
                    else MessageType.ASSISTANT
                )
                self.add_to_history(
                    AssistantMessage(
                        role=role, content=assistant_message
                    )
                )
                return self.conversation_history
            else:
                # This is intermediate reasoning, add to context and continue
                msg_preview = (
                    assistant_message[:100]
                    if len(assistant_message) > 100
                    else assistant_message
                )
                logger.debug(
                    f'Detected intermediate reasoning (not final answer): {msg_preview}...'
                )
                # Ensure act_as is not None (default to 'assistant' if missing)
                role = (
                    self.act_as
                    if self.act_as is not None
                    else MessageType.ASSISTANT
                )
                self.add_to_history(
                    AssistantMessage(
                        role=role, content=assistant_message
                    )
                )
                self.add_to_history(
                    UserMessage(
                        content='Based on your reasoning, please proceed with the necessary tool calls to complete the task.',
                    )
                )
                continue
        break
```
Critical: tool loop writes to conversation_history but doesn’t update messages (LLM never sees the new turns).
In the “intermediate reasoning” path (and tool error retry path), you append to history then continue, but the next llm.generate() uses the stale messages list from before those additions.
```diff
@@
             else:
                 # This is intermediate reasoning, add to context and continue
@@
                 self.add_to_history(
                     AssistantMessage(
                         role=role, content=assistant_message
                     )
                 )
                 self.add_to_history(
                     UserMessage(
                         content='Based on your reasoning, please proceed with the necessary tool calls to complete the task.',
                     )
                 )
+                # Keep the actual LLM input in sync with history
+                messages = await self._get_message_history(variables)
                 continue
@@
         if should_retry and retry_count <= self.max_retries:
@@
             self.add_to_history(
                 AssistantMessage(
                     content=f'Tool execution error: {analysis}'
                 )
             )
+            messages = await self._get_message_history(variables)
             continue
```
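The invariant behind this fix can be shown with a dependency-free toy: the loop must rebuild `messages` from the conversation history after every append, otherwise the next `generate()` call sees a stale list. `LoopState` and `snapshot()` are illustrative names; `snapshot()` stands in for `_get_message_history()`.

```python
class LoopState:
    """Toy stand-in for the agent's conversation history."""

    def __init__(self):
        self.history = [{'role': 'user', 'content': 'task'}]

    def snapshot(self):
        # Stands in for _get_message_history(): a fresh view of the history.
        return list(self.history)

state = LoopState()
messages = state.snapshot()
# Intermediate reasoning arrives: two turns are appended to the history...
state.history.append({'role': 'assistant', 'content': 'thinking...'})
state.history.append({'role': 'user', 'content': 'proceed with tool calls'})
stale = messages                  # what the buggy loop would reuse on `continue`
messages = state.snapshot()       # the fix: resync before looping again
```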
```python
function_response = await tool.run(
    inputs=[], variables=None, **function_args
)
tool_span.set_attribute(
    'tool.result.length', len(str(function_response))
)
else:
    function_response = await tool.run(
        inputs=[], variables=None, **function_args
    )
```
Tools run without runtime variables (variables=None)—breaks templating/tool behavior.
You already have variables; pass them through (and consider whether inputs=[] is intended).
```diff
@@
-    function_response = await tool.run(
-        inputs=[], variables=None, **function_args
-    )
+    function_response = await tool.run(
+        inputs=[], variables=variables, **function_args
+    )
@@
-    function_response = await tool.run(
-        inputs=[], variables=None, **function_args
-    )
+    function_response = await tool.run(
+        inputs=[], variables=variables, **function_args
+    )
```
+ )📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-function_response = await tool.run(
-    inputs=[], variables=None, **function_args
-)
+function_response = await tool.run(
+    inputs=[], variables=variables, **function_args
+)
 tool_span.set_attribute(
     'tool.result.length', len(str(function_response))
 )
 else:
-    function_response = await tool.run(
-        inputs=[], variables=None, **function_args
-    )
+    function_response = await tool.run(
+        inputs=[], variables=variables, **function_args
+    )
```
🤖 Prompt for AI Agents
In flo_ai/flo_ai/agent/agent.py around lines 328 to 337, the code calls tool.run
with variables=None which breaks templating and tool behavior; replace
variables=None with the actual variables variable (e.g., variables=variables) so
runtime variables are passed through, and review whether inputs=[] is
intended—if the tool expects no inputs omit the parameter or pass the actual
inputs variable instead; keep the tool_span handling the same.
```python
@staticmethod
def _validate_yaml_config(config: Dict[str, Any]) -> AgentYamlModel:
    """Validate YAML configuration using Pydantic models.

    Args:
        config: Dictionary containing YAML configuration

    Returns:
        AgentYamlModel: Validated configuration model

    Raises:
        ValueError: If validation fails with formatted error messages
    """
    try:
        validated_config = AgentYamlModel(**config)
    except ValidationError as e:
        # Format validation errors for better readability
        error_messages = []
        for error in e.errors():
            field_path = format_validation_error_path(error['loc'], config)
            error_msg = f"{field_path}: {error['msg']}"
            if 'ctx' in error:
                error_msg += f" (context: {error['ctx']})"
            error_messages.append(error_msg)
        raise ValueError(
            'YAML validation failed:\n'
            + '\n'.join(f' - {msg}' for msg in error_messages)
        ) from e
    return validated_config
```
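The error-formatting loop above can be exercised without Pydantic installed: each entry below mimics one item from `ValidationError.errors()`, and the `loc` tuple is joined into a dotted field path. The exact path rendering of the real `format_validation_error_path` helper may differ, so this is a simplified stand-in.

```python
def format_errors(errors):
    """Format pydantic-style error dicts into one readable message."""
    messages = []
    for error in errors:
        # Join the location tuple into a dotted path, e.g. agent.model.provider
        field_path = '.'.join(str(part) for part in error['loc']) or '<root>'
        msg = f"{field_path}: {error['msg']}"
        if 'ctx' in error:
            msg += f" (context: {error['ctx']})"
        messages.append(msg)
    return 'YAML validation failed:\n' + '\n'.join(f'  - {m}' for m in messages)
```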
🧩 Analysis chain
🏁 Script executed:
```shell
wc -l flo_ai/flo_ai/agent/builder.py
```

Repository: rootflo/wavefront
Length of output: 95

🏁 Script executed:

```shell
cat -n flo_ai/flo_ai/agent/builder.py | sed -n '180,270p'
```

Repository: rootflo/wavefront
Length of output: 4257

🏁 Script executed:

```shell
# Search for yaml.safe_load usage and format_validation_error_path
rg "yaml.safe_load|format_validation_error_path" --context 5
```

Repository: rootflo/wavefront
Length of output: 35181

🏁 Script executed:

```shell
cat -n flo_ai/flo_ai/helpers/yaml_validation.py
```

Repository: rootflo/wavefront
Length of output: 2506

🏁 Script executed:

```shell
# Check if there are any tests for None/non-dict YAML handling
rg "yaml.safe_load" flo_ai/tests --context 3 -A 5
```

Repository: rootflo/wavefront
Length of output: 2712

🏁 Script executed:

```shell
rg "_validate_yaml_config" --context 2
```

Repository: rootflo/wavefront
Length of output: 6436

🏁 Script executed:

```shell
# Check if AriumBuilder has identical structure
cat -n flo_ai/flo_ai/arium/builder.py | sed -n '1,50p'
```

Repository: rootflo/wavefront
Length of output: 2469

🏁 Script executed:

```shell
# Search for similar yaml.safe_load patterns to understand broader impact
rg "yaml.safe_load.*\)" -A 3 flo_ai/flo_ai/agent/builder.py flo_ai/flo_ai/arium/builder.py
```

Repository: rootflo/wavefront
Length of output: 1213
Guard YAML root type before `AgentYamlModel(**config)` to prevent confusing TypeErrors.

If YAML parses to `None` or a list, the code raises `TypeError` before reaching your error formatting logic. The `format_validation_error_path` function also expects a dict, so this check is necessary for proper error reporting.
```diff
     @staticmethod
     def _validate_yaml_config(config: Dict[str, Any]) -> AgentYamlModel:
         """Validate YAML configuration using Pydantic models.

         Args:
             config: Dictionary containing YAML configuration

         Returns:
             AgentYamlModel: Validated configuration model

         Raises:
             ValueError: If validation fails with formatted error messages
         """
         try:
+            if not isinstance(config, dict):
+                raise ValueError(
+                    f"YAML root must be a mapping/object, got: {type(config).__name__}"
+                )
             validated_config = AgentYamlModel(**config)
         except ValidationError as e:
```

This applies to both `AgentBuilder._validate_yaml_config` (lines 188–216) and the identical pattern in `AriumBuilder`.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
@staticmethod
def _validate_yaml_config(config: Dict[str, Any]) -> AgentYamlModel:
    """Validate YAML configuration using Pydantic models.

    Args:
        config: Dictionary containing YAML configuration

    Returns:
        AgentYamlModel: Validated configuration model

    Raises:
        ValueError: If validation fails with formatted error messages
    """
    try:
        if not isinstance(config, dict):
            raise ValueError(
                f"YAML root must be a mapping/object, got: {type(config).__name__}"
            )
        validated_config = AgentYamlModel(**config)
    except ValidationError as e:
        # Format validation errors for better readability
        error_messages = []
        for error in e.errors():
            field_path = format_validation_error_path(error['loc'], config)
            error_msg = f"{field_path}: {error['msg']}"
            if 'ctx' in error:
                error_msg += f" (context: {error['ctx']})"
            error_messages.append(error_msg)
        raise ValueError(
            'YAML validation failed:\n'
            + '\n'.join(f' - {msg}' for msg in error_messages)
        ) from e
    return validated_config
```
🤖 Prompt for AI Agents
In flo_ai/flo_ai/agent/builder.py around lines 188 to 217, guard that the parsed
YAML root is a dict before calling AgentYamlModel(**config) and before calling
format_validation_error_path: if config is None or not isinstance(config, dict)
(e.g., list), raise a ValueError with a clear message like "YAML root must be a
mapping/dictionary, got {type}" so the code doesn't raise a TypeError and your
validation formatting can run; apply the same guard and error message in the
identical section for AriumBuilder as well.
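The root-type guard itself is tiny and can be verified in isolation. In the sketch below, `parsed` stands in for the result of `yaml.safe_load`, so the example stays dependency-free; an empty file yields `None` and a top-level sequence yields a list, both of which should be rejected with a clear message.

```python
def ensure_mapping_root(parsed):
    """Reject non-mapping YAML roots before unpacking into a Pydantic model."""
    if not isinstance(parsed, dict):
        raise ValueError(
            f'YAML root must be a mapping/object, got: {type(parsed).__name__}'
        )
    return parsed
```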
* refactor(flo-ai): follow arium directory structure for agents
* feat(flo-ai): add field level validation for agent/arium yaml
* fix: review comments
Summary by CodeRabbit
✏️ Tip: You can customize this high-level summary in your review settings.