fix: add ty and fix types #173
Conversation
Walkthrough

Consolidates input shapes to plain strings across examples and builders, introduces AriumNodeType and FunctionNode, tightens memory and execution-plan types, changes Agent.run to return List[BaseMessage], updates LLM/router signatures (functions/output_schema/optional execution_context), and adjusts builders, YAML parsing, and tests to match these API changes.
Sequence Diagram(s)

sequenceDiagram
participant User
participant AriumBuilder
participant Arium
participant Memory
participant Agent
participant LLM
User->>AriumBuilder: build_and_run(inputs: str)
AriumBuilder->>Arium: run(inputs: str, variables: {})
Arium->>Arium: _resolve_inputs -> wrap into UserMessage
Arium->>Memory: add(MessageMemoryItem(node, result=UserMessage))
Arium->>Agent: run(messages: List[BaseMessage], variables: Dict)
Agent->>LLM: generate/stream (functions?, output_schema?)
LLM-->>Agent: List[BaseMessage] (conversation history / function results)
Agent->>Memory: add(results as MessageMemoryItem)
Agent-->>Arium: return List[BaseMessage]
Arium-->>AriumBuilder: aggregated result
AriumBuilder-->>User: final output (List[BaseMessage])
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
flo_ai/examples/arium_yaml_example.py (1)
557-558: Invalid Python syntax: enumerate[MessageMemoryItem] will cause a TypeError.

The syntax enumerate[MessageMemoryItem](result) is invalid. enumerate is not a generic class and doesn't support subscript notation, so this raises TypeError: 'type' object is not subscriptable at runtime. Apply this fix:

- for i, message in enumerate[MessageMemoryItem](result):
+ for i, message in enumerate(result):

Note: This same issue appears at lines 587, 609, 637, 668, 700, 723, and 833.
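The corrected call can be sanity-checked in isolation (plain strings stand in for the memory items):

```python
# enumerate() is called, never subscripted; the element type comes
# from the iterable itself, not from a type parameter.
result = ['draft', 'review', 'final']  # stand-ins for MessageMemoryItem objects

pairs = [(i, message) for i, message in enumerate(result)]
print(pairs)  # [(0, 'draft'), (1, 'review'), (2, 'final')]
```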
flo_ai/flo_ai/arium/builder.py (1)
642-644: Variable shadowing: agents shadows the function parameter.

The local variable agents at line 643 shadows the agents parameter of the from_yaml method signature at line 291. This can cause confusion and bugs. Apply this diff to rename the local variable:

  elif router_type == 'plan_execute':
-     agents = router_config.get('agents', {})
-     if not agents:
+     plan_agents = router_config.get('agents', {})
+     if not plan_agents:
          raise ValueError(
              f'Plan-Execute router {router_name} must specify agents'
          )
      router_fn = create_llm_router(
          router_type='plan_execute',
-         agents=agents,
+         agents=plan_agents,
          llm=router_llm,
          **settings,
      )
🧹 Nitpick comments (9)
flo_ai/examples/vllm_agent_usage.py (1)
67-69: Avoid repeating the same vllm_base_url validation in every example.

The three blocks repeat the same guard logic already used elsewhere in this file. Consider centralizing it into a small helper (e.g. def ensure_vllm_base_url() -> str: ...) or a single check in main() so that:
- The validation logic and error message live in one place.
- Future changes to how the base URL is resolved or validated don’t require touching four call sites.
This keeps the examples a bit cleaner while preserving the same behavior.
Also applies to: 123-125, 222-224
flo_ai/flo_ai/utils/variable_extractor.py (1)
110-137: Avoid mutable default for variables and confirm the new empty-text behavior.

Two points here:
- The mutable default {} for variables is a classic Python footgun, even if the function doesn't currently mutate it. Safer to use None and normalize:

-def resolve_variables(text: str, variables: Dict[str, Any] = {}) -> str:
+def resolve_variables(text: str, variables: Dict[str, Any] | None = None) -> str:
@@
-    if not text:
-        raise ValueError('Text is required to resolve variables')
+    if not text:
+        raise ValueError('Text is required to resolve variables')
+
+    if variables is None:
+        variables = {}

This keeps runtime semantics the same while removing the mutable default.
- if not text: raise ValueError(...) is a behavior change: previously empty strings (or other falsy values) likely passed through. Please confirm all call sites are prepared for this exception, or consider only rejecting None and treating '' as a valid no-op if backward compatibility is important.

flo_ai/flo_ai/builder/agent_builder.py (1)
189-201: Signature and docstring mostly consistent; consider a minor docstring tweak.

The new yaml_str: Optional[str] and yaml_file: Optional[str] parameters give a clean, backwards-compatible way to support both string and file inputs. Types and defaults look good and should not break existing positional callers. One nit: the leading docstring line still says "from a YAML configuration string", even though you now also support files. Consider updating it for clarity, e.g.:

- """Create an agent builder from a YAML configuration string
+ """Create an agent builder from a YAML configuration string or file

flo_ai/examples/arium_examples.py (1)
112-112: Inconsistent input format across examples.

This call uses a plain string input, but example_linear_workflow (line 50) and example_convenience_function (line 201) still use the [UserMessage(...)] list format. If both formats are supported, consider adding a comment clarifying this, or standardize the examples for consistency.

flo_ai/flo_ai/models/base_agent.py (2)
79-79: Unnecessary cast() when extending a list.

The cast() is redundant here. If input_message is already typed as List[BaseMessage], extending the list doesn't require casting; the type checker infers the correct type from the isinstance check.

 if isinstance(input_message, list):
-    self.conversation_history.extend(cast(List[BaseMessage], input_message))
+    self.conversation_history.extend(input_message)
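A standalone sketch (hypothetical names) of why the isinstance check alone lets type checkers narrow the union:

```python
from typing import List, Union

def extend_history(history: List[str], incoming: Union[List[str], str]) -> List[str]:
    if isinstance(incoming, list):
        # isinstance() narrows `incoming` to List[str] in this branch,
        # so extend() type-checks without any cast().
        history.extend(incoming)
    else:
        history.append(incoming)
    return history
```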
105-105: Type ignore comments may mask underlying type issues.

While # type: ignore suppresses the warnings, consider whether the method signatures in BaseLLM could be updated to accept MediaMessageContent directly, rather than silencing the type checker. If the types are intentionally broader, consider using a cast() or assert for documentation purposes.

Also applies to: 113-113
flo_ai/flo_ai/llm/vertexai_llm.py (1)
8-26: base_url parameter is accepted but never used.

The base_url parameter is defined in the constructor signature but is neither stored as an instance attribute nor passed to the genai.Client. If this parameter is intentionally unused for VertexAI (since it uses project/location instead), consider documenting this or removing it from the signature to avoid confusion. If base_url should be ignored for VertexAI, add a comment:

 def __init__(
     self,
     model: str = 'gemini-2.5-flash',
     temperature: float = 0.7,
     api_key: Optional[str] = None,
-    base_url: Optional[str] = None,
+    base_url: Optional[str] = None,  # Unused; VertexAI uses project/location instead
     project: Optional[str] = None,
     location: Optional[str] = None,
     **kwargs,
 ):

flo_ai/flo_ai/models/agent.py (1)
58-58: Explicit str() cast is safe but may mask type issues.

If enhanced_prompt is already a string, this is redundant. If it's an AssistantMessage, the str() representation may not be the expected content. Consider extracting the content explicitly:

 if isinstance(enhanced_prompt, AssistantMessage):
     enhanced_prompt = enhanced_prompt.content
 super().__init__(
     name=name,
     system_prompt=enhanced_prompt,
     ...
 )

flo_ai/flo_ai/arium/base.py (1)
23-23: Consider using consistent dict instantiation syntax.

Lines 23 and 24 both use the dict[str, ...]() pattern for initialization. This syntax is valid in Python 3.9+, but mixing Dict (from typing) with lowercase dict in the same file could be confusing. For consistency, consider:

- self.nodes: Dict[str, AriumNodeType] = dict[str, AriumNodeType]()
- self.edges: Dict[str, Edge] = dict[str, Edge]()
+ self.nodes: Dict[str, AriumNodeType] = {}
+ self.edges: Dict[str, Edge] = {}
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
flo_ai/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (39)
flo_ai/examples/arium_examples.py (6 hunks)
flo_ai/examples/arium_linear_usage.py (2 hunks)
flo_ai/examples/arium_yaml_example.py (3 hunks)
flo_ai/examples/custom_plan_execute_demo.py (1 hunks)
flo_ai/examples/example_graph_visualization.py (2 hunks)
flo_ai/examples/flo_tool_example.py (1 hunks)
flo_ai/examples/llm_router_example.py (5 hunks)
flo_ai/examples/multi_tool_example.py (1 hunks)
flo_ai/examples/simple_working_demo.py (4 hunks)
flo_ai/examples/simple_yaml_workflow.py (1 hunks)
flo_ai/examples/telemetry_example.py (1 hunks)
flo_ai/examples/variables_workflow_example.py (2 hunks)
flo_ai/examples/variables_workflow_yaml_example.py (2 hunks)
flo_ai/examples/vertexai_agent_example.py (1 hunks)
flo_ai/examples/vllm_agent_usage.py (4 hunks)
flo_ai/flo_ai/arium/arium.py (8 hunks)
flo_ai/flo_ai/arium/base.py (6 hunks)
flo_ai/flo_ai/arium/builder.py (14 hunks)
flo_ai/flo_ai/arium/llm_router.py (12 hunks)
flo_ai/flo_ai/arium/memory.py (7 hunks)
flo_ai/flo_ai/arium/models.py (1 hunks)
flo_ai/flo_ai/arium/nodes.py (4 hunks)
flo_ai/flo_ai/builder/agent_builder.py (3 hunks)
flo_ai/flo_ai/formatter/yaml_format_parser.py (6 hunks)
flo_ai/flo_ai/helpers/llm_factory.py (3 hunks)
flo_ai/flo_ai/llm/anthropic_llm.py (2 hunks)
flo_ai/flo_ai/llm/base_llm.py (3 hunks)
flo_ai/flo_ai/llm/gemini_llm.py (4 hunks)
flo_ai/flo_ai/llm/ollama_llm.py (1 hunks)
flo_ai/flo_ai/llm/openai_llm.py (3 hunks)
flo_ai/flo_ai/llm/openai_vllm.py (2 hunks)
flo_ai/flo_ai/llm/rootflo_llm.py (3 hunks)
flo_ai/flo_ai/llm/vertexai_llm.py (1 hunks)
flo_ai/flo_ai/models/agent.py (5 hunks)
flo_ai/flo_ai/models/base_agent.py (4 hunks)
flo_ai/flo_ai/telemetry/telemetry.py (1 hunks)
flo_ai/flo_ai/tool/flo_tool.py (3 hunks)
flo_ai/flo_ai/utils/variable_extractor.py (2 hunks)
flo_ai/pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (16)
flo_ai/examples/vertexai_agent_example.py (2)
flo_ai/examples/flo_tool_example.py (1)
get_weather (64-80)
flo_ai/examples/multi_tool_example.py (1)
get_weather (68-83)
flo_ai/examples/multi_tool_example.py (3)
flo_ai/examples/flo_tool_example.py (1)
get_weather (64-80)
flo_ai/examples/vertexai_agent_example.py (1)
get_weather (48-51)
flo_ai/examples/tool_usage.py (1)
get_weather (28-30)
flo_ai/examples/flo_tool_example.py (2)
flo_ai/examples/multi_tool_example.py (1)
get_weather (68-83)
flo_ai/examples/vertexai_agent_example.py (1)
get_weather (48-51)
flo_ai/examples/telemetry_example.py (1)
flo_ai/flo_ai/arium/arium.py (1)
run(34-177)
flo_ai/examples/llm_router_example.py (2)
flo_ai/flo_ai/arium/memory.py (1)
BaseMemory (106-130)
flo_ai/flo_ai/arium/builder.py (1)
build_and_run(243-261)
flo_ai/examples/arium_linear_usage.py (1)
flo_ai/flo_ai/arium/builder.py (1)
build_and_run(243-261)
flo_ai/examples/simple_working_demo.py (2)
flo_ai/flo_ai/arium/memory.py (5)
MessageMemory (133-153)
MessageMemoryItem (22-33)
add (108-109)
add (143-147)
add (165-166)
flo_ai/flo_ai/models/chat_message.py (1)
UserMessage(65-70)
flo_ai/flo_ai/models/agent.py (1)
flo_ai/flo_ai/models/chat_message.py (2)
BaseMessage (49-52)
FunctionMessage (85-102)
flo_ai/examples/example_graph_visualization.py (1)
flo_ai/flo_ai/llm/base_llm.py (7)
BaseLLM (9-143)
generate (23-30)
stream (33-40)
get_message_content (104-106)
format_tool_for_llm (109-111)
format_tools_for_llm (114-116)
format_image_in_message (119-121)
flo_ai/flo_ai/arium/arium.py (2)
flo_ai/flo_ai/arium/memory.py (2)
MessageMemory (133-153)
MessageMemoryItem (22-33)
flo_ai/flo_ai/utils/variable_extractor.py (1)
resolve_variables(110-137)
flo_ai/flo_ai/llm/base_llm.py (7)
flo_ai/flo_ai/llm/anthropic_llm.py (1)
format_image_in_message (241-243)
flo_ai/flo_ai/llm/gemini_llm.py (1)
format_image_in_message (245-263)
flo_ai/flo_ai/llm/ollama_llm.py (1)
format_image_in_message (166-168)
flo_ai/flo_ai/llm/openai_llm.py (1)
format_image_in_message (186-210)
flo_ai/flo_ai/llm/rootflo_llm.py (1)
format_image_in_message (333-337)
flo_ai/tests/unit-tests/test_base_llm.py (1)
format_image_in_message (58-59)
flo_ai/flo_ai/models/chat_message.py (1)
ImageMessageContent(21-28)
flo_ai/flo_ai/arium/base.py (4)
flo_ai/flo_ai/arium/nodes.py (3)
AriumNode (12-50)
ForEachNode (53-155)
FunctionNode (158-206)
flo_ai/flo_ai/arium/protocols.py (1)
ExecutableNode (5-39)
flo_ai/flo_ai/tool/base_tool.py (1)
Tool (12-50)
flo_ai/flo_ai/arium/models.py (4)
StartNode (15-17)
EndNode (21-23)
Edge (27-34)
default_router (7-11)
flo_ai/flo_ai/llm/rootflo_llm.py (3)
flo_ai/flo_ai/llm/base_llm.py (3)
generate (23-30)
stream (33-40)
format_image_in_message (119-121)
flo_ai/flo_ai/llm/gemini_llm.py (3)
generate (48-148)
stream (151-206)
format_image_in_message (245-263)
flo_ai/flo_ai/llm/openai_llm.py (3)
generate (41-117)
stream (120-150)
format_image_in_message (186-210)
flo_ai/flo_ai/arium/builder.py (4)
flo_ai/flo_ai/arium/memory.py (5)
MessageMemory (133-153)
MessageMemoryItem (22-33)
get (112-113)
get (149-153)
get (168-175)
flo_ai/flo_ai/arium/nodes.py (4)
AriumNode (12-50)
run (35-50)
run (98-111)
run (179-206)
flo_ai/flo_ai/arium/base.py (1)
add_edge (93-164)
flo_ai/flo_ai/models/chat_message.py (2)
UserMessage (65-70)
BaseMessage (49-52)
flo_ai/flo_ai/arium/llm_router.py (1)
flo_ai/flo_ai/arium/memory.py (5)
MessageMemory (133-153)
MessageMemoryItem (22-33)
get (112-113)
get (149-153)
get (168-175)
flo_ai/flo_ai/arium/memory.py (1)
flo_ai/flo_ai/models/chat_message.py (1)
BaseMessage(49-52)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
🔇 Additional comments (77)
flo_ai/examples/vertexai_agent_example.py (1)
48-51: Consistent optionalcountrydefault; behavior preservedSwitching
country’s default to''keeps thelocationformatting logic identical for omitted values (if countrystill treats it as falsy) while aligning this example with the other weather tools’ signatures and the Tool schema for an optional string. No issues from a typing or runtime perspective.flo_ai/pyproject.toml (1)
50-50: Verify the new ty dependency and its pre-release version constraint.

ty is a pre-release package; confirm the pinned version exists on PyPI and that the constraint resolves as intended.

flo_ai/examples/vllm_agent_usage.py (1)
18-20: Early guard for missing vLLM base URL is a good addition.

This check prevents confusing runtime failures by failing fast when VLLM_BASE_URL is not configured; the error message is clear and close to where the LLM is constructed.

flo_ai/examples/telemetry_example.py (2)
185-185: LGTM! The input type change from list to string aligns with the PR's goal of normalizing input shapes across the codebase. This makes the API more intuitive.
180-182: The workflow.name assignment is fully supported and working as intended.

The type: ignore comment is appropriate for suppressing type checker warnings about dynamically added attributes. The Arium class deliberately allows setting the name attribute after instantiation (see Arium.run() line 75: getattr(self, 'name', 'unnamed_workflow')), and this pattern is used throughout the codebase. The attribute is not read-only or incorrectly typed; it is simply not initialized in __init__, which is a valid Python pattern.

flo_ai/examples/multi_tool_example.py (1)
68-68: LGTM! Changing the default from None to '' is a good improvement that simplifies the conditional logic on line 82 and maintains consistency across the codebase.

flo_ai/examples/flo_tool_example.py (1)
64-64: LGTM! This change maintains consistency with the same function signature updates in other example files and simplifies downstream string handling.
flo_ai/flo_ai/telemetry/telemetry.py (1)
142-143: Essential type annotation fix. Using Optional[str] instead of str = None is the correct way to type optional string parameters. The previous annotations were technically incorrect since str cannot be None.
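The distinction, sketched with a hypothetical parameter:

```python
from typing import Optional

# `span_name: str = None` is flagged by strict type checkers because None
# is not a str; Optional[str] declares the None default explicitly.
def span_label(span_name: Optional[str] = None) -> str:
    return span_name if span_name is not None else 'unnamed_span'

print(span_label())        # unnamed_span
print(span_label('run'))   # run
```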
69-71: Excellent defensive coding with the walrus operator. The walrus operator combined with a None check prevents a potential AttributeError from accessing .status on a None value. This is a good improvement over the previous implementation.
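The guarded pattern reads roughly like this (hypothetical step store; a dict stands in for the real memory):

```python
def step_status(steps: dict, step_id: str):
    # The walrus operator binds the lookup result, and the None check
    # guards it before use; a bare steps.get(step_id)['status'] would
    # raise on a missing step.
    if (step := steps.get(step_id)) is not None:
        return step['status']
    return None
```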
108-109: LGTM! Renaming the parameter from m to message improves code readability and makes the intent clearer.
23-23: The review comment's concern is not supported by the codebase evidence. I've searched for all MessageMemoryItem instantiations in the repository. All 6 found instances explicitly provide the result parameter:
- 2 instances in test files (test_llm_router.py)
- 1 instance in example code (simple_working_demo.py)
- 3 instances in the core arium.py implementation
No code in this repository instantiates MessageMemoryItem without providing the result parameter. Making it required does not break existing code.
83-88: All call sites to mark_step_completed and mark_step_failed are compliant with the required parameters. The methods are not called anywhere in the codebase, so there are no callers to update. This is not a breaking change.
flo_ai/flo_ai/formatter/yaml_format_parser.py (3)
92-94: LGTM! Using Literal.__class_getitem__() for runtime construction of Literal types is the correct approach when building types dynamically.
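For reference, the runtime-construction idea can be sketched with plain subscription, which behaves the same for a tuple of values (standalone example, not the parser's actual code):

```python
from typing import Literal, get_args

# Subscripting Literal with a tuple built at runtime produces the same
# type as writing Literal['low', 'high'] by hand.
allowed_values = ('low', 'high')
Level = Literal[allowed_values]

print(get_args(Level))  # ('low', 'high')
```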
96-101: LGTM! Extracting array type creation into a dedicated method improves code organization and makes the type resolution logic clearer.
158-159: LGTM! Adding an explicit encoding='utf-8' is a best practice that ensures consistent file reading across different platforms and locales.

Also applies to: 203-204
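The pitfall the explicit encoding avoids can be shown with a temporary file (standalone sketch):

```python
import tempfile
from pathlib import Path

# Without encoding=..., open() uses the platform locale (e.g. cp1252 on
# Windows), which can mangle YAML containing non-ASCII characters.
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / 'config.yaml'
    path.write_text('greeting: "héllo"\n', encoding='utf-8')
    with open(path, encoding='utf-8') as f:
        text = f.read()
```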
flo_ai/flo_ai/arium/models.py (1)
32-33: LGTM! The type: ignore comment is appropriate here since accessing .__name__ on functools.partial.func is valid at runtime but may not be fully typed from the type checker's perspective.
107-116: LGTM! Converting user_input from a list to a plain string simplifies the code and aligns with the PR's goal of normalizing input types across the framework. The updated print statement correctly reflects this change.
145-148: String inputs usage matches the updated AriumBuilder.build_and_run API.

Both workflow invocations now pass a plain string to build_and_run(inputs=...), which aligns with the inputs: List[BaseMessage] | str signature and is equivalent to the prior single-element list usage. No behavior change, just cleaner API use.

Also applies to: 207-210
flo_ai/examples/example_graph_visualization.py (1)
11-12: MockLLM correctly implements the BaseLLM interface for the fallback path.

Importing BaseLLM and having MockLLM subclass it with concrete generate, stream, get_message_content, and tool/image formatting methods cleanly satisfies the abstract interface and keeps the visualization example working even without OpenAI. The minimal implementations are appropriate for this mock context.

Also applies to: 44-65
flo_ai/flo_ai/llm/ollama_llm.py (1)
15-20: api_key type narrowed to Optional[str] without runtime impact.

Updating api_key to Optional[str] = None in OllamaLLM.__init__ matches the BaseLLM constructor and other LLMs in the codebase, improving type correctness without changing behavior.

flo_ai/flo_ai/helpers/llm_factory.py (1)
141-145: Stricter config validation looks correct but may break existing setups.

The new checks, requiring base_url for VertexAI, api_key for openai_vllm, and base_url for RootFlo (even when using an access token), tighten configuration correctness and should surface misconfigured environments early with clear error messages.
However, these will fail any existing configs/tests that previously relied on implicit defaults or partial settings. Please confirm that:
- all production and example configs set these fields (or the relevant env vars), and
- documentation for these providers is updated to call out the now-required parameters.
Also applies to: 175-179, 215-219
flo_ai/examples/variables_workflow_yaml_example.py (1)
200-203: YAML workflow inputs now correctly use single-string prompts.

Both multi-agent YAML workflows now call build_and_run with inputs as a plain string rather than a single-element list, matching the updated builder API and keeping behavior equivalent.

Also applies to: 290-293
flo_ai/examples/llm_router_example.py (1)
100-108: Router examples now use string inputs consistently with builder/run APIs
- All build_and_run calls now take a single string (or multi-line string) instead of a list of strings, and the task-classifier example passes task directly. This matches the AriumBuilder.build_and_run signature and keeps the semantics of the previous examples intact.
- The content_workflow_router(memory: BaseMemory) -> Literal['editor', 'reviewer']  # type: ignore declaration keeps the router's intent and return type explicit while pragmatically silencing the decorator's typing mismatch, which is acceptable for example code.

No functional issues spotted with these adjustments.
Also applies to: 137-137, 173-187, 305-328, 387-397
flo_ai/examples/custom_plan_execute_demo.py (1)
100-107: Ensure arium.run supports string inputs in line with other examples.

task is now a plain string passed directly to arium.run(task), which is cleaner and consistent with other examples that send string inputs into Arium flows. This depends on arium.run handling str appropriately (e.g., wrapping it as a UserMessage internally). Please confirm that arium.run has been updated to accept str inputs (and is covered by tests) so this example doesn't regress at runtime.

flo_ai/examples/arium_linear_usage.py (1)
29-37: Linear usage examples now align with single-string build_and_run inputs.

Both AriumBuilder flows now pass a single string (or multi-line string) into build_and_run, matching the updated inputs type while preserving the original behavior of a single initial user turn. The changes are consistent with other examples in this PR.

Also applies to: 84-96
flo_ai/flo_ai/tool/flo_tool.py (2)
49-49: LGTM - Type ignore comments are appropriate for dynamic attribute assignment.

Dynamic attribute assignment to function objects is a valid Python pattern, and the # type: ignore[attr-defined] annotations correctly suppress the type checker warnings while documenting the intentional dynamic behavior.

Also applies to: 62-62, 65-65
82-89: Defensive getattr usage is appropriate, though unlikely to be needed for standard callables.

The fallback to 'unknown' handles edge cases where the callable might not have __name__ or __doc__ (e.g., certain wrapped objects or proxies). For standard functions, these attributes are always present, but this defensive approach improves resilience.
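The wrapped-object case is real: functools.partial objects, for example, expose no __name__, so a getattr fallback is what keeps such callables usable:

```python
from functools import partial

def scale(x, factor):
    return x * factor

double = partial(scale, factor=2)

# A bare double.__name__ would raise AttributeError; getattr with a
# default degrades gracefully to the 'unknown' fallback.
name_plain = getattr(scale, '__name__', 'unknown')    # 'scale'
name_wrapped = getattr(double, '__name__', 'unknown')  # 'unknown'
```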
21-22: LGTM - Optional parameters allow environment-based configuration.

Making api_key and base_url optional with None defaults allows the AsyncAnthropic client to fall back to environment variables (ANTHROPIC_API_KEY, etc.), which is the standard pattern.
178-178: LGTM - Explicit type annotation improves code clarity.

Adding the Dict[str, Any] type annotation to anthropic_kwargs makes the code more explicit and helps with static analysis.
61-76: LGTM - Agent initialization is now more explicit.

The updated Agent initialization with explicit system_prompt and llm parameters improves clarity and consistency with the new API design.
232-244: The review comment is incorrect: the codebase already requires Python 3.10+.

The pyproject.toml specifies requires-python = ">=3.10,<4.0" and setup.py specifies python_requires='>=3.10'. Since the codebase's minimum supported Python version is 3.10, the union syntax List[BaseMessage] | str is appropriate and does not require migration to Union[List[BaseMessage], str].

flo_ai/examples/arium_yaml_example.py (2)
583-583: LGTM - Simplified to single string input.

The change from list-based input to a single string aligns with the PR's goal of standardizing the input API.

823-830: LGTM - Multi-line string input is cleaner.

Using a triple-quoted multi-line string instead of a list of concatenated strings improves readability.
flo_ai/flo_ai/models/base_agent.py (1)
45-45: Return type change from str to List[BaseMessage] is not a breaking change within this codebase.

The base_agent.py file is a new addition in this commit, not a modification of existing code. Both the abstract method and all implementing subclasses (such as Agent) were introduced together with the correct List[BaseMessage] return type. All examples and usage patterns already expect and handle this return type, indicating the entire codebase was designed consistently around this interface from the start.
11-11: LGTM on the Optional type annotation.

Making api_key optional aligns with the base class signature and allows flexibility for configurations where the API key might be provided via environment variables.
10-16: LGTM on the constructor signature update.

The Optional[str] type for api_key properly reflects that the API key may be optional (e.g., when provided via environment variables or when using local models).
47-56: Good defensive programming for function call extraction.

The added guards ensure both name and arguments attributes exist before constructing the result. This prevents potential AttributeError exceptions when dealing with incomplete function call objects.
118-121: Return type broadened appropriately.

The Any return type correctly reflects that subclass implementations return different types: dict (OpenAI), types.Part (Gemini), str (Anthropic/Ollama). This allows each LLM implementation to use its native format.
14-17: LGTM on the updated imports.

The imports align with the new memory model that uses MessageMemoryItem and UserMessage for type-safe message handling.
190-196: LGTM on memory item access patterns.

The access patterns (msg.result.role, msg.result.content, result[-1].result.content) correctly match the MessageMemoryItem structure, where result is a BaseMessage with role and content attributes.
247-249: LGTM on MessageMemoryItem construction.

The usage correctly constructs MessageMemoryItem with node='user' and result=UserMessage(content=...), matching the expected signature from the memory module.
179-179: arium.run() correctly accepts string input per its type signature.

Verification confirms the run() method signature at line 34 of flo_ai/flo_ai/arium/arium.py explicitly accepts inputs: List[BaseMessage] | str, which allows both list and string inputs. The call at line 179 in simple_working_demo.py passing a string task is valid and matches the updated API.
13-15: LGTM on the Optional type annotations.

The Optional[str] types for base_url, project, and location align with the broader type system updates across LLM implementations.
9-9: LGTM on the TYPE_CHECKING import.

Using TYPE_CHECKING to import AriumNodeType correctly avoids circular dependency issues at runtime while maintaining proper type hints.
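The pattern in general form (module and type names are hypothetical):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by static type checkers; never executed at runtime,
    # which is what breaks the import cycle.
    from my_nodes import NodeType  # hypothetical module

def describe(node: 'NodeType') -> str:
    # The string annotation means NodeType need not exist at runtime.
    return type(node).__name__

print(describe(42))  # int
```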
60-63: LGTM on the broadened node type.

Changing the execute_node type to AriumNodeType allows the ForEachNode to accept any valid node type in the workflow, including ExecutableNode, AriumNode, FunctionNode, etc.
131-131: LGTM on the isinstance check.

The isinstance(self.execute_node, AriumNode) check properly narrows the type to access AriumNode-specific attributes like arium.memory.

flo_ai/flo_ai/llm/openai_llm.py (3)
21-23: LGTM on type signature updates.

The change to Optional[str] for api_key and base_url properly reflects that these can be nullable, aligning with the broader PR goal of consistent typing across LLM implementations.
31-36: Client initialization simplified.

The direct initialization of AsyncOpenAI with explicit parameters is cleaner and more maintainable. Note that passing **kwargs directly may forward unexpected parameters to the client. Ensure that the **kwargs passed to AsyncOpenAI are compatible with the client's accepted parameters; invalid kwargs could cause runtime errors.
155-157: Improved null-safety in get_message_content.

The explicit check for response.content is not None before accessing it prevents returning the string "None" when content is absent, which is a good defensive fix.
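The bug class being fixed is easy to reproduce:

```python
content = None

# Naive coercion turns an absent value into the literal string 'None',
# which then flows downstream looking like real content.
naive = str(content)

# The explicit guard returns a sensible empty value instead.
safe = content if content is not None else ''
```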
54-56: LGTM on the explicit type annotation.

The inline type annotation inputs: list[BaseMessage] makes the code more readable and helps with type checking after the string-to-list conversion.
195-195: Verify behavior change in _emit_event.

The condition now requires events_filter to be truthy (if callback and events_filter and ...). Previously, if events_filter was an empty list, events would still be checked against it. Now, an empty events_filter list will suppress all events. Confirm this is the intended behavior. If users pass events_filter=[] expecting no events, this is correct. However, if they expect the default behavior (all events), this could be a breaking change, since lines 68-69 set the default to all event types only when None.
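The subtlety is ordinary truthiness: an empty list is falsy, so the and-chain treats [] the same as None (standalone sketch of the condition, not the actual _emit_event code):

```python
def should_emit(callback, events_filter, event_type):
    # `events_filter and ...` short-circuits on [] as well as on None,
    # so an empty filter suppresses every event.
    return bool(callback and events_filter and event_type in events_filter)

notify = print
print(should_emit(notify, ['start', 'end'], 'start'))  # True
print(should_emit(notify, [], 'start'))                # False: empty filter
print(should_emit(notify, None, 'start'))              # False
```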
406-409: Type narrowing with AriumNodeType is a good improvement.

Using the union type alias AriumNodeType instead of listing all node types inline improves maintainability. However, the mutable default on line 409 should still be fixed.
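The mutable-default hazard is worth spelling out, since the default object is created once at function definition and then shared across every call:

```python
def add_step(step, plan=[]):        # [] is evaluated once, at def time
    plan.append(step)
    return plan

add_step('collect')
leaked = add_step('review')         # same list object as the first call
print(leaked)  # ['collect', 'review'] -- state leaked across calls

def add_step_safe(step, plan=None):  # the conventional fix
    if plan is None:
        plan = []
    plan.append(step)
    return plan

print(add_step_safe('collect'))  # ['collect']
print(add_step_safe('review'))   # ['review']
```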
196-197: Good defensive check for JWT generation.

This explicit check prevents a confusing error from the jwt.encode call if app_secret is None.
292-293: Defensive null check after initialization.

While _ensure_initialized() should set _llm, this explicit check provides a clear error message if something goes wrong.
321-337: Initialization guards on synchronous methods may fail.
format_tool_for_llm, format_tools_for_llm, and format_image_in_message are synchronous methods that check _llm is None but cannot call _ensure_initialized() (which is async). If these are called before generate() or stream(), they will raise RuntimeError. Consider documenting this requirement or deferring to the base class implementation when _llm is None. Alternatively, provide a synchronous initialization path.
298-310: Remove the # type: ignore[override] comment; RootFloLLM.stream() aligns with BaseLLM.stream().

The original review comment's premise is incorrect. Both BaseLLM.stream() and RootFloLLM.stream() have identical signatures with no **kwargs parameter, so the # type: ignore[override] comment appears unnecessary; the methods match exactly. Note: other LLM implementations (OpenAI, OpenAIVLLM, Gemini) incorrectly add **kwargs to their stream() methods, which violates the base class contract. RootFloLLM correctly adheres to the defined interface.
77-77: Return type change aligns with the broader PR refactoring.

Returning List[BaseMessage] instead of str provides richer context for downstream consumers, including the full conversation history.
204-206: Explicit return path added for loop exhaustion. This ensures the method always returns a value even if the while loop completes without an early return (e.g., after max retries).
flo_ai/flo_ai/llm/gemini_llm.py (3)
25-25: LGTM on Optional base_url. Consistent with the other LLM implementations in this PR.
245-260: Return type and validation improvements.

The return type change from dict to types.Part is more accurate. The added validation for mime_type when base64 is provided prevents a confusing error from the Gemini SDK.
51-51: Verify the type change to the functions parameter in gemini_llm.py's generate(), and confirm all callers pass the expected shape.

flo_ai/flo_ai/arium/base.py (4)
14-16: AriumNodeType definition looks good.

The type alias correctly consolidates all node types that can be used in the Arium workflow. This provides better type safety and clearer API contracts.

233-235: Type ignore comments are appropriate for optional visualization dependencies.

The `# type: ignore` comments are correctly applied for matplotlib and networkx imports, which are optional dependencies used only for visualization.

376-389: Path handling refactor is clean and appropriate.

Using `created_output_path` as a `Path` object and calling `.parent.mkdir(parents=True, exist_ok=True)` ensures the output directory exists before saving. The refactor maintains the same functionality while improving clarity.

96-96: Breaking change confirmed: `to_nodes` parameter removed default value and all callers already pass it explicitly.

Verification shows the parameter at line 96 is indeed `List[str]` without a default value. Examining all 20+ `add_edge` calls across the codebase (unit tests, examples, builder, and configuration loaders) confirms every caller passes `to_nodes` explicitly; no code relies on a missing default value.

flo_ai/flo_ai/arium/llm_router.py (5)
104-106: Good improvement: `route` method now explicitly typed as async with Optional execution_context.

The signature change properly reflects that `execution_context` is optional and the method is asynchronous.

457-464: Conversation filtering logic correctly handles non-string content.

Filtering to only include messages where `msg.result.content` is a string prevents type errors when joining the conversation text.

467-471: Null-safe handling of execution_context.

The conditional check `if execution_context is not None` before calling `_get_next_step_in_pattern` prevents potential issues when `execution_context` is `None`.

948-951: Type ignore comments are necessary for dynamic Literal construction.

The `# type: ignore` comments are appropriate here since `Literal[option_names]` with a dynamic tuple is not statically verifiable by type checkers.

967-969: Good use of `cast` for dynamic attribute assignment.

Using `cast(Any, router_function)` properly handles the dynamic attribute assignment without triggering type checker errors.

flo_ai/flo_ai/arium/builder.py (7)
32-39: Type annotations properly updated for the new type system.

The updates to use `MessageMemory` and `AriumNodeType` align with the broader refactoring across the codebase.

251-260: Good improvement: Input handling now supports both list and single string inputs.

The logic correctly handles different input types, wrapping strings in `UserMessage` and validating list items.

267-269: Visualization now auto-builds if needed.

The change to build the arium if `self._arium` is `None` is a good UX improvement, allowing users to call `visualize()` without explicitly calling `build()` first.

452-453: Good defensive check for yaml_file parameter.

The explicit check and error message provide clearer feedback when the yaml_file is unexpectedly None.

541-545: Safe function registry access pattern.

The conditional check `if function_registry is not None` before calling `.get()` prevents `AttributeError` when `function_registry` is `None`.

914-918: Role extraction handles None values appropriately.

The conditional expression ensures `role` is always a string, defaulting to empty string when not provided.

960-977: Output schema handling with safe defaults.

The `Optional[Dict[str, Any]]` type for `output_schema` and the fallback to `{}` at line 977 ensure the agent builder receives a valid schema.
| def create(yaml_dict: Optional[Dict] = None, yaml_path: Optional[str] = None): | ||
| return FloYamlParser.Builder(yaml_dict=yaml_dict, yaml_path=yaml_path).build() |
Confusing API: FloJsonParser.create accepts YAML parameters.
FloJsonParser.create() now accepts yaml_dict and yaml_path parameters, which is counterintuitive. This creates confusion between FloJsonParser and FloYamlParser.
Consider one of the following:
- Remove this method and require users to call `FloYamlParser.create()` directly for YAML inputs
- Rename the parameters to `dict` and `path` to be format-agnostic
- Add a clear deprecation notice if this is for backward compatibility
🤖 Prompt for AI Agents
In flo_ai/flo_ai/formatter/yaml_format_parser.py around lines 136-137,
FloJsonParser.create currently accepts yaml-specific parameters (yaml_dict,
yaml_path) which is confusing; change the API to be format-agnostic by renaming
those parameters to dict and path (update signature, docstring and internal
references) and update all call sites/tests to use the new names; if you must
preserve the old names for backward compatibility, add a thin wrapper that
accepts yaml_dict/yaml_path, issues a deprecation warning, maps them to
dict/path, and forwards to the new signature.
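A thin backward-compatible shim along the lines the prompt describes could look like the following sketch. The `FloYamlParser` stub, its return value, and the parameter names are illustrative assumptions (the new names here are `data`/`path` rather than the suggested `dict`/`path`, to avoid shadowing the `dict` builtin), not the real flo_ai signatures:

```python
import warnings
from typing import Any, Dict, Optional


class FloYamlParser:
    """Stand-in for flo_ai's parser; only the create() shape is assumed here."""

    @staticmethod
    def create(data: Optional[Dict[str, Any]] = None,
               path: Optional[str] = None) -> Dict[str, Any]:
        # The real method would build and return a parser; a dict keeps the sketch testable.
        return {'data': data, 'path': path}


def create_legacy(yaml_dict: Optional[Dict[str, Any]] = None,
                  yaml_path: Optional[str] = None) -> Dict[str, Any]:
    """Deprecated shim: warn, map the old names onto the format-agnostic API, forward."""
    warnings.warn(
        'yaml_dict/yaml_path are deprecated; use FloYamlParser.create(data=..., path=...)',
        DeprecationWarning,
        stacklevel=2,
    )
    return FloYamlParser.create(data=yaml_dict, path=yaml_path)
```

`stacklevel=2` makes the warning point at the caller's line rather than the shim itself, which is the conventional choice for deprecation wrappers.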
| async def generate( | ||
| self, messages: list[dict], output_schema: dict = None, **kwargs | ||
| self, | ||
| messages: list[dict], | ||
| functions: Optional[List[Dict[str, Any]]] = None, | ||
| output_schema: Optional[Dict[str, Any]] = None, | ||
| **kwargs, |
Unused functions parameter in generate method.
The functions parameter is added to the method signature but is never used in the method body. Compare this with the stream method (lines 92-93) which does use the functions parameter.
Apply this diff to use the functions parameter consistently:
# Prepare OpenAI API parameters
vllm_openai_kwargs = {
'model': self.model,
'messages': messages,
'temperature': self.temperature,
**kwargs,
**self.kwargs,
}
+
+ if functions:
+ vllm_openai_kwargs['functions'] = functionsCommittable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In flo_ai/flo_ai/llm/openai_vllm.py around lines 27 to 32, the generate method
accepts a functions parameter but never uses it; update the method to pass the
functions argument into the underlying request/response logic exactly as the
stream method does (e.g., include functions in the payload or call that builds
the model request), ensure any schema/validation logic still applies, and keep
the signature and behavior consistent with stream so functions-based calls are
honored.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
flo_ai/examples/arium_examples.py (1)
60-113: `content_router` doesn't match the `MessageMemory` API and will fail at runtime

In this example, router functions are invoked with the `MessageMemory` instance (`memory=self.memory` in `Arium._execute_graph`), but `content_router` treats `memory` as a list of dicts: `last_message = memory[-1]['content']`

`MessageMemory` is not subscriptable and stores `MessageMemoryItem` objects; this will raise `TypeError` when the router is actually used.

A safer implementation that works with `MessageMemory`:

- def content_router(memory) -> Literal['text_processor', 'image_processor']:
-     # Simple example logic (in real use, this would analyze the memory)
-     last_message = memory[-1]['content']
-     if 'image' in last_message.lower():
-         return 'image_processor'
-     return 'text_processor'
+ def content_router(memory: MessageMemory) -> Literal['text_processor', 'image_processor']:
+     # Simple example logic (in real use, this would analyze the memory)
+     items = memory.get()
+     if not items:
+         return 'text_processor'
+     last = items[-1].result
+     content = getattr(last, 'content', str(last))
+     if 'image' in str(content).lower():
+         return 'image_processor'
+     return 'text_processor'

This keeps the intent while using the actual memory abstraction.
flo_ai/examples/cot_conversational_example.py (1)
35-55: WrapUserMessagequestions when callingagent.run
`questions` is now a `List[UserMessage]`, but `agent.run` is still called with a bare `UserMessage`. Given `Agent.run` expects `str` or `List[BaseMessage]`, this will likely fail when it tries to iterate over `inputs`.

Either switch `questions` to plain strings, or keep `UserMessage` and wrap in a list on call. Minimal change:
questionsto plain strings, or keepUserMessageand wrap in a list on call. Minimal change:- questions = [ - UserMessage( - 'If a train leaves station A at 2 PM traveling 60 mph and another train leaves station B at 3 PM traveling 80 mph, and the stations are 300 miles apart, when will they meet?' - ), - UserMessage( - 'A store has a 20% discount on all items. If a customer buys 3 items that originally cost $50, $30, and $20, what is the final total after the discount?' - ), - UserMessage( - 'Explain why the sky appears blue during the day but red during sunset.' - ), - ] + questions = [ + UserMessage( + 'If a train leaves station A at 2 PM traveling 60 mph and another train leaves station B at 3 PM traveling 80 mph, and the stations are 300 miles apart, when will they meet?' + ), + UserMessage( + 'A store has a 20% discount on all items. If a customer buys 3 items that originally cost $50, $30, and $20, what is the final total after the discount?' + ), + UserMessage( + 'Explain why the sky appears blue during the day but red during sunset.' + ), + ] @@ - response = await agent.run(question) + response = await agent.run([question])flo_ai/examples/cot_agent_example.py (1)
76-95: Ensure `Agent.run` receives a supported input type

Here `questions` is a list of `UserMessage`, but `agent.run` is invoked with a single `UserMessage`. With the current `Agent.run(inputs: List[BaseMessage] | str, ...)` implementation, this will likely break when `run` attempts to iterate over `inputs`.

A minimal fix that keeps the explicit `UserMessage` examples:
UserMessageexamples:- questions = [ - UserMessage('What is 15 + 27?'), - UserMessage( - 'If I have 100 apples and I give away 23, then buy 15 more, how many do I have?' - ), - UserMessage('Calculate 8 * 7 and then add 12 to the result.'), - ] + questions = [ + UserMessage('What is 15 + 27?'), + UserMessage( + 'If I have 100 apples and I give away 23, then buy 15 more, how many do I have?' + ), + UserMessage('Calculate 8 * 7 and then add 12 to the result.'), + ] @@ - response = await agent.run(question) + response = await agent.run([question])
♻️ Duplicate comments (2)
flo_ai/flo_ai/arium/arium.py (1)
24-40: Replace allvariables: Dict[str, Any] = {}defaults withOptional[...] = Noneand normalize inside
Arium.run,_execute_graph,_resolve_inputs, and_execute_nodeall use a mutable dict as a default:
- Line 37: `variables: Dict[str, Any] = {}`
- Line 204: `variables: Dict[str, Any] = {}`
- Line 361: `variables: Dict[str, Any] = {}`
- Line 403: `variables: Dict[str, Any] = {}`

This is the same mutable-default pattern called out in earlier reviews and can cause shared-state bugs if any of these dicts are ever mutated.
Suggested fix pattern (applied consistently to all four):
- async def run( - self, - inputs: List[BaseMessage] | str, - variables: Dict[str, Any] = {}, + async def run( + self, + inputs: List[BaseMessage] | str, + variables: Optional[Dict[str, Any]] = None, ... ): @@ - if isinstance(inputs, str): + variables = variables or {} + if isinstance(inputs, str): inputs: list[BaseMessage] = [ UserMessage(content=resolve_variables(inputs, variables)) ]Similarly for
_execute_graph,_resolve_inputs, and_execute_node:- async def _execute_graph(..., variables: Dict[str, Any] = {}, ...): + async def _execute_graph(..., variables: Optional[Dict[str, Any]] = None, ...): @@ - result = await self._execute_node( - current_node, event_callback, events_filter, variables - ) + _vars = variables or {} + result = await self._execute_node( + current_node, event_callback, events_filter, _vars + )- def _resolve_inputs(..., variables: Dict[str, Any] = {},) -> List[BaseMessage]: + def _resolve_inputs(..., variables: Optional[Dict[str, Any]] = None,) -> List[BaseMessage]: @@ - resolved_inputs = [] + variables = variables or {} + resolved_inputs = []- async def _execute_node(..., variables: Dict[str, Any] = {},): + async def _execute_node(..., variables: Optional[Dict[str, Any]] = None,): @@ - if isinstance(node, ForEachNode): + _vars = variables or {} + if isinstance(node, ForEachNode): foreach_results = await node.run(inputs, variables=_vars)This keeps current behavior but removes shared mutable defaults everywhere.
Also applies to: 199-205, 358-362, 398-404
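The hazard being fixed here is easy to demonstrate in isolation. This standalone sketch (not flo_ai code) shows how a `{}` default is evaluated once and leaks state across calls, while the `Optional[...] = None` pattern does not:

```python
from typing import Any, Dict, Optional


def run_shared(variables: Dict[str, Any] = {}) -> Dict[str, Any]:
    # BUG: the same dict object is reused for every call made without an argument.
    variables['calls'] = variables.get('calls', 0) + 1
    return variables


def run_safe(variables: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    # Each call gets a fresh dict when the caller passes nothing.
    variables = {} if variables is None else variables
    variables['calls'] = variables.get('calls', 0) + 1
    return variables


run_shared()
print(run_shared()['calls'])  # → 2 (state leaked between calls)
run_safe()
print(run_safe()['calls'])    # → 1 (no shared state)
```

Even when a function only reads from the default today, a later edit that mutates it turns into a cross-call bug, which is why the review asks for the `None` sentinel everywhere.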
flo_ai/flo_ai/arium/nodes.py (1)
36-44: Avoid mutable default dict for `variables` in `AriumNode.run`

`variables: Dict[str, Any] = {}` reintroduces the shared-mutable-default problem that was already flagged in earlier reviews. Even though you currently only read from `variables`, this pattern is error-prone and easy to break later.

Recommend switching back to an immutable default and normalizing inside:
- async def run( - self, inputs: List[Any], variables: Dict[str, Any] = {}, **kwargs - ) -> Any: + async def run( + self, + inputs: List[Any], + variables: Optional[Dict[str, Any]] = None, + **kwargs, + ) -> Any: @@ - execution_variables = ( - variables.copy() if (self.inherit_variables and variables) else {} - ) + _vars = variables or {} + execution_variables = _vars.copy() if self.inherit_variables else {}This keeps behavior the same while removing the shared default.
🧹 Nitpick comments (8)
flo_ai/tests/unit-tests/test_openai_vllm.py (1)
299-301: Type-ignore is fine here, but consider aligning the source signature with runtime usage

Adding `# type: ignore[arg-type]` keeps the test while satisfying the stricter `Dict[str, Any]` parameter type on `get_message_content`. Given that `OpenAIVLLM`/`OpenAI.get_message_content` explicitly handles `str` and generic objects, you might later consider widening its parameter type (e.g., to `Any` or an appropriate union) to match actual usage and reduce `arg-type` ignores across tests.

Based on relevant_code_snippets, where other LLMs already accept `Any` for `response`.

flo_ai/examples/custom_plan_execute_demo.py (1)
88-90: Avoid `# type: ignore[arg-type]` by aligning `PlanAwareMemory` and `MessageMemory` types

The `# type: ignore[arg-type]` on `with_memory(memory)` suggests `PlanAwareMemory` is not statically compatible with `MessageMemory`. Given this PR is focused on tightening types, it would be better to fix the typing relationship instead of suppressing:

- If `PlanAwareMemory` is conceptually a `MessageMemory`, consider having it implement/extend `MessageMemory` (or share a common Protocol/type alias).
- Alternatively, broaden `with_memory`'s parameter type to the appropriate abstraction that both plain memories and `PlanAwareMemory` implement.

That would let you drop the `# type: ignore[arg-type]` here and keep this example fully type-checked. Based on relevant_code_snippets from `flo_ai/flo_ai/arium/builder.py:46-49`.

flo_ai/tests/integration-tests/test_openai_llm_real.py (1)
197-224: Type ignores hide a too-narrow `OpenAI.get_message_content` signature

These `# type: ignore[arg-type]` comments are fine for tests, but they indicate that `OpenAI.get_message_content`'s parameter type (`Dict[str, Any]`) doesn't match how it's actually used (string, message-like objects, and arbitrary objects).

Consider widening the signature in `flo_ai/llm/openai_llm.py` to accept `Any` (like the Anthropic/Gemini/Ollama variants) so these ignores are no longer necessary and the type better reflects runtime behavior.

flo_ai/tests/unit-tests/test_agent_builder_tools.py (1)
81-108: Type ignores here reflect a limitation in `AgentBuilder.with_tools`'s typing. Both:
builder.with_tools([self.base_tool, tool_config, tool_dict]) # type: ignore[arg-type] ... builder.with_tools(['invalid_tool']) # type: ignore[arg-type]need
# type: ignore[arg-type]becausewith_toolsis typed as:Union[List[Tool], List[ToolConfig], List[Dict[str, Any]]]whereas these tests intentionally pass a mixed list or an obviously invalid element.
The ignores are acceptable in tests, but if you want to avoid them, consider updating
with_tools’s signature to something like:Sequence[Tool | ToolConfig | Dict[str, Any]]and tightening validation internally instead of at the type level.
flo_ai/tests/unit-tests/test_yaml_tool_config.py (1)
260-275: Type ignore on_process_yaml_toolscall is reasonable for test realismHere:
processed_tools = AgentBuilder._process_yaml_tools( tools_config, # type: ignore[arg-type] self.tool_registry, )the raw
tools_configlist mirrors real YAML inputs (strings and dicts), which likely doesn’t match_process_yaml_tools’s current static type. Using# type: ignore[arg-type]in this focused test is fine.If you want to remove the ignore, consider broadening
_process_yaml_tools’s parameter type to accurately represent the accepted YAML tool config shapes (e.g.,Sequence[str | Mapping[str, Any]]).flo_ai/examples/agent_builder_usage.py (1)
20-22: AgentBuilder examples now use correctly typedUserMessagelistsUsing
[UserMessage('...')]as the input toagent.runin all three examples is consistent with theAgent.run(inputs: List[BaseMessage] | str, ...)contract and with the broader shift away fromTextMessageContent. If you want to showcase the simplest possible usage, you could also add a comment or variant that callsawait agent.run('...'), but the current code is fully valid.Also applies to: 72-76, 102-103
flo_ai/tests/unit-tests/test_llm_router.py (1)
7-8: MockLLM now correctly mirrorsBaseLLM’s typed interfaceUpdating
MockLLM.generateandMockLLM.streamto acceptmessages: List[Dict[str, str]], optionalfunctionsandoutput_schema, and to returnDict[str, Any]/AsyncIterator[Dict[str, Any]]brings this test double in line withBaseLLM. The simple'response'payload is sufficient for router tests while satisfying the abstract method contracts.Also applies to: 29-37, 38-47
flo_ai/tests/unit-tests/test_base_llm.py (1)
10-11: MockLLM implementation now matchesBaseLLM’s abstract APIThe new
__init__delegating toBaseLLM, plus the updatedgenerateandstreamsignatures, makes this mock a proper concrete implementation ofBaseLLMand keeps the tests aligned with the real LLM interface. If you ever need different mock outputs per test, consider havinggeneratealso return{'content': self.response_text}instead of a hardcoded string, but that’s purely a convenience enhancement.Also applies to: 22-27, 28-35, 49-59
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
flo_ai/examples/agent_builder_usage.py(3 hunks)flo_ai/examples/arium_examples.py(10 hunks)flo_ai/examples/cot_agent_example.py(1 hunks)flo_ai/examples/cot_conversational_example.py(2 hunks)flo_ai/examples/custom_plan_execute_demo.py(2 hunks)flo_ai/examples/document_processing_example.py(2 hunks)flo_ai/examples/plan_execute_demo.py(2 hunks)flo_ai/examples/simple_plan_execute_demo.py(4 hunks)flo_ai/examples/simple_working_demo.py(4 hunks)flo_ai/examples/telemetry_example.py(1 hunks)flo_ai/examples/tool_usage.py(5 hunks)flo_ai/examples/usage_claude.py(1 hunks)flo_ai/flo_ai/arium/arium.py(9 hunks)flo_ai/flo_ai/arium/nodes.py(6 hunks)flo_ai/flo_ai/arium/protocols.py(1 hunks)flo_ai/setup.py(1 hunks)flo_ai/tests/integration-tests/test_claude_llm_real.py(1 hunks)flo_ai/tests/integration-tests/test_gemini_llm_real.py(1 hunks)flo_ai/tests/integration-tests/test_openai_llm_real.py(3 hunks)flo_ai/tests/unit-tests/test_agent_builder_tools.py(2 hunks)flo_ai/tests/unit-tests/test_base_llm.py(3 hunks)flo_ai/tests/unit-tests/test_llm_router.py(3 hunks)flo_ai/tests/unit-tests/test_openai_llm.py(1 hunks)flo_ai/tests/unit-tests/test_openai_vllm.py(1 hunks)flo_ai/tests/unit-tests/test_yaml_tool_config.py(1 hunks)
✅ Files skipped from review due to trivial changes (2)
- flo_ai/flo_ai/arium/protocols.py
- flo_ai/tests/unit-tests/test_openai_llm.py
🚧 Files skipped from review as they are similar to previous changes (2)
- flo_ai/examples/simple_working_demo.py
- flo_ai/examples/telemetry_example.py
🧰 Additional context used
🧬 Code graph analysis (16)
flo_ai/examples/tool_usage.py (5)
flo_ai/flo_ai/models/chat_message.py (1)
UserMessage(65-70)flo_ai/flo_ai/arium/protocols.py (1)
run(18-35)flo_ai/flo_ai/models/base_agent.py (1)
run(45-47)flo_ai/flo_ai/models/agent.py (1)
run(73-127)flo_ai/flo_ai/tool/base_tool.py (1)
run(45-50)
flo_ai/tests/integration-tests/test_openai_llm_real.py (7)
flo_ai/tests/unit-tests/test_base_llm.py (1)
get_message_content(60-63)flo_ai/tests/unit-tests/test_llm_router.py (1)
get_message_content(49-50)flo_ai/flo_ai/llm/anthropic_llm.py (1)
get_message_content(203-207)flo_ai/flo_ai/llm/base_llm.py (1)
get_message_content(104-106)flo_ai/flo_ai/llm/gemini_llm.py (1)
get_message_content(208-212)flo_ai/flo_ai/llm/ollama_llm.py (1)
get_message_content(129-133)flo_ai/flo_ai/llm/openai_llm.py (1)
get_message_content(152-157)
flo_ai/examples/cot_agent_example.py (1)
flo_ai/flo_ai/models/chat_message.py (1)
UserMessage(65-70)
flo_ai/examples/custom_plan_execute_demo.py (1)
flo_ai/flo_ai/arium/builder.py (1)
with_memory(47-50)
flo_ai/tests/integration-tests/test_gemini_llm_real.py (4)
flo_ai/tests/integration-tests/test_claude_llm_real.py (3)
test_function(266-267)test_function(298-299)test_function(325-328)flo_ai/tests/integration-tests/test_openai_llm_real.py (2)
test_function(230-231)test_function(260-261)flo_ai/tests/unit-tests/test_arium_builder.py (1)
test_function(78-79)flo_ai/tests/unit-tests/test_arium_yaml.py (2)
test_function(188-189)test_function(244-247)
flo_ai/flo_ai/arium/nodes.py (1)
flo_ai/flo_ai/arium/protocols.py (2)
ExecutableNode(5-35)run(18-35)
flo_ai/tests/unit-tests/test_agent_builder_tools.py (1)
flo_ai/flo_ai/builder/agent_builder.py (1)
with_tools(54-111)
flo_ai/examples/document_processing_example.py (1)
flo_ai/flo_ai/models/chat_message.py (2)
DocumentMessageContent(32-39)UserMessage(65-70)
flo_ai/tests/unit-tests/test_openai_vllm.py (7)
flo_ai/tests/unit-tests/test_base_llm.py (1)
get_message_content(60-63)flo_ai/tests/unit-tests/test_llm_router.py (1)
get_message_content(49-50)flo_ai/flo_ai/llm/anthropic_llm.py (1)
get_message_content(203-207)flo_ai/flo_ai/llm/base_llm.py (1)
get_message_content(104-106)flo_ai/flo_ai/llm/gemini_llm.py (1)
get_message_content(208-212)flo_ai/flo_ai/llm/ollama_llm.py (1)
get_message_content(129-133)flo_ai/flo_ai/llm/openai_llm.py (1)
get_message_content(152-157)
flo_ai/tests/unit-tests/test_base_llm.py (2)
flo_ai/tests/unit-tests/test_llm_router.py (2)
generate(29-36)stream(38-47)flo_ai/flo_ai/llm/base_llm.py (2)
generate(23-30)stream(33-40)
flo_ai/examples/agent_builder_usage.py (2)
flo_ai/flo_ai/models/chat_message.py (1)
UserMessage(65-70)flo_ai/flo_ai/models/agent.py (1)
run(73-127)
flo_ai/tests/integration-tests/test_claude_llm_real.py (1)
flo_ai/tests/integration-tests/test_gemini_llm_real.py (3)
test_function(267-268)test_function(298-299)test_function(325-328)
flo_ai/examples/cot_conversational_example.py (1)
flo_ai/flo_ai/models/chat_message.py (1)
UserMessage(65-70)
flo_ai/tests/unit-tests/test_llm_router.py (2)
flo_ai/tests/unit-tests/test_base_llm.py (2)
generate(28-34)stream(49-58)flo_ai/flo_ai/llm/base_llm.py (2)
generate(23-30)stream(33-40)
flo_ai/examples/arium_examples.py (6)
flo_ai/flo_ai/models/chat_message.py (2)
UserMessage(65-70)BaseMessage(49-52)flo_ai/flo_ai/arium/nodes.py (4)
FunctionNode(169-217)run(36-51)run(102-115)run(190-217)flo_ai/flo_ai/arium/arium.py (1)
run(34-177)flo_ai/flo_ai/arium/protocols.py (1)
run(18-35)flo_ai/flo_ai/models/base_agent.py (1)
run(45-47)flo_ai/flo_ai/builder/agent_builder.py (1)
build(169-184)
flo_ai/flo_ai/arium/arium.py (4)
flo_ai/flo_ai/arium/memory.py (2)
MessageMemory(133-153)MessageMemoryItem(22-33)flo_ai/flo_ai/models/chat_message.py (3)
BaseMessage(49-52)UserMessage(65-70)TextMessageContent(43-45)flo_ai/flo_ai/utils/variable_extractor.py (1)
resolve_variables(110-137)flo_ai/flo_ai/arium/events.py (2)
AriumEvent(29-51)AriumEventType(15-25)
🔇 Additional comments (21)
flo_ai/examples/custom_plan_execute_demo.py (1)
100-107: String `task` payload matches the new run input shape

Using a plain string for `task` and passing it directly to `arium.run(task)` is consistent with the refactor toward string inputs for builders/runs and looks correct here.

flo_ai/setup.py (1)
1-1: Targeted mypy ignore for setuptools import looks correct

Using `import setuptools  # type: ignore[import-untyped]` is appropriate here: it keeps the runtime behavior unchanged while suppressing the specific mypy warning about untyped `setuptools`, and the rest of the file already consistently uses `setuptools.setup`. Just ensure your CI mypy version supports the `import-untyped` subcode (1.6+).

flo_ai/examples/plan_execute_demo.py (2)
72-74: LGTM: Good defensive programming.The None guard prevents potential AttributeError when accessing plan properties if no plan exists in memory.
96-98: LGTM: Consistent None handling.The guard appropriately handles the case where no plan is available before attempting to execute steps.
flo_ai/examples/simple_plan_execute_demo.py (4)
108-110: LGTM: Good defensive programming.The None guard prevents AttributeError when no plan is available.
128-130: LGTM: Consistent None handling.Appropriate guard before accessing plan properties.
144-146: LGTM: Consistent None handling.The guard maintains consistency with other None checks in this file.
226-226: Remove the unnecessary `type: ignore[arg-type]` comment.

The second parameter is properly typed as `Optional[dict]`, which explicitly includes `None` as a valid value. Passing `None` requires no type suppression.

next_agent = await plan_router(memory, None)

Likely an incorrect or invalid review comment.
flo_ai/examples/usage_claude.py (1)
35-41: Optional `country` typing is now correct and explicit

Changing `country` to `str | None = None` matches how it's used and aligns with the rest of the codebase's PEP 604 style.

flo_ai/tests/integration-tests/test_claude_llm_real.py (1)
322-328: Union type for `optional_param` is consistent and accurate

Updating `optional_param` to `str | None = None` keeps the helper in line with other tests and reflects that the argument is genuinely optional.

flo_ai/examples/arium_examples.py (5)
5-13: UsingUserMessagein Example 1 aligns with the new Arium input modelImporting
UserMessageand calling.build_and_run([UserMessage('Analyze this text')])makes the example explicit about roles and matchesArium.run’sList[BaseMessage] | strcontract. No issues here.Also applies to: 21-51
121-179: Complex workflow example updates look consistent with new Agent and Arium APIsThe four agents now use
system_promptandllm=OpenAI(...), and the workflow is run viaarium.run('Research and write a report on AI trends'), which matchesArium.run’sstr-input path (converted to aUserMessageinternally). No correctness issues evident here.
186-205: Convenience-function example now usesUserMessage, which is consistent
example_convenience_function’s.build_and_run([UserMessage('Hello')])call is now explicit about message type and role. This is aligned with the rest of the examples and the Arium input model.
211-224: Echo-agent example correctly targets updated Agent and Arium signaturesDefining:
agent = Agent( name='echo_agent', system_prompt='Echo the input', llm=OpenAI(model='gpt-4o-mini'), )and then running:
result1 = await arium.run('First input') result2 = await arium.run('Second input')is consistent with the new Agent constructor and
Arium.runaccepting string inputs. Looks good.
231-243: Function-only workflow now correctly matchesFunctionNode’sinputs/variablescontractThe helper functions:
async def pass_through(inputs: List[BaseMessage], variables=None, **kwargs): ... async def capitalize_last(inputs: List[BaseMessage], variables=None, **kwargs): ... async def uppercase_all(inputs: List[BaseMessage], variables=None, **kwargs): ... async def summarize(inputs: List[BaseMessage], variables=None, **kwargs): ...now align with
FunctionNode.run’s call pattern (inputs=...,variables=...), and.build_and_run('hello world')leverages thestrpath inArium.run.This example is consistent with the underlying node and memory implementations.
Also applies to: 270-292
flo_ai/examples/tool_usage.py (1)
2-2: Message construction changes are consistent with the new API

Importing only `UserMessage` and passing `[UserMessage('...')]` into `agent.run` matches the current `Agent.run(inputs: List[BaseMessage] | str, ...)` shape and avoids the old `TextMessageContent` wrapper. Nothing else needed here.

Also applies to: 19-19, 47-47
flo_ai/examples/cot_conversational_example.py (1)
7-7: Import adjustment looks correctUsing
Agent, UserMessagefromflo_ai.models.agentis consistent with the current model layout and the updated examples; no issues here.flo_ai/examples/document_processing_example.py (1)
16-16: String-basedUserMessageusage in document workflow is compatibleSwitching the import to
DocumentMessageContent, UserMessageand changing the second workflow input toUserMessage('process this document')aligns with theUserMessage.content: str | ...union andAriumBuilder’srun(inputs: List[str | Any], ...)signature. The first element carries theDocumentMessageContent, and the second is plain text; both fit the expected types.Also applies to: 276-293
flo_ai/tests/integration-tests/test_gemini_llm_real.py (1)
325-328: Optional parameter typing matches tool metadataAnnotating
optional_paramasstr | None = Nonealigns the function signature with therequired: Falsemetadata in theToolparameters and with similar tests for other LLMs. This should help ensureformat_tool_for_llmcorrectly treats the argument as optional.flo_ai/tests/unit-tests/test_llm_router.py (1)
268-271: Decorator test return value is now consistent with its typeHaving the decorated
test_routerreturn'researcher'matches itsLiteral['researcher', 'analyst']annotation and keeps the example realistic, even though the test currently only validates that the decorator produces a callable.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
flo_ai/tests/unit-tests/test_openai_vllm.py (1)
127-136: Misleading comment needs update.The assertion refactor is good and consistent with the first block. However, the comment on line 127 states "Test without API key" but the code still provides
api_key='test-key-123'. Either update the comment to accurately describe what's being tested or adjust the test to actually test the scenario without an API key.Consider updating the comment:
- # Test without API key + # Test with default base_urlOr if the intent is to test without an API key, update the test accordingly.
flo_ai/tests/unit-tests/test_anthropic_llm.py (1)
257-279: Test doesn't match its name and lacks kwargs coverage.The test is named
test_anthropic_generate_with_kwargsbut doesn't actually test any kwargs being passed to thegenerate()method. It's functionally identical totest_anthropic_generate_basic(lines 78-110). According to the implementation,kwargslikemax_tokensshould be supported and default to 1024, but this behavior is now completely untested.Apply this diff to either remove the redundant test or make it actually test kwargs:
@pytest.mark.asyncio async def test_anthropic_generate_with_kwargs(self): - """Test generate method with additional kwargs.""" - llm = Anthropic(model='claude-3-5-sonnet-20240620') + """Test generate method with additional kwargs like max_tokens.""" + llm = Anthropic(model='claude-3-5-sonnet-20240620', max_tokens=2048) # Mock the client response mock_content = Mock() mock_content.text = 'Response with kwargs' mock_content.type = 'text' mock_usage = Mock() mock_usage.input_tokens = 5 mock_usage.output_tokens = 4 mock_response = Mock() mock_response.usage = mock_usage mock_response.content = [mock_content] llm.client = Mock() llm.client.messages.create = AsyncMock(return_value=mock_response) messages = [{'role': 'user', 'content': 'Hello'}] await llm.generate(messages) + + # Verify kwargs are passed through + call_args = llm.client.messages.create.call_args[1] + assert call_args['max_tokens'] == 2048flo_ai/flo_ai/llm/base_llm.py (1)
43-58: Improve type annotation to reflect support for object responses.

The type hint `response: Dict[str, Any]` is inaccurate: the method explicitly supports both dict and object-style responses via `hasattr` and `isinstance` checks. Consider loosening to `response: Any` (or `Mapping[str, Any] | Any`) to avoid confusing static type analyzers.

The current implementation already correctly handles both shapes:

- Object responses with a `.function_call` attribute (lines 47–57)
- Dict responses with a `'function_call'` key (lines 59–67)

Updating the annotation will improve type clarity without requiring logic changes.
584-591: PlanExecuteRouter has a stub routing prompt that causes silent degradation.
`get_routing_prompt()` currently returns an empty string instead of generating meaningful routing guidance. This will cause the LLM to be called with a blank prompt, so any workflows relying on `plan_execute` routers will continue but without the plan-and-execute logic they need.

The commented-out implementation (lines 593–616) shows this was intentionally disabled and includes helper methods for planning and execution phases. Either restore and adapt this logic for the new `execution_context` parameter, or raise `NotImplementedError` to fail loudly instead of silently.
542-553: Fix invalid use oflist[str](...)in the function_registry error message.
list[str]is a typing alias, not a callable; calling it will raise aTypeError. On the error path wherefunction is None, this will prevent your intendedValueErrorfrom being raised.Replace the typed call with a plain
list(...):f'Function {function_name} not found in provided function_registry dictionary. ' - f'Available functions: {list[str](function_registry.keys()) if function_registry else []}. ' + f'Available functions: {list(function_registry.keys()) if function_registry else []}. ' f'Either provide the function in the function_registry parameter or add configuration fields.'
🧹 Nitpick comments (6)
flo_ai/tests/unit-tests/test_anthropic_llm.py (1)
64-75: Consider testing the kwargs filtering logic.

The test verifies that `api_key` and `base_url` are passed to the `AsyncAnthropic` constructor, but doesn't verify that the implementation correctly filters these keys from `**kwargs` to prevent duplication. According to the implementation, `filtered_kwargs` should exclude `api_key`, `base_url`, and `default_headers`.

Consider adding a test case that verifies the filtering behavior:

```python
def test_anthropic_kwargs_filtering(self, mock_async_anthropic):
    """Test that duplicate keys are filtered from kwargs."""
    mock_client = Mock()
    mock_async_anthropic.return_value = mock_client

    # Pass api_key both explicitly and in kwargs
    llm = Anthropic(
        api_key='explicit-key',
        base_url='https://test.com',
        custom_param='value',
    )

    call_kwargs = mock_async_anthropic.call_args[1]
    # Should have explicit params
    assert call_kwargs['api_key'] == 'explicit-key'
    assert call_kwargs['base_url'] == 'https://test.com'
    # Should have custom param from filtered_kwargs
    assert call_kwargs['custom_param'] == 'value'
```
77-81: Redundantcast()- the input is alreadyList[BaseMessage].The type annotation on line 77 already constrains
input_messagetoBaseMessage | List[BaseMessage]. Whenisinstance(input_message, list)is true, type checkers already narrow it toList[BaseMessage]. The cast is unnecessary.def add_to_history(self, input_message: BaseMessage | List[BaseMessage]): if isinstance(input_message, list): - self.conversation_history.extend(cast(List[BaseMessage], input_message)) + self.conversation_history.extend(input_message) else: self.conversation_history.append(input_message)flo_ai/examples/vllm_agent_usage.py (1)
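To illustrate why the cast is redundant, here is a minimal self-contained sketch with a stand-in `BaseMessage` (not the flo_ai class): after the `isinstance` check, both the runtime and a static checker already treat the value as a list.

```python
from typing import List, Union


class BaseMessage:
    """Stand-in for flo_ai's BaseMessage."""


history: List[BaseMessage] = []


def add_to_history(msg: Union[BaseMessage, List[BaseMessage]]) -> None:
    if isinstance(msg, list):
        # The isinstance check already narrows msg to List[BaseMessage];
        # wrapping it in cast(List[BaseMessage], msg) would be a no-op.
        history.extend(msg)
    else:
        history.append(msg)


add_to_history(BaseMessage())
add_to_history([BaseMessage(), BaseMessage()])
```

After both calls, `history` holds three messages and no `cast` import is needed.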
16-21: Base URL helper is correct; consider making it the single source of truth.The helper cleanly enforces presence of
VLLM_BASE_URLand the refactor to passbase_urlinto allOpenAIVLLMinstances is solid.For consistency and clearer error messaging, you might also:
- Re-read the env var inside
ensure_vllm_base_url()(instead of capturingvllm_base_urlat import), and- Use
ensure_vllm_base_url()inmain()when printing the server location, so the log line and runtime checks can’t diverge.Also applies to: 25-26, 34-35, 73-74, 85-86, 128-129, 141-142, 226-227, 238-239
flo_ai/examples/arium_examples.py (1)
3-6: Clarify input type wording in the docstring.The note about “list of
UserMessageinputs” is slightly misleading here: examples and the builder now acceptstrorList[BaseMessage](andUserMessageisn’t imported in this module). Consider rephrasing to “list of messages (e.g.,BaseMessageinstances)” to match the actual API.flo_ai/flo_ai/arium/arium.py (1)
53-58: Be aware of changed variable flow for string inputs and FunctionNodes.Two behavioral shifts worth double‑checking:
String inputs in
run()
- For
inputs: str, you now callresolve_variables(inputs, variables)before_extract_and_validate_variables.- This means missing-variable validation for string inputs may now rely entirely on
resolve_variablesrather than on the dedicated_extract_and_validate_variablespath.Variables into node executions
Agentnodes are always called withvariables={}.FunctionNodenodes are now always called withvariables=None.- Only
ForEachNodeand nestedAriumNodeexecutions receive the (normalized)variablesmapping.If any existing FunctionNode implementations depend on a non‑None
variablesdict, or if you rely on_extract_and_validate_variablesfor more user‑friendly errors on string inputs, it’s worth confirming these changes are intentional and covered by tests.Also applies to: 92-101, 149-159, 360-375, 469-474, 561-567
flo_ai/flo_ai/arium/llm_router.py (1)
947-955: Dynamic Literal annotation in create_llm_router is best-effort but a bit misleading.

For multiple options, using `Literal[option_names]` creates a literal of the tuple of names rather than a union of string literals. Since this only affects `__annotations__` and you're already `# type: ignore`-ing it, it's harmless at runtime but doesn't express the intended type.

If you want to keep runtime annotations simple and accurate, consider annotating the return type as `Awaitable[str]` instead of trying to synthesize a dynamic `Literal`:

```diff
-router_function.__annotations__ = {
-    'memory': MessageMemory,
-    'return': Awaitable[literal_type],
-}
+router_function.__annotations__ = {
+    'memory': MessageMemory,
+    'return': Awaitable[str],
+}
```

Also applies to: 957-967, 970-972
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
- flo_ai/examples/arium_examples.py (10 hunks)
- flo_ai/examples/tool_usage.py (5 hunks)
- flo_ai/examples/vllm_agent_usage.py (7 hunks)
- flo_ai/flo_ai/arium/arium.py (9 hunks)
- flo_ai/flo_ai/arium/builder.py (14 hunks)
- flo_ai/flo_ai/arium/llm_router.py (12 hunks)
- flo_ai/flo_ai/arium/nodes.py (6 hunks)
- flo_ai/flo_ai/builder/agent_builder.py (2 hunks)
- flo_ai/flo_ai/helpers/llm_factory.py (2 hunks)
- flo_ai/flo_ai/llm/anthropic_llm.py (2 hunks)
- flo_ai/flo_ai/llm/base_llm.py (3 hunks)
- flo_ai/flo_ai/llm/rootflo_llm.py (2 hunks)
- flo_ai/flo_ai/models/agent.py (8 hunks)
- flo_ai/flo_ai/models/base_agent.py (5 hunks)
- flo_ai/flo_ai/utils/variable_extractor.py (3 hunks)
- flo_ai/pyproject.toml (1 hunks)
- flo_ai/tests/unit-tests/test_anthropic_llm.py (3 hunks)
- flo_ai/tests/unit-tests/test_openai_llm.py (5 hunks)
- flo_ai/tests/unit-tests/test_openai_vllm.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- flo_ai/tests/unit-tests/test_openai_llm.py
- flo_ai/flo_ai/llm/anthropic_llm.py
- flo_ai/flo_ai/helpers/llm_factory.py
- flo_ai/flo_ai/builder/agent_builder.py
- flo_ai/flo_ai/llm/rootflo_llm.py
🧰 Additional context used
🧬 Code graph analysis (6)
flo_ai/tests/unit-tests/test_anthropic_llm.py (1)
- flo_ai/flo_ai/llm/anthropic_llm.py (1): Anthropic (18-285)
flo_ai/tests/unit-tests/test_openai_vllm.py (3)
- flo_ai/tests/unit-tests/test_base_llm.py (1): get_message_content (60-63)
- flo_ai/tests/unit-tests/test_llm_router.py (1): get_message_content (49-50)
- flo_ai/flo_ai/llm/openai_llm.py (1): get_message_content (152-157)
flo_ai/examples/vllm_agent_usage.py (2)
- flo_ai/flo_ai/builder/agent_builder.py (4): AgentBuilder (13-343), with_name (31-34), with_prompt (36-43), with_llm (45-52)
- flo_ai/flo_ai/llm/openai_vllm.py (1): OpenAIVLLM (6-103)
flo_ai/flo_ai/arium/nodes.py (2)
- flo_ai/flo_ai/arium/protocols.py (2): ExecutableNode (5-35), run (18-35)
- flo_ai/flo_ai/tool/base_tool.py (1): run (45-50)
flo_ai/examples/tool_usage.py (3)
- flo_ai/flo_ai/models/chat_message.py (1): UserMessage (65-70)
- flo_ai/flo_ai/models/agent.py (1): run (73-127)
- flo_ai/flo_ai/tool/base_tool.py (1): run (45-50)
flo_ai/flo_ai/arium/llm_router.py (1)
- flo_ai/flo_ai/arium/memory.py (5): MessageMemory (133-153), MessageMemoryItem (22-33), get (112-113), get (149-153), get (168-175)
🔇 Additional comments (25)
flo_ai/tests/unit-tests/test_openai_vllm.py (2)
120-124: LGTM - Improved assertion pattern.

The refactored assertion style is more flexible and follows best practices for mock testing. Separating the call count verification from kwargs inspection allows the test to focus on specific parameters without being brittle to additional constructor arguments.
302-302: LGTM - Appropriate use of type ignore.

The type ignore comment is correctly applied here to test runtime behavior with a string input, even though the method signature expects a dictionary. The implementation handles this case via `isinstance` checks, and this test verifies that behavior.

flo_ai/flo_ai/llm/base_llm.py (2)
10-20: Constructor typing for `api_key` and kwargs looks good.

The `api_key: Optional[str] = None` and explicit `temperature: float` align with the broader typing cleanup and keep the constructor flexible via `**kwargs`. No issues from a type or behavior standpoint.

119-122: `format_image_in_message` return type widened to `Any` is reasonable.

Keeping this method abstract while widening the return type to `Any` makes sense given that different LLM vendors may require different structured payloads for images. As long as concrete implementations document what they return, this change looks fine.

flo_ai/flo_ai/models/base_agent.py (2)
45-47: LGTM - Return type correctly updated to `List[BaseMessage]`.

The abstract method signature change aligns with the broader PR objective of standardizing agent outputs.

87-88: LGTM - Mutable default argument issue properly fixed.

The pattern `Optional[Dict[str, Any]] = None` with runtime initialization to `{}` correctly addresses the previous review feedback about mutable default arguments.
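The shared-mutable-default pitfall this fixes can be shown in a few lines (hypothetical functions, not flo_ai code):

```python
from typing import Any, Dict, Optional


def broken(item: str, acc: Dict[str, Any] = {}) -> Dict[str, Any]:
    # The default dict is created once at def time and shared across calls.
    acc[item] = True
    return acc


def fixed(item: str, acc: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    # A fresh dict is created on each call when the caller passes nothing.
    acc = acc if acc is not None else {}
    acc[item] = True
    return acc


leaky_first = broken('a')
leaky_second = broken('b')   # mutates the same shared dict
clean_first = fixed('a')
clean_second = fixed('b')    # independent dict per call
```

The broken version leaks state between calls (`leaky_second` ends up containing both keys); the `Optional[...] = None` pattern does not.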
110-136: LGTM - Clean Optional parameter handling with strict validation.The function now properly handles
Noneby defaulting to an empty dict and raises clear errors when referenced variables are missing. This is stricter than before but provides better debugging experience.flo_ai/flo_ai/arium/nodes.py (4)
37-44: LGTM - Mutable default argument properly fixed.The
Optional[Dict[str, Any]] = Nonepattern with(variables or {}).copy()correctly addresses the previous review comment.
136-136: Consistent with line 91-92 check - verifyAriumNodeinstanceof works.This
isinstance(self.execute_node, AriumNode)check is fine sinceAriumNodeis a concrete class, not a Protocol. However, the else branch at lines 152-162 has the same Protocolisinstanceissue.
170-218: NewFunctionNodeclass looks well-structured.The class correctly handles both sync and async functions, wraps results in
UserMessage, and follows theExecutableNodeinterface pattern.
90-92: TheExecutableNodeProtocol is already properly decorated with@runtime_checkable, so theisinstance()check at lines 90-92 is valid and will work correctly at runtime.Likely an incorrect or invalid review comment.
flo_ai/flo_ai/models/agent.py (4)
56-58: LGTM - Proper type coercion for system_prompt.

The `str(enhanced_prompt)` ensures the parent class always receives a string, handling both `str` and `AssistantMessage` inputs consistently.

204-206: LGTM - Proper fallback return path added.

This ensures `_run_conversational` always returns `List[BaseMessage]` even when exiting the retry loop without an explicit return.

230-231: LGTM - Properly addresses undefined variable issue from past review.

The initialization of `function_response` and `function_name` to `None`, combined with explicit checks before use in the fallback path (lines 436-448) and the early return when `tool_call_count == 0` (lines 408-410), ensures no `NameError` can occur.

Also applies to: 408-410, 435-448

378-383: LGTM - Defensive error handling for function name extraction.

The code now safely extracts `error_function_name` with proper fallbacks, preventing a potential `NameError` when logging tool failures.
19-20: LGTM - Correctly wrapsUserMessagein list.The examples now properly pass
[UserMessage(...)]which aligns with theAgent.runsignature expectingList[BaseMessage] | str. This addresses the past review comment.Also applies to: 47-48
78-79: LGTM - String input is valid per the API.Passing a plain string to
agent.run()is supported since the method signature isinputs: List[BaseMessage] | str. The method converts strings to[UserMessage(TextMessageContent(text=inputs))]internally.Also applies to: 118-119
flo_ai/examples/arium_examples.py (1)
53-54: Examples now align with the new Arium/Agent APIs.
- Passing explicit
llm=OpenAI(model='gpt-4o-mini')into allAgentinstances matches the updatedAgentconstructor.- Using simple string inputs in
.build_and_run(...)andarium.run(...)is consistent with the newAriumBuilder/Arium.runsignatures.- The FunctionNode helpers in example 6 correctly take
inputs: List[BaseMessage]plus optionalvariables/**kwargs, matching the new FunctionNode expectations.No functional issues stand out in these changes.
Also applies to: 63-68, 75-79, 115-116, 125-144, 181-182, 189-194, 203-204, 213-217, 223-224, 234-245, 285-294
flo_ai/flo_ai/arium/arium.py (1)
2-3: Memory typing and variable-default fixes look good.
- Switching the constructor to
memory: MessageMemoryand defaulting toMessageMemory()when falsy keeps runtime behavior safe and matches the new memory abstraction.- All places that previously used
variables: Dict[...] = {}now acceptOptional[Dict]and normalize withvariables = variables if variables is not None else {}, removing the shared-mutable-default pitfall._emit_event’s guard on bothcallbackandevents_filteris consistent with the run-time initialization ofevents_filter.No correctness issues spotted in these structural/type changes.
Also applies to: 25-29, 34-40, 53-58, 200-208, 360-375, 401-407, 419-420
flo_ai/flo_ai/arium/builder.py (3)
31-40: Unified node/memory typing in the builder looks consistent.
- Switching `_memory` to `Optional[MessageMemory]` and defaulting in `build()` matches the new `Arium` constructor.
- Using `AriumNodeType` for `_start_node`, `_end_nodes`, `add_foreach`, `start_with`, `end_with`, `add_edge`, and `connect` makes the builder's API consistent with the execution layer and simplifies router wiring.
- String-based node resolution in `add_foreach`, `start_with`, and `connect` searches across agents, function nodes, nested Ariums, and foreach nodes, which is compatible with the YAML workflow loader.

These structural/type changes look sound.

Also applies to: 47-50, 96-123, 131-146, 151-161, 192-231
243-263: build_and_run input and variable handling matches Arium, with a minor semantic note.
- `variables` now uses `Optional[Dict] = None` plus in-body normalization, fixing the mutable-default issue.
- `inputs` can be a `str` or a `list`; strings are resolved via `resolve_variables` and wrapped as `UserMessage`, and non-string `BaseMessage` instances are passed through unchanged.

This mirrors the Arium changes; just keep in mind that variables are resolved before they ever reach `_extract_and_validate_variables`, so missing-variable diagnostics now depend on `resolve_variables` behavior rather than the extractor for these code paths.
915-920: Agent creation from direct YAML config is now better typed and more robust.
- Coercing `role` to a string (or `''` when absent) avoids passing `None` or non-string values into `.with_role(...)`.
- Making `output_schema` optional and then passing `{}` when it's `None` matches `AgentBuilder.with_output_schema`'s expectation of a mapping and keeps the builder chain simple.

These adjustments improve resilience of YAML-driven agent creation.
flo_ai/flo_ai/arium/llm_router.py (3)
64-68: execution_context threading through routers is consistent and compatible with Arium.
BaseLLMRouter.route(...)and all concreteget_routing_prompt(...)methods now acceptexecution_context: Optional[dict], matching howArium._execute_graphinvokes routers.- The decorator-based routers and factory-generated routers (
create_llm_router,create_*_router) expose the same(memory: MessageMemory, execution_context: Optional[dict])callable shape, which keeps user-defined and LLM-based routers interchangeable.These signature changes correctly align the router API with the new execution-context support.
Also applies to: 104-107, 188-193, 313-318, 449-454, 585-589, 646-652, 773-778, 843-846, 1036-1038, 1061-1062, 1130-1137, 1158-1165, 1200-1207
449-471: ReflectionRouter prompt/context refinements look correct.

Filtering the conversation to string contents, limiting to the last 3 messages, and computing `suggested_next` only when `execution_context` is provided all make the reflection prompts more robust while safely handling the `None` context case.

No functional issues here.
Also applies to: 457-465, 478-501
1035-1043: llm_router decorator wrapper matches the new router call convention.

The async `wrapper(memory: MessageMemory, execution_context: Optional[dict] = None)` correctly forwards both arguments to `router_instance.route(...)`, and preserving `func.__annotations__` ensures user-declared return Literals still participate in static checks.

This change looks correct.
```python
# Only ExecutableNode types have a run method
if isinstance(self.execute_node, ExecutableNode):
    result = await self.execute_node.run(
        inputs=[item],
        variables=item_variables,
    )
else:
    # StartNode/EndNode don't have run methods
    raise TypeError(
        f'Node {self.execute_node.name} does not support execution'
    )
```
Same Protocol isinstance issue as line 91.
If ExecutableNode is not @runtime_checkable, this check will fail at runtime.
🤖 Prompt for AI Agents
In flo_ai/flo_ai/arium/nodes.py around lines 152 to 162, the runtime isinstance
check against ExecutableNode (a Protocol) can fail if ExecutableNode is not
marked @runtime_checkable; replace the isinstance check with a structural check
like hasattr(self.execute_node, "run") (matching the fix used at line 91) or
alternatively make the ExecutableNode Protocol @runtime_checkable and keep the
isinstance — ensure the code calls run only when the object actually exposes a
callable run attribute and raise the same TypeError otherwise.
```python
) -> List[BaseMessage]:
    """Run as a tool-using agent when tools are provided"""
    variables = variables or {}
    print('running with tools')
```
Remove debug print statement.
`print('running with tools')` appears to be leftover debug code that should be removed or converted to a logger call.
```diff
 ) -> List[BaseMessage]:
     """Run as a tool-using agent when tools are provided"""
     variables = variables or {}
-    print('running with tools')
+    logger.debug('Running agent with tools')
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In flo_ai/flo_ai/models/agent.py around line 212 there is a leftover debug
print: remove the print('running with tools') statement; if runtime visibility
is required replace it with a proper logger call (e.g., use the module/class
logger at appropriate level such as logger.debug or logger.info) ensuring
imports and logger instance are used consistently across the file.
```diff
         formatted_content = self.llm.format_image_in_message(input.content)  # type: ignore
         message_history.append(
             {'role': input.role, 'content': formatted_content}
         )

     elif input.content.type == 'document':
         # Format document message and add to history
         formatted_content = await self.llm.format_document_in_message(
-            input.content
+            input.content  # type: ignore
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for the method definitions in the codebase
rg -n "def format_image_in_message|def format_document_in_message" --type py -A 3Repository: rootflo/wavefront
Length of output: 3241
🏁 Script executed:
#!/bin/bash
# Find and examine the BaseLLM class definition
find . -name "*.py" -type f | xargs rg -l "class BaseLLM" | head -5

Repository: rootflo/wavefront
Length of output: 128
🏁 Script executed:
#!/bin/bash
# Read the base_agent.py file around lines 106-114 with more context
sed -n '95,125p' flo_ai/flo_ai/models/base_agent.py

Repository: rootflo/wavefront
Length of output: 1735
🏁 Script executed:
#!/bin/bash
# Get the full function signature to understand input types
sed -n '75,130p' flo_ai/flo_ai/models/base_agent.py

Repository: rootflo/wavefront
Length of output: 2893
🏁 Script executed:
#!/bin/bash
# Find the definitions of MediaMessageContent, ImageMessageContent, DocumentMessageContent
rg -n "class MediaMessageContent|class ImageMessageContent|class DocumentMessageContent" --type py -A 5Repository: rootflo/wavefront
Length of output: 1344
🏁 Script executed:
#!/bin/bash
# Check if these are TypedDicts or dataclasses
rg -B 5 "class MediaMessageContent|class ImageMessageContent|class DocumentMessageContent" --type py | head -40

Repository: rootflo/wavefront
Length of output: 1071
🏁 Script executed:
#!/bin/bash
# Check how input.content is actually typed when it comes from conversation_history
rg -n "content:" flo_ai/flo_ai/models/chat_message.py | head -20Repository: rootflo/wavefront
Length of output: 365
🏁 Script executed:
#!/bin/bash
# Look for BaseMessage class definition to understand the content type
rg -n "class BaseMessage|class UserMessage|class AssistantMessage" flo_ai/flo_ai/models/chat_message.py -A 3Repository: rootflo/wavefront
Length of output: 625
🏁 Script executed:
#!/bin/bash
# Check if there's a better pattern used elsewhere or if isinstance checks could work
rg -n "isinstance.*ImageMessageContent|isinstance.*DocumentMessageContent" --type pyRepository: rootflo/wavefront
Length of output: 43
Use isinstance() checks instead of type field comparisons to eliminate the need for # type: ignore.
The format_image_in_message and format_document_in_message methods have correct type hints in BaseLLM. The # type: ignore comments are necessary because the current code checks input.content.type == 'image' to distinguish between ImageMessageContent and DocumentMessageContent, but Python's type system cannot narrow a union type based on attribute value checks.
Replace the type field checks with isinstance(input.content, ImageMessageContent) and isinstance(input.content, DocumentMessageContent) to enable proper static type narrowing and eliminate the suppressions.
🤖 Prompt for AI Agents
In flo_ai/flo_ai/models/base_agent.py around lines 106 to 114, the code narrows
message content by checking input.content.type == 'image' / 'document' and uses
"# type: ignore"; replace those checks with isinstance(input.content,
ImageMessageContent) and isinstance(input.content, DocumentMessageContent)
respectively (add the necessary imports if missing), remove the "# type: ignore"
comments, and ensure the calls to self.llm.format_image_in_message and
self.llm.format_document_in_message accept the now statically narrowed types so
the type checker no longer requires suppression.
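The isinstance-based narrowing suggested above can be sketched with stand-in content classes (hypothetical, not the real flo_ai types): checking a `type` string field does not narrow a union for a static checker, while `isinstance` does.

```python
from dataclasses import dataclass
from typing import Union


@dataclass
class ImageContent:
    url: str
    type: str = 'image'


@dataclass
class DocumentContent:
    path: str
    type: str = 'document'


def describe(content: Union[ImageContent, DocumentContent]) -> str:
    # content.type == 'image' would not narrow the union for a static
    # checker (type is a plain str, not a Literal tag), so attribute
    # access would need `# type: ignore`. isinstance narrows cleanly.
    if isinstance(content, ImageContent):
        return f'image at {content.url}'
    return f'document at {content.path}'
```

For example, `describe(ImageContent(url='a'))` returns `'image at a'` with no suppression comments needed.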
* fix: add ty and fix types
* fix: type in flo_ai examples and tests
* chore: fix types and tests in flo_ai
Summary by CodeRabbit
Refactor
Bug Fixes
Chores