fix: memory access in arium - foreach node and arium node #167
Conversation
Walkthrough

Arium node execution now normalizes mixed node outputs via a new flattener and wraps resolved input strings as `UserMessage` instances.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Runner as Arium.run
    participant Node as Node (ForEach/Arium)
    participant Flattener as _flatten_results
    participant Agent as Agent.run
    Note right of Runner: resolve inputs → wrap strings<br/>as UserMessage(content=...)
    Runner->>Node: await node.run(resolved_inputs, variables=...)
    Node-->>Runner: List[MessageMemoryItem|BaseMessage]
    Runner->>Flattener: _flatten_results(sequence)
    Flattener-->>Runner: List[BaseMessage] (normalized)
    alt node is Agent
        Runner->>Agent: Agent.run(..., variables={})
        Agent-->>Runner: Agent result (normalized similarly)
    end
    Runner->>Runner: continue execution with normalized results
```
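The string-wrapping step in the diagram's note can be sketched as follows. The `UserMessage` and `resolve_variables` definitions below are simplified stand-ins for illustration, not flo_ai's actual implementations:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserMessage:
    # Stand-in for flo_ai's UserMessage (the real class lives in chat_message.py)
    content: str

def resolve_variables(text: str, variables: Optional[dict] = None) -> str:
    # Simplified stand-in: a no-op when variables is falsy, else {name} substitution
    if not variables:
        return text
    for key, value in variables.items():
        text = text.replace('{' + key + '}', str(value))
    return text

def normalize_input(inputs, variables: Optional[dict] = None):
    # Raw strings are resolved and wrapped so memory always sees a proper message
    if isinstance(inputs, str):
        return UserMessage(content=resolve_variables(inputs, variables))
    return inputs
```

Non-string inputs pass through untouched, so callers that already supply message objects are unaffected.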
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (3)
🧰 Additional context used

🧬 Code graph analysis (1)

flo_ai/flo_ai/arium/llm_router.py (1)
🔇 Additional comments (13)
Actionable comments posted: 3
🧹 Nitpick comments (1)
flo_ai/flo_ai/arium/arium.py (1)
474-485: Foreach/Arium node result normalization correctly flattens MessageMemoryItem

The new handling for `ForEachNode` and `AriumNode`:

- Awaits `node.run(...)` into `foreach_results` / `arium_result`.
- Maps each element to `item.result` when it is a `MessageMemoryItem`, otherwise leaves it as-is.
- Returns a plain `List[BaseMessage | str]` in both traced and non-traced paths.

This removes nested `MessageMemoryItem` layers and ensures downstream memory writes (`_execute_graph`'s `_add_to_memory`) store leaf results, which is exactly what the routers and other consumers expect.

You might consider extracting the normalization into a small helper (e.g., `_flatten_results(sequence)`) to deduplicate the foreach/arium logic, but that's purely a readability win, not required for correctness.

Also applies to: 488-496, 571-580, 582-588
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
flo_ai/flo_ai/arium/arium.py (3 hunks)
flo_ai/flo_ai/arium/llm_router.py (16 hunks)
flo_ai/flo_ai/models/agent.py (4 hunks)
flo_ai/pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
flo_ai/flo_ai/models/agent.py (1)
flo_ai/flo_ai/models/chat_message.py (2)
`SystemMessage` (56-61)
`TextMessageContent` (43-45)
flo_ai/flo_ai/arium/arium.py (4)
flo_ai/flo_ai/models/chat_message.py (2)
`UserMessage` (65-70)
`BaseMessage` (49-52)

flo_ai/flo_ai/utils/variable_extractor.py (1)

`resolve_variables` (110-139)

flo_ai/flo_ai/arium/memory.py (1)

`MessageMemoryItem` (22-35)

flo_ai/flo_ai/arium/nodes.py (4)

`run` (35-50)
`run` (98-111)
`run` (181-208)
`AriumNode` (12-50)
flo_ai/flo_ai/arium/llm_router.py (1)
flo_ai/flo_ai/arium/memory.py (7)
`ExecutionPlan` (53-104)
`StepStatus` (12-19)
`MessageMemory` (134-154)
`MessageMemoryItem` (22-35)
`get` (113-114)
`get` (150-154)
`get` (177-184)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (actions)
🔇 Additional comments (4)
flo_ai/pyproject.toml (1)
3-3: Version bump aligns with PR scope

Updating the project version to `1.1.0-rc3` is consistent with the internal changes; nothing else in this file needs adjustment.

flo_ai/flo_ai/models/agent.py (1)
223-224: SystemMessage usage in tool paths looks correct

In `_run_with_tools` and the final-answer path, `SystemMessage` is constructed with a plain string (`content=system_content` / constant string), matching the `SystemMessage` definition and intended usage.

Also applies to: 401-404
flo_ai/flo_ai/arium/arium.py (1)
53-53: String input normalization into UserMessage is sound

Wrapping raw `inputs: str` as `UserMessage(content=resolve_variables(inputs, variables))` cleanly reuses the variable resolver and ensures memory always sees a proper `UserMessage` as the first "input" node. Given `resolve_variables` is a no-op when `variables` is falsy, this is safe for both templated and plain inputs.

flo_ai/flo_ai/arium/llm_router.py (1)
834-852: Router factory and convenience API now consistently typed for MessageMemory

The migration of:

- `create_llm_router` to return `Callable[[MessageMemory], str]`,
- the decorated router wrapper to accept `memory: MessageMemory`,
- and all convenience creators (`create_research_analysis_router`, `create_main_critic_reflection_router`, `create_plan_execute_router`, `create_main_critic_flow_router`) to advertise `Callable[[MessageMemory], str]`

is consistent with the new `MessageMemory`-based architecture and with how `Arium._execute_graph` calls router functions.

Type annotations and doc examples (e.g., `def my_smart_router(memory: MessageMemory) -> Literal[...]`) now align with runtime behavior; no additional changes needed here.

Also applies to: 946-952, 985-986, 1039-1044, 1112-1119, 1140-1147, 1182-1189
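To illustrate the shape such a router takes, here is a hedged sketch of a `MessageMemory`-typed router. `MessageMemory`/`MessageMemoryItem` below are minimal stand-ins, and `route_by_last_node` is a hypothetical example, not part of flo_ai:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MessageMemoryItem:
    node: str
    result: object

class MessageMemory:
    # Minimal stand-in exposing the item list that routers iterate over
    def __init__(self, items: List[MessageMemoryItem]):
        self.items = items

    def get(self) -> List[MessageMemoryItem]:
        return self.items

def route_by_last_node(memory: MessageMemory, routes: Dict[str, str], default: str) -> str:
    """Pick the next node based on which node produced the latest memory item."""
    items = memory.get()
    if not items:
        return default
    return routes.get(items[-1].node, default)
```

The point of the typed signature is exactly this: the router receives the full `MessageMemory` and returns a plain route name string.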
Actionable comments posted: 1
♻️ Duplicate comments (1)
flo_ai/flo_ai/arium/llm_router.py (1)
192-196: Guard against `MessageMemoryItem.result` not having `.content` and centralize text extraction

All router prompts now derive text with expressions like:

- `f'{item.node}: {item.result.content}'` (SmartRouter)
- `latest_task = str(conversation[-1].result.content)` (TaskClassifierRouter)
- `'\n'.join([msg.result.content for msg in conversation[-3:]])` (ReflectionRouter, PlanExecuteRouter)
- `f'Message {i+1}: {msg.result.content}'` (ConversationAnalysisRouter)

But `MessageMemoryItem.result` is annotated as `BaseMessage | str` in `memory.py`, and upstream Arium logic can legitimately store plain strings as results. Even when `result` is a `BaseMessage`, its `.content` may be a `TextMessageContent` or other structured payload, so directly interpolating `.content` can either raise `AttributeError` (if `result` is `str`) or produce an unhelpful repr.

This is the same underlying concern raised in a previous review; with the broader MessageMemory adoption it's now more important to harden. I'd strongly recommend introducing a single helper to safely stringify a `MessageMemoryItem` and reuse it across all routers, for example:

```diff
 from flo_ai.arium.memory import (
     ExecutionPlan,
     StepStatus,
     MessageMemory,
     MessageMemoryItem,
 )
+from flo_ai.models.chat_message import BaseMessage, TextMessageContent
+
+
+def _memory_item_to_text(item: MessageMemoryItem) -> str:
+    """Extract readable text from a MessageMemoryItem.result."""
+    value = item.result
+    if isinstance(value, BaseMessage):
+        content = getattr(value, "content", "")
+        if isinstance(content, TextMessageContent):
+            return content.text
+        return str(content)
+    return str(value)
```

Then use it in all prompt builders, e.g.:
```diff
-        conversation_text = self._truncate_conversation_for_tokens(
-            [f'{item.node}: {item.result.content}' for item in conversation[-5:]]
-        )
+        conversation_text = self._truncate_conversation_for_tokens(
+            [f"{item.node}: {_memory_item_to_text(item)}" for item in conversation[-5:]]
+        )
```

```diff
-        latest_task = str(conversation[-1].result.content)
+        latest_task = _memory_item_to_text(conversation[-1])
```

```diff
-        conversation_text = '\n'.join(
-            [msg.result.content for msg in conversation[-3:]]
-        )
+        conversation_text = "\n".join(
+            _memory_item_to_text(msg) for msg in conversation[-3:]
+        )
```

```diff
-        conversation_text = '\n'.join(
-            [
-                f'Message {i+1}: {msg.result.content}'
-                for i, msg in enumerate(recent_messages)
-            ]
-        )
+        conversation_text = "\n".join(
+            f"Message {i+1}: {_memory_item_to_text(msg)}"
+            for i, msg in enumerate(recent_messages)
+        )
```
resultis always aBaseMessage”, it would be safer to tightenMessageMemoryItem.result’s type toBaseMessageso type checkers (and future maintainers) can rely on it.Because this touches all routing paths, a latent
strresult would turn into runtimeAttributeErrorin multiple routers, so I’d treat hardening here as fairly high priority.#!/bin/bash # Verify what types are actually being stored in MessageMemoryItem.result. echo "=== MessageMemoryItem initializations ===" rg -n "MessageMemoryItem\(" -S flo_ai | sed -n '1,120p' echo echo "=== Calls to MessageMemory.add ===" rg -n "\.add\(" flo_ai/flo_ai/arium -S -A2 -B2 echo echo "=== Direct assignments to .result (if any) ===" rg -n "\.result\s*=" flo_ai -S -A1 -B1Also applies to: 248-249, 317-322, 453-459, 583-589, 768-777
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
flo_ai/flo_ai/arium/arium.py (4 hunks)
flo_ai/flo_ai/arium/llm_router.py (17 hunks)
flo_ai/flo_ai/models/agent.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- flo_ai/flo_ai/arium/arium.py
🧰 Additional context used
🧬 Code graph analysis (2)
flo_ai/flo_ai/models/agent.py (1)
flo_ai/flo_ai/models/chat_message.py (1)
SystemMessage(56-61)
flo_ai/flo_ai/arium/llm_router.py (1)
flo_ai/flo_ai/arium/memory.py (7)
`ExecutionPlan` (53-104)
`StepStatus` (12-19)
`MessageMemory` (134-154)
`MessageMemoryItem` (22-35)
`get` (113-114)
`get` (150-154)
`get` (177-184)
🔇 Additional comments (1)
flo_ai/flo_ai/models/agent.py (1)
12-12: SystemMessage usage now matches its type and unifies system prompts

Importing `SystemMessage` and using it with plain string `content` for both the main system prompt and the final "provide a final answer" instruction is consistent with `SystemMessage`'s `content: str` definition and cleans up the previous `AssistantMessage` / `TextMessageContent` mismatch. No further changes needed here.

Also applies to: 143-143, 221-221, 400-402
Actionable comments posted: 1
🧹 Nitpick comments (1)
flo_ai/flo_ai/arium/llm_router.py (1)
248-274: Minor: Redundant `str()` calls in truncation logic.

The method now correctly accepts `List[str]`, but lines 260, 264, and 270 apply `str(msg)` to elements that are already strings. This is harmless but redundant.

Apply this diff to remove redundant conversions:
```diff
         if not messages:
             return ''

         # Start with the most recent message
         truncated_messages = [messages[-1]]
-        current_text = str(messages[-1])
+        current_text = messages[-1]

         # Add older messages if we have space
         for msg in reversed(messages[:-1]):
-            msg_text = str(msg)
+            msg_text = msg
             # Rough token estimation (4 chars per token is a common approximation)
             estimated_tokens = len(current_text + '\n' + msg_text) // 4
             if estimated_tokens <= max_tokens:
                 truncated_messages.insert(0, msg)
-                current_text = '\n'.join([str(m) for m in truncated_messages])
+                current_text = '\n'.join(truncated_messages)
             else:
                 break

-        return '\n'.join([str(msg) for msg in truncated_messages])
+        return '\n'.join(truncated_messages)
```
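For reference, a self-contained version of the truncation behavior after this cleanup, using the same 4-characters-per-token approximation. The standalone name `truncate_for_tokens` is assumed for illustration; the real method lives on the router class:

```python
from typing import List

def truncate_for_tokens(messages: List[str], max_tokens: int = 1000) -> str:
    """Keep the most recent messages that fit a rough token budget."""
    if not messages:
        return ''
    # Always keep the most recent message
    truncated_messages = [messages[-1]]
    current_text = messages[-1]
    # Walk older messages newest-first, stopping once the budget is exceeded
    for msg in reversed(messages[:-1]):
        # Rough token estimation (4 chars per token is a common approximation)
        estimated_tokens = len(current_text + '\n' + msg) // 4
        if estimated_tokens <= max_tokens:
            truncated_messages.insert(0, msg)
            current_text = '\n'.join(truncated_messages)
        else:
            break
    return '\n'.join(truncated_messages)
```

Because elements are already `str`, no `str(...)` coercion is needed anywhere in the loop.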
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
flo_ai/flo_ai/arium/llm_router.py (17 hunks)
flo_ai/flo_ai/arium/memory.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
flo_ai/flo_ai/arium/memory.py (1)
flo_ai/flo_ai/models/chat_message.py (1)
BaseMessage(49-52)
flo_ai/flo_ai/arium/llm_router.py (1)
flo_ai/flo_ai/arium/memory.py (7)
`ExecutionPlan` (51-102)
`StepStatus` (12-19)
`MessageMemory` (132-152)
`MessageMemoryItem` (22-33)
`get` (111-112)
`get` (148-152)
`get` (175-182)
🔇 Additional comments (5)
flo_ai/flo_ai/arium/memory.py (1)
23-26: LGTM! Type narrowing improves type safety.

Narrowing `MessageMemoryItem.result` from `BaseMessage | str` to `BaseMessage` aligns with your clarification that results are always stored as `BaseMessage` instances. This change:

- Eliminates potential `AttributeError` when accessing `.result.content` in routers
- Makes the code's contract explicit
- Improves static type checking
Based on learnings from previous review.
flo_ai/flo_ai/arium/llm_router.py (4)
9-16: LGTM! Imports updated for new memory model.

The addition of `Awaitable` and the switch to `MessageMemory` / `MessageMemoryItem` align with the refactored memory model and support proper async type annotations.
65-65: Signature update consistent with new memory model.

All router methods now explicitly require `MessageMemory` instead of the generic `BaseMemory`. This is a breaking change but aligns with the refactored memory architecture where routers access message-specific properties like `item.result.content`.
195-195: Access to `.result.content` now type-safe.

With `MessageMemoryItem.result` narrowed to `BaseMessage` (no longer `BaseMessage | str`), accessing `.content` throughout the routers is now safe. The previous concern about `AttributeError: 'str' object has no attribute 'content'` is resolved.
1043-1043: LGTM! Convenience functions properly typed.

The return type annotations for `create_research_analysis_router` and other convenience functions correctly declare `Callable[[MessageMemory, Optional[dict]], Awaitable[str]]`, properly reflecting their async nature.
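As an illustration of that shape, here is a minimal async router factory matching `Callable[[MessageMemory, Optional[dict]], Awaitable[str]]`. All names below (`MessageMemory` stand-in, `create_keyword_router`) are assumptions for this sketch, not flo_ai's real convenience creators:

```python
import asyncio
from typing import Awaitable, Callable, List, Optional

class MessageMemory:
    # Minimal stand-in for flo_ai's MessageMemory
    def __init__(self, texts: List[str]):
        self.texts = texts

def create_keyword_router(
    keyword: str, on_match: str, otherwise: str
) -> Callable[[MessageMemory, Optional[dict]], Awaitable[str]]:
    """Build an async router: returns on_match if the keyword appears in memory."""
    async def router(memory: MessageMemory, variables: Optional[dict] = None) -> str:
        haystack = ' '.join(memory.texts)
        return on_match if keyword in haystack else otherwise
    return router
```

Because the factory returns a coroutine function, callers must await the router (or run it via `asyncio.run`), which is exactly what the `Awaitable[str]` in the annotation conveys.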
* fix: memory access in arium - foreach node and arium node
* fix: resolve review comments
* fix: remove string type for result in message memory
* fix: change return type annotation of router function
Summary by CodeRabbit
Refactor
New Features
Tests
Chores