fix(flo-ai): for parser issue #244
Conversation
📝 Walkthrough
Centralizes Agent response parsing with `_handle_response_with_parser` (see the sequence diagram below).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant Agent as Agent
    participant LLM as LLM
    participant Tools as Tools
    User->>Agent: send prompt/request
    Agent->>LLM: invoke LLM -> get raw response
    LLM-->>Agent: return response (assistant message or function_call)
    Agent->>Agent: _handle_response_with_parser(assistant_message, role, response)
    alt function_call present
        Agent->>Tools: invoke tool per function_call
        Tools-->>Agent: tool result
        Agent->>LLM: send tool result (follow-up)
        LLM-->>Agent: final response
        Agent->>Agent: _handle_response_with_parser(..., response)
    end
    Agent-->>User: append history / return final assistant message
```
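To make the diagram concrete, here is a minimal sketch of the centralized path, assuming a toy agent; only the `_handle_response_with_parser` name and its `(assistant_message, role, response)` arguments come from this PR, everything else is a stand-in:

```python
from typing import Any, Callable, Optional

class MiniAgent:
    """Toy stand-in for flo_ai's Agent; only the parsing path is sketched."""

    def __init__(self, parser: Optional[Callable[[str], str]] = None):
        self.parser = parser           # hypothetical: the real Agent uses an output parser
        self.history: list[dict] = []  # hypothetical: the real Agent stores richer messages

    def _handle_response_with_parser(
        self, assistant_message: Optional[str], role: str, response: Any
    ) -> str:
        # Guard against None, since get_message_content may return
        # Optional[str] (see the review comments below).
        content = assistant_message or ''
        if self.parser is not None:
            content = self.parser(content)
        self.history.append({'role': role, 'content': content, 'raw': response})
        return content
```

Because both the direct path and the tool follow-up path end in this one method, a parser fix lands in a single place instead of being duplicated per branch.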
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 2
🧹 Nitpick comments (1)
flo_ai/flo_ai/llm/gemini_llm.py (1)
208-212: Align return semantics with the new `Optional[str]` contract.
Line 211 still returns `''` for missing content, so this method effectively behaves as `str` most of the time. Consider returning `None` for missing content (or keeping `-> str` consistently across providers) to avoid provider-specific behavior drift.
Proposed contract-aligned refactor:
```diff
 def get_message_content(self, response: Any) -> Optional[str]:
     """Extract message content from response"""
     if isinstance(response, dict):
-        return response.get('content', '')
-    return str(response)
+        content = response.get('content')
+        return str(content) if content is not None else None
+    return str(response) if response is not None else None
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@flo_ai/flo_ai/llm/gemini_llm.py` around lines 208 - 212, The get_message_content method currently returns an empty string for missing dict 'content', which conflicts with its Optional[str] return type; change the function (get_message_content) to return None instead of '' when 'content' is absent, keep the signature as Optional[str], update the docstring to state it may return None, and scan callers to handle None (or convert to '' at callsite) so provider behavior is consistent with the new contract.
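If the `Optional[str]` contract is kept instead, callers can normalize at the callsite; a minimal sketch, assuming routing code like the `.strip().upper()` call flagged in the comments below:

```python
from typing import Any, Optional

def route_decision(llm: Any, response: Any) -> str:
    # Normalize Optional[str] before chaining string methods, so a None
    # content can never crash .strip().upper().
    content: Optional[str] = llm.get_message_content(response)
    return (content or '').strip().upper()
```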
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@flo_ai/flo_ai/agent/agent.py`:
- Around line 137-140: The code writes possible_tool_message['arguments']
directly into AssistantMessage.content which may be a dict/object; change it to
serialize to a stable string (e.g.
json.dumps(possible_tool_message['arguments'], sort_keys=True, separators=(',',
':'), default=str)) before calling add_to_history so history always stores a
string; update the AssistantMessage creation in the agent (the block that calls
add_to_history with AssistantMessage) to pass the JSON string and import json
where needed.
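The serialization this asks for, sketched in isolation; `AssistantMessage` here is a stand-in dataclass, not flo_ai's real message type:

```python
import json
from dataclasses import dataclass

@dataclass
class AssistantMessage:  # stand-in for flo_ai's message class
    content: str

def tool_call_history_entry(possible_tool_message: dict) -> AssistantMessage:
    # Serialize dict/object arguments to a stable string before storing,
    # so history always holds str content.
    args_str = json.dumps(
        possible_tool_message['arguments'],
        sort_keys=True,
        separators=(',', ':'),
        default=str,
    )
    return AssistantMessage(content=args_str)
```

`sort_keys=True` plus fixed separators keeps the stored string stable across runs, and `default=str` covers argument values that aren't natively JSON-serializable.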
In `@flo_ai/flo_ai/llm/openai_llm.py`:
- Around line 151-156: get_message_content currently returns Optional[str],
causing callers like agent.Agent (calls at .strip().upper()) and
arium.llm_router (calls at .strip().lower()) to crash; restore a non-null
guarantee by changing get_message_content in openai_llm (method
get_message_content) to return a str (never None) — coerce response types and
replace missing content with an empty string (e.g., if response is None or
content is None return ""), update the method signature to -> str, and make the
same signature/behavior change in BaseLLM.get_message_content and the other
implementations (azure_openai_llm, ollama_llm, gemini_llm) so LSP is preserved
and callers need no null checks.
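A sketch of the non-null variant the comment proposes; the `-> str` signature is what the review asks for, the body is an assumption (shown as a free function rather than a method):

```python
from typing import Any

def get_message_content(response: Any) -> str:
    # Never return None: coerce missing content to '' so callers can chain
    # .strip()/.upper()/.lower() without null checks. In flo_ai this would
    # live on each provider class (openai, azure_openai, ollama, gemini).
    if response is None:
        return ''
    if isinstance(response, dict):
        content = response.get('content')
        return '' if content is None else str(content)
    return str(response)
```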
---
Nitpick comments:
In `@flo_ai/flo_ai/llm/gemini_llm.py`:
- Around line 208-212: The get_message_content method currently returns an empty
string for missing dict 'content', which conflicts with its Optional[str] return
type; change the function (get_message_content) to return None instead of ''
when 'content' is absent, keep the signature as Optional[str], update the
docstring to state it may return None, and scan callers to handle None (or
convert to '' at callsite) so provider behavior is consistent with the new
contract.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 487b9550-0392-416b-b09f-fce8f9799120
📒 Files selected for processing (5)
- flo_ai/flo_ai/agent/agent.py
- flo_ai/flo_ai/llm/azure_openai_llm.py
- flo_ai/flo_ai/llm/gemini_llm.py
- flo_ai/flo_ai/llm/ollama_llm.py
- flo_ai/flo_ai/llm/openai_llm.py
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
flo_ai/setup.py (1)
22-22: ⚠️ Potential issue | 🟠 Major
Remove setup.py or sync install_requires with pyproject.toml dependencies.
The `install_requires=[]` is empty while `pyproject.toml` declares 19 required dependencies. Although the build backend is configured to use `hatchling` (which correctly reads from `pyproject.toml`), the presence of `setup.py` with empty dependencies creates an inconsistency: users installing via legacy methods, or build tools that fall back to `setup.py`, would fail to install the required packages. Either remove `setup.py` entirely (since hatchling handles all builds) or keep it synchronized with `pyproject.toml` dependencies.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@flo_ai/setup.py` at line 22, The setup.py currently defines install_requires=[] which is inconsistent with pyproject.toml; either delete setup.py entirely (since hatchling is the build backend) or update the install_requires list in setup.py to match the dependency entries in pyproject.toml so legacy installers get the same packages; locate the install_requires assignment in setup.py and either remove the file or populate install_requires with the 19 dependencies declared in pyproject.toml (keeping versions identical).
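If `setup.py` is kept rather than deleted, one way to stay in sync is to read `pyproject.toml` at build time; a sketch assuming a PEP 621 `[project]` dependencies table (a Poetry-style layout would need a different lookup):

```python
# setup.py sketch: derive install_requires from pyproject.toml.
import tomllib  # Python 3.11+; older interpreters need the tomli backport

from setuptools import setup

with open('pyproject.toml', 'rb') as f:
    pyproject = tomllib.load(f)

setup(
    # Assumption: dependencies live under [project]; adjust for other layouts.
    install_requires=pyproject.get('project', {}).get('dependencies', []),
)
```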
🧹 Nitpick comments (1)
flo_ai/pyproject.toml (1)
71-72: Consider consolidating dev dependency sections.
There's overlap between `[dependency-groups] dev` (lines 38-50) and `[tool.poetry.group.dev.dependencies]` (lines 71-72): `reportlab` appears in both with slightly different version constraints (`>=4.4.3,<5` vs `^4.4.4`). If Poetry is no longer used for builds (hatchling is the build backend), the Poetry section could be removed to reduce confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@flo_ai/pyproject.toml` around lines 71 - 72, There are duplicate dev dependency declarations for reportlab in the pyproject (the [dependency-groups] dev block and the [tool.poetry.group.dev.dependencies] block) with conflicting constraints; remove the redundant Poetry-specific group or reconcile the version to a single source of truth—specifically either delete the entire [tool.poetry.group.dev.dependencies] section (if hatchling is your build backend and Poetry groups are unused) or update reportlab to a single constraint (e.g., align to ">=4.4.3,<5" or "^4.4.4") in the remaining dev group so only one declaration of reportlab exists.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@flo_ai/setup.py`:
- Line 22: The setup.py currently defines install_requires=[] which is
inconsistent with pyproject.toml; either delete setup.py entirely (since
hatchling is the build backend) or update the install_requires list in setup.py
to match the dependency entries in pyproject.toml so legacy installers get the
same packages; locate the install_requires assignment in setup.py and either
remove the file or populate install_requires with the 19 dependencies declared
in pyproject.toml (keeping versions identical).
---
Nitpick comments:
In `@flo_ai/pyproject.toml`:
- Around line 71-72: There are duplicate dev dependency declarations for
reportlab in the pyproject (the [dependency-groups] dev block and the
[tool.poetry.group.dev.dependencies] block) with conflicting constraints; remove
the redundant Poetry-specific group or reconcile the version to a single source
of truth—specifically either delete the entire
[tool.poetry.group.dev.dependencies] section (if hatchling is your build backend
and Poetry groups are unused) or update reportlab to a single constraint (e.g.,
align to ">=4.4.3,<5" or "^4.4.4") in the remaining dev group so only one
declaration of reportlab exists.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 60617eaa-6ac1-476c-989c-fc1466b819c6
📒 Files selected for processing (2)
- flo_ai/pyproject.toml
- flo_ai/setup.py
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@flo_ai/tests/unit-tests/test_openai_vllm.py`:
- Around line 297-299: The test fails because get_message_content currently
returns string inputs unchanged but the test passes a Python-repr dict string
(response = str({'content': 'Hello, world!'})) and expects a JSON-formatted
string; update the get_message_content function to detect when the input is a
string containing a Python dict/list representation and convert it to a
canonical JSON string (e.g., try json.loads first, and if that fails, safely
parse Python literals with ast.literal_eval and then json.dumps) so calling
llm.get_message_content(response) returns '{"content": "Hello, world!"}' for
that input.
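The parse-then-fallback the prompt describes, sketched as a standalone helper (the name `canonicalize` and its placement are assumptions; in flo_ai it would sit inside `get_message_content`):

```python
import ast
import json

def canonicalize(content: str) -> str:
    # Strict JSON first; fall back to Python-literal parsing for repr-style
    # strings; leave anything unparseable untouched.
    try:
        return json.dumps(json.loads(content))
    except ValueError:
        pass
    try:
        return json.dumps(ast.literal_eval(content), default=str)
    except (ValueError, SyntaxError):
        return content
```

With this, `canonicalize(str({'content': 'Hello, world!'}))` yields `'{"content": "Hello, world!"}'`, which is what the test asserts.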
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c4256e52-6da5-460b-8680-87a8bdc5bd67
📒 Files selected for processing (2)
- flo_ai/tests/unit-tests/test_openai_llm.py
- flo_ai/tests/unit-tests/test_openai_vllm.py
* fix(flo-ai): for parser issue
* fix(flo-ai): version upgrade
* fix(wavefront): upgrade flo-ai to 1.1.3
* fix for flo-ai
* fix for flo-ai
* fix test flo-ai
* fix test flo-ai
* fix test flo-ai
Summary by CodeRabbit
- Bug Fixes
- Refactor
- Chores