fix(flo-ai): for parser issue #244

Merged
vizsatiz merged 8 commits into develop from fix/parser-fix
Mar 16, 2026
Conversation

@vizsatiz
Member

@vizsatiz vizsatiz commented Mar 13, 2026

Summary by CodeRabbit

  • Bug Fixes

    • Improved handling of model responses that lack message content: safer empty fallbacks and fewer missing-output cases.
    • Function-call content is more reliably detected and surfaced in conversation history.
  • Refactor

    • Centralized response parsing to ensure consistent handling of assistant messages and function-call outputs.
  • Chores

    • Package version bumped to 1.1.3.

@coderabbitai

coderabbitai Bot commented Mar 13, 2026

Warning

Rate limit exceeded

@vizsatiz has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 23 minutes and 30 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b0dcab4b-cd5b-4ce1-bf13-b51df276b0f1

📥 Commits

Reviewing files that changed from the base of the PR and between a3dc2ec and 01f0543.

📒 Files selected for processing (1)
  • flo_ai/tests/unit-tests/test_openai_vllm.py
📝 Walkthrough

Walkthrough

Centralizes Agent response parsing with _handle_response_with_parser, refactors _run_conversational and _run_with_tools to call it, adjusts LLM get_message_content fallbacks and return types, and bumps package version to 1.1.3 with downstream dependency updates.

Changes

Cohort / File(s) Summary
Agent Response Handling
flo_ai/flo_ai/agent/agent.py
Adds _handle_response_with_parser(self, assistant_message: Optional[str], role: str, response: Dict[str, Any]) -> None and refactors _run_conversational and _run_with_tools to delegate final response parsing/adding to history to this method (unifies handling of assistant messages, function_call argument messages, and fallback).
LLM: empty-string fallbacks
flo_ai/flo_ai/llm/azure_openai_llm.py, flo_ai/flo_ai/llm/openai_llm.py
get_message_content now returns an empty string ('') when the response lacks content, instead of stringifying the entire response.
LLM: optional content / non-dict handling
flo_ai/flo_ai/llm/gemini_llm.py, flo_ai/flo_ai/llm/ollama_llm.py
get_message_content signatures widened to Optional[str] and implementations updated to coerce non-dict responses to str(response) while returning dict content when present.
Tests
flo_ai/tests/unit-tests/test_openai_llm.py, flo_ai/tests/unit-tests/test_openai_vllm.py
Updated expectations to match new get_message_content fallbacks/outputs (empty string or stringified inputs).
Packaging & downstream deps
flo_ai/pyproject.toml, flo_ai/setup.py, wavefront/server/modules/.../pyproject.toml
Bumps package version to 1.1.3 and updates downstream module dependency references from 1.1.1 to 1.1.3 in Wavefront module pyproject files.
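The walkthrough above names `_handle_response_with_parser(self, assistant_message, role, response)` as the new central entry point but does not show its body. The following is a minimal sketch of the fallback order it describes (assistant message, then function-call arguments serialized to a string, then an empty fallback); the `Agent` stand-in and `add_to_history` body here are assumptions for illustration, not the actual flo_ai implementation.

```python
import json
from typing import Any, Dict, Optional


class Agent:
    """Minimal stand-in for flo_ai's Agent; only history handling is sketched."""

    def __init__(self) -> None:
        self.history = []

    def add_to_history(self, role: str, content: str) -> None:
        self.history.append({'role': role, 'content': content})

    def _handle_response_with_parser(
        self,
        assistant_message: Optional[str],
        role: str,
        response: Dict[str, Any],
    ) -> None:
        # Prefer the plain assistant message when present.
        if assistant_message:
            self.add_to_history(role, assistant_message)
            return
        # Fall back to function-call arguments, serialized to a stable string
        # so history always stores str content (per the review comment below).
        function_call = response.get('function_call')
        if function_call and 'arguments' in function_call:
            self.add_to_history(
                role, json.dumps(function_call['arguments'], default=str)
            )
            return
        # Last resort: record an empty message rather than dropping the turn.
        self.add_to_history(role, '')
```

Both `_run_conversational` and `_run_with_tools` would then delegate to this one method instead of duplicating the fallback logic.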

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant Agent as Agent
    participant LLM as LLM
    participant Tools as Tools

    User->>Agent: send prompt/request
    Agent->>LLM: invoke LLM -> get raw response
    LLM-->>Agent: return response (assistant message or function_call)
    Agent->>Agent: _handle_response_with_parser(assistant_message, role, response)
    alt function_call present
        Agent->>Tools: invoke tool per function_call
        Tools-->>Agent: tool result
        Agent->>LLM: send tool result (follow-up)
        LLM-->>Agent: final response
        Agent->>Agent: _handle_response_with_parser(..., response)
    end
    Agent-->>User: append history / return final assistant message

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Poem

🐇 I hopped through threads of text and code,
One handler now keeps the message road,
If words are gone or a function calls,
I tuck the args into history walls,
A carrot-coded path where chats unload 🥕

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

Check name Status Explanation Resolution
Title check ⚠️ Warning The title 'fix(flo-ai): for parser issue' is vague and lacks specificity about what parser issue is being fixed. Replace with a specific, descriptive title that clearly identifies the parser issue being addressed, such as 'refactor: centralize response parsing in Agent class' or 'fix: standardize get_message_content return types across LLM providers'.
Docstring Coverage ⚠️ Warning Docstring coverage is 76.47% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (1 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
flo_ai/flo_ai/llm/gemini_llm.py (1)

208-212: Align return semantics with the new Optional[str] contract.

Line 211 still returns '' for missing content, so despite the Optional[str] signature this method effectively always returns a str. Consider returning None for missing content (or keeping -> str consistently across providers) to avoid provider-specific behavior drift.

Proposed contract-aligned refactor
 def get_message_content(self, response: Any) -> Optional[str]:
     """Extract message content from response"""
     if isinstance(response, dict):
-        return response.get('content', '')
-    return str(response)
+        content = response.get('content')
+        return str(content) if content is not None else None
+    return str(response) if response is not None else None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@flo_ai/flo_ai/llm/gemini_llm.py` around lines 208 - 212, The
get_message_content method currently returns an empty string for missing dict
'content', which conflicts with its Optional[str] return type; change the
function (get_message_content) to return None instead of '' when 'content' is
absent, keep the signature as Optional[str], update the docstring to state it
may return None, and scan callers to handle None (or convert to '' at callsite)
so provider behavior is consistent with the new contract.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@flo_ai/flo_ai/agent/agent.py`:
- Around line 137-140: The code writes possible_tool_message['arguments']
directly into AssistantMessage.content which may be a dict/object; change it to
serialize to a stable string (e.g.
json.dumps(possible_tool_message['arguments'], sort_keys=True, separators=(',',
':'), default=str)) before calling add_to_history so history always stores a
string; update the AssistantMessage creation in the agent (the block that calls
add_to_history with AssistantMessage) to pass the JSON string and import json
where needed.

In `@flo_ai/flo_ai/llm/openai_llm.py`:
- Around line 151-156: get_message_content currently returns Optional[str],
causing callers like agent.Agent (calls at .strip().upper()) and
arium.llm_router (calls at .strip().lower()) to crash; restore a non-null
guarantee by changing get_message_content in openai_llm (method
get_message_content) to return a str (never None) — coerce response types and
replace missing content with an empty string (e.g., if response is None or
content is None return ""), update the method signature to -> str, and make the
same signature/behavior change in BaseLLM.get_message_content and the other
implementations (azure_openai_llm, ollama_llm, gemini_llm) so LSP is preserved
and callers need no null checks.
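The inline comment above argues for restoring a non-null guarantee so callers like `.strip().upper()` in agent.Agent and `.strip().lower()` in arium.llm_router cannot crash. A minimal sketch of that contract, as a standalone function rather than the actual provider method (the coercion details are assumptions):

```python
from typing import Any


def get_message_content(response: Any) -> str:
    """Extract message content, always returning a str (never None),
    so callers can chain .strip()/.upper() without null checks."""
    if response is None:
        return ''
    if isinstance(response, dict):
        content = response.get('content')
        # Missing or null content becomes an empty string, not str(None).
        return '' if content is None else str(content)
    # Non-dict responses are coerced to their string form.
    return str(response)
```

With this shape, `get_message_content(response).strip().upper()` is safe for every input, which is the Liskov-substitution point the comment makes about keeping BaseLLM and all provider implementations aligned.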

---

Nitpick comments:
In `@flo_ai/flo_ai/llm/gemini_llm.py`:
- Around line 208-212: The get_message_content method currently returns an empty
string for missing dict 'content', which conflicts with its Optional[str] return
type; change the function (get_message_content) to return None instead of ''
when 'content' is absent, keep the signature as Optional[str], update the
docstring to state it may return None, and scan callers to handle None (or
convert to '' at callsite) so provider behavior is consistent with the new
contract.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 487b9550-0392-416b-b09f-fce8f9799120

📥 Commits

Reviewing files that changed from the base of the PR and between 45778ea and 4d630e0.

📒 Files selected for processing (5)
  • flo_ai/flo_ai/agent/agent.py
  • flo_ai/flo_ai/llm/azure_openai_llm.py
  • flo_ai/flo_ai/llm/gemini_llm.py
  • flo_ai/flo_ai/llm/ollama_llm.py
  • flo_ai/flo_ai/llm/openai_llm.py

Copy link
Copy Markdown

@coderabbitai coderabbitai Bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
flo_ai/setup.py (1)

22-22: ⚠️ Potential issue | 🟠 Major

Remove setup.py or sync install_requires with pyproject.toml dependencies.

The install_requires=[] is empty while pyproject.toml declares 19 required dependencies. Although the build backend is configured to use hatchling (which correctly reads from pyproject.toml), the presence of setup.py with empty dependencies creates an inconsistency. Users installing via legacy methods or if build tools fall back to setup.py would fail to install required packages. Either remove setup.py entirely (since hatchling handles all builds) or keep it synchronized with pyproject.toml dependencies.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@flo_ai/setup.py` at line 22, The setup.py currently defines
install_requires=[] which is inconsistent with pyproject.toml; either delete
setup.py entirely (since hatchling is the build backend) or update the
install_requires list in setup.py to match the dependency entries in
pyproject.toml so legacy installers get the same packages; locate the
install_requires assignment in setup.py and either remove the file or populate
install_requires with the 19 dependencies declared in pyproject.toml (keeping
versions identical).
🧹 Nitpick comments (1)
flo_ai/pyproject.toml (1)

71-72: Consider consolidating dev dependency sections.

There's overlap between [dependency-groups] dev (lines 38-50) and [tool.poetry.group.dev.dependencies] (lines 71-72). reportlab appears in both with slightly different version constraints (>=4.4.3,<5 vs ^4.4.4). If Poetry is no longer used for builds (hatchling is the build backend), this section could be removed to reduce confusion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@flo_ai/pyproject.toml` around lines 71 - 72, There are duplicate dev
dependency declarations for reportlab in the pyproject (the [dependency-groups]
dev block and the [tool.poetry.group.dev.dependencies] block) with conflicting
constraints; remove the redundant Poetry-specific group or reconcile the version
to a single source of truth—specifically either delete the entire
[tool.poetry.group.dev.dependencies] section (if hatchling is your build backend
and Poetry groups are unused) or update reportlab to a single constraint (e.g.,
align to ">=4.4.3,<5" or "^4.4.4") in the remaining dev group so only one
declaration of reportlab exists.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@flo_ai/setup.py`:
- Line 22: The setup.py currently defines install_requires=[] which is
inconsistent with pyproject.toml; either delete setup.py entirely (since
hatchling is the build backend) or update the install_requires list in setup.py
to match the dependency entries in pyproject.toml so legacy installers get the
same packages; locate the install_requires assignment in setup.py and either
remove the file or populate install_requires with the 19 dependencies declared
in pyproject.toml (keeping versions identical).

---

Nitpick comments:
In `@flo_ai/pyproject.toml`:
- Around line 71-72: There are duplicate dev dependency declarations for
reportlab in the pyproject (the [dependency-groups] dev block and the
[tool.poetry.group.dev.dependencies] block) with conflicting constraints; remove
the redundant Poetry-specific group or reconcile the version to a single source
of truth—specifically either delete the entire
[tool.poetry.group.dev.dependencies] section (if hatchling is your build backend
and Poetry groups are unused) or update reportlab to a single constraint (e.g.,
align to ">=4.4.3,<5" or "^4.4.4") in the remaining dev group so only one
declaration of reportlab exists.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 60617eaa-6ac1-476c-989c-fc1466b819c6

📥 Commits

Reviewing files that changed from the base of the PR and between 4d630e0 and 52259b5.

📒 Files selected for processing (2)
  • flo_ai/pyproject.toml
  • flo_ai/setup.py


@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@flo_ai/tests/unit-tests/test_openai_vllm.py`:
- Around line 297-299: The test fails because get_message_content currently
returns string inputs unchanged but the test passes a Python-repr dict string
(response = str({'content': 'Hello, world!'})) and expects a JSON-formatted
string; update the get_message_content function to detect when the input is a
string containing a Python dict/list representation and convert it to a
canonical JSON string (e.g., try json.loads first, and if that fails, safely
parse Python literals with ast.literal_eval and then json.dumps) so calling
llm.get_message_content(response) returns '{"content": "Hello, world!"}' for
that input.
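The comment above suggests a two-stage coercion: try `json.loads` first, then fall back to `ast.literal_eval` for Python-repr strings, and re-serialize with `json.dumps`. A hedged sketch of that idea as a helper (the function name `coerce_to_json_string` is hypothetical, not from the flo_ai codebase):

```python
import ast
import json
from typing import Any


def coerce_to_json_string(value: Any) -> str:
    """Normalize a value to a canonical JSON string where possible
    (hypothetical helper illustrating the suggested fix)."""
    if isinstance(value, str):
        # First, try to parse as JSON (double-quoted strings, true/false/null).
        try:
            return json.dumps(json.loads(value))
        except (json.JSONDecodeError, ValueError):
            pass
        # Next, try Python literals (e.g. str({'content': 'Hello, world!'})).
        try:
            return json.dumps(ast.literal_eval(value))
        except (ValueError, SyntaxError):
            # Plain text: return unchanged rather than guessing.
            return value
    # Non-string values are serialized directly.
    return json.dumps(value, default=str)
```

Whether the test's expectation is actually the right contract is debatable; an alternative is to fix the test to expect the input string unchanged rather than teaching `get_message_content` to re-parse reprs.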

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c4256e52-6da5-460b-8680-87a8bdc5bd67

📥 Commits

Reviewing files that changed from the base of the PR and between a735dcf and a3dc2ec.

📒 Files selected for processing (2)
  • flo_ai/tests/unit-tests/test_openai_llm.py
  • flo_ai/tests/unit-tests/test_openai_vllm.py

@vizsatiz vizsatiz merged commit 6baf88a into develop Mar 16, 2026
9 checks passed
@vizsatiz vizsatiz deleted the fix/parser-fix branch March 16, 2026 09:14
thomastomy5 pushed a commit that referenced this pull request Apr 27, 2026
* fix(flo-ai): for parser issue

* fix(flo-ai): version upgrade

* fix(wavefront): upgrade flo-ai to 1.1.3

* fix for flo-ai

* fix for flo-ai

* fix test flo-ai

* fix test flo-ai

* fix test flo-ai
