Conversation

JayaSurya-27 (Collaborator) commented May 28, 2025

Pull Request

Description

Describe the changes in this pull request:

  • What feature/bug does this PR address?
  • Provide any relevant links or screenshots.

Checklist

  • Code compiles correctly.
  • Created/updated tests.
  • Linting and formatting applied.
  • Documentation updated.

Related Issues

Closes #<issue_number>

Summary by CodeRabbit

  • New Features
    • Added support for selecting models when configuring evaluations.
    • Enabled flexible evaluation naming, allowing both predefined and custom names.
    • Integrated remote fetching of custom evaluation templates.
  • Improvements
    • Enhanced validation and error handling for evaluation configuration and mappings.
    • Improved type checking and stricter enforcement of required fields.
  • Bug Fixes
    • Ensured correct serialization and handling of evaluation names in all relevant areas.

…on handling. Updated eval_name type and adjusted related logic for consistency in mapping validation.
JayaSurya-27 requested a review from NVJKKartik May 28, 2025 16:36
JayaSurya-27 self-assigned this May 28, 2025

coderabbitai bot commented May 28, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

The updates introduce a new ModelChoices enum for model selection and expand the flexibility of evaluation naming in the EvalTag dataclass. The EvalTag logic is refactored to support stricter validation of config and mapping fields, and to integrate custom evaluation templates fetched remotely. Supporting functions and imports are added or adjusted accordingly.
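As a rough, self-contained sketch of the pattern described above (the enum members shown are placeholders, and the real fi_instrumentation definitions carry more fields and validation):

```python
# Minimal sketch of the described API shape; member names are hypothetical.
from dataclasses import dataclass, field
from enum import Enum
from typing import Union


class EvalName(Enum):
    TOXICITY = "toxicity"  # placeholder member, not the library's actual list


class ModelChoices(Enum):
    GPT_4O = "gpt-4o"  # placeholder member, not the library's actual list


@dataclass
class EvalTag:
    eval_name: Union[EvalName, str]  # enum for built-ins, plain str for custom evals
    model: ModelChoices = ModelChoices.GPT_4O
    config: dict = field(default_factory=dict)
    mapping: dict = field(default_factory=dict)

    def __post_init__(self):
        # Stricter validation, as described: require eval_name and check its type.
        if not self.eval_name:
            raise ValueError("eval_name is required")
        is_custom_eval = isinstance(self.eval_name, str)
        if not is_custom_eval and not isinstance(self.eval_name, EvalName):
            raise ValueError(
                f"eval_name must be an EvalName enum, got {type(self.eval_name)}"
            )
```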

Changes

| File(s) | Change Summary |
|---------|----------------|
| python/fi_instrumentation/fi_types.py | Added ModelChoices enum; refactored EvalName values; expanded and validated EvalTag fields; added helper validation methods; updated serialization and string methods. |
| python/fi_instrumentation/settings.py | Added get_custom_eval_template function for fetching custom eval templates; imported requests; updated typing imports. |
| python/fi_instrumentation/otel.py | Updated import to include EvalName; added logic to convert eval_name enums to strings before processing. |
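The otel.py conversion presumably reduces to something like this sketch (an assumed shape, not the actual diff):

```python
# Sketch: coerce EvalName enum members to their string values before
# downstream processing; custom eval names are already plain strings.
from fi_instrumentation.fi_types import EvalName

def normalize_eval_name(eval_name):
    return eval_name.value if isinstance(eval_name, EvalName) else eval_name
```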

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant EvalTag
    participant Settings
    participant RemoteService

    User->>EvalTag: Instantiate with eval_name, model, config, mapping
    EvalTag->>Settings: get_custom_eval_template(eval_name)
    Settings->>RemoteService: POST /custom_eval_template (with eval_name)
    RemoteService-->>Settings: Return template or error
    Settings-->>EvalTag: Return template/config
    EvalTag->>EvalTag: Validate eval_name, config, mapping
    EvalTag-->>User: EvalTag instance ready or error raised

Poem

In fields of code where models dwell,
A rabbit hops with tales to tell—
Of enums new and configs tight,
Of custom evals fetched at night.
With stricter checks and names anew,
The code hops forward—swift and true!
🐇✨


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

JayaSurya-27 changed the base branch from main to dev May 28, 2025 16:36
…n EvalConfig and EvalMappingConfig to streamline evaluation types.
coderabbitai bot left a comment

Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently, it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.

Actionable comments posted: 4

🧹 Nitpick comments (3)
python/fi_instrumentation/settings.py (2)

118-118: Remove extra blank line.

Remove the extra blank line for consistency with the codebase style:

-    
     url = f"{base_url}/tracer/custom-eval-config/get_custom_eval_by_name/"

119-119: Consider making the API endpoint configurable.

The hardcoded path /tracer/custom-eval-config/get_custom_eval_by_name/ might need to be updated if the API changes. Consider making this configurable or defining it as a constant.
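As a sketch of the constant-extraction option (the function signature and request payload here are assumptions based on the walkthrough, not the PR's actual code):

```python
import requests

# Assumed module-level constant; keeps the path in one place if the API changes.
CUSTOM_EVAL_TEMPLATE_PATH = "/tracer/custom-eval-config/get_custom_eval_by_name/"

def get_custom_eval_template(eval_name: str, base_url: str) -> dict:
    url = f"{base_url}{CUSTOM_EVAL_TEMPLATE_PATH}"
    try:
        response = requests.post(url, json={"eval_name": eval_name}, timeout=10)
        response.raise_for_status()
        return response.json()
    except requests.RequestException as e:
        raise ValueError(f"Failed to check custom eval template: {e}") from e
```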

python/fi_instrumentation/fi_types.py (1)

982-986: Simplify nested if statements.

The nested if statements can be combined for better readability.

Apply this diff:

-        if not is_custom_eval:
-            if not isinstance(self.eval_name, EvalName):
-                raise ValueError(
-                    f"eval_name must be an EvalName enum, got {type(self.eval_name)}"
-                )
+        if not is_custom_eval and not isinstance(self.eval_name, EvalName):
+            raise ValueError(
+                f"eval_name must be an EvalName enum, got {type(self.eval_name)}"
+            )
🧰 Tools
🪛 Ruff (0.11.9)

982-983: Use a single if statement instead of nested if statements

Combine if statements using and

(SIM102)

🛑 Comments failed to post (4)
python/fi_instrumentation/settings.py (1)

135-136: ⚠️ Potential issue

Improve exception handling with explicit chaining.

The static analysis correctly identifies that exceptions should be chained to preserve the original traceback and distinguish them from errors in exception handling.

Additionally, consider catching more specific exceptions like requests.RequestException instead of the generic Exception.

Apply this diff to improve exception handling:

-    except Exception as e:
-        raise ValueError(f"Failed to check custom eval template: {e}")
+    except requests.RequestException as e:
+        raise ValueError(f"Failed to check custom eval template: {e}") from e
+    except Exception as e:
+        raise ValueError(f"Unexpected error checking custom eval template: {e}") from e
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    except requests.RequestException as e:
        raise ValueError(f"Failed to check custom eval template: {e}") from e
    except Exception as e:
        raise ValueError(f"Unexpected error checking custom eval template: {e}") from e
🧰 Tools
🪛 Ruff (0.11.9)

136-136: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🤖 Prompt for AI Agents
In python/fi_instrumentation/settings.py around lines 135 to 136, replace the
generic Exception catch with a more specific exception such as
requests.RequestException, and use explicit exception chaining by adding "from
e" when raising the new ValueError to preserve the original traceback and
improve error clarity.
python/fi_instrumentation/fi_types.py (3)

921-924: 🛠️ Refactor suggestion

Consider the performance impact of network calls in __post_init__.

Making a network call to get_custom_eval_template during object initialization could cause performance issues and unexpected failures. Consider:

  1. Caching the results to avoid repeated API calls
  2. Making this validation optional or lazy
  3. Handling network failures gracefully
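One possible shape for the caching option (an assumption, not part of this PR) is a memoized wrapper, so repeated EvalTag construction does not re-hit the network:

```python
from functools import lru_cache

from fi_instrumentation.settings import get_custom_eval_template

@lru_cache(maxsize=128)
def get_custom_eval_template_cached(eval_name: str) -> dict:
    # First call per eval_name hits the network; later calls reuse the result.
    return get_custom_eval_template(eval_name)
```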
🤖 Prompt for AI Agents
In python/fi_instrumentation/fi_types.py around lines 921 to 924, the
__post_init__ method makes a network call to get_custom_eval_template which can
degrade performance and cause failures during object initialization. Refactor
this by implementing caching to store and reuse results of
get_custom_eval_template, or defer the call to a lazy evaluation method that
runs only when needed. Additionally, add error handling to gracefully manage
network failures without crashing the initialization process.

977-980: ⚠️ Potential issue

Fix f-string without placeholders.

The f-string on line 979 doesn't contain any placeholders.

Apply this diff:

-        if not self.eval_name: 
-            raise ValueError(
-                f"eval_name must be an Present."
-            )
+        if not self.eval_name: 
+            raise ValueError("eval_name must be Present.")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

        if not self.eval_name:
            raise ValueError("eval_name must be Present.")
🧰 Tools
🪛 Ruff (0.11.9)

979-979: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
In python/fi_instrumentation/fi_types.py around lines 977 to 980, the f-string
used in the ValueError message does not contain any placeholders and is
unnecessary. Replace the f-string with a regular string literal by removing the
leading 'f' before the quotes in the raise statement.

908-911: ⚠️ Potential issue

Fix f-string without placeholders.

The f-string on line 910 doesn't contain any placeholders.

Apply this diff:

-        if not self.eval_name: 
-            raise ValueError(
-                f"eval_name is required"
-            )
+        if not self.eval_name: 
+            raise ValueError("eval_name is required")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

        if not self.eval_name:
            raise ValueError("eval_name is required")
🧰 Tools
🪛 Ruff (0.11.9)

910-910: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
In python/fi_instrumentation/fi_types.py around lines 908 to 911, the f-string
used in the ValueError message does not contain any placeholders, making the
f-string unnecessary. Replace the f-string with a regular string literal by
removing the leading 'f' before the quotes in the raise statement.

JayaSurya-27 merged commit 7e067b4 into dev May 28, 2025
1 check passed
nik13 deleted the bugfix/custom-evals-prototype branch July 1, 2025 19:33