
Conversation

Contributor

@sarthakFuture sarthakFuture commented May 28, 2025

Pull Request

Description

Describe the changes in this pull request:

  • What feature/bug does this PR address?
  • Provide any relevant links or screenshots.

Checklist

  • Code compiles correctly.
  • Created/updated tests.
  • Linting and formatting applied.
  • Documentation updated.

Related Issues

Closes #<issue_number>

Summary by CodeRabbit

  • New Features

    • Introduced support for specifying model choices and custom evaluation templates.
    • Enhanced validation for evaluation configurations and mapping fields, with stricter checks for standard and custom evaluations.
  • Bug Fixes

    • Improved error handling when fetching custom evaluation templates from remote services.
  • Chores

    • Minor whitespace adjustments and code formatting improvements.


coderabbitai bot commented May 28, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

The update refactors the EvalTag dataclass to use string-based eval names and introduces a ModelChoices enum for model selection. It adds extensive validation for config and mapping fields, distinguishes between custom and standard evals, and adds a utility to fetch custom eval templates via HTTP. Minor formatting and import changes are included.
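For orientation, the new enum might be declared along these lines (a sketch only; the member names shown here are hypothetical, the actual model identifiers live in fi_types.py):

from enum import Enum

class ModelChoices(Enum):
    # Hypothetical member names for illustration; fi_types.py defines
    # the real model identifiers available for eval runs.
    TURING_LARGE = "turing_large"
    TURING_SMALL = "turing_small"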

Changes

File(s) | Change Summary
python/fi_instrumentation/fi_types.py | Refactored EvalTag to use string eval names, added ModelChoices enum, new validation logic, updated methods.
python/fi_instrumentation/settings.py | Added get_custom_eval_template function for remote template fetching; updated imports and minor formatting.
python/fi_instrumentation/otel.py | Added a trailing newline at end of file; no logic or control flow changes.
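
Putting the pieces together, instantiating the refactored EvalTag could look roughly like this (a minimal sketch; the field values are illustrative, and only eval_name, model, config, and mapping are confirmed by the diagram and diffs below):

from fi_instrumentation.fi_types import EvalTag, ModelChoices

tag = EvalTag(
    eval_name="toxicity",              # now a plain string rather than an EvalName member
    model=ModelChoices.TURING_LARGE,   # required for all non-custom evals (see review below)
    config={},                         # checked against the template's requiredKeys for custom evals
    mapping={"input": "raw.input"},    # mapping fields are validated in __post_init__
)

If eval_name matches a user-defined template on the server, get_custom_eval_template is called to fetch it and validation runs against that template instead of the standard eval checks.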

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant EvalTag
    participant Settings
    participant RemoteService

    User->>EvalTag: Instantiate EvalTag (with eval_name, config, mapping, etc.)
    EvalTag->>Settings: get_custom_eval_template(eval_name)
    Settings->>RemoteService: POST /custom_eval_template {eval_template_name}
    RemoteService-->>Settings: Respond with template or error
    Settings-->>EvalTag: Return template or raise error
    EvalTag->>EvalTag: Validate config and mapping (custom vs standard)
    EvalTag-->>User: EvalTag instance ready or error raised

Possibly related PRs

  • future-agi/traceAI#25: Both PRs modify the EvalTag dataclass in fi_types.py, changing eval_name to a string, adding ModelChoices, and extending validation logic for evals.

Suggested reviewers

  • JayaSurya-27
  • NVJKKartik

Poem

In fields of code where models choose,
Eval names now snake_case muse.
Configs and mappings checked with care,
Custom or not, validation's there.
A hop to fetch templates from afar,
This rabbit cheers—how thorough you are!
🐇✨


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@JayaSurya-27 JayaSurya-27 changed the base branch from main to dev May 28, 2025 17:10

@coderabbitai coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently, it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.

Actionable comments posted: 3

🧹 Nitpick comments (1)
python/fi_instrumentation/fi_types.py (1)

894-1017: Consider refactoring validation logic for better maintainability.

The validation logic in __post_init__ is comprehensive but quite complex. Consider extracting the custom eval template fetching and validation into separate methods for better readability and testability.

Example structure:

def __post_init__(self):
    # Initialize defaults
    self._initialize_defaults()
    
    # Fetch and validate eval template
    eval_template = self._fetch_eval_template()
    is_custom_eval = self._is_custom_eval(eval_template)
    
    # Validate based on eval type
    if is_custom_eval:
        self._validate_custom_eval(eval_template)
    else:
        self._validate_standard_eval()

def _fetch_eval_template(self) -> Dict[str, Any]:
    """Fetch eval template with proper error handling."""
    try:
        return get_custom_eval_template(self.eval_name)
    except Exception:
        return {}

def _is_custom_eval(self, eval_template: Dict[str, Any]) -> bool:
    """Determine if this is a custom eval."""
    return eval_template.get('isUserEvalTemplate', False)
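
The remaining two helpers could follow the same shape (a sketch only; it assumes requiredKeys are enforced as missing-key checks on config, which this PR's diff does not show):

def _validate_standard_eval(self) -> None:
    """Validate a built-in eval: known eval name plus a ModelChoices model."""
    if self.eval_name not in [e.value for e in EvalName]:
        raise ValueError(f"eval_name {self.eval_name} is not a valid eval name")
    if not isinstance(self.model, ModelChoices):
        raise ValueError("model must be present for all non-custom evals")

def _validate_custom_eval(self, eval_template: Dict[str, Any]) -> None:
    """Validate a custom eval's config against the template's requiredKeys."""
    required_keys = (
        eval_template.get('evalTemplate', {}).get('config', {}).get('requiredKeys', [])
    )
    # Assumed enforcement: every required key must appear in the user's config.
    missing = [key for key in required_keys if key not in (self.config or {})]
    if missing:
        raise ValueError(f"config is missing required keys: {missing}")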
🧰 Tools
🪛 Ruff (0.11.9)

910-910: f-string without any placeholders

Remove extraneous f prefix

(F541)


923-924: Use a single if statement instead of nested if statements

Combine if statements using and

(SIM102)


926-926: f-string without any placeholders

Remove extraneous f prefix

(F541)


977-977: f-string without any placeholders

Remove extraneous f prefix

(F541)

🛑 Comments failed to post (3)
python/fi_instrumentation/settings.py (1)

106-136: ⚠️ Potential issue

Improve exception handling and remove unnecessary empty line.

The function implementation looks good, but there are a few improvements to make:

  1. Use raise ... from e for better exception chaining
  2. Remove the unnecessary empty line at line 118

Apply this diff to fix the issues:

 def get_custom_eval_template(
    eval_name: str, base_url: Optional[str] = None
 ) -> Dict[str, Any]:
     """
     Check if a custom eval template exists for a given eval name.
     """
     if not eval_name:
         raise ValueError("Eval name is required")
     
     if base_url is None:
         base_url = get_env_collector_endpoint()
-
     
     url = f"{base_url}/tracer/custom-eval-config/get_custom_eval_by_name/"

     try:
         headers = {
             "Content-Type" : "application/json",
             **(get_env_fi_auth_header() or {}),
         }

         response = requests.post(
             url,
             headers=headers,
             json={"eval_template_name": eval_name},
         )

         response.raise_for_status()
         return response.json().get("result", {})
     except Exception as e:
-        raise ValueError(f"Failed to check custom eval template: {e}")
+        raise ValueError(f"Failed to check custom eval template: {e}") from e
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

def get_custom_eval_template(
    eval_name: str, base_url: Optional[str] = None
) -> Dict[str, Any]:
    """
    Check if a custom eval template exists for a given eval name.
    """
    if not eval_name:
        raise ValueError("Eval name is required")
    
    if base_url is None:
        base_url = get_env_collector_endpoint()
    url = f"{base_url}/tracer/custom-eval-config/get_custom_eval_by_name/"

    try:
        headers = {
            "Content-Type": "application/json",
            **(get_env_fi_auth_header() or {}),
        }

        response = requests.post(
            url,
            headers=headers,
            json={"eval_template_name": eval_name},
        )

        response.raise_for_status()
        return response.json().get("result", {})
    except Exception as e:
        raise ValueError(f"Failed to check custom eval template: {e}") from e
🧰 Tools
🪛 Ruff (0.11.9)

136-136: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🤖 Prompt for AI Agents
In python/fi_instrumentation/settings.py from lines 106 to 136, improve
exception handling by changing the raise statement in the except block to use
"raise ValueError(...) from e" for proper exception chaining. Also, remove the
unnecessary empty line at line 118 to clean up the code.
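
For context on the raise ... from e suggestion: chaining sets the original exception as __cause__, so the traceback shows both failures instead of hiding the underlying requests error. A standalone illustration, not code from this PR:

import requests

def fetch_template(url: str) -> dict:
    try:
        # Hypothetical payload for illustration only.
        response = requests.post(url, json={"eval_template_name": "toxicity"}, timeout=10)
        response.raise_for_status()
        return response.json()
    except Exception as e:
        # "from e" preserves the requests error as __cause__, so the traceback
        # reads "The above exception was the direct cause of the following ...".
        raise ValueError(f"Failed to check custom eval template: {e}") from e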
python/fi_instrumentation/fi_types.py (2)

973-988: ⚠️ Potential issue

Fix f-string formatting in validation method.

Remove unnecessary f-string prefix where no placeholder is used.

Apply this diff:

     def validate_fagi_system_eval_name(self, is_custom_eval: bool) -> None:

         if not self.eval_name: 
             raise ValueError(
-                f"eval_name must be an Present."
+                "eval_name must be present."
             )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    def validate_fagi_system_eval_name(self, is_custom_eval: bool) -> None:

        if not self.eval_name: 
            raise ValueError(
                "eval_name must be present."
            )
        
        if not is_custom_eval:
            eval_names = [e.value for e in EvalName]
            if self.eval_name not in eval_names:
                raise ValueError(
                    f"eval_name {self.eval_name} is not a valid eval name"
                )

        return
🧰 Tools
🪛 Ruff (0.11.9)

977-977: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
In python/fi_instrumentation/fi_types.py around lines 973 to 988, the f-string
in the ValueError message "eval_name must be an Present." is unnecessary since
it contains no placeholders. Remove the f-string prefix (the leading 'f') from
this string to correct the formatting.

908-928: ⚠️ Potential issue

Fix f-string formatting and improve error handling.

Several issues need to be addressed:

  1. Remove unnecessary f-string prefixes where no placeholders are used
  2. Combine nested if statements for better readability
  3. Add error handling for the API call

Apply this diff to fix the issues:

-        if not self.eval_name: 
+        if not self.eval_name:
             raise ValueError(
-                f"eval_name is required"
+                "eval_name is required"
             )
         
         if not self.custom_eval_name:
             self.custom_eval_name = self.eval_name
         
         
-        eval_template = get_custom_eval_template(self.eval_name)
-        is_custom_eval = eval_template.get('isUserEvalTemplate')
-        custom_eval = eval_template.get('evalTemplate', {})
-        if custom_eval:
-            required_keys = custom_eval.get('config', {}).get('requiredKeys', [])
+        try:
+            eval_template = get_custom_eval_template(self.eval_name)
+        except Exception as e:
+            # If the API call fails, assume it's not a custom eval
+            eval_template = {}
+        
+        is_custom_eval = eval_template.get('isUserEvalTemplate', False)
+        custom_eval = eval_template.get('evalTemplate', {})
+        required_keys = []
+        if custom_eval and 'config' in custom_eval:
+            required_keys = custom_eval.get('config', {}).get('requiredKeys', [])
         
-        if not is_custom_eval:
-            if not isinstance(self.model, ModelChoices):
-                raise ValueError(
-                    f"model must be a present for all non-custom evals"
-                )
+        if not is_custom_eval and not isinstance(self.model, ModelChoices):
+            raise ValueError(
+                "model must be present for all non-custom evals"
+            )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

        if not self.eval_name:
            raise ValueError(
                "eval_name is required"
            )
        
        if not self.custom_eval_name:
            self.custom_eval_name = self.eval_name
        
        
        try:
            eval_template = get_custom_eval_template(self.eval_name)
        except Exception as e:
            # If the API call fails, assume it's not a custom eval
            eval_template = {}
        
        is_custom_eval = eval_template.get('isUserEvalTemplate', False)
        custom_eval = eval_template.get('evalTemplate', {})
        required_keys = []
        if custom_eval and 'config' in custom_eval:
            required_keys = custom_eval.get('config', {}).get('requiredKeys', [])
        
        if not is_custom_eval and not isinstance(self.model, ModelChoices):
            raise ValueError(
                "model must be present for all non-custom evals"
            )
🧰 Tools
🪛 Ruff (0.11.9)

910-910: f-string without any placeholders

Remove extraneous f prefix

(F541)


923-924: Use a single if statement instead of nested if statements

Combine if statements using and

(SIM102)


926-926: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
In python/fi_instrumentation/fi_types.py around lines 908 to 928, remove the
f-string prefixes from error messages that do not contain placeholders to
simplify the code. Combine the nested if statements checking is_custom_eval and
the model type into a single conditional block to improve readability.
Additionally, wrap the call to get_custom_eval_template(self.eval_name) in a
try-except block so that a failed API call falls back to an empty template and
the eval is treated as non-custom, as shown in the suggested diff above.

@JayaSurya-27 JayaSurya-27 merged commit 16137ca into dev May 28, 2025
1 check passed
This was referenced May 28, 2025
Merged
@nik13 nik13 deleted the bugfix/model branch July 1, 2025 19:33