
Conversation

@sarthakFuture (Contributor) commented on May 28, 2025

Pull Request

Description

Describe the changes in this pull request:

  • What feature/bug does this PR address?
  • Provide any relevant links or screenshots.

Checklist

  • Code compiles correctly.
  • Created/updated tests.
  • Linting and formatting applied.
  • Documentation updated.

Related Issues

Closes #<issue_number>

Summary by CodeRabbit

  • New Features
    • Added support for custom evaluation templates and model selection.
    • Introduced validation for model identifiers and evaluation names.
  • Improvements
    • Enhanced validation logic for evaluation configuration and mapping fields.
    • Unified representation of evaluation names.
  • Bug Fixes
    • Ensured consistent handling of evaluation name fields during processing.
  • Chores
    • Added utility for fetching custom evaluation templates from a remote service.

@coderabbitai coderabbitai bot commented on May 28, 2025

Walkthrough

The changes introduce a new ModelChoices enum, update the EvalName enum to use snake_case strings, and enhance the EvalTag dataclass with a new optional model field and improved validation logic. Support for custom evaluation templates is added, including a function to fetch template configurations from a remote service.
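To make the shape of the new API concrete, here is a minimal usage sketch. The enum members, the model identifier, the mapping key, and the simplified field list are illustrative placeholders rather than values taken from this PR's diff; only the class and field names follow the walkthrough above.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, Optional, Union


class EvalName(Enum):
    # illustrative member; the real members live in fi_types.py
    TOXICITY = "toxicity"


class ModelChoices(Enum):
    # illustrative member; the real members live in fi_types.py
    GPT_4O_MINI = "gpt-4o-mini"


@dataclass
class EvalTag:
    # simplified stand-in for the real dataclass; only fields named in the walkthrough
    eval_name: Union[str, EvalName]
    config: Dict[str, object] = field(default_factory=dict)
    mapping: Dict[str, str] = field(default_factory=dict)
    model: Optional[Union[str, ModelChoices]] = None  # new optional field


# built-in eval referenced by enum member, model picked from ModelChoices
tag = EvalTag(eval_name=EvalName.TOXICITY, model=ModelChoices.GPT_4O_MINI)

# custom eval template referenced by its snake_case name as a plain string
custom_tag = EvalTag(eval_name="my_team_eval", mapping={"input": "prompt"})

print(tag, custom_tag)
```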

Changes

| File(s) | Change Summary |
|---------|----------------|
| python/fi_instrumentation/fi_types.py | Added `ModelChoices` enum, updated `EvalName` to snake_case, enhanced `EvalTag` with a `model` field, improved validation, and added support for custom eval templates. |
| python/fi_instrumentation/settings.py | Added `get_custom_eval_template` function to fetch custom eval templates from a remote service; imported `requests`. |
| python/fi_instrumentation/otel.py | Ensured `eval_name` in `EvalTag` is always a string before processing; updated import list. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant EvalTag
    participant Settings
    participant RemoteService

    User->>EvalTag: Instantiate with eval_name, config, mapping, model
    EvalTag->>Settings: get_custom_eval_template(eval_name)
    Settings->>RemoteService: POST /api/collector/v1/eval-template with eval_name
    RemoteService-->>Settings: Return template config/mapping
    Settings-->>EvalTag: Return template or empty dict
    EvalTag->>EvalTag: Validate eval_name, config, mapping, model
    EvalTag-->>User: EvalTag object ready
```

Possibly related PRs

  • future-agi/traceAI#25: Modifies the same enums and dataclass, and introduces similar validation and template-fetching logic.
  • future-agi/traceAI#26: Contains highly overlapping changes to enums, dataclass, validation, and the addition of the template-fetching function.

Suggested reviewers

  • JayaSurya-27

Poem

In fields of code, a model hops,
With enums fresh and snake_case drops.
EvalTags now know their role,
Custom templates make them whole.
Across the wire, configs fly—
A clever bunny, coding spry!
🐇✨

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (1)
python/fi_instrumentation/fi_types.py (1)

980-993: Fix f-string and simplify nested conditions.

Apply this diff to fix the issues:

```diff
     def validate_fagi_system_eval_name(self, is_custom_eval: bool) -> None:
 
         if not self.eval_name:
             raise ValueError(
-                f"eval_name must be an Present."
+                "eval_name must be Present."
             )
 
-        if not is_custom_eval:
-            if not isinstance(self.eval_name, EvalName):
+        if not is_custom_eval and not isinstance(self.eval_name, EvalName):
                 raise ValueError(
                     f"eval_name must be an EvalName enum, got {type(self.eval_name)}"
                 )
 
         return
```
🧰 Tools
🪛 Ruff (0.11.9)

984-984: f-string without any placeholders

Remove extraneous f prefix

(F541)


987-988: Use a single if statement instead of nested if statements

Combine if statements using and

(SIM102)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fe6a211 and 3fd211f.

📒 Files selected for processing (3)
  • python/fi_instrumentation/fi_types.py (7 hunks)
  • python/fi_instrumentation/otel.py (3 hunks)
  • python/fi_instrumentation/settings.py (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
python/fi_instrumentation/otel.py (1)
python/fi_instrumentation/fi_types.py (1)
  • EvalName (434-489)
python/fi_instrumentation/fi_types.py (1)
python/fi_instrumentation/settings.py (2)
  • get_env_collector_endpoint (15-16)
  • get_custom_eval_template (106-136)
🪛 Ruff (0.11.9)
python/fi_instrumentation/settings.py

136-136: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

python/fi_instrumentation/fi_types.py

902-902: f-string without any placeholders

Remove extraneous f prefix

(F541)


921-923: Use a single if statement instead of nested if statements

(SIM102)


984-984: f-string without any placeholders

Remove extraneous f prefix

(F541)


987-988: Use a single if statement instead of nested if statements

Combine if statements using and

(SIM102)

🔇 Additional comments (1)
python/fi_instrumentation/otel.py (1)

71-73: LGTM!

The conversion of eval_name from EvalName enum to its string value ensures consistent handling of evaluation names before processing.
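The lines in question are not quoted in this review, but the approved normalization is presumably along the lines of the following sketch (illustrative enum member; not the literal contents of otel.py lines 71-73):

```python
from enum import Enum


class EvalName(Enum):
    # illustrative member only; the real enum is defined in fi_types.py
    TOXICITY = "toxicity"


def normalize_eval_name(eval_name):
    """Return the string value whether an EvalName member or a plain string is passed."""
    return eval_name.value if isinstance(eval_name, EvalName) else eval_name


assert normalize_eval_name(EvalName.TOXICITY) == "toxicity"
assert normalize_eval_name("my_custom_eval") == "my_custom_eval"
```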

Comment on lines 106 to 136
```python
def get_custom_eval_template(
    eval_name: str, base_url: Optional[str] = None
) -> Dict[str, Any]:
    """
    Check if a custom eval template exists for a given eval name.
    """
    if not eval_name:
        raise ValueError("Eval name is required")

    if base_url is None:
        base_url = get_env_collector_endpoint()


    url = f"{base_url}/tracer/custom-eval-config/get_custom_eval_by_name/"

    try:
        headers = {
            "Content-Type" : "application/json",
            **(get_env_fi_auth_header() or {}),
        }

        response = requests.post(
            url,
            headers=headers,
            json={"eval_template_name": eval_name},
        )

        response.raise_for_status()
        return response.json().get("result", {})
    except Exception as e:
        raise ValueError(f"Failed to check custom eval template: {e}")
```

⚠️ Potential issue

Improve exception handling and fix formatting issues.

The function has a few issues to address:

  1. Use exception chaining (`raise ... from e`) to preserve the original exception context (see the short illustration after this list).
  2. Remove the unnecessary blank line.
  3. Fix the inconsistent spacing in the Content-Type header.
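As a quick standalone illustration of the first point (not code from this PR): without `from e`, Python reports the original error as having occurred during the handling of the new one, whereas chaining records it as the direct cause and keeps the full traceback intact.

```python
def fetch_template():
    # stand-in for the HTTP call in get_custom_eval_template; always fails here
    raise ConnectionError("connection refused")


try:
    try:
        fetch_template()
    except Exception as e:
        # chained re-raise: the ConnectionError is preserved as __cause__
        raise ValueError("Failed to check custom eval template") from e
except ValueError as err:
    print(repr(err.__cause__))  # ConnectionError('connection refused')
```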

Apply this diff to fix the issues:

```diff
 def get_custom_eval_template(
     eval_name: str, base_url: Optional[str] = None
 ) -> Dict[str, Any]:
     """
     Check if a custom eval template exists for a given eval name.
     """
     if not eval_name:
         raise ValueError("Eval name is required")
 
     if base_url is None:
         base_url = get_env_collector_endpoint()
-
 
     url = f"{base_url}/tracer/custom-eval-config/get_custom_eval_by_name/"
 
     try:
         headers = {
-            "Content-Type" : "application/json",
+            "Content-Type": "application/json",
             **(get_env_fi_auth_header() or {}),
         }
 
         response = requests.post(
             url,
             headers=headers,
             json={"eval_template_name": eval_name},
         )
 
         response.raise_for_status()
         return response.json().get("result", {})
     except Exception as e:
-        raise ValueError(f"Failed to check custom eval template: {e}")
+        raise ValueError(f"Failed to check custom eval template: {e}") from e
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```python
def get_custom_eval_template(
    eval_name: str, base_url: Optional[str] = None
) -> Dict[str, Any]:
    """
    Check if a custom eval template exists for a given eval name.
    """
    if not eval_name:
        raise ValueError("Eval name is required")

    if base_url is None:
        base_url = get_env_collector_endpoint()

    url = f"{base_url}/tracer/custom-eval-config/get_custom_eval_by_name/"

    try:
        headers = {
            "Content-Type": "application/json",
            **(get_env_fi_auth_header() or {}),
        }

        response = requests.post(
            url,
            headers=headers,
            json={"eval_template_name": eval_name},
        )

        response.raise_for_status()
        return response.json().get("result", {})
    except Exception as e:
        raise ValueError(f"Failed to check custom eval template: {e}") from e
```
🧰 Tools
🪛 Ruff (0.11.9)

136-136: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🤖 Prompt for AI Agents
In python/fi_instrumentation/settings.py between lines 106 and 136, improve
exception handling by using exception chaining with 'from e' when raising the
ValueError to preserve the original exception context. Remove the unnecessary
blank line after setting base_url. Fix the inconsistent spacing in the
Content-Type header by ensuring there is no extra space before the colon.

```diff
     type: EvalTagType
     value: EvalSpanKind
-    eval_name: EvalName
+    eval_name: str | EvalName
```

🛠️ Refactor suggestion

Use Union for Python <3.10 compatibility.

The pipe syntax for unions is only available in Python 3.10+. For broader compatibility, use Union from the typing module.

Add the import at the top of the file:

```diff
-from typing import Any, Dict, List, Type
+from typing import Any, Dict, List, Type, Union
```

Then update the type annotation:

```diff
-    eval_name: str | EvalName
+    eval_name: Union[str, EvalName]
```
🤖 Prompt for AI Agents
In python/fi_instrumentation/fi_types.py at line 879, replace the pipe syntax
used for the union type annotation with the Union type from the typing module
for compatibility with Python versions earlier than 3.10. Import Union from
typing at the top of the file and update the type annotation to use Union[str,
EvalName] instead of str | EvalName.

Comment on lines 900 to 936
```diff
         if not self.eval_name:
             raise ValueError(
-                f"eval_name must be an EvalName enum, got {type(self.eval_name)}"
+                f"eval_name is required"
             )
 
         if not self.custom_eval_name:
             self.custom_eval_name = self.eval_name if isinstance(self.eval_name, str) else self.eval_name.value
 
 
 
         eval_template = get_custom_eval_template(self.eval_name if isinstance(self.eval_name, str) else self.eval_name.value)
         is_custom_eval = eval_template.get('isUserEvalTemplate')
         custom_eval = eval_template.get('evalTemplate', {})
 
         self.validate_fagi_system_eval_name(is_custom_eval)
 
         if is_custom_eval:
             required_keys = custom_eval.get('config', {}).get('requiredKeys', [])
         else:
             required_keys = EvalMappingConfig.get_mapping_for_eval(self.eval_name).keys()
 
         if not is_custom_eval:
 
             if not isinstance(self.model, ModelChoices):
                 if (isinstance(self.model, str)):
                     valid_models = [model.value for model in ModelChoices]
                     if self.model not in valid_models:
                         raise ValueError(
                             f"model must be a valid model name, got {self.model}. Expected values are: {valid_models}"
                         )
                     else:
                         self.model = ModelChoices(self.model)
                 else:
                     raise ValueError(
                         f"model must be a of type ModelChoices, got {type(self.model)}"
                     )
```


🛠️ Refactor suggestion

Simplify validation logic and fix f-string issues.

Several improvements can be made to this validation logic:

  1. Remove unnecessary f-string prefixes where no placeholders are used
  2. Simplify nested if statements for better readability

Apply these improvements:

```diff
         if not self.eval_name:
             raise ValueError(
-                f"eval_name is required"
+                "eval_name is required"
             )
 
         if not self.custom_eval_name:
             self.custom_eval_name = self.eval_name if isinstance(self.eval_name, str) else self.eval_name.value
 
 
 
         eval_template = get_custom_eval_template(self.eval_name if isinstance(self.eval_name, str) else self.eval_name.value)
         is_custom_eval = eval_template.get('isUserEvalTemplate')
         custom_eval = eval_template.get('evalTemplate', {})
 
         self.validate_fagi_system_eval_name(is_custom_eval)
 
         if is_custom_eval:
             required_keys = custom_eval.get('config', {}).get('requiredKeys', [])
         else:
             required_keys = EvalMappingConfig.get_mapping_for_eval(self.eval_name).keys()
 
         if not is_custom_eval:
 
-            if not isinstance(self.model, ModelChoices):
-                if (isinstance(self.model, str)):
+            if not isinstance(self.model, ModelChoices) and isinstance(self.model, str):
                     valid_models = [model.value for model in ModelChoices]
                     if self.model not in valid_models:
                         raise ValueError(
                             f"model must be a valid model name, got {self.model}. Expected values are: {valid_models}"
                         )
                     else:
                         self.model = ModelChoices(self.model)
-                else:
+            elif not isinstance(self.model, ModelChoices):
                     raise ValueError(
-                        f"model must be a of type ModelChoices, got {type(self.model)}"
+                        f"model must be of type ModelChoices, got {type(self.model)}"
                     )
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```python
        if not self.eval_name:
            raise ValueError(
                "eval_name is required"
            )

        if not self.custom_eval_name:
            self.custom_eval_name = self.eval_name if isinstance(self.eval_name, str) else self.eval_name.value

        eval_template = get_custom_eval_template(self.eval_name if isinstance(self.eval_name, str) else self.eval_name.value)
        is_custom_eval = eval_template.get('isUserEvalTemplate')
        custom_eval = eval_template.get('evalTemplate', {})

        self.validate_fagi_system_eval_name(is_custom_eval)

        if is_custom_eval:
            required_keys = custom_eval.get('config', {}).get('requiredKeys', [])
        else:
            required_keys = EvalMappingConfig.get_mapping_for_eval(self.eval_name).keys()

        if not is_custom_eval:
            if not isinstance(self.model, ModelChoices) and isinstance(self.model, str):
                valid_models = [model.value for model in ModelChoices]
                if self.model not in valid_models:
                    raise ValueError(
                        f"model must be a valid model name, got {self.model}. Expected values are: {valid_models}"
                    )
                else:
                    self.model = ModelChoices(self.model)
            elif not isinstance(self.model, ModelChoices):
                raise ValueError(
                    f"model must be of type ModelChoices, got {type(self.model)}"
                )
```
🧰 Tools
🪛 Ruff (0.11.9)

902-902: f-string without any placeholders

Remove extraneous f prefix

(F541)


921-923: Use a single if statement instead of nested if statements

(SIM102)

🤖 Prompt for AI Agents
In python/fi_instrumentation/fi_types.py around lines 900 to 936, simplify the
validation logic by removing f-string prefixes from strings that do not contain
placeholders, and refactor nested if statements to reduce complexity and improve
readability. Specifically, replace f-strings without variables with regular
strings and combine or flatten nested conditionals where possible to make the
code clearer and easier to follow.

@JayaSurya-27 changed the base branch from main to dev on May 28, 2025 at 20:03
@JayaSurya-27 merged commit 1e22118 into dev on May 28, 2025
1 check passed
@coderabbitai coderabbitai bot mentioned this pull request on May 28, 2025
@nik13 deleted the feature/error-message branch on July 1, 2025 at 19:33