
Conversation

@NVJKKartik NVJKKartik (Contributor) commented May 6, 2025

Pull Request

Description

Describe the changes in this pull request:

  • What feature/bug does this PR address?
    Added a tracing feature for Protect calls from the Future AGI SDK.

Notion Ticket Link : notion_ticket

  • Provide any relevant links or screenshots.

Checklist

  • Code compiles correctly.
  • Created/updated tests.
  • Linting and formatting applied.
  • Documentation updated.

Related Issues

Closes #<issue_number>

Summary by CodeRabbit

  • New Features
    • Added tracing and telemetry for Guardrail protection calls across multiple frameworks, enabling detailed observability of input, rules, outputs, and errors.
    • Introduced example scripts demonstrating Guardrail integration and tracing with Anthropic Claude, Guardrails, and LangChain agents.
  • Enhancements
    • Expanded instrumentation to wrap Guardrail protection logic in Anthropic, Bedrock, Groq, Haystack, Instructor, LangChain, LiteLLM, LlamaIndex, OpenAI, and VertexAI frameworks.
    • Updated dependency requirements to include Guardrail support in relevant packages.
  • Bug Fixes
    • Improved argument serialization and error handling for Guardrail tracing in Guardrails instrumentation.

@NVJKKartik NVJKKartik requested a review from JayaSurya-27 May 6, 2025 10:06

coderabbitai bot commented May 6, 2025

Walkthrough

This update introduces a cross-framework OpenTelemetry tracing wrapper for the ProtectClient.protect method, enhancing observability of guardrail protection calls. The wrapper is integrated into multiple framework instrumentors (Anthropic, Bedrock, Groq, Haystack, Instructor, LangChain, LiteLLM, LlamaIndex, OpenAI, VertexAI), and new example scripts demonstrate guardrail tracing. Dependency requirements are updated to include "futureagi" where relevant.
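
Each instrumentor applies the same wrapping pattern; a simplified sketch is below (the GuardrailProtectWrapper import path is inferred from the file layout in this PR, so treat it as an assumption rather than the exact code):

from wrapt import wrap_function_wrapper
from fi.evals import ProtectClient
# import path assumed from python/fi_instrumentation/instrumentation/_protect_wrapper.py
from fi_instrumentation.instrumentation._protect_wrapper import GuardrailProtectWrapper

def _instrument_protect(instrumentor, tracer):
    # Keep a handle on the original method so _uninstrument can restore it later.
    instrumentor._original_protect = ProtectClient.protect
    wrap_function_wrapper(
        module="fi.evals",
        name="ProtectClient.protect",
        wrapper=GuardrailProtectWrapper(tracer=tracer),
    )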

Changes

File(s) and change summary:

  • python/fi_instrumentation/instrumentation/_protect_wrapper.py
    New module introducing GuardrailProtectWrapper for tracing ProtectClient.protect calls, with serialization helpers and robust error handling.
  • python/frameworks/anthropic/examples/guardrail_example
    python/frameworks/guardrails/traceai_guardrails/guardrails_example.py
    New example scripts demonstrating guardrail tracing integration with Anthropic Claude API and Guardrails library, respectively.
  • python/frameworks/anthropic/traceai_anthropic/__init__.py
    python/frameworks/bedrock/traceai_bedrock/__init__.py
    python/frameworks/groq/traceai_groq/__init__.py
    python/frameworks/haystack/traceai_haystack/__init__.py
    python/frameworks/instructor/traceai_instructor/__init__.py
    python/frameworks/langchain/traceai_langchain/__init__.py
    python/frameworks/litellm/traceai_litellm/__init__.py
    python/frameworks/llama_index/traceai_llamaindex/__init__.py
    python/frameworks/openai/traceai_openai/__init__.py
    python/frameworks/vertexai/traceai_vertexai/__init__.py
    Instrumentation logic extended to wrap ProtectClient.protect with GuardrailProtectWrapper in each framework's instrumentor. Some classes add new attributes to store the original method.
  • python/frameworks/bedrock/traceai_bedrock/package.py
    python/frameworks/langchain/traceai_langchain/package.py
    python/frameworks/litellm/traceai_litellm/package.py
    python/frameworks/llama_index/traceai_llamaindex/package.py
    python/frameworks/openai/traceai_openai/package.py
    python/frameworks/vertexai/traceai_vertexai/package.py
    _instruments tuples updated to include "futureagi >= 0.0.1" as a new dependency.
  • python/frameworks/guardrails/traceai_guardrails/_wrap_guard_call.py
    Improved argument serialization: ensures kwargs are dicts before JSON conversion and adds tuple handling in _to_dict.
  • python/frameworks/langchain/examples/tool_calling_agent.py
    Instrumentation setup now returns an FITracer. Agent creation and main loop updated to include guardrail checks, protection logic, and span attribute recording for each query.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant App
    participant ProtectClient
    participant GuardrailProtectWrapper
    participant OpenTelemetry

    User->>App: Submit input/query
    App->>GuardrailProtectWrapper: Call protect(inputs, rules, ...)
    GuardrailProtectWrapper->>OpenTelemetry: Start "Guardrail.protect" span
    GuardrailProtectWrapper->>ProtectClient: Call original protect(inputs, rules, ...)
    ProtectClient-->>GuardrailProtectWrapper: Return protection result
    GuardrailProtectWrapper->>OpenTelemetry: Set span attributes (inputs, rules, result)
    GuardrailProtectWrapper-->>App: Return protection result
    App-->>User: Respond (safe or blocked)

Poem

In the warren of code where the guardrails hop,
Now every protect call gets a tracing stop!
With wrappers and spans, our observability grows,
Through frameworks and clients, the telemetry flows.
So when you ask, “Was my input okay?”
The rabbit replies: “Let’s check the trace today!”
🐇✨


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

🔭 Outside diff range comments (1)
python/frameworks/langchain/traceai_langchain/__init__.py (1)

6-6: 🛠️ Refactor suggestion

Remove the duplicate FITracer import to resolve Ruff F811 and avoid shadowing
Ruff correctly flags a re-definition of FITracer. The first import (line 6) is immediately shadowed by the second one (lines 41-42). Keeping both is confusing and may lead to subtle issues when type-checking or hot-reloading.

-from fi_instrumentation import FITracer, TraceConfig
+from fi_instrumentation import TraceConfig

The later, more specific import path should stay because it references the concrete implementation used below.

Also applies to: 39-42

🧰 Tools
🪛 Ruff (0.8.2)

6-6: fi_instrumentation.FITracer imported but unused

(F401)

🧹 Nitpick comments (13)
python/frameworks/openai/traceai_openai/package.py (1)

1-1: Dependency addition looks good, but consider version constraints.

Adding the "futureagi" dependency is appropriate for integrating the Protect tracing feature. However, relying on any version >= 0.0.1 could expose you to breaking changes in early releases.

Consider either:

  1. Pinning to a specific stable version: "futureagi==X.Y.Z"
  2. Using a bounded range: "futureagi >= 0.0.1, < 0.1.0"

This would provide more predictable behavior as the futureagi package evolves.
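
For example, the _instruments tuple here could read as follows (the openai pin shown is illustrative; only the futureagi bound is the point):

_instruments = ("openai >= 1.0.0", "futureagi >= 0.0.1, < 0.1.0")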

python/frameworks/guardrails/traceai_guardrails/guardrails_example.py (3)

1-7: Remove unused import

The os module is imported but not used in this file.

from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
-import os
from guardrails import Guard, OnFailAction
from traceai_guardrails import GuardrailsInstrumentor
from dotenv import load_dotenv
🧰 Tools
🪛 Ruff (0.8.2)

3-3: os imported but unused

Remove unused import: os

(F401)


21-22: Remove commented-out import

This commented-out import is redundant as these imports are already present in line 4.

-#from guardrails import Guard, OnFailAction
from guardrails.hub import CompetitorCheck, ToxicLanguage

34-39: Improve error handling with more descriptive messages

The current error handling simply prints the exception without context. Adding more descriptive messages would make the example more instructive.

try:
    guard.validate(
        """Shut the hell up! Apple just released a new iPhone."""
    )  # Both the guardrails fail
except Exception as e:
-    print(e)
+    print(f"Guard validation failed: {e}")
+    # Optionally, you could add more specific error handling by catching specific exception types
python/frameworks/anthropic/traceai_anthropic/__init__.py (3)

23-23: Add an explicit version constraint for the new dependency

_instruments lists "futureagi" without a minimum version, so any pre-release or breaking change could silently slip in and break the wrapper.
A very small tweak keeps you safe while still allowing upgrades when you want them.

-_instruments = ("anthropic >= 0.30.0", "futureagi")
+_instruments = ("anthropic >= 0.30.0", "futureagi >= 0.0.1")

92-97: Guard against double-wrapping ProtectClient.protect

If this instrumentor is applied twice (or after another framework has already wrapped the method), self._original_protect may capture an already-wrapped function and the wrapper chain will grow uncontrollably.

Consider unwrapping first or storing the real original with wrapt.FunctionWrapper.__wrapped__:

from wrapt import FunctionWrapper

orig = ProtectClient.protect
while isinstance(orig, FunctionWrapper):   # unwrap back to the real func
    orig = orig.__wrapped__
self._original_protect = orig

This keeps the wrapper depth at one and avoids tracing noise.


113-115: Clear saved references after un-instrumenting

After restoring the original method, set the attribute to None so the instance can be GC’d and to make accidental re-use obvious:

     if self._original_protect is not None:
         ProtectClient.protect = self._original_protect
+        self._original_protect = None
python/frameworks/anthropic/examples/guardrail_example (2)

10-12: Remove unused imports to keep Ruff & linters happy

TraceConfig and opentelemetry.trace are imported but never used.

-from fi_instrumentation.instrumentation.config import TraceConfig
-from opentelemetry import trace

52-60: Validate required FI credentials before calling the API

If any of the env vars are missing the call will silently pass None, very likely raising an auth error deep inside the SDK. Failing fast with a helpful message improves DX.

-    evaluator = EvalClient(fi_api_key=os.environ.get("FI_API_KEY"), fi_secret_key=os.environ.get("FI_SECRET_KEY"), fi_base_url=os.environ.get("FI_BASE_URL"))
+    fi_api_key = os.getenv("FI_API_KEY")
+    fi_secret_key = os.getenv("FI_SECRET_KEY")
+    fi_base_url = os.getenv("FI_BASE_URL")
+
+    if not all([fi_api_key, fi_secret_key]):
+        raise EnvironmentError(
+            "FI_API_KEY and FI_SECRET_KEY must be set in the environment."
+        )
+
+    evaluator = EvalClient(
+        fi_api_key=fi_api_key,
+        fi_secret_key=fi_secret_key,
+        fi_base_url=fi_base_url,
+    )
python/frameworks/langchain/examples/tool_calling_agent.py (2)

26-26: Drop unused opentelemetry.trace import

Ruff correctly flags this as F401.

-import opentelemetry.trace as trace_api
🧰 Tools
🪛 Ruff (0.8.2)

26-26: opentelemetry.trace imported but unused

Remove unused import: opentelemetry.trace

(F401)


300-313: Extract guard-rail rule definitions for readability & reuse

Keeping literal rule dictionaries inline makes the main flow noisy. Consider moving them to a module-level constant or JSON file:

-    rules = [
-    {
-        "metric": "Tone",
-        "contains": ["anger", "fear"],
-        "type": "any"
-    },
-    {
-        "metric": "Toxicity"
-    },
-    {
-        "metric": "Prompt Injection",
-    },
-]
+    rules = DEFAULT_GUARDRAIL_RULES  # imported from .rules or constants.py
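
A minimal constants module for the extracted rules (module and constant names are suggestions, not existing code) could look like:

# constants.py: shared guardrail rule definitions for the examples
DEFAULT_GUARDRAIL_RULES = [
    {"metric": "Tone", "contains": ["anger", "fear"], "type": "any"},
    {"metric": "Toxicity"},
    {"metric": "Prompt Injection"},
]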
python/fi_instrumentation/instrumentation/_protect_wrapper.py (2)

45-49: Generalise rules parameter type to avoid mypy / runtime issues
protect_rules can be any iterable of mappings, not necessarily List[Dict[str, Any]]. Narrow typing forces callers to cast.

-def _get_protect_rules(rules: List[Dict[str, Any]]) -> Iterator[Tuple[str, Any]]:
+from collections.abc import Sequence, Mapping
+
+def _get_protect_rules(rules: Sequence[Mapping[str, Any]]) -> Iterator[Tuple[str, Any]]:

118-126: Large kwargs are serialised verbatim → attribute size explosion risk
Serialising the entire kwargs dict into the RAW_INPUT span attribute can exceed common back-end limits (OTLP, Jaeger, etc.). Consider truncating or hashing oversized payloads.

Example patch (simplified):

-    _get_raw_input(kwargs),
+    _get_raw_input({k: v for k, v in kwargs.items() if k not in ("inputs", "protect_rules")}),
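
If truncation is preferred over dropping keys, a small helper along these lines (names and the size limit are illustrative, not part of the wrapper) keeps attributes bounded:

import json

_MAX_ATTR_LEN = 4096  # illustrative cap; tune to your collector's limits

def _truncate(value: str, limit: int = _MAX_ATTR_LEN) -> str:
    # Append a marker so truncated payloads are visible in the trace UI.
    return value if len(value) <= limit else value[:limit] + "...[truncated]"

# usage inside the wrapper: raw_input_value = _truncate(json.dumps(kwargs, default=str))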
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9ee4abb and f9f7989.

📒 Files selected for processing (21)
  • python/fi_instrumentation/instrumentation/_protect_wrapper.py (1 hunks)
  • python/frameworks/anthropic/examples/guardrail_example (1 hunks)
  • python/frameworks/anthropic/traceai_anthropic/__init__.py (6 hunks)
  • python/frameworks/bedrock/traceai_bedrock/__init__.py (2 hunks)
  • python/frameworks/bedrock/traceai_bedrock/package.py (1 hunks)
  • python/frameworks/groq/traceai_groq/__init__.py (2 hunks)
  • python/frameworks/guardrails/traceai_guardrails/_wrap_guard_call.py (2 hunks)
  • python/frameworks/guardrails/traceai_guardrails/guardrails_example.py (1 hunks)
  • python/frameworks/haystack/traceai_haystack/__init__.py (2 hunks)
  • python/frameworks/instructor/traceai_instructor/__init__.py (2 hunks)
  • python/frameworks/langchain/examples/tool_calling_agent.py (5 hunks)
  • python/frameworks/langchain/traceai_langchain/__init__.py (2 hunks)
  • python/frameworks/langchain/traceai_langchain/package.py (1 hunks)
  • python/frameworks/litellm/traceai_litellm/__init__.py (2 hunks)
  • python/frameworks/litellm/traceai_litellm/package.py (1 hunks)
  • python/frameworks/llama_index/traceai_llamaindex/__init__.py (2 hunks)
  • python/frameworks/llama_index/traceai_llamaindex/package.py (1 hunks)
  • python/frameworks/openai/traceai_openai/__init__.py (2 hunks)
  • python/frameworks/openai/traceai_openai/package.py (1 hunks)
  • python/frameworks/vertexai/traceai_vertexai/__init__.py (2 hunks)
  • python/frameworks/vertexai/traceai_vertexai/package.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (10)
python/frameworks/instructor/traceai_instructor/__init__.py (1)
python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)
  • GuardrailProtectWrapper (71-164)
python/frameworks/llama_index/traceai_llamaindex/__init__.py (1)
python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)
  • GuardrailProtectWrapper (71-164)
python/frameworks/bedrock/traceai_bedrock/__init__.py (1)
python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)
  • GuardrailProtectWrapper (71-164)
python/frameworks/vertexai/traceai_vertexai/__init__.py (1)
python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)
  • GuardrailProtectWrapper (71-164)
python/frameworks/litellm/traceai_litellm/__init__.py (1)
python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)
  • GuardrailProtectWrapper (71-164)
python/frameworks/haystack/traceai_haystack/__init__.py (1)
python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)
  • GuardrailProtectWrapper (71-164)
python/frameworks/anthropic/traceai_anthropic/__init__.py (1)
python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)
  • GuardrailProtectWrapper (71-164)
python/frameworks/groq/traceai_groq/__init__.py (1)
python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)
  • GuardrailProtectWrapper (71-164)
python/frameworks/langchain/examples/tool_calling_agent.py (1)
python/frameworks/langchain/traceai_langchain/__init__.py (1)
  • LangChainInstrumentor (23-91)
python/frameworks/langchain/traceai_langchain/__init__.py (1)
python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)
  • GuardrailProtectWrapper (71-164)
🪛 Ruff (0.8.2)
python/frameworks/guardrails/traceai_guardrails/guardrails_example.py

3-3: os imported but unused

Remove unused import: os

(F401)

python/frameworks/langchain/examples/tool_calling_agent.py

26-26: opentelemetry.trace imported but unused

Remove unused import: opentelemetry.trace

(F401)

python/frameworks/langchain/traceai_langchain/__init__.py

41-41: Redefinition of unused FITracer from line 6

(F811)

🔇 Additional comments (19)
python/frameworks/litellm/traceai_litellm/package.py (1)

1-1: Consistent dependency addition, same version consideration.

The addition of the futureagi dependency is consistent with other framework instrumentors, maintaining parity across the codebase.

As with the other package files, consider whether the broad version constraint >= 0.0.1 is appropriate for a package that may still be in early development.

python/frameworks/langchain/traceai_langchain/package.py (1)

1-1:


❓ Verification inconclusive

Consistent dependency addition, consider testing with LangChain.

The addition matches the pattern across all framework adaptors. This maintains consistency in the dependency structure.

Since LangChain is a popular framework, verify that the ProtectClient tracing works correctly in both synchronous and asynchronous LangChain workflows. Run the following to check for imports:


🏁 Script executed:

#!/bin/bash
# Look for example files that test LangChain with the futureagi Protect integration
fd -e py "langchain.*Protect" examples/

Length of output: 138


ProtectClient tracing untested for LangChain integration

File: python/frameworks/langchain/traceai_langchain/package.py
Lines: 1-1

Snippet showing the final state of code at these lines

_instruments = ("langchain_core >= 0.1.0", "futureagi >= 0.0.1")

I didn’t find any existing example or test files demonstrating ProtectClient tracing in LangChain workflows. Please add or locate tests to confirm correct instrumentation in both synchronous and asynchronous use cases:

  • Add example scripts or unit tests under python/frameworks/langchain/traceai_langchain/ or a top-level tests/ directory.
  • Verify that both sync and async client methods create tracing spans as expected.

You can search the codebase with:

fd -e py "langchain.*Protect" python/frameworks/langchain/traceai_langchain/ tests/
python/frameworks/bedrock/traceai_bedrock/package.py (1)

1-1:


❓ Verification inconclusive

Dependency addition is consistent with other frameworks.

The addition of "futureagi >= 0.0.1" maintains consistency with the changes to other package files in this PR.

Since this PR checklist didn't mark tests as completed, please ensure you've tested this integration with AWS Bedrock specifically, as it has unique integration patterns compared to the other frameworks.


🏁 Script executed:

#!/bin/bash
# Check if there are Bedrock-specific examples for testing the Protect integration
fd -e py "bedrock.*Protect" examples/

Length of output: 136


#!/bin/bash
# Search for any Bedrock-specific Protect integration tests or examples across the repo
rg -i "bedrock.*Protect" -n .

Approve dependency addition; ensure Bedrock Protect integration tests

  • The new dependency "futureagi >= 0.0.1" aligns with changes in other frameworks.
  • No Bedrock-specific Protect examples or tests were found in the repo—please confirm you’ve added and run integration tests covering AWS Bedrock’s Protect functionality before merging.
python/frameworks/llama_index/traceai_llamaindex/package.py (1)

1-1: Added futureagi dependency for protect tracing

The dependency addition aligns with the integration of tracing for the ProtectClient.protect method from the Future AGI SDK. This ensures proper functionality of the guardrail protection tracing feature.

python/frameworks/vertexai/traceai_vertexai/package.py (1)

1-1: Added futureagi dependency for protect tracing

The dependency addition aligns with the integration of tracing for the ProtectClient.protect method from the Future AGI SDK. This ensures proper functionality of the guardrail protection tracing feature within VertexAI.

python/frameworks/guardrails/traceai_guardrails/_wrap_guard_call.py (3)

89-92: Enhanced input safety check for kwargs serialization

The added validation ensures that only dictionary objects are unpacked, preventing potential runtime errors when processing kwargs for tracing.


96-96: Using validated kwargs_dict for unpacking

This change properly uses the validated kwargs_dict variable, ensuring safe dictionary unpacking.


124-125: Added tuple handling for proper serialization

This addition allows for proper recursive serialization of tuple data structures, which was previously missing. This enables tracing to correctly capture tuple values in span attributes.
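
Taken together, the two serialization changes described above amount to roughly the following (a behavioural sketch, not the actual _wrap_guard_call.py code):

import json

def _to_dict(value):
    # Recursively convert nested containers into JSON-serialisable types.
    if isinstance(value, dict):
        return {k: _to_dict(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):  # the tuple branch is the newly added case
        return [_to_dict(v) for v in value]
    return value

def _serialize_kwargs(kwargs):
    # Only unpack kwargs when it is actually a dict; otherwise fall back to an empty dict.
    kwargs_dict = kwargs if isinstance(kwargs, dict) else {}
    return json.dumps(_to_dict(kwargs_dict), default=str)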

python/frameworks/instructor/traceai_instructor/__init__.py (3)

10-11: Added imports for protect tracing integration

The imports enable the instrumentation of the ProtectClient.protect method using the GuardrailProtectWrapper.


13-13: Added futureagi dependency

This ensures the code has the necessary dependency on the Future AGI SDK for protect tracing to function properly.


50-55:


❌ Incorrect review comment

Added instrumentation for ProtectClient.protect

This change wraps the ProtectClient.protect method with GuardrailProtectWrapper to add tracing capabilities to guardrail protection calls. The implementation stores the original method and wraps it correctly.

You should consider adding corresponding uninstrumentation code to restore the original protect method. Run this to check if other instrumentors include uninstrumentation for the protect method:


🏁 Script executed:

#!/bin/bash
# Check if other instrumentors include uninstrumentation for the protect method
rg -A 3 "_original_protect" "python/frameworks/" | rg "_uninstrument"

Length of output: 65


No uninstrumentation required for ProtectClient.protect

I searched the codebase and didn’t find any existing uninstrumentation patterns—instrumentation is applied once at startup and persists for the process lifetime, matching all other wrappers. This change may be approved as-is; you can ignore the suggestion to add uninstrumentation here.

Likely an incorrect or invalid review comment.

python/frameworks/vertexai/traceai_vertexai/__init__.py (1)

11-12: Import statements for Guardrail protection tracing are properly placed.

The imports for GuardrailProtectWrapper and ProtectClient are correctly added to enable tracing of the protect method.

python/frameworks/bedrock/traceai_bedrock/__init__.py (1)

50-51: Import statements for Guardrail protection tracing are properly placed.

The imports for GuardrailProtectWrapper and ProtectClient are correctly added to enable tracing of the protect method.

python/frameworks/llama_index/traceai_llamaindex/__init__.py (1)

10-13: Import statements for Guardrail protection tracing are properly placed.

The imports for GuardrailProtectWrapper, ProtectClient, and wrap_function_wrapper are correctly added to enable tracing of the protect method.

python/frameworks/litellm/traceai_litellm/__init__.py (1)

42-44: Import statements for Guardrail protection tracing are properly placed.

The imports for wrap_function_wrapper, GuardrailProtectWrapper, and ProtectClient are correctly added to enable tracing of the protect method.

python/frameworks/groq/traceai_groq/__init__.py (1)

18-18: LGTM: Dependency added correctly

The "futureagi" dependency has been correctly added to the instrumentation dependencies.

python/frameworks/haystack/traceai_haystack/__init__.py (1)

22-22: LGTM: Dependency added correctly

The "futureagi" dependency has been correctly added to the instrumentation dependencies.

python/frameworks/langchain/examples/tool_calling_agent.py (1)

286-289: Confirm that FITracer implements the OpenTelemetry Tracer interface

start_as_current_span is invoked on FITracer, but some custom tracer wrappers only proxy a subset of the OT API. If the method is missing you’ll get a runtime AttributeError.

Please double-check (or unit-test) that:

isinstance(tracer, opentelemetry.trace.Tracer)  # or has attr start_as_current_span

and, if not, expose/forward the call in FITracer.
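
A quick check along these lines would catch a missing method early (setup_instrumentation is a hypothetical stand-in for however the example builds its FITracer):

import opentelemetry.trace as trace_api

def test_fitracer_exposes_span_api():
    tracer = setup_instrumentation()  # hypothetical helper returning the FITracer
    assert isinstance(tracer, trace_api.Tracer) or hasattr(tracer, "start_as_current_span")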

python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)

11-12: Avoid relying on a private OpenTelemetry constant
_SUPPRESS_INSTRUMENTATION_KEY is a private symbol; future OpenTelemetry releases may rename or relocate it, breaking this wrapper. Prefer the public helper:

from opentelemetry.instrumentation.instrumentor import _SUPPRESS_INSTRUMENTATION_KEY  # still private

—or better—use the helper in opentelemetry.instrumentation.utils, or check the currently-active span to determine suppression.
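
If the installed opentelemetry-instrumentation release exposes the public helper (availability depends on the pinned version, so verify before relying on it), the suppression check could look like:

from opentelemetry.instrumentation.utils import is_instrumentation_enabled  # if available

def __call__(self, wrapped, instance, args, kwargs):
    if not is_instrumentation_enabled():
        # Tracing suppressed; fall straight through to the real protect call.
        return wrapped(*args, **kwargs)
    ...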

Comment on lines +49 to +54
self._original_protect = ProtectClient.protect
wrap_function_wrapper(
    module="fi.evals",
    name="ProtectClient.protect",
    wrapper=GuardrailProtectWrapper(tracer),
)

⚠️ Potential issue

Instrumentation for ProtectClient.protect needs uninstrumentation support.

While the instrumentation correctly wraps the ProtectClient.protect method with the GuardrailProtectWrapper, the corresponding unwrapping logic is missing in the _uninstrument method. This creates an asymmetry where the instrumentation isn't properly reverted.

Add the following unwrapping code to the _uninstrument method:

 def _uninstrument(self, **kwargs: Any) -> None:
     self._status._IS_INSTRUMENTED = False
     import google.api_core.gapic_v1 as gapic

     for module in (gapic.method, gapic.method_async):
         if wrapped := getattr(module.wrap_method, "__wrapped__", None):
             setattr(module, "wrap_method", wrapped)
+    
+    if hasattr(self, "_original_protect"):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None

Comment on lines +424 to +429
self._original_protect = ProtectClient.protect
wrap_function_wrapper(
    module="fi.evals",
    name="ProtectClient.protect",
    wrapper=GuardrailProtectWrapper(tracer=self._tracer),
)

⚠️ Potential issue

Instrumentation for ProtectClient.protect needs uninstrumentation support.

While the instrumentation correctly wraps the ProtectClient.protect method with the GuardrailProtectWrapper, the corresponding unwrapping logic is missing in the _uninstrument method. This creates an asymmetry where the instrumentation isn't properly reverted.

Add the following unwrapping code to the _uninstrument method:

 def _uninstrument(self, **kwargs: Any) -> None:
     boto = import_module(_MODULE)
     boto.ClientCreator.create_client = self._original_client_creator
     self._original_client_creator = None
+    
+    if hasattr(self, "_original_protect"):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None

Comment on lines +88 to +93
self._original_protect = ProtectClient.protect
wrap_function_wrapper(
    module="fi.evals",
    name="ProtectClient.protect",
    wrapper=GuardrailProtectWrapper(tracer=self._tracer),
)

⚠️ Potential issue

Fix reference to undefined tracer attribute.

The code attempts to access self._tracer which is not defined as an instance attribute. In the _instrument method, the tracer is defined as a local variable at line 57, not stored in self._tracer.

Apply this fix to correctly reference the tracer:

 self._original_protect = ProtectClient.protect
 wrap_function_wrapper(
     module="fi.evals",
     name="ProtectClient.protect",
-    wrapper=GuardrailProtectWrapper(tracer=self._tracer),
+    wrapper=GuardrailProtectWrapper(tracer=tracer),
 )

Additionally, add unwrapping code to the _uninstrument method:

 def _uninstrument(self, **kwargs: Any) -> None:
     if self._use_legacy_callback_handler:
         import llama_index.core

         llama_index.core.global_handler = self._original_global_handler
         self._original_global_handler = None
     else:
         if self._event_handler is None:
             return
         from llama_index.core.instrumentation import get_dispatcher

         dispatcher = get_dispatcher()
         dispatcher.span_handlers[:] = filter(
             lambda h: not isinstance(h, type(self._span_handler)),
             dispatcher.span_handlers,
         )
         dispatcher.event_handlers[:] = filter(
             lambda h: not isinstance(h, type(self._event_handler)),
             dispatcher.event_handlers,
         )
         self._event_handler = None
+
+    if hasattr(self, "_original_protect"):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None

Comment on lines +272 to +277
self._original_protect = ProtectClient.protect
wrap_function_wrapper(
    module="fi.evals",
    name="ProtectClient.protect",
    wrapper=GuardrailProtectWrapper(tracer=self._tracer),
)

⚠️ Potential issue

Instrumentation for ProtectClient.protect needs uninstrumentation support.

The instrumentation correctly wraps the ProtectClient.protect method with the GuardrailProtectWrapper, but the corresponding unwrapping logic is missing in the _uninstrument method. This creates an asymmetry where the instrumentation isn't properly reverted.

Add the following unwrapping code to the _uninstrument method:

 def _uninstrument(self, **kwargs: Any) -> None:
     for (
         func_name,
         original_func,
     ) in LiteLLMInstrumentor.original_litellm_funcs.items():
         setattr(litellm, func_name, original_func)
     self.original_litellm_funcs.clear()
+    
+    if hasattr(self, "_original_protect"):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None

Comment on lines +58 to +63
self._original_protect = ProtectClient.protect
wrap_function_wrapper(
    module="fi.evals",
    name="ProtectClient.protect",
    wrapper=GuardrailProtectWrapper(tracer),
)

⚠️ Potential issue

Missing uninstrumentation for ProtectClient.protect

While the original ProtectClient.protect method is correctly saved to self._original_protect, there's no corresponding code in the _uninstrument method to restore it. This means the wrapper would remain in place even after uninstrumentation, potentially causing unexpected behavior.

Add this code to the _uninstrument method (around line 68):

    def _uninstrument(self, **kwargs: Any) -> None:
        openai = import_module(_MODULE)
        openai.OpenAI.request = self._original_request
        openai.AsyncOpenAI.request = self._original_async_request
+       if hasattr(self, "_original_protect"):
+           from fi.evals import ProtectClient
+           ProtectClient.protect = self._original_protect

Comment on lines +59 to +64
self._original_protect = ProtectClient.protect
wrap_function_wrapper(
    module="fi.evals",
    name="ProtectClient.protect",
    wrapper=GuardrailProtectWrapper(tracer=self._tracer),
)

⚠️ Potential issue

Missing uninstrumentation for ProtectClient.protect

The original ProtectClient.protect method is saved, but there's no code in the _uninstrument method to restore it when uninstrumentation is requested.

Add code to restore the original protect method in the _uninstrument method:

    def _uninstrument(self, **kwargs: Any) -> None:
        groq_module = import_module("groq.resources.chat.completions")
        if self._original_completions_create is not None:
            groq_module.Completions.create = self._original_completions_create
        if self._original_async_completions_create is not None:
            groq_module.AsyncCompletions.create = (
                self._original_async_completions_create
            )
+       if hasattr(self, "_original_protect"):
+           from fi.evals import ProtectClient
+           ProtectClient.protect = self._original_protect

Comment on lines +79 to +84
self._original_protect = ProtectClient.protect
wrap_function_wrapper(
    module="fi.evals",
    name="ProtectClient.protect",
    wrapper=GuardrailProtectWrapper(tracer=self._tracer),
)

⚠️ Potential issue

Missing uninstrumentation for ProtectClient.protect

The original ProtectClient.protect method is stored but not restored during uninstrumentation.

Add code to restore the original protect method in the _uninstrument method:

    def _uninstrument(self, **kwargs: Any) -> None:

        if self._original_pipeline_run is not None:
            haystack.Pipeline.run = self._original_pipeline_run

        if self._original_pipeline_run_component is not None:
            haystack.Pipeline._run_component = self._original_pipeline_run_component

        for (
            component_cls,
            original_run_method,
        ) in self._original_component_run_methods.items():
            component_cls.run = original_run_method
+            
+       if hasattr(self, "_original_protect"):
+           from fi.evals import ProtectClient
+           ProtectClient.protect = self._original_protect

Comment on lines 62 to 67
def _uninstrument(self, **kwargs: Any) -> None:
    langchain_core.callbacks.BaseCallbackManager.__init__ = self._original_callback_manager_init  # type: ignore
    self._original_callback_manager_init = None  # type: ignore
    self._tracer = None
    self.fi_tracer = None


🛠️ Refactor suggestion

Make _uninstrument idempotent & clean up attributes
After restoring the original methods, set self._original_callback_manager_init and self._original_protect to None. This prevents accidental reuse and makes subsequent instrument() / uninstrument() calls safe.

Comment on lines +54 to +59
self._original_protect = ProtectClient.protect
wrap_function_wrapper(
    module="fi.evals",
    name="ProtectClient.protect",
    wrapper=GuardrailProtectWrapper(self._tracer),
)

⚠️ Potential issue

ProtectClient.protect is wrapped but never unwrapped → instrumentation leak
self._original_protect is saved, but _uninstrument() does not restore it. After an un-instrument/re-instrument cycle, ProtectClient.protect will be wrapped multiple times, doubling span creation and slowing every protect call.

@@ def _uninstrument(self, **kwargs: Any) -> None:
         langchain_core.callbacks.BaseCallbackManager.__init__ = self._original_callback_manager_init  # type: ignore
         self._original_callback_manager_init = None  # type: ignore
+        # Restore ProtectClient.protect
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect  # type: ignore
+        self._original_protect = None  # type: ignore
         self._tracer = None
         self.fi_tracer = None

@JayaSurya-27 JayaSurya-27 changed the base branch from main to dev May 8, 2025 10:33
@NVJKKartik NVJKKartik requested a review from nik13 May 9, 2025 07:14
@nik13 nik13 merged commit 482258d into dev May 13, 2025
1 check passed
@nik13 nik13 deleted the feature/protect-in-observe branch May 13, 2025 05:57
@coderabbitai coderabbitai bot mentioned this pull request May 23, 2025