
Conversation

@JayaSurya-27 (Collaborator) commented May 19, 2025

Pull Request

Description

When some spans are never ended, they are not exported. This leaves our backend with missing spans and an incomplete span tree, so we now export all such spans when the process terminates.

Summary by CodeRabbit

  • New Features

    • Added tracing instrumentation for Guardrail protect operations across multiple frameworks, enabling detailed monitoring of input, output, and rule evaluations.
    • Introduced example scripts demonstrating Guardrail tracing integration with Anthropic Claude API and Guardrails library.
    • Enhanced tracing lifecycle management with improved shutdown handling and detection of unfinished spans.
  • Dependencies

    • Added support for the "futureagi" package in various framework integrations.
  • Bug Fixes

    • Improved serialization and handling of input types (including tuples) for tracing in Guardrails integration.
  • Documentation

    • Provided new example scripts showcasing Guardrail tracing usage in real-world scenarios.

@JayaSurya-27 JayaSurya-27 self-assigned this May 19, 2025
coderabbitai bot commented May 19, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

This update introduces tracing instrumentation for the ProtectClient.protect method from the futureagi package across multiple framework instrumentors. A new GuardrailProtectWrapper is implemented for detailed OpenTelemetry span creation, capturing guardrail input/output metadata and errors. Example scripts for guardrail integration and enhanced span management are also added, with new dependency requirements for futureagi in several packages.
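The wrapper pattern described above can be sketched as follows. This is a minimal, self-contained illustration of the wrapt-style `__call__(wrapped, instance, args, kwargs)` convention; the span and tracer classes here are stand-ins (the real module uses OpenTelemetry's `FITracer`), and `protect` is a hypothetical original method, not the actual `futureagi` API.

```python
import json
from contextlib import contextmanager

class _StandInSpan:
    """Minimal span stand-in; the real code records onto OpenTelemetry spans."""
    def __init__(self, name):
        self.name = name
        self.attributes = {}
        self.ended = False
    def set_attribute(self, key, value):
        self.attributes[key] = value
    def end(self):
        self.ended = True

class _StandInTracer:
    def __init__(self):
        self.spans = []
    @contextmanager
    def start_as_current_span(self, name):
        span = _StandInSpan(name)
        self.spans.append(span)
        try:
            yield span
        finally:
            span.end()  # span always ends, even on exceptions

class GuardrailProtectWrapper:
    """Wraps a protect() call, recording input/output metadata on a span."""
    def __init__(self, tracer):
        self._tracer = tracer
    def __call__(self, wrapped, instance, args, kwargs):
        with self._tracer.start_as_current_span("Guardrail.protect") as span:
            span.set_attribute("guardrail.input", json.dumps(kwargs.get("inputs")))
            result = wrapped(*args, **kwargs)
            span.set_attribute("guardrail.output", json.dumps(result))
            return result

# Hypothetical original method being wrapped.
def protect(inputs, rules):
    return {"status": "passed"}

tracer = _StandInTracer()
wrapper = GuardrailProtectWrapper(tracer)
result = wrapper(protect, None, (), {"inputs": "hello", "rules": ["pii"]})
```

In the real instrumentors, `wrap_function_wrapper(module="fi.evals", name="ProtectClient.protect", wrapper=...)` installs this callable so every `protect` invocation produces a span.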

Changes

File(s) Change Summary
python/fi_instrumentation/instrumentation/_protect_wrapper.py New module implementing GuardrailProtectWrapper for OpenTelemetry tracing of Guardrail operations, including serialization utilities and context-managed span creation.
python/fi_instrumentation/instrumentation/constants.py New constants file defining span tracking configuration keys and default values.
python/fi_instrumentation/otel.py Enhanced tracer provider with signal/exit handling and active span leak detection/cleanup; SimpleSpanProcessor tracks active spans and forcibly ends leaked spans at shutdown.
python/frameworks/anthropic/examples/guardrail_example.py, python/frameworks/guardrails/traceai_guardrails/guardrails_example.py New example scripts demonstrating guardrail tracing integration with Anthropic Claude API and Guardrails validation with FI observability.
python/frameworks/anthropic/traceai_anthropic/__init__.py, python/frameworks/bedrock/traceai_bedrock/__init__.py, python/frameworks/groq/traceai_groq/__init__.py, python/frameworks/haystack/traceai_haystack/__init__.py, python/frameworks/instructor/traceai_instructor/__init__.py, python/frameworks/langchain/traceai_langchain/__init__.py, python/frameworks/litellm/traceai_litellm/__init__.py, python/frameworks/llama_index/traceai_llamaindex/__init__.py, python/frameworks/openai/traceai_openai/__init__.py, python/frameworks/vertexai/traceai_vertexai/__init__.py Each framework's instrumentor now wraps ProtectClient.protect with GuardrailProtectWrapper for tracing, saving/restoring the original method where appropriate.
python/frameworks/bedrock/traceai_bedrock/package.py, python/frameworks/groq/traceai_groq/__init__.py, python/frameworks/langchain/traceai_langchain/package.py, python/frameworks/litellm/traceai_litellm/package.py, python/frameworks/llama_index/traceai_llamaindex/package.py, python/frameworks/openai/traceai_openai/package.py, python/frameworks/vertexai/traceai_vertexai/package.py Updated _instruments tuples to include "futureagi >= 0.0.1" as a new dependency.
python/frameworks/guardrails/traceai_guardrails/_wrap_guard_call.py Improved serialization: _to_dict now handles tuples recursively; _get_raw_input ensures dictionary output for JSON serialization.
python/frameworks/langchain/examples/tool_calling_agent.py Instrumentation setup returns a tracer; agent prompt and model parameters updated; guardrail protection and tracing integrated into the main loop.
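The otel.py change in the table above (tracking active spans and forcibly ending leaked ones at shutdown) can be illustrated with a small sketch. The classes here are simplified stand-ins, not the real `SimpleSpanProcessor` or SDK span types, and the 10_000 cap mirrors the default constant added in this PR.

```python
import atexit

class Span:
    def __init__(self, name):
        self.name = name
        self.ended = False
    def end(self):
        self.ended = True

class LeakTrackingProcessor:
    """Tracks spans from start to end; ends any leaked spans at shutdown."""
    def __init__(self, max_tracked=10_000):
        self._active = {}
        self._max = max_tracked
    def on_start(self, span):
        if len(self._active) < self._max:
            self._active[id(span)] = span
    def on_end(self, span):
        self._active.pop(id(span), None)
    def shutdown(self):
        # End any span that was started but never ended, so it is
        # exported instead of silently dropped.
        for span in list(self._active.values()):
            if not span.ended:
                span.end()
        self._active.clear()

processor = LeakTrackingProcessor()
atexit.register(processor.shutdown)  # the real code also hooks signal handlers

finished = Span("finished")
processor.on_start(finished)
finished.end()
processor.on_end(finished)

leaked = Span("leaked")
processor.on_start(leaked)  # never ended by user code

processor.shutdown()  # invoked here for demonstration; atexit covers real exits
```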

Sequence Diagram(s)

sequenceDiagram
    participant UserCode
    participant ProtectClient
    participant GuardrailProtectWrapper
    participant OpenTelemetryTracer
    participant GuardrailSpan

    UserCode->>ProtectClient: protect(inputs, rules, ...)
    ProtectClient->>GuardrailProtectWrapper: __call__(wrapped, instance, args, kwargs)
    GuardrailProtectWrapper->>OpenTelemetryTracer: start_span("Guardrail.protect", attributes)
    OpenTelemetryTracer->>GuardrailSpan: create span context
    GuardrailProtectWrapper->>ProtectClient: call original protect(inputs, rules, ...)
    ProtectClient-->>GuardrailProtectWrapper: result
    GuardrailProtectWrapper->>GuardrailSpan: set output/result attributes
    GuardrailProtectWrapper->>OpenTelemetryTracer: end span
    GuardrailProtectWrapper-->>UserCode: result or exception

Poem

In fields of code where guardrails grow,
A wrapper hops in, spans aglow.
With every protect, a trace is spun,
Inputs and outputs—none left undone!
Now frameworks leap with futureagi cheer,
As rabbits trace all guardrails here.
🐇✨



@JayaSurya-27 JayaSurya-27 changed the base branch from main to dev May 19, 2025 11:49
@coderabbitai coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.

Actionable comments posted: 13

🔭 Outside diff range comments (1)
python/fi_instrumentation/otel.py (1)

321-350: 🛠️ Refactor suggestion

_active_spans is mutated from many threads without synchronisation

SimpleSpanProcessor is invoked on every span start/end and may be called from multiple
threads (e.g. async frameworks, thread pools).
Plain dict mutation is not thread-safe and can raise RuntimeError: dictionary changed size during iteration, especially inside shutdown() where you iterate over self._active_spans.

+import threading
 ...
 class SimpleSpanProcessor(_SimpleSpanProcessor):
     ...
-    def __init__(...):
-        self._active_spans = {}
+    def __init__(...):
+        self._active_spans: Dict[int, Any] = {}
+        self._lock = threading.Lock()
 ...
-        if len(self._active_spans) >= max_active_spans_tracked:
-            return
-        self._active_spans[span.context.span_id] = span
+        with self._lock:
+            if len(self._active_spans) >= max_active_spans_tracked:
+                return
+            self._active_spans[span.context.span_id] = span
 ...
-        self._active_spans.pop(span.context.span_id, None)
+        with self._lock:
+            self._active_spans.pop(span.context.span_id, None)
 ...
-            active_spans = list(self._active_spans.values())
+            with self._lock:
+                active_spans = list(self._active_spans.values())
🧰 Tools
🪛 Ruff (0.11.9)

328-328: No explicit stacklevel keyword argument found

Set stacklevel=2

(B028)
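The locking suggested in the diff above can be exercised end-to-end with a self-contained sketch. This is a stand-in registry, not the real `SimpleSpanProcessor`; it shows the pattern of guarding every mutation and taking a copy under the lock before iterating at shutdown.

```python
import threading

class ActiveSpanRegistry:
    """Dict of active spans guarded by a lock for concurrent start/end calls."""
    def __init__(self, max_tracked=10_000):
        self._active = {}
        self._max = max_tracked
        self._lock = threading.Lock()
    def on_start(self, span_id, span):
        with self._lock:
            if len(self._active) >= self._max:
                return
            self._active[span_id] = span
    def on_end(self, span_id):
        with self._lock:
            self._active.pop(span_id, None)
    def snapshot(self):
        # Copy under the lock so shutdown can iterate without racing
        # against concurrent on_start/on_end mutations.
        with self._lock:
            return list(self._active.values())

registry = ActiveSpanRegistry()

def worker(base):
    for i in range(1000):
        registry.on_start(base + i, object())
        registry.on_end(base + i)

threads = [threading.Thread(target=worker, args=(n * 10_000,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After all workers finish, every span they started has also ended, so the registry is empty.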

🧹 Nitpick comments (9)
python/fi_instrumentation/instrumentation/constants.py (1)

1-3: New constants for active span tracking

These constants support the PR's objective of handling incomplete spans by establishing configuration for the maximum number of active spans to track. The use of 10_000 as a default is a good starting point and follows Python's numeric literal syntax for readability.

Consider adding a brief comment for FI_MAX_ACTIVE_SPANS_TRACKED similar to what's provided for the default value.

python/frameworks/guardrails/traceai_guardrails/guardrails_example.py (1)

3-3: Remove unused import

The os module is imported but not used anywhere in the code.

-import os
🧰 Tools
🪛 Ruff (0.11.9)

3-3: os imported but unused

Remove unused import: os

(F401)

python/frameworks/anthropic/examples/guardrail_example.py (3)

10-10: Remove unused import

The TraceConfig is imported but not used anywhere in the code.

-from fi_instrumentation.instrumentation.config import TraceConfig
🧰 Tools
🪛 Ruff (0.11.9)

10-10: fi_instrumentation.instrumentation.config.TraceConfig imported but unused

Remove unused import: fi_instrumentation.instrumentation.config.TraceConfig

(F401)


12-12: Remove unused import

The opentelemetry.trace module is imported but not used anywhere in the code.

-from opentelemetry import trace
🧰 Tools
🪛 Ruff (0.11.9)

12-12: opentelemetry.trace imported but unused

Remove unused import: opentelemetry.trace

(F401)


18-18: Clean up commented code

The commented-out line about eval_tags should be removed since it's not being used.

    project_type=ProjectType.OBSERVE,
-    # eval_tags=eval_tags,
    project_name="FUTURE_AGI",
python/frameworks/langchain/examples/tool_calling_agent.py (1)

26-26: Remove unused import

The opentelemetry.trace module is imported as trace_api but not used in the code.

-import opentelemetry.trace as trace_api
🧰 Tools
🪛 Ruff (0.11.9)

26-26: opentelemetry.trace imported but unused

Remove unused import: opentelemetry.trace

(F401)

python/fi_instrumentation/otel.py (2)

339-342: Environment variable parsing can crash on non-numeric values

int(os.getenv(FI_MAX_ACTIVE_SPANS_TRACKED, DEFAULT_MAX_ACTIVE_SPANS_TRACKED))
throws ValueError if the env var is set to a non-int string. Guard the conversion:

-            max_active_spans_tracked = int(os.getenv(FI_MAX_ACTIVE_SPANS_TRACKED, DEFAULT_MAX_ACTIVE_SPANS_TRACKED))
+            try:
+                max_active_spans_tracked = int(
+                    os.getenv(FI_MAX_ACTIVE_SPANS_TRACKED, DEFAULT_MAX_ACTIVE_SPANS_TRACKED)
+                )
+            except ValueError:
+                max_active_spans_tracked = DEFAULT_MAX_ACTIVE_SPANS_TRACKED
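The guarded conversion above can also be factored into a small helper; this is an illustrative sketch, with the env-var name mirroring the `FI_MAX_ACTIVE_SPANS_TRACKED` constant added in this PR.

```python
import os

def _int_env(name: str, default: int) -> int:
    """Parse an integer environment variable, falling back on bad input."""
    raw = os.getenv(name)
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        return default

os.environ["FI_MAX_ACTIVE_SPANS_TRACKED"] = "not-a-number"
bad = _int_env("FI_MAX_ACTIVE_SPANS_TRACKED", 10_000)  # falls back to default

os.environ["FI_MAX_ACTIVE_SPANS_TRACKED"] = "250"
good = _int_env("FI_MAX_ACTIVE_SPANS_TRACKED", 10_000)  # parses cleanly
```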

368-372: Swallowed exception hides useful debugging information

except Exception as e: immediately followed by pass discards the error. Either
log it or drop the variable to satisfy Ruff F841.

-                        except Exception as e:
-                            pass
+                        except Exception as exc:
+                            logger.warning("Failed to end leaked span: %s", exc, exc_info=True)
🧰 Tools
🪛 Ruff (0.11.9)

370-370: Local variable e is assigned to but never used

Remove assignment to unused variable e

(F841)

python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)

83-98: Prefer start_as_current_span for simpler context handling

The manual start_span + use_span dance works, but
tracer.start_as_current_span(...) is the idiomatic, safer helper that already
handles context propagation and exception recording.

Not blocking, but consider the refactor for brevity and future maintainability.

🛑 Comments failed to post (13)
python/frameworks/llama_index/traceai_llamaindex/__init__.py (2)

88-93: ⚠️ Potential issue

The _tracer variable is undefined.

The code uses self._tracer when creating the GuardrailProtectWrapper, but there's no assignment to this attribute in the _instrument method. The local variable tracer is defined but not assigned to self._tracer.

Either assign the tracer to an instance attribute or use the local variable directly:

tracer = FITracer(
    trace_api.get_tracer(__name__, __version__, tracer_provider),
    config=config,
)
+ self._tracer = tracer
# ...or alternatively...
- wrapper=GuardrailProtectWrapper(tracer=self._tracer),
+ wrapper=GuardrailProtectWrapper(tracer=tracer),
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

        tracer = FITracer(
            trace_api.get_tracer(__name__, __version__, tracer_provider),
            config=config,
        )
        self._tracer = tracer
        self._original_protect = ProtectClient.protect
        wrap_function_wrapper(
            module="fi.evals",
            name="ProtectClient.protect",
            wrapper=GuardrailProtectWrapper(tracer=tracer),
        )
🤖 Prompt for AI Agents
In python/frameworks/llama_index/traceai_llamaindex/__init__.py around lines 88
to 93, the code uses self._tracer when creating the GuardrailProtectWrapper, but
self._tracer is never assigned. Fix this by assigning the local tracer variable
to self._tracer before using it, or replace self._tracer with the local tracer
variable directly in the wrapper argument.

88-93: ⚠️ Potential issue

Missing restore logic in _uninstrument method.

While the ProtectClient.protect method is properly wrapped and saved to _original_protect, the _uninstrument method doesn't restore it to the original implementation. This might leave the method wrapped even after uninstrumentation.

Add this to the _uninstrument method:

def _uninstrument(self, **kwargs: Any) -> None:
    if self._use_legacy_callback_handler:
        import llama_index.core

        llama_index.core.global_handler = self._original_global_handler
        self._original_global_handler = None
    else:
        if self._event_handler is None:
            return
        from llama_index.core.instrumentation import get_dispatcher

        dispatcher = get_dispatcher()
        dispatcher.span_handlers[:] = filter(
            lambda h: not isinstance(h, type(self._span_handler)),
            dispatcher.span_handlers,
        )
        dispatcher.event_handlers[:] = filter(
            lambda h: not isinstance(h, type(self._event_handler)),
            dispatcher.event_handlers,
        )
        self._event_handler = None
+   # Restore original protect method
+   if hasattr(self, '_original_protect'):
+       from fi.evals import ProtectClient
+       ProtectClient.protect = self._original_protect
+       self._original_protect = None
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

def _uninstrument(self, **kwargs: Any) -> None:
    if self._use_legacy_callback_handler:
        import llama_index.core

        llama_index.core.global_handler = self._original_global_handler
        self._original_global_handler = None
    else:
        if self._event_handler is None:
            return
        from llama_index.core.instrumentation import get_dispatcher

        dispatcher = get_dispatcher()
        dispatcher.span_handlers[:] = filter(
            lambda h: not isinstance(h, type(self._span_handler)),
            dispatcher.span_handlers,
        )
        dispatcher.event_handlers[:] = filter(
            lambda h: not isinstance(h, type(self._event_handler)),
            dispatcher.event_handlers,
        )
        self._event_handler = None

    # Restore original protect method
    if hasattr(self, '_original_protect'):
        from fi.evals import ProtectClient
        ProtectClient.protect = self._original_protect
        self._original_protect = None
🤖 Prompt for AI Agents
In python/frameworks/llama_index/traceai_llamaindex/__init__.py around lines 88
to 93, the _uninstrument method lacks logic to restore ProtectClient.protect to
its original method saved in _original_protect. To fix this, update the
_uninstrument method to assign ProtectClient.protect back to
self._original_protect, ensuring the original method is restored when
uninstrumenting.
python/frameworks/litellm/traceai_litellm/__init__.py (1)

272-277: ⚠️ Potential issue

Missing restore logic in _uninstrument method.

While the ProtectClient.protect method is properly wrapped and saved to _original_protect, the _uninstrument method doesn't restore it to the original implementation. This might leave the method wrapped even after uninstrumentation.

Add this to the _uninstrument method:

def _uninstrument(self, **kwargs: Any) -> None:
    for (
        func_name,
        original_func,
    ) in LiteLLMInstrumentor.original_litellm_funcs.items():
        setattr(litellm, func_name, original_func)
    self.original_litellm_funcs.clear()
+   # Restore original protect method
+   if hasattr(self, '_original_protect'):
+       from fi.evals import ProtectClient
+       ProtectClient.protect = self._original_protect
+       self._original_protect = None
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

def _uninstrument(self, **kwargs: Any) -> None:
    for (
        func_name,
        original_func,
    ) in LiteLLMInstrumentor.original_litellm_funcs.items():
        setattr(litellm, func_name, original_func)
    self.original_litellm_funcs.clear()

    # Restore original protect method
    if hasattr(self, '_original_protect'):
        from fi.evals import ProtectClient
        ProtectClient.protect = self._original_protect
        self._original_protect = None
🤖 Prompt for AI Agents
In python/frameworks/litellm/traceai_litellm/__init__.py around lines 272 to
277, the _uninstrument method lacks logic to restore ProtectClient.protect to
its original method saved in _original_protect. To fix this, update the
_uninstrument method to reassign ProtectClient.protect back to
self._original_protect, ensuring the original method is restored when
uninstrumenting.
python/frameworks/haystack/traceai_haystack/__init__.py (1)

79-84: ⚠️ Potential issue

Missing restore logic in _uninstrument method.

While the ProtectClient.protect method is properly wrapped and saved to _original_protect, the _uninstrument method doesn't restore it to the original implementation. This might leave the method wrapped even after uninstrumentation.

Add this to the _uninstrument method:

def _uninstrument(self, **kwargs: Any) -> None:

    if self._original_pipeline_run is not None:
        haystack.Pipeline.run = self._original_pipeline_run

    if self._original_pipeline_run_component is not None:
        haystack.Pipeline._run_component = self._original_pipeline_run_component

    for (
        component_cls,
        original_run_method,
    ) in self._original_component_run_methods.items():
        component_cls.run = original_run_method
+   
+   # Restore original protect method
+   if hasattr(self, '_original_protect'):
+       from fi.evals import ProtectClient
+       ProtectClient.protect = self._original_protect
+       self._original_protect = None
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

def _uninstrument(self, **kwargs: Any) -> None:

    if self._original_pipeline_run is not None:
        haystack.Pipeline.run = self._original_pipeline_run

    if self._original_pipeline_run_component is not None:
        haystack.Pipeline._run_component = self._original_pipeline_run_component

    for (
        component_cls,
        original_run_method,
    ) in self._original_component_run_methods.items():
        component_cls.run = original_run_method

    # Restore original protect method
    if hasattr(self, '_original_protect'):
        from fi.evals import ProtectClient
        ProtectClient.protect = self._original_protect
        self._original_protect = None
🤖 Prompt for AI Agents
In python/frameworks/haystack/traceai_haystack/__init__.py around lines 79 to
84, the _uninstrument method lacks logic to restore ProtectClient.protect to its
original method saved in _original_protect. To fix this, update the
_uninstrument method to reassign ProtectClient.protect back to
self._original_protect, ensuring the original method is restored when
uninstrumenting.
python/frameworks/openai/traceai_openai/__init__.py (1)

58-63: ⚠️ Potential issue

Missing restore logic in _uninstrument method.

While the ProtectClient.protect method is properly wrapped and saved to _original_protect, the _uninstrument method doesn't restore it to the original implementation. This might leave the method wrapped even after uninstrumentation.

Add this to the _uninstrument method:

def _uninstrument(self, **kwargs: Any) -> None:
    openai = import_module(_MODULE)
    openai.OpenAI.request = self._original_request
    openai.AsyncOpenAI.request = self._original_async_request
+   
+   # Restore original protect method
+   if hasattr(self, '_original_protect'):
+       from fi.evals import ProtectClient
+       ProtectClient.protect = self._original_protect
+       self._original_protect = None
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    def _uninstrument(self, **kwargs: Any) -> None:
        openai = import_module(_MODULE)
        openai.OpenAI.request = self._original_request
        openai.AsyncOpenAI.request = self._original_async_request

        # Restore original protect method
        if hasattr(self, "_original_protect"):
            from fi.evals import ProtectClient

            ProtectClient.protect = self._original_protect
            self._original_protect = None
🤖 Prompt for AI Agents
In python/frameworks/openai/traceai_openai/__init__.py around lines 58 to 63,
the _uninstrument method does not restore ProtectClient.protect to its original
method saved in _original_protect, which can leave the method wrapped after
uninstrumentation. Modify the _uninstrument method to reassign
ProtectClient.protect back to self._original_protect to properly restore the
original method.
python/frameworks/vertexai/traceai_vertexai/__init__.py (1)

49-54: 🛠️ Refactor suggestion

Missing uninstrumentation logic for ProtectClient.protect

The code correctly saves the original method and wraps it with the GuardrailProtectWrapper. However, there's no corresponding logic in the _uninstrument method to restore the original ProtectClient.protect method, which could cause unexpected behavior if the instrumentation is applied multiple times or if other code expects the original behavior after uninstrumentation.

 def _uninstrument(self, **kwargs: Any) -> None:
     self._status._IS_INSTRUMENTED = False
     import google.api_core.gapic_v1 as gapic

     for module in (gapic.method, gapic.method_async):
         if wrapped := getattr(module.wrap_method, "__wrapped__", None):
             setattr(module, "wrap_method", wrapped)
+    
+    # Restore original ProtectClient.protect method
+    if hasattr(self, "_original_protect"):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

def _uninstrument(self, **kwargs: Any) -> None:
    self._status._IS_INSTRUMENTED = False
    import google.api_core.gapic_v1 as gapic

    for module in (gapic.method, gapic.method_async):
        if wrapped := getattr(module.wrap_method, "__wrapped__", None):
            setattr(module, "wrap_method", wrapped)

    # Restore original ProtectClient.protect method
    if hasattr(self, "_original_protect"):
        from fi.evals import ProtectClient
        ProtectClient.protect = self._original_protect
        self._original_protect = None
🤖 Prompt for AI Agents
In python/frameworks/vertexai/traceai_vertexai/__init__.py around lines 49 to
54, the code saves the original ProtectClient.protect method and wraps it but
lacks uninstrumentation logic to restore the original method. To fix this, add
code in the _uninstrument method to reassign ProtectClient.protect back to
self._original_protect, ensuring the original method is restored when
uninstrumentation occurs.
python/frameworks/bedrock/traceai_bedrock/__init__.py (1)

424-429: 🛠️ Refactor suggestion

Missing uninstrumentation logic for ProtectClient.protect

The code correctly saves the original method and wraps it with the GuardrailProtectWrapper. However, there's no corresponding logic in the _uninstrument method to restore the original ProtectClient.protect method, which could lead to unexpected behavior if the instrumentation is applied multiple times.

 def _uninstrument(self, **kwargs: Any) -> None:
     boto = import_module(_MODULE)
     boto.ClientCreator.create_client = self._original_client_creator
     self._original_client_creator = None
+    
+    # Restore original ProtectClient.protect method
+    if hasattr(self, "_original_protect"):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None
🤖 Prompt for AI Agents
In python/frameworks/bedrock/traceai_bedrock/__init__.py around lines 424 to
429, the code saves the original ProtectClient.protect method and wraps it but
lacks uninstrumentation logic. To fix this, add code in the _uninstrument method
to restore ProtectClient.protect to self._original_protect, ensuring the
original method is reinstated when uninstrumenting to prevent issues from
multiple instrumentations.
python/frameworks/groq/traceai_groq/__init__.py (1)

59-64: 🛠️ Refactor suggestion

Missing uninstrumentation logic for ProtectClient.protect

The code correctly saves the original method and wraps it with the GuardrailProtectWrapper. However, there's no corresponding logic in the _uninstrument method to restore the original ProtectClient.protect method, which could lead to memory leaks or unexpected behavior if the instrumentation is applied multiple times.

 def _uninstrument(self, **kwargs: Any) -> None:
     groq_module = import_module("groq.resources.chat.completions")
     if self._original_completions_create is not None:
         groq_module.Completions.create = self._original_completions_create
     if self._original_async_completions_create is not None:
         groq_module.AsyncCompletions.create = (
             self._original_async_completions_create
         )
+    
+    # Restore original ProtectClient.protect method
+    if hasattr(self, "_original_protect"):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

 def _uninstrument(self, **kwargs: Any) -> None:
     groq_module = import_module("groq.resources.chat.completions")
     if self._original_completions_create is not None:
         groq_module.Completions.create = self._original_completions_create
     if self._original_async_completions_create is not None:
         groq_module.AsyncCompletions.create = (
             self._original_async_completions_create
         )
 
     # Restore original ProtectClient.protect method
     if hasattr(self, "_original_protect"):
         from fi.evals import ProtectClient
         ProtectClient.protect = self._original_protect
         self._original_protect = None
🤖 Prompt for AI Agents
In python/frameworks/groq/traceai_groq/__init__.py around lines 59 to 64, the
code saves the original ProtectClient.protect method and wraps it but lacks
uninstrumentation logic. To fix this, add code in the _uninstrument method to
restore ProtectClient.protect to self._original_protect, ensuring the original
method is reinstated when uninstrumenting to prevent memory leaks or unexpected
behavior.
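The save/restore pattern the suggestion describes can be sketched as follows. This is a hypothetical, self-contained illustration: `FakeClient` stands in for `ProtectClient`, and the guard against double wrapping is an extra hardening step beyond the suggested diff.

```python
# Hypothetical sketch of an idempotent instrument/uninstrument pattern
# for a monkey-patched method; FakeClient stands in for ProtectClient.

class FakeClient:
    def protect(self, text):
        return f"checked:{text}"


class Instrumentor:
    def __init__(self):
        self._original_protect = None

    def instrument(self):
        # Save the original only once, so calling instrument() twice
        # does not wrap the wrapper.
        if self._original_protect is None:
            self._original_protect = FakeClient.protect
            original = self._original_protect

            def wrapped(client, text):
                return "wrapped:" + original(client, text)

            FakeClient.protect = wrapped

    def uninstrument(self):
        # Restore the saved method and clear the slot so a later
        # instrument() call re-wraps cleanly.
        if self._original_protect is not None:
            FakeClient.protect = self._original_protect
            self._original_protect = None


inst = Instrumentor()
inst.instrument()
inst.instrument()  # second call is a no-op
print(FakeClient().protect("hi"))  # wrapped:checked:hi
inst.uninstrument()
print(FakeClient().protect("hi"))  # checked:hi
```

Clearing `_original_protect` back to `None` in `uninstrument` is what makes repeated instrument/uninstrument cycles safe.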
python/frameworks/instructor/traceai_instructor/__init__.py (1)

50-55: 🛠️ Refactor suggestion

Missing uninstrumentation logic for ProtectClient.protect

The code correctly saves the original method and wraps it with the GuardrailProtectWrapper. However, there's no corresponding logic in the _uninstrument method to restore the original ProtectClient.protect method, which could lead to unexpected behavior if the instrumentation is applied multiple times or if other code expects the original behavior after uninstrumentation.

 def _uninstrument(self, **kwargs: Any) -> None:
     if self._original_patch is not None:
         instructor_module = import_module("instructor")
         instructor_module.patch = self._original_patch  # type: ignore[attr-defined]
         self._original_patch = None

     if self._original_handle_response_model is not None:
         patch_module = import_module("instructor.patch")
         patch_module.handle_response_model = self._original_handle_response_model  # type: ignore[attr-defined]
         self._original_handle_response_model = None
+
+    # Restore original ProtectClient.protect method
+    if hasattr(self, "_original_protect"):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None
📝 Committable suggestion


 def _uninstrument(self, **kwargs: Any) -> None:
     if self._original_patch is not None:
         instructor_module = import_module("instructor")
         instructor_module.patch = self._original_patch  # type: ignore[attr-defined]
         self._original_patch = None

     if self._original_handle_response_model is not None:
         patch_module = import_module("instructor.patch")
         patch_module.handle_response_model = self._original_handle_response_model  # type: ignore[attr-defined]
         self._original_handle_response_model = None

     # Restore original ProtectClient.protect method
     if hasattr(self, "_original_protect"):
         from fi.evals import ProtectClient
         ProtectClient.protect = self._original_protect
         self._original_protect = None
🤖 Prompt for AI Agents
In python/frameworks/instructor/traceai_instructor/__init__.py around lines 50
to 55, the code wraps ProtectClient.protect but does not restore the original
method in the _uninstrument method. To fix this, add logic in the _uninstrument
method to reassign ProtectClient.protect back to self._original_protect,
ensuring the original method is restored when uninstrumenting.
python/frameworks/langchain/examples/tool_calling_agent.py (1)

272-273: ⚠️ Potential issue

Missing run_guardrail_check tool

The system prompt instructs the agent to "run the run_guardrail_check tool" before providing a final answer, but this tool is not defined in the code. This will cause the agent to attempt to use a non-existent tool.

Either implement the missing tool or modify the prompt:

-                "You are a helpful AI assistant. Get information using tools. "
-                "Before providing a final answer, run the run_guardrail_check tool on your proposed answer.",
+                "You are a helpful AI assistant. Use the provided tools to get information and provide helpful answers.",
📝 Committable suggestion


                "You are a helpful AI assistant. Use the provided tools to get information and provide helpful answers.",
🤖 Prompt for AI Agents
In python/frameworks/langchain/examples/tool_calling_agent.py around lines 272
to 273, the system prompt references a run_guardrail_check tool that is not
defined in the code, causing errors when the agent tries to use it. To fix this,
either implement the run_guardrail_check tool with the necessary functionality
and add it to the agent's tool list, or remove or modify the prompt text to
exclude the instruction to run this undefined tool.
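If the first option is taken (implementing the tool), a minimal sketch might look like the following. The rule set and return shape are illustrative only; the real tool would call `ProtectClient.protect` and be registered with the agent (e.g. via LangChain's tool decorator).

```python
# Hypothetical stand-in for the missing run_guardrail_check tool.  A
# simple keyword filter illustrates the tool's shape and return value;
# the real implementation would delegate to ProtectClient.protect.

BLOCKED_TERMS = {"password", "ssn"}


def run_guardrail_check(proposed_answer: str) -> dict:
    """Check a proposed answer against simple guardrail rules."""
    flagged = sorted(t for t in BLOCKED_TERMS if t in proposed_answer.lower())
    return {"passed": not flagged, "flagged_terms": flagged}


print(run_guardrail_check("The capital of France is Paris."))
print(run_guardrail_check("My password is hunter2."))
```

Whichever option is chosen, the prompt and the tool list must agree: an instruction referencing an unregistered tool will surface as a tool-resolution error at run time.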
python/frameworks/langchain/traceai_langchain/__init__.py (2)

54-59: 🛠️ Refactor suggestion

ProtectClient.protect is wrapped but never un-wrapped

You save the original method in _original_protect, yet _uninstrument does not restore it.
If LangChainInstrumentor()._uninstrument() is called (e.g. during hot-reload or tests) the method will stay wrapped, leading to duplicate spans or unexpected behaviour.

 def _uninstrument(self, **kwargs: Any) -> None:
     langchain_core.callbacks.BaseCallbackManager.__init__ = self._original_callback_manager_init  # type: ignore
     self._original_callback_manager_init = None  # type: ignore
     self._tracer = None
     self.fi_tracer = None
+
+    # Restore the original ProtectClient.protect implementation
+    from fi.evals import ProtectClient
+    ProtectClient.protect = self._original_protect
📝 Committable suggestion


def _uninstrument(self, **kwargs: Any) -> None:
    langchain_core.callbacks.BaseCallbackManager.__init__ = self._original_callback_manager_init  # type: ignore
    self._original_callback_manager_init = None  # type: ignore
    self._tracer = None
    self.fi_tracer = None

    # Restore the original ProtectClient.protect implementation
    from fi.evals import ProtectClient
    ProtectClient.protect = self._original_protect
🤖 Prompt for AI Agents
In python/frameworks/langchain/traceai_langchain/__init__.py around lines 54 to
59, the original ProtectClient.protect method is saved but not restored in the
_uninstrument method. To fix this, update the _uninstrument method to restore
ProtectClient.protect to self._original_protect, ensuring the method is properly
unwrapped during uninstrumentation to prevent duplicate spans or unexpected
behavior.

39-43: ⚠️ Potential issue

Avoid shadowing the already-imported FITracer

FITracer is imported at line 6 from fi_instrumentation, then re-imported here from fi_instrumentation.instrumentation._tracers.
Besides triggering Ruff F811, it is confusing for readers and may even refer to a different class if the two modules diverge.

-from fi_instrumentation.instrumentation._tracers import FITracer
+# Re-use the FITracer that is already imported on line 6
📝 Committable suggestion


        from traceai_langchain._tracer import FiTracer as LangChainFiTracer
        from fi.evals import ProtectClient
        # Re-use the FITracer that is already imported on line 6

        self.fi_tracer = FITracer(
🧰 Tools
🪛 Ruff (0.11.9)

41-41: Redefinition of unused FITracer from line 6

(F811)

🤖 Prompt for AI Agents
In python/frameworks/langchain/traceai_langchain/__init__.py around lines 39 to
43, the FITracer class is imported twice from different modules, causing
shadowing and potential confusion. Remove the second import of FITracer from
fi_instrumentation.instrumentation._tracers and use the original import from
fi_instrumentation consistently throughout the file to avoid shadowing and
maintain clarity.
python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)

136-139: 🛠️ Refactor suggestion

Incorrect value passed to _get_raw_input

_get_raw_input is intended to capture the inputs sent to Guardrails, but the wrapper
passes all kwargs instead. This duplicates information already captured and can
leak unrelated parameters.

-                    _get_raw_input(kwargs),
+                    _get_raw_input(inputs),
📝 Committable suggestion


                    _get_raw_input(inputs),
                    ([(GUARDRAIL_ACTION, action)] if action is not None else []),
                    ([(GUARDRAIL_REASON_FLAG, reason_flag)] if reason_flag is not None else []),
                )
🤖 Prompt for AI Agents
In python/fi_instrumentation/instrumentation/_protect_wrapper.py around lines
136 to 139, the call to _get_raw_input incorrectly passes all kwargs instead of
only the inputs intended for Guardrails. To fix this, modify the argument to
_get_raw_input to include only the specific input parameters relevant to
Guardrails, avoiding passing unrelated or duplicate data. This ensures only the
intended inputs are captured and prevents leaking unrelated parameters.
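The intent of the fix can be sketched as below. `build_span_attributes` and the attribute key are illustrative names, not the wrapper's actual API; the point is that only the declared guardrail inputs are serialized, so unrelated kwargs never reach the span.

```python
# Hypothetical sketch: serialize only the declared guardrail inputs
# rather than every keyword argument passed to protect().
import json


def _get_raw_input(inputs):
    """Serialize just the guardrail inputs for the span attribute."""
    try:
        return json.dumps(inputs, default=str)
    except (TypeError, ValueError):
        return str(inputs)


def build_span_attributes(*args, **kwargs):
    # Pull out only the intended inputs; unrelated kwargs such as
    # timeouts or credentials never reach the span attribute.
    inputs = kwargs.get("inputs", args[0] if args else None)
    return {"guardrail.raw_input": _get_raw_input(inputs)}


attrs = build_span_attributes(inputs=["user text"], timeout=30, api_key="secret")
print(attrs)  # {'guardrail.raw_input': '["user text"]'}
```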

sarthakFuture
sarthakFuture previously approved these changes May 19, 2025
@JayaSurya-27 JayaSurya-27 merged commit 1b175cc into dev May 19, 2025
1 check passed
@nik13 nik13 deleted the bugfix/handle-incomplete-spans branch May 23, 2025 13:26
@coderabbitai coderabbitai bot mentioned this pull request May 23, 2025
4 tasks