Bugfix/handle incomplete spans #10
Conversation
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI.

Walkthrough
This update introduces tracing instrumentation for the ProtectClient.protect (Guardrail) call across the framework instrumentors, and adds tracking of active spans so that spans which are never ended can still be exported when the program terminates.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant UserCode
participant ProtectClient
participant GuardrailProtectWrapper
participant OpenTelemetryTracer
participant GuardrailSpan
UserCode->>ProtectClient: protect(inputs, rules, ...)
ProtectClient->>GuardrailProtectWrapper: __call__(wrapped, instance, args, kwargs)
GuardrailProtectWrapper->>OpenTelemetryTracer: start_span("Guardrail.protect", attributes)
OpenTelemetryTracer->>GuardrailSpan: create span context
GuardrailProtectWrapper->>ProtectClient: call original protect(inputs, rules, ...)
ProtectClient-->>GuardrailProtectWrapper: result
GuardrailProtectWrapper->>GuardrailSpan: set output/result attributes
GuardrailProtectWrapper->>OpenTelemetryTracer: end span
    GuardrailProtectWrapper-->>UserCode: result or exception
```
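The diagram above can be read as a thin wrapt-style wrapper around `ProtectClient.protect`. A minimal sketch of that flow is below, assuming an OpenTelemetry tracer and the `(wrapped, instance, args, kwargs)` calling convention; the span name matches the diagram, but the attribute keys are illustrative rather than the exact ones used in this PR.

```python
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode


class GuardrailProtectWrapper:
    """Sketch of the wrapper flow: start span, call protect, record result, end span."""

    def __init__(self, tracer: trace.Tracer) -> None:
        self._tracer = tracer

    def __call__(self, wrapped, instance, args, kwargs):
        span = self._tracer.start_span("Guardrail.protect")  # start_span("Guardrail.protect", attributes)
        try:
            result = wrapped(*args, **kwargs)  # call original protect(inputs, rules, ...)
            span.set_attribute("output.value", str(result))  # set output/result attributes (illustrative key)
            span.set_status(Status(StatusCode.OK))
            return result
        except Exception as exc:
            span.record_exception(exc)
            span.set_status(Status(StatusCode.ERROR, str(exc)))
            raise
        finally:
            span.end()  # end span, so it is exported even on error
```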
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.
Actionable comments posted: 13
🔭 Outside diff range comments (1)
python/fi_instrumentation/otel.py (1)
321-350: 🛠️ Refactor suggestion
`_active_spans` is mutated from many threads without synchronisation
`SimpleSpanProcessor` is invoked on every span start/end and may be called from multiple threads (e.g. async frameworks, thread pools).
Plain dict mutation is not thread-safe and can raise `RuntimeError: dictionary changed size during iteration`, especially inside `shutdown()` where you iterate over `self._active_spans`.

```diff
+import threading
 ...
 class SimpleSpanProcessor(_SimpleSpanProcessor):
     ...
-    def __init__(...):
-        self._active_spans = {}
+    def __init__(...):
+        self._active_spans: Dict[int, Any] = {}
+        self._lock = threading.Lock()
     ...
-    if len(self._active_spans) >= max_active_spans_tracked:
-        return
-    self._active_spans[span.context.span_id] = span
+    with self._lock:
+        if len(self._active_spans) >= max_active_spans_tracked:
+            return
+        self._active_spans[span.context.span_id] = span
     ...
-    self._active_spans.pop(span.context.span_id, None)
+    with self._lock:
+        self._active_spans.pop(span.context.span_id, None)
     ...
-    active_spans = list(self._active_spans.values())
+    with self._lock:
+        active_spans = list(self._active_spans.values())
```

🧰 Tools
🪛 Ruff (0.11.9)
328-328: No explicit `stacklevel` keyword argument found. Set `stacklevel=2` (B028)
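For context, a tiny self-contained illustration of the failure mode described above: mutating a plain dict from another thread while iterating over it can raise `RuntimeError: dictionary changed size during iteration`, which the suggested lock avoids. This is a generic illustration, not code from the PR.

```python
import threading
import time

spans = {i: object() for i in range(10_000)}

def writer() -> None:
    # Simulates on_start/on_end mutating the tracking dict from another thread.
    for i in range(10_000, 20_000):
        spans[i] = object()
        spans.pop(i - 10_000, None)

t = threading.Thread(target=writer)
t.start()
try:
    # Simulates shutdown() iterating over the active spans without a lock.
    for _ in spans.values():
        time.sleep(0)
except RuntimeError as exc:
    print("race:", exc)  # e.g. "dictionary changed size during iteration"
t.join()
```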
🧹 Nitpick comments (9)
python/fi_instrumentation/instrumentation/constants.py (1)
1-3: New constants for active span tracking
These constants support the PR's objective of handling incomplete spans by establishing configuration for the maximum number of active spans to track. The use of `10_000` as a default is a good starting point and follows Python's numeric literal syntax for readability. Consider adding a brief comment for `FI_MAX_ACTIVE_SPANS_TRACKED` similar to what's provided for the default value.
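A minimal sketch of what the suggested constants (with the extra comment) could look like; the exact wording in the repository may differ.

```python
# Environment variable that caps how many active (not yet ended) spans are tracked at once.
FI_MAX_ACTIVE_SPANS_TRACKED = "FI_MAX_ACTIVE_SPANS_TRACKED"

# Default cap on tracked active spans; the underscore literal aids readability.
DEFAULT_MAX_ACTIVE_SPANS_TRACKED = 10_000
```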
python/frameworks/guardrails/traceai_guardrails/guardrails_example.py (1)
3-3: Remove unused import
The `os` module is imported but not used anywhere in the code.

```diff
-import os
```

🧰 Tools
🪛 Ruff (0.11.9)
3-3: `os` imported but unused. Remove unused import: `os` (F401)
python/frameworks/anthropic/examples/guardrail_example.py (3)
10-10: Remove unused import
The `TraceConfig` import is not used anywhere in the code.

```diff
-from fi_instrumentation.instrumentation.config import TraceConfig
```

🧰 Tools
🪛 Ruff (0.11.9)
10-10: `fi_instrumentation.instrumentation.config.TraceConfig` imported but unused. Remove unused import: `fi_instrumentation.instrumentation.config.TraceConfig` (F401)
12-12: Remove unused import
The `opentelemetry.trace` module is imported but not used anywhere in the code.

```diff
-from opentelemetry import trace
```

🧰 Tools
🪛 Ruff (0.11.9)
12-12: `opentelemetry.trace` imported but unused. Remove unused import: `opentelemetry.trace` (F401)
18-18: Clean up commented code
The commented-out line about `eval_tags` should be removed since it's not being used.

```diff
     project_type=ProjectType.OBSERVE,
-    # eval_tags=eval_tags,
     project_name="FUTURE_AGI",
```

python/frameworks/langchain/examples/tool_calling_agent.py (1)
26-26: Remove unused import
The `opentelemetry.trace` module is imported as `trace_api` but not used in the code.

```diff
-import opentelemetry.trace as trace_api
```

🧰 Tools
🪛 Ruff (0.11.9)
26-26: `opentelemetry.trace` imported but unused. Remove unused import: `opentelemetry.trace` (F401)
python/fi_instrumentation/otel.py (2)
339-342: Environment variable parsing can crash on non-numeric values
`int(os.getenv(FI_MAX_ACTIVE_SPANS_TRACKED, DEFAULT_MAX_ACTIVE_SPANS_TRACKED))` throws `ValueError` if the env var is set to a non-int string. Guard the conversion:

```diff
- max_active_spans_tracked = int(os.getenv(FI_MAX_ACTIVE_SPANS_TRACKED, DEFAULT_MAX_ACTIVE_SPANS_TRACKED))
+ try:
+     max_active_spans_tracked = int(
+         os.getenv(FI_MAX_ACTIVE_SPANS_TRACKED, DEFAULT_MAX_ACTIVE_SPANS_TRACKED)
+     )
+ except ValueError:
+     max_active_spans_tracked = DEFAULT_MAX_ACTIVE_SPANS_TRACKED
```
368-372: Swallowed exception hides useful debugging information
`except Exception as e:` immediately followed by `pass` discards the error. Either log it or drop the variable to satisfy Ruff F841.

```diff
-        except Exception as e:
-            pass
+        except Exception as exc:
+            logger.warning("Failed to end leaked span: %s", exc, exc_info=True)
```

🧰 Tools
🪛 Ruff (0.11.9)
370-370: Local variable `e` is assigned to but never used. Remove assignment to unused variable `e` (F841)
python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)
83-98: Prefer `start_as_current_span` for simpler context handling
The manual `start_span` + `use_span` dance works, but `tracer.start_as_current_span(...)` is the idiomatic, safer helper that already handles context propagation and exception recording. Not blocking, but consider the refactor for brevity and future maintainability.
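A minimal sketch of the idiomatic pattern, assuming a plain OpenTelemetry tracer; the span name and attribute keys are illustrative rather than the wrapper's actual ones.

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def call_protect_with_span(wrapped, *args, **kwargs):
    # start_as_current_span activates the context, records exceptions,
    # and ends the span automatically when the block exits.
    with tracer.start_as_current_span("Guardrail.protect") as span:
        span.set_attribute("input.value", str(kwargs.get("inputs", "")))  # illustrative key
        result = wrapped(*args, **kwargs)
        span.set_attribute("output.value", str(result))  # illustrative key
        return result
```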
🛑 Comments failed to post (13)
python/frameworks/llama_index/traceai_llamaindex/__init__.py (2)
88-93: ⚠️ Potential issue
The `_tracer` variable is undefined.
The code uses `self._tracer` when creating the `GuardrailProtectWrapper`, but there's no assignment to this attribute in the `_instrument` method. The local variable `tracer` is defined but not assigned to `self._tracer`. Either assign the tracer to an instance attribute or use the local variable directly:

```diff
 tracer = FITracer(
     trace_api.get_tracer(__name__, __version__, tracer_provider),
     config=config,
 )
+self._tracer = tracer
 # ...or alternatively...
-    wrapper=GuardrailProtectWrapper(tracer=self._tracer),
+    wrapper=GuardrailProtectWrapper(tracer=tracer),
```

🤖 Prompt for AI Agents
In python/frameworks/llama_index/traceai_llamaindex/__init__.py around lines 88 to 93, the code uses self._tracer when creating the GuardrailProtectWrapper, but self._tracer is never assigned. Fix this by assigning the local tracer variable to self._tracer before using it, or replace self._tracer with the local tracer variable directly in the wrapper argument.
88-93: ⚠️ Potential issue
Missing restore logic in _uninstrument method.
While the `ProtectClient.protect` method is properly wrapped and saved to `_original_protect`, the `_uninstrument` method doesn't restore it to the original implementation. This might leave the method wrapped even after uninstrumentation. Add this to the `_uninstrument` method:

```diff
 def _uninstrument(self, **kwargs: Any) -> None:
     if self._use_legacy_callback_handler:
         import llama_index.core
         llama_index.core.global_handler = self._original_global_handler
         self._original_global_handler = None
     else:
         if self._event_handler is None:
             return
         from llama_index.core.instrumentation import get_dispatcher
         dispatcher = get_dispatcher()
         dispatcher.span_handlers[:] = filter(
             lambda h: not isinstance(h, type(self._span_handler)),
             dispatcher.span_handlers,
         )
         dispatcher.event_handlers[:] = filter(
             lambda h: not isinstance(h, type(self._event_handler)),
             dispatcher.event_handlers,
         )
         self._event_handler = None
+    # Restore original protect method
+    if hasattr(self, '_original_protect'):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None
```

🤖 Prompt for AI Agents
In python/frameworks/llama_index/traceai_llamaindex/__init__.py around lines 88 to 93, the _uninstrument method lacks logic to restore ProtectClient.protect to its original method saved in _original_protect. To fix this, update the _uninstrument method to assign ProtectClient.protect back to self._original_protect, ensuring the original method is restored when uninstrumenting.

python/frameworks/litellm/traceai_litellm/__init__.py (1)
272-277: ⚠️ Potential issue
Missing restore logic in _uninstrument method.
While the `ProtectClient.protect` method is properly wrapped and saved to `_original_protect`, the `_uninstrument` method doesn't restore it to the original implementation. This might leave the method wrapped even after uninstrumentation. Add this to the `_uninstrument` method:

```diff
 def _uninstrument(self, **kwargs: Any) -> None:
     for (
         func_name,
         original_func,
     ) in LiteLLMInstrumentor.original_litellm_funcs.items():
         setattr(litellm, func_name, original_func)
     self.original_litellm_funcs.clear()
+    # Restore original protect method
+    if hasattr(self, '_original_protect'):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None
```

🤖 Prompt for AI Agents
In python/frameworks/litellm/traceai_litellm/__init__.py around lines 272 to 277, the _uninstrument method lacks logic to restore ProtectClient.protect to its original method saved in _original_protect. To fix this, update the _uninstrument method to reassign ProtectClient.protect back to self._original_protect, ensuring the original method is restored when uninstrumenting.

python/frameworks/haystack/traceai_haystack/__init__.py (1)
79-84: ⚠️ Potential issue
Missing restore logic in _uninstrument method.
While the `ProtectClient.protect` method is properly wrapped and saved to `_original_protect`, the `_uninstrument` method doesn't restore it to the original implementation. This might leave the method wrapped even after uninstrumentation. Add this to the `_uninstrument` method:

```diff
 def _uninstrument(self, **kwargs: Any) -> None:
     if self._original_pipeline_run is not None:
         haystack.Pipeline.run = self._original_pipeline_run
     if self._original_pipeline_run_component is not None:
         haystack.Pipeline._run_component = self._original_pipeline_run_component
     for (
         component_cls,
         original_run_method,
     ) in self._original_component_run_methods.items():
         component_cls.run = original_run_method
+
+    # Restore original protect method
+    if hasattr(self, '_original_protect'):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None
```

🤖 Prompt for AI Agents
In python/frameworks/haystack/traceai_haystack/__init__.py around lines 79 to 84, the _uninstrument method lacks logic to restore ProtectClient.protect to its original method saved in _original_protect. To fix this, update the _uninstrument method to reassign ProtectClient.protect back to self._original_protect, ensuring the original method is restored when uninstrumenting.

python/frameworks/openai/traceai_openai/__init__.py (1)
58-63: ⚠️ Potential issue
Missing restore logic in _uninstrument method.
While the `ProtectClient.protect` method is properly wrapped and saved to `_original_protect`, the `_uninstrument` method doesn't restore it to the original implementation. This might leave the method wrapped even after uninstrumentation. Add this to the `_uninstrument` method:

```diff
 def _uninstrument(self, **kwargs: Any) -> None:
     openai = import_module(_MODULE)
     openai.OpenAI.request = self._original_request
     openai.AsyncOpenAI.request = self._original_async_request
+
+    # Restore original protect method
+    if hasattr(self, '_original_protect'):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None
```

🤖 Prompt for AI Agents
In python/frameworks/openai/traceai_openai/__init__.py around lines 58 to 63, the _uninstrument method does not restore ProtectClient.protect to its original method saved in _original_protect, which can leave the method wrapped after uninstrumentation. Modify the _uninstrument method to reassign ProtectClient.protect back to self._original_protect to properly restore the original method.

python/frameworks/vertexai/traceai_vertexai/__init__.py (1)
49-54: 🛠️ Refactor suggestion
Missing uninstrumentation logic for ProtectClient.protect
The code correctly saves the original method and wraps it with the `GuardrailProtectWrapper`. However, there's no corresponding logic in the `_uninstrument` method to restore the original `ProtectClient.protect` method, which could cause unexpected behavior if the instrumentation is applied multiple times or if other code expects the original behavior after uninstrumentation.

```diff
 def _uninstrument(self, **kwargs: Any) -> None:
     self._status._IS_INSTRUMENTED = False
     import google.api_core.gapic_v1 as gapic
     for module in (gapic.method, gapic.method_async):
         if wrapped := getattr(module.wrap_method, "__wrapped__", None):
             setattr(module, "wrap_method", wrapped)
+
+    # Restore original ProtectClient.protect method
+    if hasattr(self, "_original_protect"):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None
```

🤖 Prompt for AI Agents
In python/frameworks/vertexai/traceai_vertexai/__init__.py around lines 49 to 54, the code saves the original ProtectClient.protect method and wraps it but lacks uninstrumentation logic to restore the original method. To fix this, add code in the _uninstrument method to reassign ProtectClient.protect back to self._original_protect, ensuring the original method is restored when uninstrumentation occurs.

python/frameworks/bedrock/traceai_bedrock/__init__.py (1)
424-429: 🛠️ Refactor suggestion
Missing uninstrumentation logic for ProtectClient.protect
The code correctly saves the original method and wraps it with the `GuardrailProtectWrapper`. However, there's no corresponding logic in the `_uninstrument` method to restore the original `ProtectClient.protect` method, which could lead to unexpected behavior if the instrumentation is applied multiple times.

```diff
 def _uninstrument(self, **kwargs: Any) -> None:
     boto = import_module(_MODULE)
     boto.ClientCreator.create_client = self._original_client_creator
     self._original_client_creator = None
+
+    # Restore original ProtectClient.protect method
+    if hasattr(self, "_original_protect"):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None
```

🤖 Prompt for AI Agents
In python/frameworks/bedrock/traceai_bedrock/__init__.py around lines 424 to 429, the code saves the original ProtectClient.protect method and wraps it but lacks uninstrumentation logic. To fix this, add code in the _uninstrument method to restore ProtectClient.protect to self._original_protect, ensuring the original method is reinstated when uninstrumenting to prevent issues from multiple instrumentations.

python/frameworks/groq/traceai_groq/__init__.py (1)
59-64: 🛠️ Refactor suggestion
Missing uninstrumentation logic for ProtectClient.protect
The code correctly saves the original method and wraps it with the `GuardrailProtectWrapper`. However, there's no corresponding logic in the `_uninstrument` method to restore the original `ProtectClient.protect` method, which could lead to memory leaks or unexpected behavior if the instrumentation is applied multiple times.

```diff
 def _uninstrument(self, **kwargs: Any) -> None:
     groq_module = import_module("groq.resources.chat.completions")
     if self._original_completions_create is not None:
         groq_module.Completions.create = self._original_completions_create
     if self._original_async_completions_create is not None:
         groq_module.AsyncCompletions.create = (
             self._original_async_completions_create
         )
+
+    # Restore original ProtectClient.protect method
+    if hasattr(self, "_original_protect"):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None
```

🤖 Prompt for AI Agents
In python/frameworks/groq/traceai_groq/__init__.py around lines 59 to 64, the code saves the original ProtectClient.protect method and wraps it but lacks uninstrumentation logic. To fix this, add code in the _uninstrument method to restore ProtectClient.protect to self._original_protect, ensuring the original method is reinstated when uninstrumenting to prevent memory leaks or unexpected behavior.

python/frameworks/instructor/traceai_instructor/__init__.py (1)
50-55: 🛠️ Refactor suggestion
Missing uninstrumentation logic for ProtectClient.protect
The code correctly saves the original method and wraps it with the `GuardrailProtectWrapper`. However, there's no corresponding logic in the `_uninstrument` method to restore the original `ProtectClient.protect` method, which could lead to unexpected behavior if the instrumentation is applied multiple times or if other code expects the original behavior after uninstrumentation.

```diff
 def _uninstrument(self, **kwargs: Any) -> None:
     if self._original_patch is not None:
         instructor_module = import_module("instructor")
         instructor_module.patch = self._original_patch  # type: ignore[attr-defined]
         self._original_patch = None
     if self._original_handle_response_model is not None:
         patch_module = import_module("instructor.patch")
         patch_module.handle_response_model = self._original_handle_response_model  # type: ignore[attr-defined]
         self._original_handle_response_model = None
+
+    # Restore original ProtectClient.protect method
+    if hasattr(self, "_original_protect"):
+        from fi.evals import ProtectClient
+        ProtectClient.protect = self._original_protect
+        self._original_protect = None
```

🤖 Prompt for AI Agents
In python/frameworks/instructor/traceai_instructor/__init__.py around lines 50 to 55, the code wraps ProtectClient.protect but does not restore the original method in the _uninstrument method. To fix this, add logic in the _uninstrument method to reassign ProtectClient.protect back to self._original_protect, ensuring the original method is restored when uninstrumenting.

python/frameworks/langchain/examples/tool_calling_agent.py (1)
272-273: ⚠️ Potential issue
Missing run_guardrail_check tool
The system prompt instructs the agent to "run the run_guardrail_check tool" before providing a final answer, but this tool is not defined in the code. This will cause the agent to attempt to use a non-existent tool.
Either implement the missing tool (a hypothetical sketch follows this comment) or modify the prompt:

```diff
-    "You are a helpful AI assistant. Get information using tools. "
-    "Before providing a final answer, run the run_guardrail_check tool on your proposed answer.",
+    "You are a helpful AI assistant. Use the provided tools to get information and provide helpful answers.",
```

🤖 Prompt for AI Agents
In python/frameworks/langchain/examples/tool_calling_agent.py around lines 272 to 273, the system prompt references a run_guardrail_check tool that is not defined in the code, causing errors when the agent tries to use it. To fix this, either implement the run_guardrail_check tool with the necessary functionality and add it to the agent's tool list, or remove or modify the prompt text to exclude the instruction to run this undefined tool.
python/frameworks/langchain/traceai_langchain/__init__.py (2)
54-59: 🛠️ Refactor suggestion
`ProtectClient.protect` is wrapped but never un-wrapped
You save the original method in `_original_protect`, yet `_uninstrument` does not restore it. If `LangChainInstrumentor()._uninstrument()` is called (e.g. during hot-reload or tests) the method will stay wrapped, leading to duplicate spans or unexpected behaviour.

```diff
 def _uninstrument(self, **kwargs: Any) -> None:
     langchain_core.callbacks.BaseCallbackManager.__init__ = self._original_callback_manager_init  # type: ignore
     self._original_callback_manager_init = None  # type: ignore
     self._tracer = None
     self.fi_tracer = None
+
+    # Restore the original ProtectClient.protect implementation
+    from fi.evals import ProtectClient
+    ProtectClient.protect = self._original_protect
```

🤖 Prompt for AI Agents
In python/frameworks/langchain/traceai_langchain/__init__.py around lines 54 to 59, the original ProtectClient.protect method is saved but not restored in the _uninstrument method. To fix this, update the _uninstrument method to restore ProtectClient.protect to self._original_protect, ensuring the method is properly unwrapped during uninstrumentation to prevent duplicate spans or unexpected behavior.
39-43: ⚠️ Potential issue
Avoid shadowing the already-imported `FITracer`
`FITracer` is imported at line 6 from `fi_instrumentation`, then re-imported here from `fi_instrumentation.instrumentation._tracers`. Besides triggering Ruff F811, it is confusing for readers and may even refer to a different class if the two modules diverge.

```diff
-from fi_instrumentation.instrumentation._tracers import FITracer
+# Re-use the FITracer that is already imported on line 6
```

📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
from traceai_langchain._tracer import FiTracer as LangChainFiTracer
from fi.evals import ProtectClient

# Re-use the FITracer that is already imported on line 6
self.fi_tracer = FITracer(
```

🧰 Tools
🪛 Ruff (0.11.9)
41-41: Redefinition of unused `FITracer` from line 6 (F811)
🤖 Prompt for AI Agents
In python/frameworks/langchain/traceai_langchain/__init__.py around lines 39 to 43, the FITracer class is imported twice from different modules, causing shadowing and potential confusion. Remove the second import of FITracer from fi_instrumentation.instrumentation._tracers and use the original import from fi_instrumentation consistently throughout the file to avoid shadowing and maintain clarity.

python/fi_instrumentation/instrumentation/_protect_wrapper.py (1)
136-139: 🛠️ Refactor suggestion
Incorrect value passed to `_get_raw_input`
`_get_raw_input` is intended to capture the inputs sent to Guardrails, but the wrapper passes all kwargs instead. This duplicates information already captured and can leak unrelated parameters.

```diff
-            _get_raw_input(kwargs),
+            _get_raw_input(inputs),
```

📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
_get_raw_input(inputs),
([(GUARDRAIL_ACTION, action)] if action is not None else []),
([(GUARDRAIL_REASON_FLAG, reason_flag)] if reason_flag is not None else []),
)
```

🤖 Prompt for AI Agents
In python/fi_instrumentation/instrumentation/_protect_wrapper.py around lines 136 to 139, the call to _get_raw_input incorrectly passes all kwargs instead of only the inputs intended for Guardrails. To fix this, modify the argument to _get_raw_input to include only the specific input parameters relevant to Guardrails, avoiding passing unrelated or duplicate data. This ensures only the intended inputs are captured and prevents leaking unrelated parameters.
Pull Request
Description
When some spans are never ended, they don't get exported. This leaves our backend with missing spans and an incomplete span tree, so we now export all of these still-active spans when the program terminates.
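A minimal sketch of the idea, assuming an OpenTelemetry SDK span processor that keeps a reference to every started-but-not-ended span and ends them at shutdown; the class and attribute names mirror the `SimpleSpanProcessor` discussed in the review above, but the exact implementation in this PR may differ.

```python
import threading
from opentelemetry.sdk.trace.export import SimpleSpanProcessor as _SimpleSpanProcessor


class SimpleSpanProcessor(_SimpleSpanProcessor):
    """Tracks active spans so that spans never ended by user code still get exported."""

    def __init__(self, span_exporter):
        super().__init__(span_exporter)
        self._active_spans = {}
        self._lock = threading.Lock()

    def on_start(self, span, parent_context=None):
        with self._lock:
            self._active_spans[span.context.span_id] = span

    def on_end(self, span):
        with self._lock:
            self._active_spans.pop(span.context.span_id, None)
        super().on_end(span)  # exports the span

    def shutdown(self):
        # End anything still open at process termination so it is exported too.
        with self._lock:
            leaked = list(self._active_spans.values())
        for span in leaked:
            if span.is_recording():
                span.end()
        super().shutdown()
```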
Summary by CodeRabbit
New Features
Dependencies
Bug Fixes
Documentation