Describe the Problem
When configuring a strict input/output filter in the orchestration module config and then running a chat completion with the `OrchestrationClient`, the exact reason why the completion failed in the input/output filtering step is not reflected in the thrown `OrchestrationClientException`. The exception provides only an error code and a message:

```
Request failed with status 400 Bad Request and error message: '400 - Filtering Module - Input Filter: Prompt filtered due to safety violations. Please modify the prompt and try again.'
```
Inspecting the response of the service itself reveals that the AI Core service actually provides more information to the SDK:
```json
{
  "request_id": "<request_id>",
  "code": 400,
  "message": "400 - Filtering Module - Input Filter: Prompt filtered due to safety violations. Please modify the prompt and try again.",
  "location": "Filtering Module - Input Filter",
  "module_results": {
    "templating": [
      {
        "content": "<prompt>",
        "role": "user"
      }
    ],
    "input_masking": {
      "message": "Input to LLM is masked successfully.",
      "data": {
        "masked_template": "[{\"content\": \"<masked_prompt>\", \"role\": \"user\"}]"
      }
    },
    "input_filtering": {
      "message": "Prompt filtered due to safety violations. Please modify the prompt and try again.",
      "data": {
        "azure_content_safety": {
          "Hate": 0
        },
        "llama_guard_3_8b": {
          "hate": true
        }
      }
    }
  }
}
```
Could you please add the failure reasons (in this case, triggering the `hate` filter of `llama_guard_3_8b`) to the exception so that consumers of the AI SDK can decide how to react to them?
Propose a Solution
When catching an `OrchestrationClientException`, please attach the failure reasons of the affected modules to the exception, so that e.g. the following works:
```java
try {
    // ...
} catch (OrchestrationClientException e) {
    Map<String, FilteringModuleResult> filteringResult = e.getFilteringResult();
    FilteringModuleResult llamaGuardResult = filteringResult.get("llama_guard_3_8b");
    boolean hateFilterTriggered = llamaGuardResult.get("hate");
}
```
`FilteringModuleResult` could roughly resemble a Map.
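To illustrate the proposed shape, here is a minimal sketch of such a Map-like `FilteringModuleResult`, populated with the values from the `input_filtering.data` section of the sample response above. Everything here (the class, its `get` method, and the hard-coded values) is an assumption of this proposal, not existing SDK API:

```java
import java.util.Map;

// Sketch only: a thin Map-like wrapper over one filter's result data,
// e.g. "llama_guard_3_8b" -> { "hate" -> true }. Not existing SDK API.
class FilteringModuleResult {
    private final Map<String, Object> values;

    FilteringModuleResult(Map<String, Object> values) {
        this.values = values;
    }

    // True if the given category (e.g. "hate") was flagged by this filter.
    boolean get(String category) {
        return Boolean.TRUE.equals(values.get(category));
    }
}

public class FilteringResultSketch {
    public static void main(String[] args) {
        // Hypothetical map a consumer would read off the exception,
        // mirroring "input_filtering.data" from the sample response.
        Map<String, FilteringModuleResult> filteringResult = Map.of(
            "azure_content_safety",
            new FilteringModuleResult(Map.<String, Object>of("Hate", 0)),
            "llama_guard_3_8b",
            new FilteringModuleResult(Map.<String, Object>of("hate", true)));

        boolean hateFilterTriggered =
            filteringResult.get("llama_guard_3_8b").get("hate");
        System.out.println(hateFilterTriggered); // prints "true"
    }
}
```

With this shape, a consumer can react per filter and per category without parsing the exception message string.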
Describe Alternatives
No response
Affected Development Phase
Development
Impact
Inconvenience
Timeline
No response