feature/PAAL-212-trace-ids-in-testworkflow#11
Merged
qa-jil-kamerling merged 12 commits into main from feature/PAAL-212-trace-ids-in-testworkflow on Dec 15, 2025
Conversation
by adding an OTel mock and updating parameters
g3force
reviewed
Dec 12, 2025
g3force
reviewed
Dec 15, 2025
felixk101
approved these changes
Dec 15, 2025
Comment on lines +49 to +81
```shell
# Required environment variables for local testing
export OPENAI_API_BASE="http://localhost:11001"  # AI Gateway endpoint
export GOOGLE_API_KEY="your-api-key"             # Required for Gemini models
```

### Running the 4-Phase Pipeline Locally

```shell
# Phase 1: Download and convert dataset to RAGAS format
uv run python3 scripts/setup.py "http://localhost:11020/dataset.csv"

# Phase 2: Execute queries through agent via A2A protocol
uv run python3 scripts/run.py "http://localhost:11010"

# Phase 3: Evaluate responses using RAGAS metrics
uv run python3 scripts/evaluate.py gemini-2.5-flash-lite "faithfulness answer_relevancy"

# Phase 4: Publish metrics to OTLP endpoint
uv run python3 scripts/publish.py "workflow-name"
```

### Testkube Execution

```shell
# Run complete evaluation workflow in Kubernetes
kubectl testkube run testworkflow ragas-evaluation-workflow \
  --config datasetUrl="http://data-server.data-server:8000/dataset.csv" \
  --config agentUrl="http://weather-agent.sample-agents:8000" \
  --config metrics="nv_accuracy context_recall" \
  --config workflowName="Test-Run" \
  -n testkube

# Watch workflow execution
```
Contributor
Maybe you'd like to simply use @README.md here
Contributor
Author
Thanks, but we decided to keep the Claude.md files and READMEs separate
```python
# Save to file
with open(output_file, "w") as f:
    json.dump(asdict(evaluation_scores), f, indent=2)
```
Contributor
I've had problems with this code before: sometimes NaN gets written as a number into the JSON output file, which is not valid JSON. 😕
In my own code I had to add the following to publish.py, which is not pretty.
```python
import math
from typing import Any, TypeGuard

def _is_metric_value(value: Any) -> TypeGuard[int | float]:
    """Check if a value is a valid metric score (numeric and not NaN)."""
    if not isinstance(value, (int, float)):
        return False
    if isinstance(value, float) and math.isnan(value):
        return False
    return True
```
(That is not part of this ticket, though.)
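A possible fix on the writing side would be to drop NaN scores before serializing and to pass `allow_nan=False`, so `json.dump` raises instead of emitting the non-standard `NaN` literal. This is only a sketch, not part of this PR; `sanitize_scores` is a hypothetical helper, and the score names stand in for whatever publish.py actually produces:

```python
import json
import math

def sanitize_scores(scores: dict) -> dict:
    """Drop entries whose value is NaN (or otherwise non-numeric) before JSON export."""
    return {
        name: value
        for name, value in scores.items()
        if isinstance(value, (int, float))
        and not (isinstance(value, float) and math.isnan(value))
    }

scores = {"faithfulness": 0.91, "answer_relevancy": float("nan")}
clean = sanitize_scores(scores)
# allow_nan=False makes json.dumps raise ValueError if a NaN slips through,
# instead of silently writing invalid JSON.
payload = json.dumps(clean, indent=2, allow_nan=False)
print(payload)
```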
FYI @fmallmann
OTEL Tracing Implementation

Adds OpenTelemetry tracing to Phase 2 (test run) and Phase 3 (evaluation), enabling correlation between test execution and agent behavior.

Dependencies added: opentelemetry-api, opentelemetry-sdk, opentelemetry-exporter-otlp-proto-http, opentelemetry-instrumentation-httpx
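The correlation rests on the test run sending a W3C Trace Context `traceparent` header with its A2A requests; the opentelemetry-instrumentation-httpx dependency listed above injects it automatically. As a rough illustration of what that header looks like on the wire (a stdlib-only sketch; `make_traceparent` is a hypothetical helper, not code from this PR):

```python
import secrets

def make_traceparent() -> str:
    """Build a W3C Trace Context 'traceparent' header: version-traceid-spanid-flags."""
    trace_id = secrets.token_hex(16)  # 128-bit trace id, 32 hex chars
    span_id = secrets.token_hex(8)    # 64-bit parent span id, 16 hex chars
    return f"00-{trace_id}-{span_id}-01"  # flags 01 = sampled

header = make_traceparent()
print(header)
```

An agent that propagates this header into its own spans lets the evaluation phase look up exactly which agent traces belong to which test query.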