Evaluation package for benchmarking agentic AIs from various sources and frameworks, producing statistical results that can be compared across different use cases and datasets.
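The description above does not document the package's actual API. Purely as an illustration of the idea, here is a minimal Python sketch of framework-agnostic benchmarking: an agent is treated as a plain callable, scored on each task in a dataset, and the per-task scores are reduced to summary statistics that can be compared across datasets. Every name here (`run_benchmark`, the agent/dataset shapes) is hypothetical, not taken from the package.

```python
import statistics

def run_benchmark(agent, dataset):
    """Score an agent on each task in a dataset and summarize the results.

    Hypothetical interface: `agent` is any callable mapping a task input
    to an output, and `dataset` is an iterable of (task_input, scorer)
    pairs, where `scorer` maps the agent's output to a float score.
    """
    scores = [scorer(agent(task_input)) for task_input, scorer in dataset]
    return {
        "n_tasks": len(scores),
        "mean": statistics.mean(scores),
        # stdev needs at least two samples; report 0.0 otherwise.
        "stdev": statistics.stdev(scores) if len(scores) > 1 else 0.0,
        "min": min(scores),
        "max": max(scores),
    }

if __name__ == "__main__":
    # Toy agent and dataset: the agent echoes its prompt, and each
    # scorer checks the output for an expected substring.
    toy_agent = lambda prompt: f"answer to: {prompt}"
    toy_dataset = [
        ("what is 2+2?", lambda out: float("2+2" in out)),
        ("name a prime number", lambda out: float("prime" in out)),
    ]
    print(run_benchmark(toy_agent, toy_dataset))
```

Because the statistics are computed over scores rather than raw outputs, the same summary can in principle be produced for any agent and any dataset, which is what makes results comparable across use cases.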