
mmlu

Here are 13 public repositories matching this topic...

Dataset management library for ML experiments: loaders for SciFact, FEVER, GSM8K, HumanEval, MMLU, TruthfulQA, and HellaSwag; git-like versioning with lineage tracking; transformation pipelines; quality validation with schema checks and duplicate detection; GenStage streaming for large datasets. Built for reproducible AI research.

  • Updated Dec 6, 2025
  • Elixir
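The quality-validation features named above (schema checks and duplicate detection) can be sketched generically. This is not the library's actual API — the library is written in Elixir, and every name below (`SCHEMA`, `validate_schema`, `find_duplicates`, the toy records) is illustrative:

```python
import hashlib

# Illustrative record schema: each example must have these fields and types.
SCHEMA = {"question": str, "answer": str}

def validate_schema(record):
    """Return True if the record has every required field with the right type."""
    return all(
        field in record and isinstance(record[field], expected)
        for field, expected in SCHEMA.items()
    )

def find_duplicates(records):
    """Return indices of records whose content hash was already seen."""
    seen, dupes = set(), []
    for i, record in enumerate(records):
        # Hash the schema fields in a fixed order so identical content
        # always produces an identical key.
        key = hashlib.sha256(
            "\x1f".join(str(record.get(f, "")) for f in sorted(SCHEMA)).encode()
        ).hexdigest()
        if key in seen:
            dupes.append(i)
        seen.add(key)
    return dupes

data = [
    {"question": "2+2?", "answer": "4"},
    {"question": "2+2?", "answer": "4"},   # exact duplicate of the first record
    {"question": "Capital of France?"},    # missing the "answer" field
]
print([validate_schema(r) for r in data])  # [True, True, False]
print(find_duplicates(data))               # [1]
```

Content hashing (rather than object identity) is what makes duplicate detection robust across separate loads of the same dataset.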

Enterprise-grade LLM evaluation framework | Multi-model benchmarking, honest dashboards, system profiling | Academic metrics: MMLU, TruthfulQA, HellaSwag | Zero fake data | PyPI: llm-benchmark-toolkit | Blog: https://dev.to/nahuelgiudizi/building-an-honest-llm-evaluation-framework-from-fake-metrics-to-real-benchmarks-2b90

  • Updated Dec 5, 2025
  • Python
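The academic metrics listed above (MMLU, TruthfulQA, HellaSwag) all reduce, at their core, to multiple-choice accuracy. A minimal sketch of that scoring loop follows; it is not the `llm-benchmark-toolkit` API, and the toy questions and the stub model are invented for illustration:

```python
# Toy multiple-choice items in the MMLU style: a prompt, a list of
# choices, and the index of the gold answer. Not real benchmark data.
QUESTIONS = [
    {"prompt": "2 + 2 = ?", "choices": ["3", "4", "5", "6"], "answer": 1},
    {"prompt": "H2O is commonly called?", "choices": ["salt", "water", "air", "gold"], "answer": 1},
]

def evaluate(model, questions):
    """Return accuracy: the fraction of questions where the model
    picks the gold choice index."""
    correct = sum(model(q["prompt"], q["choices"]) == q["answer"] for q in questions)
    return correct / len(questions)

# Stub "model" that always picks the first choice, as a trivial baseline.
baseline = lambda prompt, choices: 0
print(evaluate(baseline, QUESTIONS))  # 0.0
```

A real harness would replace the stub with a call that scores each choice under the model (e.g. by log-likelihood) and returns the argmax, but the accuracy computation stays the same.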
