Implement the batch request feature #11
Open
chakravarthik27 wants to merge 5 commits into main
pratacosmin approved these changes on Apr 30, 2026
This pull request adds support for batch processing of requests in the benchmarking system, enabling more efficient execution when the underlying model client supports batch APIs (notably for OpenAI endpoints). The changes introduce a `batch_size` parameter throughout the execution and CLI layers, implement batch request logic in the executor, and add batch request support to the relevant model clients. This allows multiple requests to be grouped and sent together, reducing overhead and improving performance.
Batch execution support in benchmarking:

- Added a `batch_size` parameter to `ExecutionSpec`, the executor, and the CLI (`--batch-size`), allowing users to specify how many requests to process together when supported. The executor now processes requests in batches if `batch_size` is set; a minimal sketch follows this list. (src/helm/benchmark/executor.py [1] [2] [3] [4]; src/helm/benchmark/run.py [5] [6] [7] [8])
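As a rough illustration of how a `batch_size` setting can flow from the execution spec into the executor loop, here is a minimal sketch. The `ExecutionSpec` and `execute` stand-ins are simplified for this example and are not HELM's actual classes; only the `batch_size` field and the chunking pattern mirror what the PR describes.

```python
# Minimal sketch (not HELM's actual code): thread a batch_size option from
# the spec down to the executor loop, chunking requests when it is set.
from dataclasses import dataclass
from typing import List, Optional


@dataclass(frozen=True)
class ExecutionSpec:
    # batch_size=None keeps the existing one-request-at-a-time path.
    batch_size: Optional[int] = None


def execute(spec: ExecutionSpec, requests: List[str]) -> List[str]:
    if not spec.batch_size:
        # Fall back to per-request execution when batching is off or unsupported.
        return [f"result({r})" for r in requests]
    results: List[str] = []
    # Group requests into chunks of at most batch_size, one batched call per chunk.
    for start in range(0, len(requests), spec.batch_size):
        batch = requests[start:start + spec.batch_size]
        results.extend(f"result({r})" for r in batch)
    return results


print(execute(ExecutionSpec(batch_size=3), [f"req{i}" for i in range(7)]))
```

On the CLI side, the same value would arrive through the new `--batch-size` flag.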
Batch request API in model clients:

- Added a `make_batch_request` method to the base `Client` class, with a default implementation that raises `NotImplementedError`. (src/helm/clients/client.py R23-R26)
- Implemented `make_batch_request` in `AutoClient`, which delegates batch requests to the appropriate underlying client and includes retry logic; see the sketch after this list. (src/helm/clients/auto_client.py R135-R160)
- Added batch request support to `OpenAIClient` and `OpenAIResponsesClient`, including logic to prepare JSONL files, upload them, poll for completion, and parse batch results. (src/helm/clients/openai_client.py [1]; src/helm/clients/openai_responses_client.py [2] [3])
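The shape of that client-side API might look roughly like the sketch below. The class bodies, the `EchoClient` stand-in, and the backoff policy are illustrative assumptions, not HELM's actual implementation; only the `make_batch_request` name, the `NotImplementedError` default, and the delegate-with-retries behavior come from the PR description.

```python
# Hedged sketch: a base Client whose default make_batch_request refuses, and
# an AutoClient-style wrapper that delegates to an underlying client with
# simple retries and exponential backoff.
import time
from typing import Dict, List


class Client:
    def make_batch_request(self, requests: List[Dict]) -> List[Dict]:
        # Default: clients without a batch API opt out explicitly.
        raise NotImplementedError(f"{type(self).__name__} does not support batch requests")


class EchoClient(Client):
    """Toy stand-in for a client (e.g. OpenAI) that supports batching."""
    def make_batch_request(self, requests: List[Dict]) -> List[Dict]:
        return [{"request": r, "completion": "ok"} for r in requests]


class AutoClient(Client):
    """Routes batch requests to the underlying client, retrying on failure."""
    def __init__(self, client: Client, max_retries: int = 3) -> None:
        self._client = client
        self._max_retries = max_retries

    def make_batch_request(self, requests: List[Dict]) -> List[Dict]:
        for attempt in range(self._max_retries):
            try:
                return self._client.make_batch_request(requests)
            except NotImplementedError:
                raise  # retrying cannot help a client with no batch API at all
            except Exception:
                time.sleep(2 ** attempt)  # back off before the next attempt
        raise RuntimeError(f"batch request failed after {self._max_retries} attempts")


print(AutoClient(EchoClient()).make_batch_request([{"prompt": "hi"}]))
```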
OpenAI batch API integration:

- Batch requests are submitted through the OpenAI batch API, and the downloaded batch output is parsed into `RequestResult` objects; a sketch of the round trip follows. (src/helm/clients/openai_client.py [1]; src/helm/clients/openai_responses_client.py [2])
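For reference, a self-contained sketch of that round trip against the public OpenAI Batch API: build a JSONL payload, upload it, poll the batch until it finishes, then parse each output line. The `RequestResult` dataclass, the `run_batch` helper, the model name, and the polling interval are assumptions for illustration; the SDK calls (`files.create`, `batches.create`, `batches.retrieve`, `files.content`) are the documented openai-python batch endpoints.

```python
# Illustrative sketch of the OpenAI Batch API round trip (not HELM's code):
# prepare JSONL requests, upload, poll to completion, parse results.
import json
import time
from dataclasses import dataclass
from typing import List

from openai import OpenAI


@dataclass
class RequestResult:
    """Simplified stand-in for HELM's RequestResult."""
    custom_id: str
    completion: str


def run_batch(prompts: List[str], model: str = "gpt-4o-mini") -> List[RequestResult]:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # 1. One JSONL line per request, each keyed by a custom_id.
    lines = [
        json.dumps({
            "custom_id": f"request-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {"model": model, "messages": [{"role": "user", "content": p}]},
        })
        for i, p in enumerate(prompts)
    ]
    input_file = client.files.create(
        file=("batch.jsonl", "\n".join(lines).encode("utf-8")), purpose="batch"
    )

    # 2. Create the batch job against the uploaded file.
    batch = client.batches.create(
        input_file_id=input_file.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )

    # 3. Poll until the batch reaches a terminal state.
    while batch.status in ("validating", "in_progress", "finalizing"):
        time.sleep(30)
        batch = client.batches.retrieve(batch.id)
    if batch.status != "completed":
        raise RuntimeError(f"batch ended with status {batch.status}")

    # 4. Download the output file and parse each line into a result object.
    results: List[RequestResult] = []
    for line in client.files.content(batch.output_file_id).text.splitlines():
        record = json.loads(line)
        body = record["response"]["body"]
        results.append(RequestResult(
            custom_id=record["custom_id"],
            completion=body["choices"][0]["message"]["content"],
        ))
    return results
```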
Minor improvements:

- Miscellaneous minor changes in src/helm/benchmark/executor.py [1] and src/helm/clients/auto_client.py [2].
- Added a `prompt_cache_retention` field to requests in `OpenAIResponsesClient` for batch compatibility. (src/helm/clients/openai_responses_client.py R115)

These changes collectively enable efficient batch processing throughout the benchmarking system, especially for OpenAI models, reducing request overhead and improving throughput.