
llamacpp: bump llama.cpp (b7245) #471

Merged
doringeman merged 1 commit into main from bump-llamacpp on Dec 3, 2025

Conversation

@doringeman (Contributor)

See https://github.com/ggml-org/llama.cpp/tree/b7245.

Had to deal with changes from more PRs. I can look again and list them here if anybody is interested.
An example is ggml-org/llama.cpp@037c62f, after which I removed httplib.h and added cpp-httplib to target_link_libraries.
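A minimal sketch of the kind of CMake change described above; the target name is an assumption, not the actual diff:

```cmake
# Before: the server shipped a vendored httplib.h alongside its sources.
# After: link against the cpp-httplib target instead of vendoring the header.
target_link_libraries(${TARGET} PRIVATE cpp-httplib)
```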

Signed-off-by: Dorin Geman <dorin.geman@docker.com>

@sourcery-ai sourcery-ai bot left a comment

Sorry @doringeman, your pull request is larger than the review limit of 150000 diff characters

@gemini-code-assist (Contributor)

Summary of Changes

Hello @doringeman, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly.

This pull request primarily focuses on updating the llama.cpp submodule to its latest version and performing a significant refactoring of the server's codebase. The refactoring effort aims to modularize the server components, making the code more organized and easier to maintain. Additionally, it introduces compatibility with the Anthropic API and improves the robustness of API parameter validation.
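The Anthropic-to-OpenAI conversion mentioned above can be sketched as follows. This is a minimal illustration with hypothetical struct and function names, not the PR's actual code; the key difference it models is that Anthropic's Messages API carries the system prompt in a separate field, while OpenAI's Chat Completions format expects it as the first message in the list.

```cpp
#include <string>
#include <vector>

// Hypothetical request shapes, reduced to the fields relevant here.
struct Message {
    std::string role;
    std::string content;
};

struct AnthropicRequest {
    std::string model;
    std::string system;            // Anthropic: system prompt is a top-level field
    std::vector<Message> messages;
    int max_tokens;
};

struct OpenAIChatRequest {
    std::string model;
    std::vector<Message> messages; // OpenAI: system prompt becomes the first message
    int max_tokens;
};

// Convert an Anthropic Messages request into OpenAI Chat Completions form.
OpenAIChatRequest to_openai(const AnthropicRequest &req) {
    OpenAIChatRequest out;
    out.model = req.model;
    out.max_tokens = req.max_tokens;
    if (!req.system.empty()) {
        out.messages.push_back({"system", req.system});
    }
    for (const auto &m : req.messages) {
        out.messages.push_back(m);
    }
    return out;
}
```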

Highlights

  • llama.cpp Submodule Update: The core llama.cpp library has been updated to a newer version (commit 37adc9c6ba6057bfe7c036c201abe85471d854a1), bringing in the latest upstream changes and features, including those from PR #13901.
  • Extensive Server Code Refactoring: The server-side code has undergone a major architectural refactoring. A large utils.hpp file has been split into multiple dedicated header and source files (server-common, server-context, server-http, server-models, server-queue, server-task), significantly improving modularity, organization, and maintainability.
  • HTTP Library Integration Streamlined: The cpp-httplib library is now explicitly linked and managed via CMake, replacing previous manual inclusions and streamlining the HTTP server setup and dependencies.
  • Anthropic API Compatibility Added: New functionality has been introduced to convert Anthropic Messages API requests into the OpenAI Chat Completions API format, enhancing the server's compatibility with different large language model API standards.
  • Improved Error Handling: Error handling within API parameter parsing has been refined, with std::runtime_error instances being replaced by std::invalid_argument for more specific and appropriate error reporting in cases of invalid input.
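The distinction drawn in the last bullet can be sketched as follows (a hypothetical helper, not the PR's actual code): client-supplied values that fail validation raise std::invalid_argument, which a handler can map to HTTP 400, while std::runtime_error stays reserved for genuine internal failures mapped to HTTP 500.

```cpp
#include <stdexcept>
#include <string>

// Hypothetical parameter parser: bad client input throws
// std::invalid_argument rather than std::runtime_error, so the
// HTTP layer can distinguish 400-class from 500-class errors.
int parse_n_predict(const std::string &raw) {
    int value;
    try {
        value = std::stoi(raw);
    } catch (const std::exception &) {
        throw std::invalid_argument("n_predict must be an integer: " + raw);
    }
    if (value < -1) {
        throw std::invalid_argument("n_predict must be >= -1");
    }
    return value;
}
```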


@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request is a significant refactoring of the server-side code, breaking it down into a more modular structure with separate files for handling HTTP requests, task queuing, model management, and common utilities. This greatly improves the organization and maintainability of the server. A major new feature is the introduction of a router mode for managing multiple model instances, which are spawned as subprocesses.

My review focuses on a few areas for improvement:

  • Correcting a redundant code block.
  • Improving build system robustness in CMake.
  • Ensuring consistent exception handling for input validation.
  • Noting a potential race condition in port allocation for subprocesses.

Overall, this is a very positive change that modernizes the server architecture.
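On the port-allocation point: the usual race is picking a port that looks free and binding it later, by which time another process may have claimed it. A common mitigation, sketched below with POSIX sockets (an illustration, not the PR's code), is to bind to port 0 so the kernel assigns a free port atomically, then read the chosen port back with getsockname.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Bind to port 0 so the kernel picks a free port atomically, avoiding the
// check-then-bind race. Returns the assigned port (or -1 on error) and hands
// back the bound fd, which should stay open until the subprocess takes over.
int reserve_ephemeral_port(int *out_fd) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;  // 0 = let the kernel choose

    if (bind(fd, reinterpret_cast<sockaddr *>(&addr), sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    socklen_t len = sizeof(addr);
    if (getsockname(fd, reinterpret_cast<sockaddr *>(&addr), &len) < 0) {
        close(fd);
        return -1;
    }

    *out_fd = fd;
    return ntohs(addr.sin_port);
}
```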

Inline diff context (CMakeLists.txt): the vendored utils.hpp and httplib.h entries were removed, and sources are now collected with:

file(GLOB TARGET_SRCS "*.cpp")

Severity: medium

Using file(GLOB ...) to collect source files is generally discouraged in CMake. If you add or remove a source file, the build system won't automatically detect the change and re-run CMake, which can lead to build issues. It's more robust to list the source files explicitly.

set(TARGET_SRCS
    server.cpp
    server-common.cpp
    server-context.cpp
    server-http.cpp
    server-models.cpp
    server-queue.cpp
    server-task.cpp
)

@ericcurtin ericcurtin mentioned this pull request Dec 3, 2025
doringeman merged commit 4044682 into main on Dec 3, 2025
8 checks passed
doringeman deleted the bump-llamacpp branch on December 3, 2025 at 14:24