Conversation
See https://github.com/ggml-org/llama.cpp/tree/b7245. Signed-off-by: Dorin Geman <dorin.geman@docker.com>
Sorry @doringeman, your pull request is larger than the review limit of 150000 diff characters
Summary of Changes

Hello @doringeman, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request primarily focuses on updating the vendored llama.cpp sources to b7245.
Code Review
This pull request is a significant refactoring of the server-side code, breaking it down into a more modular structure with separate files for handling HTTP requests, task queuing, model management, and common utilities. This greatly improves the organization and maintainability of the server. A major new feature is the introduction of a router mode for managing multiple model instances, which are spawned as subprocesses.
My review focuses on a few areas for improvement:
- Correcting a redundant code block.
- Improving build system robustness in CMake.
- Ensuring consistent exception handling for input validation.
- Noting a potential race condition in port allocation for subprocesses (see the sketch after this list).
Overall, this is a very positive change that modernizes the server architecture.
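On the port-allocation point above, the usual "probe a free port, then pass it to the child" pattern is inherently racy: the port is free when probed but may be taken before the subprocess binds it. A minimal, hypothetical C++ sketch of that pattern follows (not code from this PR; `probe_free_port` is an illustrative name, assuming POSIX sockets):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

// Ask the kernel for a free TCP port by binding to port 0, then release it.
static int probe_free_port() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port        = 0;  // let the kernel pick a free port
    if (bind(fd, (sockaddr *) &addr, sizeof(addr)) != 0) {
        close(fd);
        return -1;
    }
    socklen_t len = sizeof(addr);
    getsockname(fd, (sockaddr *) &addr, &len);
    int port = ntohs(addr.sin_port);
    close(fd);  // race window opens here: any process may now grab the port
    return port;
}

int main() {
    int port = probe_free_port();
    printf("probed port %d\n", port);
    // Between close() above and the spawned subprocess binding this port,
    // another process can claim it. Common mitigations: have the child bind
    // port 0 itself and report the actual port back, or retry on EADDRINUSE.
    return 0;
}
```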
```cmake
    utils.hpp
    httplib.h
)
file(GLOB TARGET_SRCS "*.cpp")
```
Using file(GLOB ...) to collect source files is generally discouraged in CMake. If you add or remove a source file, the build system won't automatically detect the change and re-run CMake, which can lead to build issues. It's more robust to list the source files explicitly.
```cmake
set(TARGET_SRCS
    server.cpp
    server-common.cpp
    server-context.cpp
    server-http.cpp
    server-models.cpp
    server-queue.cpp
    server-task.cpp
)
```
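As an aside, if globbing is kept for convenience, CMake 3.12+ supports `CONFIGURE_DEPENDS`, which re-runs the glob check at build time. This is only a partial mitigation (the CMake documentation itself cautions that it is not guaranteed to work with all generators), so an explicit list remains the safer choice:

```cmake
# Partial mitigation only; an explicit source list is still more reliable.
file(GLOB TARGET_SRCS CONFIGURE_DEPENDS "*.cpp")
```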
See https://github.com/ggml-org/llama.cpp/tree/b7245.
Had to deal with changes from several more PRs. I can look again and list them here if anybody is interested.
An example is ggml-org/llama.cpp@037c62f, after which I removed `httplib.h` and added `cpp-httplib` to `target_link_libraries`.
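For illustration, the CMake side of such a change might look like the following sketch, assuming a `cpp-httplib` target is exposed by the build (the target names here are illustrative, not copied from the PR):

```cmake
# Link the cpp-httplib package target instead of vendoring httplib.h.
target_link_libraries(${TARGET} PRIVATE cpp-httplib)
```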