refactor(core): Introduce LlmUtilityService and promptIdContext #7952
Conversation
Summary of Changes
Hello @abhipatel12, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly refactors how utility-focused LLM interactions are handled and how prompt identifiers are managed within the application. By introducing a dedicated service for stateless LLM operations and a context-based mechanism for prompt ID propagation, the changes enhance the system's architecture, making it more modular, testable, and robust for consistent telemetry.
Highlights
- Centralized LLM Utility Calls: Introduced a new `LlmUtilityService` to centralize stateless, utility-focused LLM calls, such as JSON generation, improving modularity and testability by decoupling these operations from the main `GeminiClient`.
- Consistent Prompt ID Propagation: Implemented a new `promptIdContext` using Node.js `AsyncLocalStorage` to consistently propagate prompt IDs across the call stack (see the sketch below). This simplifies function signatures and makes prompt IDs implicitly available for logging and telemetry.
- Refactored Smart-Edit Self-Correction: The smart-edit tool's self-correction logic (`FixLLMEditWithInstruction`) now uses the new `LlmUtilityService` for its JSON generation needs, making it more robust and standardized.
- Improved Testability and Bug Fixes: Added and updated several tests, which helped uncover and fix a bug in the smart-edit test setup where a key service was not being mocked correctly.
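For reviewers unfamiliar with the pattern, here is a minimal sketch of the `AsyncLocalStorage` idea; the helper names (`logWithPromptId`, `handleUserPrompt`) are illustrative, not necessarily this PR's actual API:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Illustrative sketch; the PR's real module and helper names may differ.
const promptIdContext = new AsyncLocalStorage<string>();

// Deep in the call stack, the ID is implicitly available without being
// passed as a parameter, which is useful for logging and telemetry.
function logWithPromptId(message: string): void {
  const promptId = promptIdContext.getStore() ?? 'unknown';
  console.debug(`[${promptId}] ${message}`);
}

// At the start of a user request (e.g. useGeminiStream / runNonInteractive),
// bind a unique prompt ID to everything that runs inside the callback.
async function handleUserPrompt(promptId: string): Promise<void> {
  await promptIdContext.run(promptId, async () => {
    logWithPromptId('tool execution started'); // -> "[prompt-123] tool execution started"
  });
}

void handleUserPrompt('prompt-123');
```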
Size Change: +5.25 kB (+0.04%) Total Size: 13.2 MB
Code Review
This is a great refactoring that introduces `LlmUtilityService` to centralize utility LLM calls and `promptIdContext` for better telemetry and context propagation. The separation of concerns improves modularity and testability. The implementation is solid, with good test coverage for the new components. I have one suggestion to make the JSON parsing in `LlmUtilityService` more robust to handle variations in the model's output format.
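The concrete suggested diff isn't reproduced here, but the usual technique is to tolerate a markdown-fenced model response before calling `JSON.parse`. A minimal sketch, with an illustrative function name and regex rather than the review's actual code:

```typescript
// Hypothetical illustration of this kind of robustness, not the actual
// suggestion from the review. Models sometimes wrap JSON in a markdown
// code fence, so strip one before parsing.
function parseModelJson(raw: string): unknown {
  const fenced = raw.match(/```(?:json)?\s*([\s\S]*?)\s*```/);
  const candidate = (fenced ? fenced[1] : raw).trim();
  return JSON.parse(candidate);
}

// parseModelJson('```json\n{"ok": true}\n```')  -> { ok: true }
// parseModelJson('{"ok": true}')                -> { ok: true }
```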
Code Coverage Summary
- CLI Package - Full Text Report
- Core Package - Full Text Report

For detailed HTML reports, please see the 'coverage-reports-22.x-ubuntu-latest' artifact from the main CI run.
Force-pushed from 3851b74 to 064117c
Force-pushed from 064117c to f3c9ce7
abhipatel12 left a comment
Thanks for the review. Made the changes, lmk what you think!

TLDR
This pull request introduces a new `LlmUtilityService` to centralize stateless, utility-focused LLM calls (like JSON generation) and a new `promptIdContext` to consistently propagate prompt IDs.

This refactoring decouples internal tools, like the self-correction mechanism in `smart-edit`, from the main `GeminiClient`, improving modularity and testability. The `promptIdContext` uses `AsyncLocalStorage` to avoid passing the `promptId` through multiple layers of the application, making it implicitly available for logging and telemetry.

Dive Deeper
Previously, internal components that needed to make LLM calls for utility purposes (e.g., fixing a failed edit) were directly dependent on the `GeminiClient`. This created tight coupling and made it difficult to enforce specific configurations (like `temperature: 0`) for these calls. This PR introduces two key components to address this (a sketch follows the list):

- `LlmUtilityService`: A new service dedicated to handling stateless LLM tasks. Its first responsibility is a robust `generateJson` method, which is now used by the `smart-edit` tool's self-correction logic (`FixLLMEditWithInstruction`). This ensures that all utility JSON generation calls are standardized and routed through a single, testable service.
- `promptIdContext`: A new context provider built on `AsyncLocalStorage`. This allows us to set a unique `promptId` at the beginning of a user request (in `useGeminiStream` for the interactive CLI and `runNonInteractive` for non-interactive mode) and have it be accessible deep within the call stack (like in the `LlmUtilityService`) without needing to pass it down as a parameter. This simplifies function signatures and ensures consistent telemetry.
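To make the division of responsibilities concrete, here is a rough sketch of what such a service might look like; the `ContentGenerator` interface and the method signatures are assumptions for illustration, not the PR's actual types:

```typescript
// Illustrative sketch of the service shape described above; the actual
// class in this PR likely differs in constructor and method signatures.
interface ContentGenerator {
  generateText(prompt: string, options: { temperature: number }): Promise<string>;
}

class LlmUtilityService {
  constructor(private readonly generator: ContentGenerator) {}

  // A stateless, utility-focused call: deterministic settings, no chat
  // history, and the prompt ID read implicitly from promptIdContext
  // (not shown here) for telemetry.
  async generateJson(prompt: string): Promise<unknown> {
    const raw = await this.generator.generateText(prompt, { temperature: 0 });
    return JSON.parse(raw);
  }
}
```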
As part of this refactoring, several tests were added and updated, uncovering and fixing a bug in the `smart-edit` test setup where a key service was not being mocked correctly.

Reviewer Test Plan
- Pull down the branch and run the full test suite to ensure all changes are validated.
- Pay close attention to the new tests: `llmUtilityService.test.ts`, `llm-edit-fixer.test.ts`, and the updated `smart-edit.test.ts`.
- Manually test the `replace` tool's self-correction capability. Find a file and construct a prompt that is slightly incorrect, which would cause a normal search-and-replace to fail. Example: if a file contains the line `const version = "1.0.0";`, run a prompt whose `old_string` contains an extra space (see the sketch after this list). The tool should fail the initial attempt, trigger the self-correction mechanism via `LlmUtilityService`, and successfully apply the change.
- Verify that standard, correct `replace` operations still work as expected.
- (Optional) Enable debug mode and observe the logs to confirm that a consistent `promptId` is being used throughout the lifecycle of a single prompt, from the initial user input to the final tool execution.
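The exact prompt from the original test plan isn't reproduced here; for illustration, the failing `replace` parameters would look something like the following, where the file path and new value are placeholders:

```typescript
// Hypothetical tool invocation for the scenario above. Note the extra
// space after "=", which makes old_string miss the actual file content
// and forces the self-correction path.
const replaceArgs = {
  file_path: '/path/to/file.ts',            // any file containing the target line
  old_string: 'const version =  "1.0.0";',  // two spaces: exact match fails
  new_string: 'const version = "2.0.0";',   // illustrative replacement
};
```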
Testing Matrix
Linked issues / bugs
This PR makes progress on #7809