Conversation
Summary of Changes
Hello @kevin-ramdass, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request resolves a critical bug in the
Highlights
Size Change: +144 B (0%). Total Size: 21.6 MB.
Code Review
This pull request correctly fixes a bug where MCP tool errors were not being detected. The change aligns with the MCP specification by checking for the isError property at the top level of the tool result object. A new test case has been added to verify this fix. I've suggested an improvement to the new test to make it more comprehensive by covering all cases handled by the implementation. Otherwise, the changes look good and improve the correctness of MCP tool error handling.
it('should return a structured error if MCP tool reports a top-level isError (spec compliant)', async () => {
  const tool = new DiscoveredMCPTool(
    mockCallableToolInstance,
    serverName,
    serverToolName,
    baseDescription,
    inputSchema,
  );
  const params = { param: 'isErrorTopLevelCase' };
  const functionCall = {
    name: serverToolName,
    args: params,
  };

  // Spec compliant error response: { isError: true } at the top level of content (or response object in this mapping)
  const errorResponse = { isError: true };
  const mockMcpToolResponseParts: Part[] = [
    {
      functionResponse: {
        name: serverToolName,
        response: errorResponse,
      },
    },
  ];
  mockCallTool.mockResolvedValue(mockMcpToolResponseParts);
  const expectedErrorMessage = `MCP tool '${serverToolName}' reported tool error for function call: ${safeJsonStringify(
    functionCall,
  )} with response: ${safeJsonStringify(mockMcpToolResponseParts)}`;
  const invocation = tool.build(params);
  const result = await invocation.execute(new AbortController().signal);

  expect(result.error?.type).toBe(ToolErrorType.MCP_TOOL_ERROR);
  expect(result.llmContent).toBe(expectedErrorMessage);
  expect(result.returnDisplay).toContain(
    `Error: MCP tool '${serverToolName}' reported an error.`,
  );
});
The new test case for spec-compliant MCP errors only validates the case where isError is the boolean true. To ensure the fix is robust and to prevent future regressions, it's important to also test the case where isError is the string 'true', since the implementation handles both. I suggest converting this test to use it.each to cover both scenarios, similar to the existing test for legacy error formats.
it.each([
{ isErrorValue: true, description: 'true (bool)' },
{ isErrorValue: 'true', description: '"true" (str)' },
])(
'should return a structured error if MCP tool reports a top-level isError: $description (spec compliant)',
async ({ isErrorValue }) => {
const tool = new DiscoveredMCPTool(
mockCallableToolInstance,
serverName,
serverToolName,
baseDescription,
inputSchema,
);
const params = { param: 'isErrorTopLevelCase' };
const functionCall = {
name: serverToolName,
args: params,
};
// Spec compliant error response: { isError: true } at the top level of content (or response object in this mapping)
const errorResponse = { isError: isErrorValue };
const mockMcpToolResponseParts: Part[] = [
{
functionResponse: {
name: serverToolName,
response: errorResponse,
},
},
];
mockCallTool.mockResolvedValue(mockMcpToolResponseParts);
const expectedErrorMessage = `MCP tool '${serverToolName}' reported tool error for function call: ${safeJsonStringify(
functionCall,
)} with response: ${safeJsonStringify(mockMcpToolResponseParts)}`;
const invocation = tool.build(params);
const result = await invocation.execute(new AbortController().signal);
expect(result.error?.type).toBe(ToolErrorType.MCP_TOOL_ERROR);
expect(result.llmContent).toBe(expectedErrorMessage);
expect(result.returnDisplay).toContain(
`Error: MCP tool '${serverToolName}' reported an error.`,
);
},
);
Summary
This PR fixes a bug where MCP tool errors were being interpreted as successes.
The DiscoveredMCPTool implementation previously checked for a nested error.isError property (response.error.isError). However, the MCP specification places isError at the top level of the tool result object (response.isError). This mismatch caused valid error responses from MCP servers to be treated as successful executions.
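The shape of the fix can be sketched as follows. This is an illustrative standalone helper, not the actual DiscoveredMCPTool code; the type and function names are hypothetical, and the string-'true' handling mirrors what the review comment above says the implementation accepts:

```typescript
// Hypothetical sketch: detect an MCP tool error by checking isError at the
// top level of the tool result (per the MCP spec), while still accepting the
// legacy nested location (response.error.isError) that was checked before.
type McpToolResult = {
  isError?: boolean | string;
  error?: { isError?: boolean | string };
  content?: unknown;
};

function isMcpToolError(response: McpToolResult): boolean {
  // Spec-compliant location: top-level isError, as boolean true or string 'true'.
  if (response.isError === true || response.isError === 'true') {
    return true;
  }
  // Legacy nested location the old code looked at exclusively.
  const legacy = response.error?.isError;
  return legacy === true || legacy === 'true';
}

console.log(isMcpToolError({ isError: true })); // true
console.log(isMcpToolError({ content: [] })); // false
```

The key point is that the top-level check runs first, so a spec-compliant server's error is no longer silently treated as a success.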
Details
Related Issues
How to Validate
Tested with a local MCP tool that returned an error.
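For reference, a spec-compliant MCP tool error result carries isError at the top level of the result object, alongside the content array (field names per the MCP specification; the text payload here is purely illustrative):

```typescript
// Illustrative shape of a spec-compliant MCP tool error result: isError sits
// at the top level, next to the content array, which is where the fixed code
// now looks for it.
const errorResult = {
  content: [{ type: 'text', text: 'Division by zero' }], // illustrative payload
  isError: true, // top-level error flag
};

console.log(errorResult.isError); // true
```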
Pre-Merge Checklist