fix(api): add safe JSON parsing for upstream provider responses #64
Conversation
Unhandled JSON.parse calls crash with SyntaxError when upstream LLM
providers return malformed JSON (HTML error pages, truncated responses).
Adds safeParseToolArgs (returns {} on failure) for tool arguments and
parseJsonResponse (descriptive re-throw) for response bodies. Wraps
error body parsing in try-catch with format-specific fallbacks.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
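The description above names the two utilities but not their shapes. A minimal sketch of what they could look like, assuming the behavior described (empty-object fallback for tool arguments, contextual re-throw for response bodies) — the function names come from this PR, but the bodies here are illustrative, not the actual `backend/src/utils/json.ts`:

```typescript
// Hypothetical sketch of the two utilities described in this PR.
// Bodies are assumptions; only the names and contracts come from the PR text.

/** Parse LLM tool-call arguments; return {} on any failure. */
function safeParseToolArgs(jsonString: string): Record<string, unknown> {
  try {
    const parsed: unknown = JSON.parse(jsonString);
    // Only accept a plain object (not null, array, or scalar).
    if (typeof parsed === "object" && parsed !== null && !Array.isArray(parsed)) {
      return parsed as Record<string, unknown>;
    }
  } catch {
    // Malformed JSON from the provider: fall through to the empty-object default.
  }
  return {};
}

/** Parse a response body; re-throw with context so callers know which upstream call failed. */
function parseJsonResponse<T = unknown>(text: string, context: string): T {
  try {
    return JSON.parse(text) as T;
  } catch (err) {
    throw new Error(
      `Failed to parse JSON response (${context}): ${(err as Error).message}; ` +
        `body starts with: ${text.slice(0, 80)}`,
    );
  }
}
```

The asymmetry is deliberate per the description: tool arguments can be safely degraded to `{}`, while a malformed response body is unrecoverable and should surface a descriptive error.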
📝 Walkthrough

Introduces and uses two new JSON utility functions (`safeParseToolArgs`, `parseJsonResponse`) across multiple adapters and API routes.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    rect rgba(200,220,255,0.5)
        participant Client
    end
    rect rgba(200,255,200,0.5)
        participant API as "API Route\n(completions/messages/responses)"
    end
    rect rgba(255,220,200,0.5)
        participant Utils as "utils/json\n(safeParseToolArgs,\nparseJsonResponse)"
    end
    rect rgba(255,200,220,0.5)
        participant Upstream as "Upstream\n(OpenAI/Anthropic)"
    end
    Client->>API: Initiate request / tool call
    API->>Upstream: Forward request and await response text
    Upstream-->>API: Return response text or error body
    API->>Utils: Call parseJsonResponse(text, context)
    alt parse succeeds
        Utils-->>API: Return parsed result
        API->>Client: Return normal response
    else parse fails
        Utils-->>API: Throw error with context
        API->>API: Catch and construct standardized error object (unparseable_error)
        API->>Client: Return standardized error response
    end
```
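The failure branch of the diagram above — catch the contextual error and construct a standardized `unparseable_error` object — could be sketched at the route level like this. The error code comes from this PR; the handler shape and field names are assumptions for illustration:

```typescript
// Hedged sketch of the route-level handling shown in the sequence diagram.
// "unparseable_error" is from the PR; everything else is illustrative.

type StandardError = { error: { code: string; message: string } };

function parseJsonResponse<T = unknown>(text: string, context: string): T {
  try {
    return JSON.parse(text) as T;
  } catch (err) {
    throw new Error(`Failed to parse JSON response (${context}): ${(err as Error).message}`);
  }
}

function handleUpstreamBody(text: string): unknown | StandardError {
  try {
    return parseJsonResponse(text, "upstream.completions");
  } catch (err) {
    // Parse failed: return the standardized error object instead of crashing
    // with an unhandled SyntaxError.
    return {
      error: { code: "unparseable_error", message: (err as Error).message },
    };
  }
}
```

The key property is that the `SyntaxError` never escapes the route: it is converted into a response the client can act on.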
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Summary of Changes

Hello @pescn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the backend's resilience against malformed or unexpected responses from external LLM providers. By centralizing JSON parsing logic into new, safer utility functions, the system can now gracefully handle non-JSON content or corrupted data, preventing application crashes and providing clearer error feedback. This change improves the overall stability and user experience when interacting with upstream services.
Code Review
This pull request introduces safe JSON parsing utilities to prevent crashes from malformed upstream responses, which is a great improvement for robustness. The changes are well-implemented, replacing bare JSON.parse calls with safer alternatives across various adapters and API endpoints. I've suggested one improvement to the new safeParseToolArgs utility to make it even more robust by ensuring it always returns an object, preventing potential runtime errors from valid but non-object JSON inputs, aligning with our guidelines for robust JSON parsing and ready-to-use helper function returns.
pescn left a comment
✅ Fixed: Added runtime type validation
Good catch! You're absolutely right that JSON.parse() can return scalar values, null, or arrays - not just objects.
For example:
- `JSON.parse("123")` → `123` (number)
- `JSON.parse("null")` → `null`
- `JSON.parse("[1,2,3]")` → array
If an LLM hallucinates and returns a scalar/array as tool arguments, this would cause a type safety issue.
Changes Applied
Updated `safeParseToolArgs()` to validate the parsed result:

```ts
const parsed: unknown = JSON.parse(jsonString);
// Validate that parsed result is a plain object (not null, array, or scalar)
if (typeof parsed === "object" && parsed !== null && !Array.isArray(parsed)) {
  return parsed as Record<string, unknown>;
}
// If LLM returned a scalar, array, or null as tool arguments, treat as malformed
logger.warn("Parsed tool arguments is not an object, returning empty object", {
  input: jsonString,
  parsedType: Array.isArray(parsed) ? "array" : typeof parsed,
  parsedValue: parsed,
});
return {};
```

Test Coverage
Added 6 new tests covering:
- Scalar number (`"123"`)
- Scalar boolean (`"true"`)
- Scalar string (`'"hello"'`)
- `null` value
- Arrays (`"[1,2,3]"`)
- Array of objects (`'[{"name":"test"}]'`)
All 46 tests pass ✅
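The six non-object cases above can be exercised without a test framework. A minimal, framework-free sketch — the real suite presumably uses the project's test runner, and the `safeParseToolArgs` body is re-stated here as an assumption so the example runs standalone:

```typescript
// Assumed body of safeParseToolArgs, restated so this sketch is self-contained.
function safeParseToolArgs(jsonString: string): Record<string, unknown> {
  try {
    const parsed: unknown = JSON.parse(jsonString);
    if (typeof parsed === "object" && parsed !== null && !Array.isArray(parsed)) {
      return parsed as Record<string, unknown>;
    }
  } catch {
    // malformed JSON: fall through to the default
  }
  return {};
}

// The six non-object inputs from the review thread: all must coerce to {}.
const nonObjectInputs = ['123', 'true', '"hello"', 'null', '[1,2,3]', '[{"name":"test"}]'];
for (const input of nonObjectInputs) {
  const result = safeParseToolArgs(input);
  if (Object.keys(result).length !== 0) {
    throw new Error(`expected {} for input ${input}`);
  }
}
```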
…e JSON parsing

- Add runtime type validation in safeParseToolArgs to reject scalars, arrays, and null
- Standardize error response format across all API endpoints with "unparseable_error" code
- Add comprehensive logging with full response bodies for debugging
- Create 46 unit and integration tests covering malformed JSON scenarios

This addresses code review feedback to ensure type safety when parsing tool arguments from LLM responses that may return non-object values.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Summary

- Adds `safeParseToolArgs()` and `parseJsonResponse()` utilities in `backend/src/utils/json.ts` to handle malformed JSON from upstream LLM providers gracefully
- Replaces bare `JSON.parse` calls across the backend with safe alternatives: tool arguments return `{}` on failure, response bodies re-throw with descriptive errors, and error bodies use inline try-catch with format-specific fallbacks
- Prevents `SyntaxError` crashes when providers return HTML error pages, truncated responses, or other non-JSON content

Test plan

- `bun run check` — type checking passes
- `bun run lint` — linting passes (0 new warnings)
- `bun run build` — build succeeds

🤖 Generated with Claude Code
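The summary mentions that error bodies use inline try-catch with format-specific fallbacks. A hedged sketch of what that could look like — the fallback shapes below (Anthropic's `type: "error"` envelope vs. a bare OpenAI-style `error` object) are assumptions for illustration, not the PR's actual code:

```typescript
// Illustrative sketch of format-specific error-body fallbacks.
// The "unparseable_error" code is from this PR; the envelope shapes are assumptions.
function parseErrorBody(text: string, format: "openai" | "anthropic"): unknown {
  try {
    return JSON.parse(text);
  } catch {
    // Non-JSON error body (e.g. an HTML error page from a proxy): fall back
    // to an error object shaped like the provider's format, so downstream
    // handling stays uniform regardless of what the upstream actually sent.
    const message = text.slice(0, 200);
    return format === "anthropic"
      ? { type: "error", error: { type: "unparseable_error", message } }
      : { error: { type: "unparseable_error", message } };
  }
}
```

Shaping the fallback per provider format means callers that already branch on provider-specific error envelopes need no changes to handle the unparseable case.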
Summary by CodeRabbit

Release Notes
Bug Fixes
Refactor
Tests
Chores