feat(ai): add DeepSeek provider with session support and tests #67

regulusleow wants to merge 12 commits into tickernelz:main
Conversation
Pull request overview
Adds a new DeepSeek-backed AI provider to the existing AI provider framework (used for auto-capture and user profile learning), including provider registration, configuration template updates, and a test suite plus an integration script.
Changes:
- Introduces `DeepSeekProvider`, implementing the same session-based Chat Completions tool-calling flow as the existing OpenAI Chat provider, with DeepSeek defaults.
- Registers `"deepseek"` in provider type unions and the `AIProviderFactory`, and updates `CONFIG_TEMPLATE` with DeepSeek configuration guidance.
- Adds unit tests for request construction, auth behavior, temperature handling, and tool-call success/failure; adds an integration script for real API verification.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| `src/services/ai/providers/deepseek.ts` | New provider implementation (session + tool calling) with DeepSeek defaults. |
| `src/services/ai/session/session-types.ts` | Adds `"deepseek"` to `AIProviderType`. |
| `src/services/ai/ai-provider-factory.ts` | Registers DeepSeek in the factory and the supported providers list. |
| `src/config.ts` | Expands memory provider typing and updates the config template with a DeepSeek example. |
| `tests/deepseek-provider.test.ts` | Unit tests for DeepSeek provider behaviors. |
| `scripts/test-deepseek.ts` | Manual/integration test script for the live DeepSeek API. |
tickernelz left a comment
Thanks for adding DeepSeek support! A few items to address:
1. Code duplication with OpenAIChatCompletionProvider
DeepSeek's chat API is OpenAI-compatible. The 384-line deepseek.ts is nearly identical to openai-chat-completion.ts. Please refactor to either:
- Extend `OpenAIChatCompletionProvider` and override only the differences (API URL default, provider name)
- Or extract a shared base class for OpenAI-compatible providers
This reduces maintenance burden — bug fixes in one place apply to both.
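The subclass option above can be sketched as follows. This is a self-contained illustration, not the actual code from the PR: the real `OpenAIChatCompletionProvider` has far more surface (session handling, tool-calling loop), and the member names modeled here (`apiUrl`, `model`, `providerName`) are assumptions.

```typescript
// Minimal model of the existing OpenAI-compatible provider.
interface ProviderConfig {
  apiUrl?: string;
  model?: string;
  apiKey?: string;
}

class OpenAIChatCompletionProvider {
  readonly apiUrl: string;
  readonly model: string;

  constructor(config: ProviderConfig) {
    // Defaults apply only when the caller leaves a field unset.
    this.apiUrl = config.apiUrl ?? "https://api.openai.com";
    this.model = config.model ?? "gpt-4o-mini";
  }

  get providerName(): string {
    return "openai-chat";
  }
}

// DeepSeek differs only in its defaults and its label; the session and
// tool-calling flow would be inherited unchanged.
class DeepSeekProvider extends OpenAIChatCompletionProvider {
  constructor(config: ProviderConfig) {
    super({
      ...config,
      apiUrl: config.apiUrl ?? "https://api.deepseek.com",
      model: config.model ?? "deepseek-chat",
    });
  }

  get providerName(): string {
    return "deepseek";
  }
}

const p = new DeepSeekProvider({ apiKey: "sk-test" });
console.log(p.providerName, p.apiUrl); // deepseek https://api.deepseek.com
```

With this shape, a fix to the shared request/response logic lands in the base class once and both providers pick it up.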
2. Remove or relocate scripts/test-deepseek.ts
This is a manual integration test script that requires a real API key. Either:
- Remove it from the PR (unit tests are sufficient)
- Or move it to a `scripts/` directory with a README explaining its purpose
3. Add tests for filterIncompleteToolCallSequences
This method has complex branching logic but no direct test coverage. Please add test cases for:
- Complete tool call sequences (assistant + tool response)
- Incomplete sequences (assistant with tool calls but missing responses)
- Mixed sequences
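The three cases above can be exercised against a standalone version of the filtering rule. The free function below is an illustration of the behavior being requested, not the provider's actual method, and the message shape is an assumption modeled on the OpenAI-style wire format: an assistant message requesting tool calls is kept only if every requested call id has a matching `tool` response.

```typescript
type Msg =
  | { role: "user" | "system"; content: string }
  | { role: "assistant"; content: string | null; tool_calls?: { id: string }[] }
  | { role: "tool"; tool_call_id: string; content: string };

function filterIncompleteToolCallSequences(messages: Msg[]): Msg[] {
  // Collect the ids that actually received a tool response.
  const answered = new Set(
    messages.flatMap((m) => (m.role === "tool" ? [m.tool_call_id] : [])),
  );
  const keptIds = new Set<string>();
  const out: Msg[] = [];
  for (const m of messages) {
    if (m.role === "assistant" && m.tool_calls?.length) {
      if (m.tool_calls.every((c) => answered.has(c.id))) {
        m.tool_calls.forEach((c) => keptIds.add(c.id));
        out.push(m); // complete sequence: keep
      }
      continue; // incomplete sequence: drop the assistant message
    }
    if (m.role === "tool") {
      if (keptIds.has(m.tool_call_id)) out.push(m); // drop orphaned responses
      continue;
    }
    out.push(m);
  }
  return out;
}

// Mixed sequence: one complete pair, one dangling tool-call request.
const mixed: Msg[] = [
  { role: "user", content: "hi" },
  { role: "assistant", content: null, tool_calls: [{ id: "a" }] },
  { role: "tool", tool_call_id: "a", content: "ok" },
  { role: "assistant", content: null, tool_calls: [{ id: "b" }] }, // no response
];
console.log(filterIncompleteToolCallSequences(mixed).length); // 3
```

Tests for the real method would assert exactly these shapes: the complete pair survives, the dangling request is dropped, and non-tool messages pass through untouched.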
4. Type safety
Several places use `any` (constructor config, message types). Consider defining proper interfaces for DeepSeek-specific config and message types.
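A hypothetical shape for the constructor config, replacing the `any` parameter. Field names are assumptions drawn from options discussed in this thread (`apiUrl`, `apiKey`, `model`, `temperature`, `maxIterations`, `iterationTimeout`); the defaults shown are illustrative only.

```typescript
interface DeepSeekProviderConfig {
  apiKey: string;
  apiUrl?: string;           // e.g. https://api.deepseek.com
  model?: string;            // e.g. deepseek-chat
  temperature?: number;
  maxIterations?: number;    // cap on tool-calling loop iterations
  iterationTimeout?: number; // per-iteration timeout in ms
}

// Normalizing once at the boundary keeps the rest of the provider free of
// `any` and of scattered default values.
function normalizeConfig(raw: DeepSeekProviderConfig) {
  return {
    apiKey: raw.apiKey,
    apiUrl: raw.apiUrl ?? "https://api.deepseek.com",
    model: raw.model ?? "deepseek-chat",
    temperature: raw.temperature ?? 0.2,
    maxIterations: raw.maxIterations ?? 5,
    iterationTimeout: raw.iterationTimeout ?? 30_000,
  };
}

const cfg = normalizeConfig({ apiKey: "sk-test" });
console.log(cfg.model, cfg.apiUrl); // deepseek-chat https://api.deepseek.com
```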
Once these are addressed, this will be ready to merge.
Just for context, did you try using …?

@tickernelz Thanks for the review — all four items are addressed now.

@NaNomicon Yes, …
- Use local variable `iterationTimeout` instead of `this.config.iterationTimeout` in the timeout error message
- Use local variable `maxIterations` instead of `this.config.maxIterations` in the max iterations error message
- Use `toolSchema.function.name` instead of the hardcoded 'save_memories' in the retry prompt
Replace the unused ToolCallResponse interface with three focused interfaces:
- `DeepSeekToolCall`: tool call structure
- `DeepSeekErrorResponse`: error body shape
- `DeepSeekResponse`: success response shape

Eliminates the `as any` cast and gives full type coverage to the response path.
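A sketch in the spirit of this commit: the three focused interfaces plus a narrowing type guard. The exact field sets are assumptions modeled on the OpenAI-style chat completions wire format, not the PR's actual definitions.

```typescript
interface DeepSeekToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
}

interface DeepSeekErrorResponse {
  error: { message: string; type?: string; code?: string };
}

interface DeepSeekResponse {
  choices: {
    message: {
      role: "assistant";
      content: string | null;
      tool_calls?: DeepSeekToolCall[];
    };
    finish_reason: string;
  }[];
}

// Narrowing guard: lets the response path avoid `as any` casts entirely —
// after the check, TypeScript knows which shape it is holding.
function isErrorResponseBody(body: unknown): body is DeepSeekErrorResponse {
  return (
    typeof body === "object" &&
    body !== null &&
    "error" in body &&
    typeof (body as { error?: unknown }).error === "object"
  );
}

const ok: unknown = {
  choices: [{ message: { role: "assistant", content: "hi" }, finish_reason: "stop" }],
};
const err: unknown = { error: { message: "invalid api key" } };
console.log(isErrorResponseBody(ok), isErrorResponseBody(err)); // false true
```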
Force-pushed bbfd668 to 1cfc913
Thanks again for working through the review feedback here. I went back through the updated commits and comments, and I think the code is in much better shape now. The shared OpenAIChatCompletionBaseProvider extraction in particular feels like a good improvement on its own.

At this point, though, I think the main question is about the direction we want for provider support. My preference is for us to keep the public surface area centered around generic protocols rather than adding more explicit vendor-specific providers. In this case, DeepSeek appears to fit the OpenAI-compatible chat-completions path, and from the discussion it sounds like openai-chat with a custom memoryApiUrl already works.

Because of that, I'm hesitant to add deepseek as a new hardcoded provider type. I worry that if we go this route, we'll keep growing the matrix of provider enums, factory cases, docs, and tests for each OpenAI-compatible vendor, which will get harder to maintain over time. I think the better long-term direction is:
So from a product/API design perspective, I'd prefer not to merge the DeepSeek-specific provider as-is. If you're open to it, I'd be very happy to support reshaping this toward better support for custom/config-defined providers instead, because I think that would be useful well beyond just DeepSeek.
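For concreteness, the generic path described above might look like the fragment below in the user's config. Only `memoryApiUrl` is named in this thread; the other key names (`memoryProvider`, `memoryModel`, `memoryApiKey`) are illustrative assumptions about the config schema, and the DeepSeek values are one example of an OpenAI-compatible endpoint.

```json
{
  "memoryProvider": "openai-chat",
  "memoryApiUrl": "https://api.deepseek.com",
  "memoryModel": "deepseek-chat",
  "memoryApiKey": "sk-..."
}
```

The appeal of this shape is that adding support for another OpenAI-compatible vendor is a documentation change, not a code change.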
At this point, I agree there is not a strong need to split DeepSeek out as a dedicated provider. It is still fundamentally using the OpenAI-compatible chat completions path, and the existing generic path is sufficient for this use case. So I’m going to reshape this PR accordingly:
Consolidate OpenAI-compatible usage under the generic openai-chat path and update config guidance accordingly. Preserve shared provider behavior coverage by renaming and adapting tests to the OpenAI chat completion provider.
One more thought on the reshaped PR — I don't think we actually need the OpenAIChatCompletionBaseProvider extraction either. With the DeepSeek provider removed, OpenAIChatCompletionProvider would be the only subclass, and all the abstract methods are just returning string labels. That's a premature abstraction for a case we're explicitly choosing not to support right now. That said, the type-safety improvements from this work are genuinely valuable — the typed interfaces (ToolCallResponse, APIMessage, RequestBody), the type guards (isErrorResponseBody, isToolCallResponse), and removing the any casts. Those are worth keeping. I see two options and am happy with either:
Either way works for us. If a real protocol divergence comes up later that justifies extracting a base class, we can do it then with concrete requirements. Thanks for being flexible on this.
@NaNomicon Thanks, this makes sense — I agree option 2 is cleaner. I will close this PR and open a fresh one focused only on the type-safety improvements.

Also, I suggest we keep a clear config note to guide OpenAI-compatible usage, for example:

```
// Any OpenAI-compatible endpoint can use the "openai-chat" provider pattern below.
// Common examples: DeepSeek, Qwen (via Alibaba Cloud ModelStudio),
// Zhipu GLM (BigModel platform), and Kimi (Moonshot AI platform).
```

This should make the integration path clearer for users on compatible endpoints.
I've opened a new focused PR for the follow-up changes: https://github.com/tickernelz/opencode-mem/pull/92

I'll close this PR (#67) to keep scope clean and continue discussion/review on #92.
Add DeepSeek provider support
Adds DeepSeek (https://api-docs.deepseek.com/) as a supported AI provider for auto-capture and user profile learning.
What's changed
- Replaced the unused `ToolCallResponse` interface with three focused typed interfaces: `DeepSeekToolCall`, `DeepSeekErrorResponse`, and `DeepSeekResponse`
- Added `scripts/test-deepseek.ts` — run with `DEEPSEEK_API_KEY=sk-... bun run scripts/test-deepseek.ts` to verify real API connectivity

Configuration
Testing
- Unit tests: `bun test tests/deepseek-provider.test.ts`
- Integration: `DEEPSEEK_API_KEY=sk-... DEEPSEEK_API_URL=https://api.deepseek.com bun run scripts/test-deepseek.ts`