Merged
4 changes: 4 additions & 0 deletions API.en.md
@@ -267,6 +267,7 @@ data: [DONE]
- `deepseek-reasoner` / `deepseek-reasoner-search` models emit `delta.reasoning_content`
- Text emits `delta.content`
- Last chunk includes `finish_reason` and `usage`
- Token counting prefers pass-through from upstream DeepSeek SSE (`accumulated_token_usage` / `token_usage`), and only falls back to local estimation when upstream usage is absent
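The pass-through-first policy above can be sketched in Go. The SSE payload shapes (`accumulated_token_usage`, `token_usage`) match the fixtures in `stream_status_test.go`; `extractUsage` itself is a hypothetical helper, not the actual DS2API code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// extractUsage scans one BATCH entry from the upstream DeepSeek SSE stream
// for token usage. Only when no entry reports usage would a caller fall back
// to local estimation.
func extractUsage(entry []byte) (prompt, output int, ok bool) {
	var e struct {
		P string          `json:"p"`
		V json.RawMessage `json:"v"`
	}
	if json.Unmarshal(entry, &e) != nil {
		return 0, 0, false
	}
	switch e.P {
	case "accumulated_token_usage":
		// Upstream reports a single running completion-token total.
		var total int
		if json.Unmarshal(e.V, &total) == nil {
			return 0, total, true
		}
	case "token_usage":
		// Upstream reports a full prompt/completion breakdown.
		var u struct {
			PromptTokens     int `json:"prompt_tokens"`
			CompletionTokens int `json:"completion_tokens"`
		}
		if json.Unmarshal(e.V, &u) == nil {
			return u.PromptTokens, u.CompletionTokens, true
		}
	}
	return 0, 0, false
}

func main() {
	p, o, _ := extractUsage([]byte(`{"p":"token_usage","v":{"prompt_tokens":11,"completion_tokens":29}}`))
	fmt.Println(p, o) // 11 29
	_, o, _ = extractUsage([]byte(`{"p":"accumulated_token_usage","v":190}`))
	fmt.Println(o) // 190
}
```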

#### Tool Calls

@@ -383,6 +384,7 @@ Business auth required. Returns OpenAI-compatible embeddings shape.
## Claude-Compatible API

Besides `/anthropic/v1/*`, DS2API also supports shortcut paths: `/v1/messages`, `/messages`, `/v1/messages/count_tokens`, `/messages/count_tokens`.
Internally, these paths all share the OpenAI Chat Completions parse-and-translate pipeline, avoiding divergent parsing chains.
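The shared-pipeline design can be pictured as a single translate-in, parse-once, translate-out path. A minimal sketch — every function name here is a hypothetical stand-in, not a real DS2API identifier:

```go
package main

import "fmt"

// handleClaudeMessages sketches the unified pipeline: a Claude-format request
// is translated into OpenAI Chat Completions shape, run through the one shared
// parsing engine, and translated back into a Claude-format response.
func handleClaudeMessages(claudeReq string) string {
	openaiReq := toOpenAIRequest(claudeReq)    // Claude -> OpenAI request shape
	openaiResp := runSharedPipeline(openaiReq) // single parse path for all formats
	return toClaudeResponse(openaiResp)        // OpenAI -> Claude response shape
}

// Toy stages that only record the order of the transformations.
func toOpenAIRequest(req string) string   { return "openai(" + req + ")" }
func runSharedPipeline(req string) string { return "resp(" + req + ")" }
func toClaudeResponse(resp string) string { return "claude(" + resp + ")" }

func main() {
	fmt.Println(handleClaudeMessages("msg")) // claude(resp(openai(msg)))
}
```

The same three-stage shape applies to the Gemini-compatible routes; only the outer translation pair changes.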

### `GET /anthropic/v1/models`

@@ -517,6 +519,7 @@ Supported paths:
- `/v1/models/{model}:streamGenerateContent` (compat path)

Authentication is the same as other business routes (`Authorization: Bearer <token>` or `x-api-key`).
Internally, these paths all share the OpenAI Chat Completions parse-and-translate pipeline, avoiding divergent parsing chains.

### `POST /v1beta/models/{model}:generateContent`

@@ -535,6 +538,7 @@ Returns SSE (`text/event-stream`), each chunk as `data: <json>`:
- regular text: incremental text chunks
- `tools` mode: buffered and emitted as `functionCall` at finalize phase
- final chunk: includes `finishReason: "STOP"` and `usageMetadata`
- Token counting prefers pass-through from upstream DeepSeek SSE (`accumulated_token_usage` / `token_usage`), and only falls back to local estimation when upstream usage is absent
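As an illustration of the final-chunk shape described above — the `usageMetadata` field names follow the Gemini format asserted in `bridge_test.go`, while the surrounding values are made up:

```
data: {"candidates":[{"content":{"role":"model","parts":[{"text":""}]},"finishReason":"STOP"}],"usageMetadata":{"promptTokenCount":11,"candidatesTokenCount":29,"totalTokenCount":40}}
```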

---

4 changes: 4 additions & 0 deletions API.md
@@ -267,6 +267,7 @@ data: [DONE]
- `deepseek-reasoner` / `deepseek-reasoner-search` models emit `delta.reasoning_content`
- Plain text is emitted as `delta.content`
- The final chunk includes `finish_reason` and `usage`
- Token counting prefers pass-through from upstream DeepSeek SSE (e.g. `accumulated_token_usage` / `token_usage`); local estimation is used only as a fallback when upstream usage is absent

#### Tool Calls

@@ -389,6 +390,7 @@ data: [DONE]
## Claude 兼容接口

Besides the standard `/anthropic/v1/*` paths, the shortcut paths `/v1/messages`, `/messages`, `/v1/messages/count_tokens`, and `/messages/count_tokens` are also supported.
Internally, these paths all share the OpenAI Chat Completions parse-and-translate pipeline, avoiding divergent parsing chains.

### `GET /anthropic/v1/models`

@@ -523,6 +525,7 @@ data: {"type":"message_stop"}
- `/v1/models/{model}:streamGenerateContent`(兼容路径)

Authentication is the same as for other business routes (`Authorization: Bearer <token>` or `x-api-key`).
Internally, these paths all share the OpenAI Chat Completions parse-and-translate pipeline, avoiding divergent parsing chains.

### `POST /v1beta/models/{model}:generateContent`

@@ -541,6 +544,7 @@ data: {"type":"message_stop"}
- Regular text: incremental text chunks are streamed continuously
- `tools` mode: output is buffered and emitted as a `functionCall` structure at finalize
- Final chunk: includes `finishReason: "STOP"` and `usageMetadata`
- Token counting prefers pass-through from upstream DeepSeek SSE (e.g. `accumulated_token_usage` / `token_usage`); local estimation is used only as a fallback when upstream usage is absent

---

1 change: 0 additions & 1 deletion README.MD
@@ -344,7 +344,6 @@ cp opencode.json.example opencode.json
| `DS2API_CONFIG_PATH` | Config file path | `config.json` |
| `DS2API_CONFIG_JSON` | Inline config (JSON or Base64) | — |
| `DS2API_ENV_WRITEBACK` | Auto-write env-backed config to file and switch to file mode (`1/true/yes/on`) | Disabled |
| `DS2API_POW_CONCURRENCY` | PoW parallel solver goroutine count (optional) | Defaults to CPU core count |
| `DS2API_STATIC_ADMIN_DIR` | Admin console static assets directory | `static/admin` |
| `DS2API_AUTO_BUILD_WEBUI` | Auto-build WebUI on startup | Enabled locally, disabled on Vercel |
| `DS2API_DEV_PACKET_CAPTURE` | Local dev packet-capture toggle (records recent session request/response bodies) | Enabled locally (non-Vercel) |
1 change: 0 additions & 1 deletion README.en.md
@@ -344,7 +344,6 @@ cp opencode.json.example opencode.json
| `DS2API_CONFIG_PATH` | Config file path | `config.json` |
| `DS2API_CONFIG_JSON` | Inline config (JSON or Base64) | — |
| `DS2API_ENV_WRITEBACK` | Auto-write env-backed config to file and transition to file mode (`1/true/yes/on`) | Disabled |
| `DS2API_POW_CONCURRENCY` | PoW parallel solver goroutine count (optional) | Default CPU core count |
| `DS2API_STATIC_ADMIN_DIR` | Admin static assets dir | `static/admin` |
| `DS2API_AUTO_BUILD_WEBUI` | Auto-build WebUI on startup | Enabled locally, disabled on Vercel |
| `DS2API_ACCOUNT_MAX_INFLIGHT` | Max in-flight requests per account | `2` |
9 changes: 3 additions & 6 deletions internal/adapter/openai/prompt_build_test.go
@@ -74,16 +74,13 @@ func TestBuildOpenAIFinalPrompt_VercelPreparePathKeepsFinalAnswerInstruction(t *
 	}

 	finalPrompt, _ := buildOpenAIFinalPrompt(messages, tools, "")
-	if !strings.Contains(finalPrompt, "After receiving a tool result, use it directly.") {
-		t.Fatalf("vercel prepare finalPrompt missing final-answer instruction: %q", finalPrompt)
-	}
-	if !strings.Contains(finalPrompt, "Only call another tool if the result is insufficient.") {
-		t.Fatalf("vercel prepare finalPrompt missing retry guard instruction: %q", finalPrompt)
+	if !strings.Contains(finalPrompt, "Remember: Output ONLY the <tool_calls>...</tool_calls> XML block when calling tools.") {
+		t.Fatalf("vercel prepare finalPrompt missing final tool-call anchor instruction: %q", finalPrompt)
 	}
 	if !strings.Contains(finalPrompt, "TOOL CALL FORMAT") {
 		t.Fatalf("vercel prepare finalPrompt missing xml format instruction: %q", finalPrompt)
 	}
-	if !strings.Contains(finalPrompt, "Do NOT wrap the XML in markdown code fences") {
+	if !strings.Contains(finalPrompt, "Do NOT wrap XML in markdown fences") {
 		t.Fatalf("vercel prepare finalPrompt missing no-fence xml instruction: %q", finalPrompt)
 	}
 	if strings.Contains(finalPrompt, "```json") {
13 changes: 9 additions & 4 deletions internal/adapter/openai/responses_handler.go
@@ -130,12 +130,17 @@ func (h *Handler) handleResponsesNonStream(w http.ResponseWriter, resp *http.Res
 	}

 	responseObj := openaifmt.BuildResponseObject(responseID, model, finalPrompt, sanitizedThinking, sanitizedText, toolNames)
-	if result.OutputTokens > 0 {
+	if result.PromptTokens > 0 || result.OutputTokens > 0 {
 		if usage, ok := responseObj["usage"].(map[string]any); ok {
-			usage["output_tokens"] = result.OutputTokens
-			if input, ok := usage["input_tokens"].(int); ok {
-				usage["total_tokens"] = input + result.OutputTokens
+			if result.PromptTokens > 0 {
+				usage["input_tokens"] = result.PromptTokens
+			}
+			if result.OutputTokens > 0 {
+				usage["output_tokens"] = result.OutputTokens
 			}
+			input, _ := usage["input_tokens"].(int)
+			output, _ := usage["output_tokens"].(int)
+			usage["total_tokens"] = input + output
 		}
 	}
 	h.getResponseStore().put(owner, responseID, responseObj)
18 changes: 16 additions & 2 deletions internal/adapter/openai/responses_stream_runtime_core.go
@@ -51,6 +51,7 @@ type responsesStreamRuntime struct {
 	messagePartAdded bool
 	sequence         int
 	failed           bool
+	promptTokens     int
 	outputTokens     int

 	persistResponse func(obj map[string]any)
@@ -152,9 +153,19 @@ func (s *responsesStreamRuntime) finalize() {
-	if s.outputTokens > 0 {
-		if usage, ok := obj["usage"].(map[string]any); ok {
-			usage["output_tokens"] = s.outputTokens
-			if input, ok := usage["input_tokens"].(int); ok {
-				usage["total_tokens"] = input + s.outputTokens
-			}
-		}
-	}
+	if s.promptTokens > 0 || s.outputTokens > 0 {
+		if usage, ok := obj["usage"].(map[string]any); ok {
+			if s.promptTokens > 0 {
+				usage["input_tokens"] = s.promptTokens
+			}
+			if s.outputTokens > 0 {
+				usage["output_tokens"] = s.outputTokens
+			}
+			input, _ := usage["input_tokens"].(int)
+			output, _ := usage["output_tokens"].(int)
+			usage["total_tokens"] = input + output
+		}
+	}
if s.persistResponse != nil {
@@ -185,6 +196,9 @@ func (s *responsesStreamRuntime) onParsed(parsed sse.LineResult) streamengine.Pa
 	if !parsed.Parsed {
 		return streamengine.ParsedDecision{}
 	}
+	if parsed.PromptTokens > 0 {
+		s.promptTokens = parsed.PromptTokens
+	}
 	if parsed.OutputTokens > 0 {
 		s.outputTokens = parsed.OutputTokens
 	}
94 changes: 94 additions & 0 deletions internal/adapter/openai/stream_status_test.go
@@ -238,3 +238,97 @@ func TestChatCompletionsStreamContentFilterStopsNormallyWithoutLeak(t *testing.T
		t.Fatalf("expected finish_reason=stop for content-filter upstream stop, got %#v", choice["finish_reason"])
	}
}

func TestResponsesStreamUsageOverridesFromBatchAccumulatedTokenUsage(t *testing.T) {
	statuses := make([]int, 0, 1)
	h := &Handler{
		Store: mockOpenAIConfig{wideInput: true},
		Auth:  streamStatusAuthStub{},
		DS: streamStatusDSStub{resp: makeOpenAISSEHTTPResponse(
			`data: {"p":"response/content","v":"hello"}`,
			`data: {"p":"response","o":"BATCH","v":[{"p":"accumulated_token_usage","v":190},{"p":"quasi_status","v":"FINISHED"}]}`,
		)},
	}
	r := chi.NewRouter()
	r.Use(captureStatusMiddleware(&statuses))
	RegisterRoutes(r, h)

	reqBody := `{"model":"deepseek-chat","input":"hi","stream":true}`
	req := httptest.NewRequest(http.MethodPost, "/v1/responses", strings.NewReader(reqBody))
	req.Header.Set("Authorization", "Bearer direct-token")
	req.Header.Set("Content-Type", "application/json")
	rec := httptest.NewRecorder()
	r.ServeHTTP(rec, req)

	if rec.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d body=%s", rec.Code, rec.Body.String())
	}
	if len(statuses) != 1 || statuses[0] != http.StatusOK {
		t.Fatalf("expected captured status 200, got %#v", statuses)
	}
	frames, done := parseSSEDataFrames(t, rec.Body.String())
	if !done {
		t.Fatalf("expected [DONE], body=%s", rec.Body.String())
	}
	if len(frames) == 0 {
		t.Fatalf("expected at least one json frame, body=%s", rec.Body.String())
	}
	last := frames[len(frames)-1]
	resp, _ := last["response"].(map[string]any)
	if resp == nil {
		t.Fatalf("expected response payload in final frame, got %#v", last)
	}
	usage, _ := resp["usage"].(map[string]any)
	if usage == nil {
		t.Fatalf("expected usage in response payload, got %#v", resp)
	}
	if got, _ := usage["output_tokens"].(float64); int(got) != 190 {
		t.Fatalf("expected output_tokens=190, got %#v", usage["output_tokens"])
	}
}

func TestResponsesNonStreamUsageOverridesPromptAndOutputTokenUsage(t *testing.T) {
	statuses := make([]int, 0, 1)
	h := &Handler{
		Store: mockOpenAIConfig{wideInput: true},
		Auth:  streamStatusAuthStub{},
		DS: streamStatusDSStub{resp: makeOpenAISSEHTTPResponse(
			`data: {"p":"response/content","v":"ok"}`,
			`data: {"p":"response","o":"BATCH","v":[{"p":"token_usage","v":{"prompt_tokens":11,"completion_tokens":29}},{"p":"quasi_status","v":"FINISHED"}]}`,
		)},
	}
	r := chi.NewRouter()
	r.Use(captureStatusMiddleware(&statuses))
	RegisterRoutes(r, h)

	reqBody := `{"model":"deepseek-chat","input":"hi","stream":false}`
	req := httptest.NewRequest(http.MethodPost, "/v1/responses", strings.NewReader(reqBody))
	req.Header.Set("Authorization", "Bearer direct-token")
	req.Header.Set("Content-Type", "application/json")
	rec := httptest.NewRecorder()
	r.ServeHTTP(rec, req)

	if rec.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d body=%s", rec.Code, rec.Body.String())
	}
	if len(statuses) != 1 || statuses[0] != http.StatusOK {
		t.Fatalf("expected captured status 200, got %#v", statuses)
	}
	var out map[string]any
	if err := json.Unmarshal(rec.Body.Bytes(), &out); err != nil {
		t.Fatalf("decode response failed: %v body=%s", err, rec.Body.String())
	}
	usage, _ := out["usage"].(map[string]any)
	if usage == nil {
		t.Fatalf("expected usage object, got %#v", out)
	}
	if got, _ := usage["input_tokens"].(float64); int(got) != 11 {
		t.Fatalf("expected input_tokens=11, got %#v", usage["input_tokens"])
	}
	if got, _ := usage["output_tokens"].(float64); int(got) != 29 {
		t.Fatalf("expected output_tokens=29, got %#v", usage["output_tokens"])
	}
	if got, _ := usage["total_tokens"].(float64); int(got) != 40 {
		t.Fatalf("expected total_tokens=40, got %#v", usage["total_tokens"])
	}
}
20 changes: 20 additions & 0 deletions internal/translatorcliproxy/bridge_test.go
@@ -26,6 +26,26 @@ func TestFromOpenAINonStreamClaude(t *testing.T) {
}
}

func TestFromOpenAINonStreamClaudePreservesUsageFromOpenAI(t *testing.T) {
	original := []byte(`{"model":"claude-sonnet-4-5","messages":[{"role":"user","content":"hi"}],"stream":false}`)
	translatedReq := []byte(`{"model":"claude-sonnet-4-5","messages":[{"role":"user","content":"hi"}],"stream":false}`)
	openaibody := []byte(`{"id":"chatcmpl_1","object":"chat.completion","created":1,"model":"claude-sonnet-4-5","choices":[{"index":0,"message":{"role":"assistant","content":"hello"},"finish_reason":"stop"}],"usage":{"prompt_tokens":11,"completion_tokens":29,"total_tokens":40}}`)
	got := string(FromOpenAINonStream(sdktranslator.FormatClaude, "claude-sonnet-4-5", original, translatedReq, openaibody))
	if !strings.Contains(got, `"input_tokens":11`) || !strings.Contains(got, `"output_tokens":29`) {
		t.Fatalf("expected claude usage to preserve prompt/completion tokens, got: %s", got)
	}
}

func TestFromOpenAINonStreamGeminiPreservesUsageFromOpenAI(t *testing.T) {
	original := []byte(`{"contents":[{"role":"user","parts":[{"text":"hi"}]}]}`)
	translatedReq := []byte(`{"model":"gemini-2.5-pro","messages":[{"role":"user","content":"hi"}],"stream":false}`)
	openaibody := []byte(`{"id":"chatcmpl_1","object":"chat.completion","created":1,"model":"gemini-2.5-pro","choices":[{"index":0,"message":{"role":"assistant","content":"hello"},"finish_reason":"stop"}],"usage":{"prompt_tokens":11,"completion_tokens":29,"total_tokens":40}}`)
	got := string(FromOpenAINonStream(sdktranslator.FormatGemini, "gemini-2.5-pro", original, translatedReq, openaibody))
	if !strings.Contains(got, `"promptTokenCount":11`) || !strings.Contains(got, `"candidatesTokenCount":29`) || !strings.Contains(got, `"totalTokenCount":40`) {
		t.Fatalf("expected gemini usageMetadata to preserve prompt/completion tokens, got: %s", got)
	}
}

func TestParseFormatAliases(t *testing.T) {
	cases := map[string]sdktranslator.Format{
		"responses": sdktranslator.FormatOpenAIResponse,