35 changes: 30 additions & 5 deletions README.md
@@ -35,7 +35,7 @@
- [简体中文](locales/zh-CN/README.md)
- [繁體中文](locales/zh-TW/README.md)
- ...
</details>

---

@@ -66,16 +66,41 @@ Learn more: [Using Modes](https://docs.roocode.com/basic-usage/using-modes) •

<div align="center">

| | | |
| :-----------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| <a href="https://www.youtube.com/watch?v=Mcq3r1EPZ-4"><img src="https://img.youtube.com/vi/Mcq3r1EPZ-4/maxresdefault.jpg" width="100%"></a><br><b>Installing Roo Code</b> | <a href="https://www.youtube.com/watch?v=ZBML8h5cCgo"><img src="https://img.youtube.com/vi/ZBML8h5cCgo/maxresdefault.jpg" width="100%"></a><br><b>Configuring Profiles</b> | <a href="https://www.youtube.com/watch?v=r1bpod1VWhg"><img src="https://img.youtube.com/vi/r1bpod1VWhg/maxresdefault.jpg" width="100%"></a><br><b>Codebase Indexing</b> |
| <a href="https://www.youtube.com/watch?v=iiAv1eKOaxk"><img src="https://img.youtube.com/vi/iiAv1eKOaxk/maxresdefault.jpg" width="100%"></a><br><b>Custom Modes</b> | <a href="https://www.youtube.com/watch?v=Ho30nyY332E"><img src="https://img.youtube.com/vi/Ho30nyY332E/maxresdefault.jpg" width="100%"></a><br><b>Checkpoints</b> | <a href="https://www.youtube.com/watch?v=HmnNSasv7T8"><img src="https://img.youtube.com/vi/HmnNSasv7T8/maxresdefault.jpg" width="100%"></a><br><b>Context Management</b> |

</div>
<p align="center">
<a href="https://docs.roocode.com/tutorial-videos">More quick tutorials and feature videos...</a>
</p>

## Supported API Providers

Roo Code integrates with a wide range of AI providers:

**Major Providers:**

- Anthropic (Claude)
- OpenAI
- Google Gemini
- Amazon Bedrock

**Open-Weight Models:**

- **Harmony** (GPT-OSS models: gpt-oss-20b, gpt-oss-120b)
- Groq
- Mistral
- Ollama (local)
- LM Studio (local)

**Additional Providers:**

- xAI, SambaNova, DeepSeek, Doubao, Featherless, Fireworks, MiniMax, Moonshot, QwenCode, Vertex AI, and more

Each provider can be configured with custom settings in the Roo Code settings panel.
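For illustration, here is a minimal sketch of what a Harmony entry could look like, using only the fields this PR adds in `provider-settings.ts` (`apiProvider`, `apiModelId`, `harmonyApiKey`, `harmonyBaseUrl`); the endpoint URL is a placeholder, not a documented default:

```ts
// Hypothetical Harmony configuration; the field names come from this PR's
// provider-settings.ts, the values are placeholders.
const harmonySettings = {
	apiProvider: "harmony" as const,
	apiModelId: "gpt-oss-20b", // or "gpt-oss-120b"
	harmonyApiKey: process.env.HARMONY_API_KEY,
	harmonyBaseUrl: "http://localhost:8000/v1", // any OpenAI-compatible endpoint serving GPT-OSS
}
```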

## Resources

- **[Documentation](https://docs.roocode.com):** The official guide to installing, configuring, and mastering Roo Code.
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "roo-code",
"packageManager": "pnpm@10.8.1",
"packageManager": "pnpm@10.28.1",
"engines": {
Contributor


This packageManager version change from pnpm@10.8.1 to pnpm@10.28.1 appears unrelated to the Harmony provider feature. Consider reverting this change or submitting it as a separate PR to keep the feature scope focused.


"node": "20.19.2"
},
11 changes: 11 additions & 0 deletions packages/types/src/provider-settings.ts
@@ -13,6 +13,7 @@ import {
fireworksModels,
geminiModels,
groqModels,
harmonyModels,
ioIntelligenceModels,
mistralModels,
moonshotModels,
@@ -129,6 +130,7 @@ export const providerNames = [
"gemini",
"gemini-cli",
"groq",
"harmony",
"mistral",
"moonshot",
"minimax",
@@ -352,6 +354,11 @@ const groqSchema = apiModelIdProviderModelSchema.extend({
groqApiKey: z.string().optional(),
})

const harmonySchema = apiModelIdProviderModelSchema.extend({
harmonyApiKey: z.string().optional(),
harmonyBaseUrl: z.string().optional(),
})

const huggingFaceSchema = baseProviderSettingsSchema.extend({
huggingFaceApiKey: z.string().optional(),
huggingFaceModelId: z.string().optional(),
@@ -445,6 +452,7 @@ export const providerSettingsSchemaDiscriminated = z.discriminatedUnion("apiProvider", [
fakeAiSchema.merge(z.object({ apiProvider: z.literal("fake-ai") })),
xaiSchema.merge(z.object({ apiProvider: z.literal("xai") })),
groqSchema.merge(z.object({ apiProvider: z.literal("groq") })),
harmonySchema.merge(z.object({ apiProvider: z.literal("harmony") })),
basetenSchema.merge(z.object({ apiProvider: z.literal("baseten") })),
huggingFaceSchema.merge(z.object({ apiProvider: z.literal("huggingface") })),
chutesSchema.merge(z.object({ apiProvider: z.literal("chutes") })),
@@ -486,6 +494,7 @@ export const providerSettingsSchema = z.object({
...fakeAiSchema.shape,
...xaiSchema.shape,
...groqSchema.shape,
...harmonySchema.shape,
...basetenSchema.shape,
...huggingFaceSchema.shape,
...chutesSchema.shape,
@@ -572,6 +581,7 @@ export const modelIdKeysByProvider: Record<TypicalProvider, ModelIdKey> = {
requesty: "requestyModelId",
xai: "apiModelId",
groq: "apiModelId",
harmony: "apiModelId",
baseten: "apiModelId",
chutes: "apiModelId",
litellm: "litellmModelId",
@@ -660,6 +670,7 @@ export const MODELS_BY_PROVIDER: Record<
models: Object.keys(geminiModels),
},
groq: { id: "groq", label: "Groq", models: Object.keys(groqModels) },
harmony: { id: "harmony", label: "Harmony", models: Object.keys(harmonyModels) },
"io-intelligence": {
id: "io-intelligence",
label: "IO Intelligence",
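Since the new provider joins the `apiProvider` discriminated union, a Harmony config validates like any other. A minimal sketch, assuming the schema is consumed from the `@roo-code/types` package (the import path is an assumption):

```ts
import { providerSettingsSchemaDiscriminated } from "@roo-code/types"

// Succeeds: "harmony" is now a member of the discriminated union,
// and both harmonyApiKey and harmonyBaseUrl are optional.
const parsed = providerSettingsSchemaDiscriminated.parse({
	apiProvider: "harmony",
	apiModelId: "gpt-oss-120b",
	harmonyBaseUrl: "http://localhost:8000/v1", // placeholder endpoint
})
```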
66 changes: 66 additions & 0 deletions packages/types/src/providers/harmony.ts
@@ -0,0 +1,66 @@
import type { ModelInfo } from "../model.js"

/**
* Harmony-compatible API provider types and models
*
* Harmony is an open response format specification for GPT-OSS models
* that enables structured output with separate reasoning and answer channels.
*
* @see https://developers.openai.com/cookbook/articles/openai-harmony
* @see https://github.com/openai/harmony
*/

/**
* Supported Harmony model identifiers
*
* - gpt-oss-20b: 20B parameter open-weight model, optimal for speed
* - gpt-oss-120b: 120B parameter open-weight model, optimal for quality
*
* Both models support:
* - 128,000 token context window
* - Reasoning effort levels (low, medium, high)
* - Streaming responses
* - Function calling
*/
export type HarmonyModelId = "gpt-oss-20b" | "gpt-oss-120b"

/**
* Default Harmony model
* @default "gpt-oss-20b" - Balanced model for general use
*/
export const harmonyDefaultModelId: HarmonyModelId = "gpt-oss-20b"

/**
* Harmony model definitions and capabilities
*
* All Harmony models support:
* - 128,000 token context window for comprehensive codebase analysis
* - Reasoning effort levels: low, medium, high
* - Streaming responses for real-time feedback
* - Function calling for tool integration
* - OpenAI-compatible API interface
*/
export const harmonyModels: Record<HarmonyModelId, ModelInfo> = {
"gpt-oss-20b": {
maxTokens: 8192,
contextWindow: 128000,
supportsImages: false,
supportsPromptCache: false,
supportsReasoningEffort: ["low", "medium", "high"],
inputPrice: 0,
outputPrice: 0,
description:
"GPT-OSS 20B: 20 billion parameter open-weight model. Optimized for fast inference with 128K context window.",
},
"gpt-oss-120b": {
maxTokens: 8192,
contextWindow: 128000,
supportsImages: false,
supportsPromptCache: false,
supportsReasoningEffort: ["low", "medium", "high"],
inputPrice: 0,
outputPrice: 0,
description:
"GPT-OSS 120B: 120 billion parameter open-weight model. Higher quality reasoning with 128K context window.",
},
}
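A quick sketch of consuming these exports, using only the names defined in this file:

```ts
import { harmonyModels, harmonyDefaultModelId, type HarmonyModelId } from "./harmony.js"

const modelId: HarmonyModelId = harmonyDefaultModelId // "gpt-oss-20b"
const info = harmonyModels[modelId]

console.log(info.contextWindow) // 128000
console.log(info.supportsReasoningEffort) // ["low", "medium", "high"]
```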
4 changes: 4 additions & 0 deletions packages/types/src/providers/index.ts
@@ -9,6 +9,7 @@ export * from "./featherless.js"
export * from "./fireworks.js"
export * from "./gemini.js"
export * from "./groq.js"
export * from "./harmony.js"
export * from "./huggingface.js"
export * from "./io-intelligence.js"
export * from "./lite-llm.js"
@@ -44,6 +45,7 @@ import { featherlessDefaultModelId } from "./featherless.js"
import { fireworksDefaultModelId } from "./fireworks.js"
import { geminiDefaultModelId } from "./gemini.js"
import { groqDefaultModelId } from "./groq.js"
import { harmonyDefaultModelId } from "./harmony.js"
import { ioIntelligenceDefaultModelId } from "./io-intelligence.js"
import { litellmDefaultModelId } from "./lite-llm.js"
import { mistralDefaultModelId } from "./mistral.js"
@@ -88,6 +90,8 @@ export function getProviderDefaultModelId(
return xaiDefaultModelId
case "groq":
return groqDefaultModelId
case "harmony":
return harmonyDefaultModelId
case "huggingface":
return "meta-llama/Llama-3.3-70B-Instruct"
case "chutes":
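With the new `case "harmony"` in place, resolving the provider's default model is a one-liner. A sketch, assuming `getProviderDefaultModelId` takes the provider name as its argument:

```ts
import { getProviderDefaultModelId } from "@roo-code/types"

getProviderDefaultModelId("harmony") // => "gpt-oss-20b" (harmonyDefaultModelId)
```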
3 changes: 3 additions & 0 deletions src/api/index.ts
@@ -26,6 +26,7 @@ import {
FakeAIHandler,
XAIHandler,
GroqHandler,
HarmonyHandler,
HuggingFaceHandler,
ChutesHandler,
LiteLLMHandler,
@@ -167,6 +168,8 @@ export function buildApiHandler(configuration: ProviderSettings): ApiHandler {
return new XAIHandler(options)
case "groq":
return new GroqHandler(options)
case "harmony":
return new HarmonyHandler(options)
case "deepinfra":
return new DeepInfraHandler(options)
case "huggingface":
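The `HarmonyHandler` implementation itself is not shown in this diff. As a rough sketch of the shape the integration tests below exercise, a handler could wrap the OpenAI SDK client with the new settings fields; everything here beyond `harmonyApiKey` and `harmonyBaseUrl` is an assumption, not the PR's actual code:

```ts
import OpenAI from "openai"

// Hypothetical sketch of a Harmony handler targeting any
// OpenAI-compatible endpoint that serves GPT-OSS models.
class HarmonyHandlerSketch {
	private client: OpenAI

	constructor(options: { harmonyApiKey?: string; harmonyBaseUrl?: string }) {
		this.client = new OpenAI({
			baseURL: options.harmonyBaseUrl, // e.g. a local vLLM or Ollama endpoint
			apiKey: options.harmonyApiKey ?? "not-needed-for-local",
		})
	}

	async complete(prompt: string): Promise<string> {
		const response = await this.client.chat.completions.create({
			model: "gpt-oss-20b",
			messages: [{ role: "user", content: prompt }],
		})
		return response.choices[0].message.content ?? ""
	}
}
```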
109 changes: 109 additions & 0 deletions src/api/providers/__tests__/harmony-edge-cases.spec.ts
@@ -0,0 +1,109 @@
// npx vitest run src/api/providers/__tests__/harmony-edge-cases.spec.ts
// Integration tests for Harmony API edge cases
// Run with: HARMONY_API_KEY=your-key HARMONY_BASE_URL=your-base-url npx vitest run src/api/providers/__tests__/harmony-edge-cases.spec.ts

import { describe, it, expect, beforeEach } from "vitest"
import OpenAI from "openai"

const isIntegrationTest = !!process.env.HARMONY_API_KEY && !!process.env.HARMONY_BASE_URL
const skipIfNoApi = isIntegrationTest ? describe : describe.skip

skipIfNoApi("Harmony API Edge Cases (Integration Tests)", () => {
let client: OpenAI

beforeEach(() => {
const apiKey = process.env.HARMONY_API_KEY || "sk-placeholder"
const baseURL = process.env.HARMONY_BASE_URL
if (!baseURL) {
throw new Error("HARMONY_BASE_URL environment variable is required for integration tests")
}
client = new OpenAI({ baseURL, apiKey })
})

it("should handle large input (testing context window)", async () => {
const largeInput = "Summarize this text: " + "Lorem ipsum dolor sit amet. ".repeat(500)
const response = await client.chat.completions.create({
model: "gpt-oss-20b",
messages: [{ role: "user", content: largeInput }],
max_tokens: 100,
})

expect(response.choices).toHaveLength(1)
expect(response.choices[0].message.content).toBeTruthy()
expect(response.usage?.prompt_tokens).toBeGreaterThan(0)
})

it("should handle conversation with multiple messages", async () => {
const response = await client.chat.completions.create({
model: "gpt-oss-20b",
messages: [
{ role: "user", content: "What is your name?" },
{ role: "assistant", content: "I'm Claude, an AI assistant." },
{ role: "user", content: "What can you help me with?" },
],
max_tokens: 100,
})

expect(response.choices).toHaveLength(1)
expect(response.choices[0].message.content).toBeTruthy()
})

it("should return proper error for invalid API key", async () => {
const baseURL = process.env.HARMONY_BASE_URL
if (!baseURL) {
throw new Error("HARMONY_BASE_URL environment variable is required")
}
const badClient = new OpenAI({
baseURL,
apiKey: "invalid-key-12345",
})

await expect(
badClient.chat.completions.create({
model: "gpt-oss-20b",
messages: [{ role: "user", content: "Test" }],
}),
).rejects.toThrow()
})

it("should return proper error for unknown model", async () => {
await expect(
client.chat.completions.create({
model: "unknown-model-xyz",
messages: [{ role: "user", content: "Test" }],
}),
).rejects.toThrow()
})

it("should list available models", async () => {
const models = await client.models.list()

expect(models.data).toBeDefined()
expect(Array.isArray(models.data)).toBe(true)
if (models.data.length > 0) {
expect(models.data[0].id).toBeTruthy()
}
})

it("should handle high temperature (creative output)", async () => {
const response = await client.chat.completions.create({
model: "gpt-oss-20b",
messages: [{ role: "user", content: "Generate a creative story starter in one sentence" }],
temperature: 1.5,
max_tokens: 100,
})

expect(response.choices[0].message.content).toBeTruthy()
})

it("should handle zero temperature (deterministic)", async () => {
const response = await client.chat.completions.create({
model: "gpt-oss-20b",
messages: [{ role: "user", content: "What is 2+2?" }],
temperature: 0,
max_tokens: 50,
})

expect(response.choices[0].message.content).toBeTruthy()
})
})