Merged
1 change: 1 addition & 0 deletions .github/PULL_REQUEST_TEMPLATE.md
Original file line number Diff line number Diff line change
@@ -17,6 +17,7 @@
## Checklist

- [ ] `npm run type-check` passes
- [ ] `npm run lint` passes
- [ ] `npm test` passes
- [ ] `npm run test:e2e` passes (run before merging UI changes or cutting a release)
- [ ] No hardcoded colours — CSS variables only (`var(--accent)`, `var(--text-primary)`, etc.)
44 changes: 44 additions & 0 deletions changelogs/v1.2.3.md
@@ -0,0 +1,44 @@
# v1.2.3

## What's new

### Features

- **Configurable temperature** — LLM sampling temperature is now a first-class setting in AI & Chat (0–1, step 0.05, presets 0.1 / 0.3 / 0.5 / 0.7). Previously hardcoded to 0.3 everywhere. Plan mode always overrides to 0.1 for deterministic codebase analysis regardless of this setting.
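A minimal sketch of that resolution rule, with hypothetical names (the shipped logic lives inline in `electron/lib/pi-agent-loop.ts`):

```typescript
type AgentMode = "plan" | "execute";

// Resolve the effective sampling temperature for a turn.
// Plan mode always pins 0.1 for deterministic codebase analysis;
// execute mode uses the configured value, falling back to the 0.3 default.
function resolveTemperature(mode: AgentMode, configured?: number): number {
  return mode === "plan" ? 0.1 : (configured ?? 0.3);
}
```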

- **Temperature wired end-to-end** — the setting is now forwarded through all three AI call paths: Pi agent (`pi-agent:prompt` and `pi-agent:approve-plan`), chat panel, and PRD modal. Previously only the Pi agent partially respected it.

- **Max steps uncapped** — the max steps input now accepts values up to 1000, with an ∞ preset (maps to 1000) for complex multi-file tasks. Presets updated to 10 / 20 / 30 / 50 / ∞. The Pi agent loop now reads this value from settings rather than using the hardcoded constant of 30.
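A sketch of how the ∞ preset and the widened input range might be handled (`presetToMaxSteps` and `clampMaxSteps` are illustrative names, not the actual component code):

```typescript
// Hard ceiling for the agent loop; the UI's ∞ preset is stored as this value.
const MAX_STEPS_LIMIT = 1000;

// Map a preset button value to the stored maxSteps number.
function presetToMaxSteps(preset: number | "∞"): number {
  return preset === "∞" ? MAX_STEPS_LIMIT : preset;
}

// Clamp free-form input to the accepted 1–1000 range.
function clampMaxSteps(value: number): number {
  return Math.min(MAX_STEPS_LIMIT, Math.max(1, Math.round(value)));
}
```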

- **Smarter coding agent tool set** — the Pi agent (`CAIRN_TOOL_NAMES`) gains access to seven previously missing tools: `get_neighbors` (N-hop graph traversal), `bulk_update_task_status`, `link_note_to_task`, `create_tag`, `get_idea_flow_rules`, `update_idea_flow_node`, and `layout_idea_flow`. The agent could previously create idea flow nodes and tasks but couldn't link, tag, or arrange them.

- **AI & Chat settings reorganised** — "Enable AI features" is now "Enable inline AI" under a new "Visibility" group that also contains Agent view and AI Chat view toggles. All AI-related visibility is now in one place instead of being split between AI & Chat and General → Views.

### Fixes

- **MCP project selector showing icon name instead of icon** — the project dropdown in the MCP Settings "Project context" panel was rendering `p.icon` as a string inside an HTML `<option>` element (which only supports plain text), causing it to show "Cpu Cairn Project M…" instead of the project name. Fixed by rendering `p.name` only.

- **`maxSteps` setting not applied to Pi agent** — the "Max steps" setting in AI & Chat was correctly forwarded by the chat panel and PRD modal but was silently ignored by the Pi agent, which always used a hardcoded constant of 30. The setting now flows through `PiAgentPromptRequest.config`, `AgentLLMConfig`, and `runAgentLoop`.

- **`temperature` not forwarded by chat panel or PRD modal** — both callers forwarded `maxSteps` but dropped `temperature`. Fixed: `ChatStreamRequest.config`, `ChatRequest.config` (tools.ts), and `electron/ipc/chat.ts` now all carry and apply temperature.
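One detail worth noting in these fallbacks: they use nullish coalescing (`??`) rather than `||`, so an explicitly configured temperature of 0 survives the chain instead of being swallowed by the default. A sketch of the distinction (illustrative helper, not the shipped code):

```typescript
// `configured || 0.3` would treat 0 as "unset" because 0 is falsy;
// `configured ?? 0.3` only falls back when the value is null or undefined.
function pickTemperature(configured?: number): number {
  return configured ?? 0.3;
}
```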

- **`aiEnabled` toggle description was misleading** — the "In-app AI" group description claimed toggling it would hide "chat, text actions, PRD generator, task spawning, and Idea Flow summaries." In practice it only hides inline editor AI buttons — the Agent and AI Chat views are controlled independently via view visibility toggles. Description now accurately reflects what the toggle does.

- **General → Views included Agent and AI Chat rows** — these belong in AI & Chat since they are AI features. Moved to the new Visibility group; General → Views now only lists non-AI views (Board, Idea Flow, Knowledge Graph, Insights) with a note pointing to AI & Chat for the others.

### Changes

- `src/store/slices/ui.ts` — `AIConfig` gains `temperature: number` field
- `src/lib/constants.ts` — `DEFAULT_AI_CONFIG` gains `temperature: 0.3`
- `src/components/settings/AISettings.tsx` — "In-app AI" group replaced by "Visibility" group; Agent view and AI Chat view toggles added; temperature `SettingsRow` added (Thermometer icon, presets 0.1/0.3/0.5/0.7); max steps input range raised to 1–1000, presets updated to 10/20/30/50/∞; context window description clarified as display-only
- `src/components/settings/ViewVisibilitySettings.tsx` — Agent and AI Chat rows removed; description updated to reference AI & Chat settings; unused icon imports removed
- `src/components/settings/GeneralSettings.tsx` — no change (ViewVisibilitySettings handles its own removal)
- `src/hooks/useChatStream.ts` — `ChatStreamRequest.config` gains `temperature?: number`
- `src/components/chat/chat-panel.tsx` — forwards `temperature` in config payload
- `src/components/notes/notes-view/PrdModal.tsx` — forwards `temperature` in config payload
- `src/components/agent/PiAgentPane.tsx` — forwards `maxSteps` and `temperature` in both `prompt` and `approvePlan` payloads
- `electron/lib/tools.ts` — `ChatRequest.config` gains `temperature?: number`
- `electron/ipc/chat.ts` — reads `req.config?.temperature ?? 0.3`; replaces hardcoded `temperature: 0.3` in fetch body
- `electron/ipc/pi-agent.ts` — `PiAgentPromptRequest.config` and `PiAgentApprovePlanRequest.config` gain `maxSteps?` and `temperature?`; both `llmConfig` construction sites read them with safe fallbacks
- `electron/lib/pi-agent-loop.ts` — `AgentLLMConfig` gains `maxSteps` and `temperature`; `MAX_STEPS = 30` constant removed; loop uses `llmConfig.maxSteps`; plan mode forces `temperature = 0.1` regardless of user setting; execute mode uses configured value; error message reports actual configured step count; `CAIRN_TOOL_NAMES` expanded with 7 new tools; stale comment referencing removed tool names updated
- `src/components/settings/MCPSettings.tsx` — project selector `<option>` renders `p.name` only (was `p.icon + p.name`)
5 changes: 3 additions & 2 deletions electron/ipc/chat.ts
@@ -37,7 +37,8 @@ async function runToolLoop(
signal?: AbortSignal,
getWin?: () => BrowserWindow | null,
): Promise<{ exhausted: true; content: string } | { exhausted: false }> {
-  const maxSteps = req.config?.maxSteps ?? 20;
+  const maxSteps = req.config?.maxSteps ?? 20;
+  const temperature = req.config?.temperature ?? 0.3;
for (let round = 0; round < maxSteps; round++) {
if (signal?.aborted) return { exhausted: true, content: "" };
let response: Response;
@@ -47,7 +48,7 @@
response = await fetch(`${baseUrl}/v1/chat/completions`, {
method: "POST",
headers,
-      body: JSON.stringify({ model, messages, tools: TOOLS, tool_choice: "auto", max_tokens: 4096, temperature: 0.3 }),
+      body: JSON.stringify({ model, messages, tools: TOOLS, tool_choice: "auto", max_tokens: 4096, temperature }),
});
} catch {
return { exhausted: true, content: `Could not reach the AI endpoint at \`${baseUrl}\`. Check your endpoint URL and make sure the server is running.` };
20 changes: 14 additions & 6 deletions electron/ipc/pi-agent.ts
@@ -45,6 +45,8 @@ interface PiAgentPromptRequest {
baseUrl?: string;
model?: string;
apiKey?: string;
maxSteps?: number;
temperature?: number;
};
}

@@ -59,6 +61,8 @@ interface PiAgentApprovePlanRequest {
baseUrl?: string;
model?: string;
apiKey?: string;
maxSteps?: number;
temperature?: number;
};
}

@@ -91,9 +95,11 @@ export function registerPiAgentHandler(

// Resolve LLM config — renderer passes config from its aiConfig store
const llmConfig: AgentLLMConfig = {
-    baseUrl: normaliseBaseUrl(req.config?.baseUrl || "https://api.openai.com"),
-    model: req.config?.model || "gpt-4o",
-    apiKey: req.config?.apiKey || "",
+    baseUrl: normaliseBaseUrl(req.config?.baseUrl || "https://api.openai.com"),
+    model: req.config?.model || "gpt-4o",
+    apiKey: req.config?.apiKey || "",
+    maxSteps: req.config?.maxSteps ?? 20,
+    temperature: req.config?.temperature ?? 0.3,
};

// Get or create session
@@ -213,9 +219,11 @@
};

const llmConfig: AgentLLMConfig = {
-    baseUrl: normaliseBaseUrl(req.config?.baseUrl || "https://api.openai.com"),
-    model: req.config?.model || "gpt-4o",
-    apiKey: req.config?.apiKey || "",
+    baseUrl: normaliseBaseUrl(req.config?.baseUrl || "https://api.openai.com"),
+    model: req.config?.model || "gpt-4o",
+    apiKey: req.config?.apiKey || "",
+    maxSteps: req.config?.maxSteps ?? 20,
+    temperature: req.config?.temperature ?? 0.3,
};

let session = sessions.get(sessionId);
20 changes: 10 additions & 10 deletions electron/lib/pi-agent-loop.test.ts
@@ -181,7 +181,7 @@ describe("runAgentLoop — SSE streaming", () => {

await runAgentLoop(
session, "You are a test assistant.", "/tmp",
-    { baseUrl: server.url, model: "test", apiKey: "test" },
+    { baseUrl: server.url, model: "test", apiKey: "test", maxSteps: 10, temperature: 0.3 },
db, chatReq, "/tmp", callbacks,
);

@@ -210,7 +210,7 @@

await runAgentLoop(
session, "You are a test assistant.", "/tmp",
-    { baseUrl: server.url, model: "test", apiKey: "test" },
+    { baseUrl: server.url, model: "test", apiKey: "test", maxSteps: 10, temperature: 0.3 },
db, chatReq, "/tmp", callbacks,
);

@@ -243,7 +243,7 @@

await runAgentLoop(
session, "You are a test assistant.", "/tmp",
-    { baseUrl: server.url, model: "test", apiKey: "test" },
+    { baseUrl: server.url, model: "test", apiKey: "test", maxSteps: 10, temperature: 0.3 },
db, chatReq, "/tmp", callbacks,
);

@@ -272,7 +272,7 @@

await runAgentLoop(
session, "You are a test assistant.", "/tmp",
-    { baseUrl: server.url, model: "test", apiKey: "test" },
+    { baseUrl: server.url, model: "test", apiKey: "test", maxSteps: 10, temperature: 0.3 },
db, chatReq, "/tmp", callbacks,
);

@@ -321,7 +321,7 @@

await runAgentLoop(
session, "You are a test assistant.", "/tmp",
-    { baseUrl: server.url, model: "test", apiKey: "test" },
+    { baseUrl: server.url, model: "test", apiKey: "test", maxSteps: 10, temperature: 0.3 },
db, chatReq, "/tmp", callbacks,
);

@@ -350,7 +350,7 @@

await runAgentLoop(
session, "You are a test assistant.", "/tmp",
-    { baseUrl: server.url, model: "test", apiKey: "test" },
+    { baseUrl: server.url, model: "test", apiKey: "test", maxSteps: 10, temperature: 0.3 },
db, chatReq, "/tmp", callbacks,
);

@@ -370,7 +370,7 @@

await runAgentLoop(
session, "You are a test assistant.", "/tmp",
-    { baseUrl: server.url, model: "test", apiKey: "test" },
+    { baseUrl: server.url, model: "test", apiKey: "test", maxSteps: 10, temperature: 0.3 },
db, chatReq, "/tmp", callbacks,
);

@@ -418,7 +418,7 @@

await runAgentLoop(
session, "You are a test assistant.", "/tmp",
-    { baseUrl: server.url, model: "test", apiKey: "test" },
+    { baseUrl: server.url, model: "test", apiKey: "test", maxSteps: 10, temperature: 0.3 },
db, chatReq, "/tmp", callbacks,
);

@@ -462,7 +462,7 @@ describe.skipIf(!liveBaseUrl)("runAgentLoop — live endpoint", () => {
session,
"You are a test assistant. Follow instructions exactly.",
"/tmp",
-    { baseUrl: liveBaseUrl!, model: liveModel, apiKey: liveApiKey },
+    { baseUrl: liveBaseUrl!, model: liveModel, apiKey: liveApiKey, maxSteps: 10, temperature: 0.3 },
db, chatReq, "/tmp", callbacks,
);

@@ -480,7 +480,7 @@
session,
"You are a test assistant with access to coding tools. When asked to list files, always use the ls tool.",
"/tmp",
-    { baseUrl: liveBaseUrl!, model: liveModel, apiKey: liveApiKey },
+    { baseUrl: liveBaseUrl!, model: liveModel, apiKey: liveApiKey, maxSteps: 10, temperature: 0.3 },
db, chatReq, "/tmp", callbacks,
);

36 changes: 26 additions & 10 deletions electron/lib/pi-agent-loop.ts
@@ -35,6 +35,10 @@ export interface AgentLLMConfig {
baseUrl: string;
model: string;
apiKey: string;
/** Maximum tool-call iterations per turn. Defaults to 20. */
maxSteps: number;
/** Sampling temperature. Plan mode overrides this to 0.1 for determinism. */
temperature: number;
}

// ── Message types ─────────────────────────────────────────────────────────────
@@ -107,26 +111,38 @@ const CODING_TOOL_DEFS = [
];

// Cairn data tool names exposed to the coding agent.
-// Redundant tools (create_note, update_note, update_task_status, list_notes,
-// list_tasks, get_cairn_context) are excluded — see AGENT_EXCLUDED_TOOLS in
-// tool-schemas.ts. ensure_note / patch_note / search_* cover all use cases.
+// get_cairn_context is excluded (see AGENT_EXCLUDED_TOOLS) — agents use
+// get_project_context_pack instead. Delete tools are intentionally omitted
+// to prevent autonomous destructive actions.
const CAIRN_TOOL_NAMES = new Set([
// ── Context / read ──────────────────────────────────────────────────────────
"get_active_context",
"get_project_context_pack",
"get_neighbors",
// ── Notes ───────────────────────────────────────────────────────────────────
"get_note",
"ensure_note",
"patch_note",
"append_to_note",
"search_notes",
// ── Tasks ───────────────────────────────────────────────────────────────────
"get_task",
"create_task",
"update_task",
"bulk_update_task_status",
"search_tasks",
"list_ready_tasks",
"link_note_to_task",
// ── Tags ────────────────────────────────────────────────────────────────────
"create_tag",
// ── Idea Flow ───────────────────────────────────────────────────────────────
"get_idea_flow",
"get_idea_flow_rules",
"create_idea_flow_node",
"update_idea_flow_node",
"create_idea_flow_edge",
-  // Renderer-side only — main process no-ops; renderer renders an inline QuestionForm.
+  "layout_idea_flow",
+  // ── Renderer-side only — main process no-ops; renderer renders an inline QuestionForm.
"ask_questions",
]);

@@ -220,8 +236,6 @@ async function executeSingleTool(

// ── Main loop ─────────────────────────────────────────────────────────────────

-const MAX_STEPS = 30;

export async function runAgentLoop(
session: PiAgentSession,
systemPrompt: string,
@@ -239,15 +253,17 @@
const { signal } = session.abortCtrl;
const allTools = getAllToolDefs(mode);

-  const { baseUrl, model, apiKey } = llmConfig;
+  const { baseUrl, model, apiKey, maxSteps, temperature: configTemp } = llmConfig;
+  // Plan mode always uses 0.1 for deterministic analysis regardless of user setting
+  const temperature = mode === "plan" ? 0.1 : (configTemp ?? 0.3);
if (!apiKey && !isLocalEndpoint(baseUrl)) {
callbacks.onError("No API key configured. Set one in Settings → AI & Chat.");
return;
}

let steps = 0;

while (steps < MAX_STEPS) {
-  while (steps < MAX_STEPS) {
+  while (steps < maxSteps) {
if (signal.aborted) { callbacks.onDone(); return; }
steps++;
// From step 2 onwards, signal the renderer to finalise the previous
@@ -276,7 +292,7 @@
tools: allTools,
tool_choice: "auto",
max_tokens: 8192,
-        temperature: 0.3,
+        temperature,
stream: true,
stream_options: { include_usage: true },
}),
@@ -461,6 +477,6 @@

// Exceeded max steps
callbacks.onError(
-    `Reached the maximum of ${MAX_STEPS} steps. Any changes made have been saved. Try a more focused request.`
+    `Reached the maximum of ${maxSteps} steps. Any changes made have been saved. Try a more focused request.`
);
}
2 changes: 1 addition & 1 deletion electron/lib/tools.ts
@@ -85,7 +85,7 @@ export interface ChatRequest {
projectId?: string;
workspaceId?: string;
history?: Array<{ role: "user" | "assistant" | "system"; content: string }>;
-  config?: { baseUrl?: string; model?: string; apiKey?: string; maxSteps?: number };
+  config?: { baseUrl?: string; model?: string; apiKey?: string; maxSteps?: number; temperature?: number };
systemPrompt?: string;
}

16 changes: 10 additions & 6 deletions src/components/agent/PiAgentPane.tsx
@@ -398,9 +398,11 @@ export function PiAgentPane({ session, isActive }: PiAgentPaneProps) {
taskTitle: session.taskTitle !== "Ad-hoc session" ? session.taskTitle : undefined,
mode: session.mode ?? "execute",
config: {
-        baseUrl: aiConfig.baseUrl || undefined,
-        model: aiConfig.model || undefined,
-        apiKey: aiConfig.apiKey || undefined,
+        baseUrl: aiConfig.baseUrl || undefined,
+        model: aiConfig.model || undefined,
+        apiKey: aiConfig.apiKey || undefined,
+        maxSteps: aiConfig.maxSteps ?? 20,
+        temperature: aiConfig.temperature ?? 0.3,
},
};
window.electron?.piAgent.prompt(promptPayload);
@@ -448,9 +450,11 @@
cwd: session.cwd,
taskTitle: session.taskTitle !== "Ad-hoc session" ? session.taskTitle : undefined,
config: {
-        baseUrl: aiConfig.baseUrl || undefined,
-        model: aiConfig.model || undefined,
-        apiKey: aiConfig.apiKey || undefined,
+        baseUrl: aiConfig.baseUrl || undefined,
+        model: aiConfig.model || undefined,
+        apiKey: aiConfig.apiKey || undefined,
+        maxSteps: aiConfig.maxSteps ?? 20,
+        temperature: aiConfig.temperature ?? 0.3,
},
});
}
9 changes: 5 additions & 4 deletions src/components/chat/chat-panel.tsx
@@ -183,10 +183,11 @@ export function ChatPanel({ prefill, onPrefillConsumed }: ChatPanelProps = {}) {
workspaceId: activeWorkspaceId,
history: messages.slice(-40).map((m) => ({ role: m.role, content: m.content })),
config: {
-          baseUrl: aiConfig.baseUrl || undefined,
-          model: aiConfig.model || undefined,
-          apiKey: aiConfig.apiKey || undefined,
-          maxSteps: aiConfig.maxSteps ?? 20,
+          baseUrl: aiConfig.baseUrl || undefined,
+          model: aiConfig.model || undefined,
+          apiKey: aiConfig.apiKey || undefined,
+          maxSteps: aiConfig.maxSteps ?? 20,
+          temperature: aiConfig.temperature ?? 0.3,
},
});
}
9 changes: 5 additions & 4 deletions src/components/notes/notes-view/PrdModal.tsx
@@ -93,10 +93,11 @@ export function PrdModal({ projectId, workspaceId, onClose }: PrdModalProps) {
workspaceId,
history,
config: {
-          baseUrl: aiConfig.baseUrl || "https://api.openai.com",
-          model: aiConfig.model || "gpt-4o-mini",
-          apiKey: aiConfig.apiKey || "",
-          maxSteps: aiConfig.maxSteps ?? 20,
+          baseUrl: aiConfig.baseUrl || "https://api.openai.com",
+          model: aiConfig.model || "gpt-4o-mini",
+          apiKey: aiConfig.apiKey || "",
+          maxSteps: aiConfig.maxSteps ?? 20,
+          temperature: aiConfig.temperature ?? 0.3,
},
systemPrompt: buildPrdSystemPrompt(projectId),
});