feat(prompts): enhance intent classification and tool invocation guidelines #16
feat(schemas): update shipping estimate and support ticket schemas with additional fields
📝 Walkthrough
Refines prompts and system invariants to enforce pre-tool parameter checks and user confirmation for write operations; expands intent-classifier examples routing shipping/product queries to tools; relaxes and extends the shipping and support-ticket schemas; updates the composition chain to include tool output and adjust early-return behavior; tests are updated accordingly.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as "User"
    participant LLM as "LLM (classifier/composer)"
    participant ToolExec as "ToolExecutor/Validator"
    participant ToolAPI as "Tool API (read/write)"
    User->>LLM: Ask (e.g., "How much to ship to Jakarta?")
    LLM->>LLM: Classify intent → TOOL or RAG, pick toolName
    alt TOOL selected
        LLM->>User: Request_missing_params / Ask_confirmation_for_write
        User->>LLM: Provide params / Confirm
        LLM->>ToolExec: Validate params (pre-tool checks)
        alt params valid
            ToolExec->>ToolAPI: Invoke tool (read or write)
            ToolAPI-->>ToolExec: Return structured result
            ToolExec-->>LLM: ToolOutput (JSON)
            LLM->>LLM: Compose response using ToolOutput
            LLM->>User: Present tool result
        else missing/invalid
            ToolExec-->>LLM: Indicate missing/invalid params
            LLM->>User: Request required info (do not call tool)
        end
    else RAG selected
        LLM->>LLM: Answer from knowledge/policy
        LLM->>User: Provide RAG response
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 2
🧹 Nitpick comments (1)
packages/llm/src/prompts/system.ts (1)
10-16: Conversation rules don't fully align with updated schema requirements.

The new conversation rules are a good addition, but they don't reflect all required schema fields:

- Line 14: for shipping estimates, the prompt mentions "destination and package weight" but doesn't explicitly mention `destinationCity` (now required) or that `destinationZipCode` is optional. Consider being more specific: "ask for destination city, country, and optionally zip code and package weight."
- Line 15: for support tickets, the prompt mentions "summarize the issue and ask for confirmation" but doesn't instruct the LLM to collect `customerEmail` and `customerPhone` (both now required in the schema).

Aligning these rules with the actual schema requirements will help the LLM gather all necessary parameters before tool invocation.

💡 Suggested alignment

```diff
 Conversation Rules:
 - Always collect required information before invoking a tool.
 - For order lookups: ask for order ID and email if not provided.
-- For shipping estimates: ask for destination and package weight if not provided.
-- For support tickets: summarize the issue and ask for confirmation before creating.
+- For shipping estimates: ask for destination city, country, and optionally zip code and weight if not provided.
+- For support tickets: collect customer email and phone, summarize the issue, and ask for confirmation before creating.
 - Never fabricate tool results or data.
```
In `packages/llm/src/tools/schemas.ts` around lines 39-47: `SupportTicketCreationSchema` now requires `customerEmail` and `customerPhone`, but the handler that processes this schema (the destructuring that currently reads `{ issueDescription, category, priority, idempotencyKey, confirmed }`) and the fallback in `tool-execution.ts` omit these fields. Update the handler to also destructure `customerEmail` and `customerPhone`, include them in logging and in the result object returned from the support-ticket creation function, and propagate the same fields in the fallback logic in `tool-execution.ts`. Alternatively, if these fields should be optional, mark them `.optional()` in the schema so schema and handler match.
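A minimal sketch of what the aligned handler could look like. The field names follow the review comment, but the handler shape, function name, and return object are illustrative assumptions, not the repository's actual implementation:

```typescript
// Illustrative only: mirrors the fields named in the review, assuming the
// handler should accept and propagate the two newly required fields.
interface SupportTicketInput {
  issueDescription: string;
  category: string;
  priority: string;
  idempotencyKey: string;
  confirmed: boolean;
  customerEmail: string; // newly required by the schema
  customerPhone: string; // newly required by the schema
}

function createSupportTicket(input: SupportTicketInput) {
  // Destructure the new fields alongside the existing ones so they can
  // flow into logging and the returned result object.
  const { issueDescription, category, priority, customerEmail, customerPhone } =
    input;
  return { issueDescription, category, priority, customerEmail, customerPhone };
}
```

The point of the sketch is only that the destructuring and the return value must name every field the schema requires, so that schema validation and handler behavior stay in sync.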
📒 Files selected for processing (3)

- packages/llm/src/prompts/index.ts
- packages/llm/src/prompts/system.ts
- packages/llm/src/tools/schemas.ts
```diff
 export const ShippingEstimateSchema = z.object({
-  destinationZipCode: z.string().describe('The destination postal or zip code'),
+  destinationZipCode: z.string().optional().describe('The destination postal or zip code'),
   destinationCountry: z.string().describe('The destination country code (e.g., US, UK, ID)'),
+  destinationCity: z.string().describe('The destination city for the shipment'),
   weightKg: z.number().positive().optional().describe('Optional weight in kilograms'),
 });
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Verify all usages of ShippingEstimateSchema and shippingEstimateTool to identify missing destinationCity handling
echo "=== Searching for shippingEstimateTool handler implementation ==="
rg -n -A 20 'shippingEstimateTool\s*=' --type ts
echo ""
echo "=== Searching for shipping_estimate tool input construction ==="
rg -n -B 2 -A 8 "shipping_estimate" --type ts
echo ""
echo "=== Searching for destinationCity usage ==="
rg -n 'destinationCity' --type ts
```

Repository: Ryanakml/ChatFuse
Schema-handler mismatch: `destinationCity` is required but never extracted; `destinationZipCode` is now optional but used unconditionally.

The `shippingEstimateTool` handler in `packages/llm/src/tools/index.ts` (line 118) only destructures `destinationZipCode`, `destinationCountry`, and `weightKg`. The newly required `destinationCity` field is never extracted from the input. Additionally, the handler uses `destinationZipCode` directly in the response (line 125) without null-checking, despite it now being optional in the schema.

The tool input construction in `packages/llm/src/langchain/chains/tool-execution.ts` (lines 50-56 and 100-106) also omits `destinationCity` from both the fallback detection and the `buildToolInput` function, causing schema validation failures.

Update the handler parameter destructuring to include `destinationCity`, null-check `destinationZipCode`, and add `destinationCity` to the tool input construction.
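A small sketch of the destructuring and null-check the review asks for. The field names come from the schema above; the function name and message format are hypothetical stand-ins for the real handler body:

```typescript
// Illustrative sketch: aligns a handler with the updated schema, where
// destinationCity is required and destinationZipCode is optional.
interface ShippingEstimateInput {
  destinationCity: string;      // now required
  destinationCountry: string;
  destinationZipCode?: string;  // now optional
  weightKg?: number;
}

function describeDestination(input: ShippingEstimateInput): string {
  const { destinationCity, destinationCountry, destinationZipCode } = input;
  // Guard the optional zip code instead of interpolating it blindly,
  // so a missing value doesn't render as "undefined" in the response.
  const zipPart = destinationZipCode ? ` ${destinationZipCode}` : "";
  return `${destinationCity}, ${destinationCountry}${zipPart}`;
}
```

For example, an input without a zip code would render as `Jakarta, ID` rather than `Jakarta, ID undefined`.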
Actionable comments posted: 1
🧹 Nitpick comments (2)
packages/llm/src/langchain/chains/composition.ts (1)

17-25: Missing routing rule for `tool_path`.

The ROUTING RULES section provides guidance for `rag_path`, `clarification_path`, and `escalation_path`, but omits `tool_path`. The LLM has no explicit instructions on how to compose responses when tool execution results are available. Consider adding a rule such as:

> If route is tool_path: incorporate the tool output into a natural, helpful response for the user.
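A hedged sketch of how the rules block could read once `tool_path` is covered. The constant name and the wording of the existing three rules are paraphrased assumptions; only the added `tool_path` line follows the review's suggestion:

```typescript
// Assumed shape of the prompt constant; the real composition.ts may name
// and word this differently. Only the tool_path line is the proposed addition.
const ROUTING_RULES = `
ROUTING RULES:
- If route is rag_path: answer from retrieved knowledge and policy documents.
- If route is clarification_path: ask the user a focused follow-up question.
- If route is escalation_path: politely hand the conversation to a human agent.
- If route is tool_path: incorporate the tool output into a clear, natural,
  and helpful response for the user, citing results and noting uncertainties.
`;
```

Keeping all four routes enumerated in one block means the composer never reaches a route the prompt is silent about.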
packages/llm/src/langchain/chains/__tests__/composition.test.ts (1)
25-46: Consider adding an explicit test for `tool_path` NOT skipping composition.

The existing test at lines 25-46 uses `tool_path` without a pre-populated `composedResponse`. For completeness, consider adding a test that verifies `tool_path` with a pre-existing `composedResponse` still invokes the model (i.e., does NOT skip), to explicitly validate the new condition at line 64 of `composition.ts`.

Suggested additional test

```typescript
it('should NOT skip model invocation for tool_path even with existing composedResponse', async () => {
  mockRouterInvoke.mockResolvedValueOnce({
    content: 'Updated order status response.',
    confidence: 0.95,
    escalate_flag: false,
  });

  const initialState: AgentState = {
    originalInput: 'Where is my order?',
    normalizedInput: 'where is my order?',
    route: 'tool_path',
    intent: 'TOOL',
    composedResponse: 'Stale response that should be replaced',
    toolExecution: {
      toolName: 'order_status_lookup',
      toolInput: { orderId: '123' },
      toolOutput: { status: 'processing' },
      toolDurationMs: 100,
      toolSuccess: true,
    },
  };

  const result = await compositionChain.invoke(initialState);

  expect(mockRouterInvoke).toHaveBeenCalledTimes(1);
  expect(result.composedResponse).toBe('Updated order status response.');
});
```
📒 Files selected for processing (3)

- packages/llm/src/langchain/__tests__/pipeline.test.ts
- packages/llm/src/langchain/chains/__tests__/composition.test.ts
- packages/llm/src/langchain/chains/composition.ts
```typescript
toolOutput: state.toolExecution?.toolOutput
  ? JSON.stringify(state.toolExecution.toolOutput)
  : 'No tool output',
```
Falsy `toolOutput` values will be incorrectly treated as missing.

The check `state.toolExecution?.toolOutput` uses a truthy evaluation. If `toolOutput` is a legitimate falsy value (`null`, `0`, `false`, `""`), it will incorrectly yield `'No tool output'`. Since `toolOutput` is typed as `unknown` (per `types.ts`), check for `toolExecution` existence instead.
Proposed fix

```diff
-      toolOutput: state.toolExecution?.toolOutput
-        ? JSON.stringify(state.toolExecution.toolOutput)
-        : 'No tool output',
+      toolOutput: state.toolExecution
+        ? JSON.stringify(state.toolExecution.toolOutput)
+        : 'No tool output',
```
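A self-contained demonstration of the pitfall the review describes. The state shape here is simplified from the real `AgentState`; only the two conditionals matter:

```typescript
// Simplified stand-in for the real state types in types.ts.
interface ToolExecution {
  toolOutput: unknown;
}
interface State {
  toolExecution?: ToolExecution;
}

// Buggy version: truthy check on toolOutput itself, so falsy results
// (false, 0, "", null) are silently replaced with the placeholder.
const renderBuggy = (state: State): string =>
  state.toolExecution?.toolOutput
    ? JSON.stringify(state.toolExecution.toolOutput)
    : "No tool output";

// Fixed version: check that the toolExecution container exists, then
// stringify whatever toolOutput holds, falsy or not.
const renderFixed = (state: State): string =>
  state.toolExecution
    ? JSON.stringify(state.toolExecution.toolOutput)
    : "No tool output";

const withFalsyOutput: State = { toolExecution: { toolOutput: false } };
// renderBuggy(withFalsyOutput) → "No tool output" (result lost)
// renderFixed(withFalsyOutput) → "false"          (result preserved)
```

Checking the container rather than the value is the general pattern whenever a field typed `unknown` can legitimately hold falsy data.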
Verify each finding against the current code and only fix it if needed.
In `@packages/llm/src/langchain/chains/composition.ts` around lines 109 - 111, The
truthy check on state.toolExecution?.toolOutput incorrectly treats valid falsy
values as missing; change the conditional to test for the existence of
state.toolExecution (not the truthiness of toolOutput) so that toolOutput values
like false, 0, "" or null are preserved and
JSON.stringify(state.toolExecution.toolOutput) is used when state.toolExecution
is present, otherwise return 'No tool output'; locate the expression building
the object with key toolOutput in composition.ts and update that conditional
accordingly.
Summary by CodeRabbit

- New Features
- User Data & Inputs
- Documentation
- Tests