Seng/agent migration #327
Conversation
📝 Walkthrough

This pull request introduces a comprehensive refactor establishing a new doc-agents project structure using the Agentuity framework. It adds two document-focused agents (DocProcessing and DocQA), an interactive AgentPulse assistant, and restructures the backend to use an SDK-based approach. Multiple API endpoints are removed and replaced with new session management, streaming, and agent communication routes. Middleware is updated to proxy agent requests with bearer token authentication, and the environment configuration is migrated from custom validation to SDK-based initialization.
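The middleware behavior summarized above (proxying agent requests and attaching a bearer token) can be pictured roughly as in the sketch below. This is an illustrative TypeScript route handler, not the PR's actual middleware; the `/api/agent_pulse` path and the `AGENT_BASE_URL`/`AGENT_BEARER_TOKEN` variables are taken from elsewhere in this review, and the rest is assumed.

```typescript
// Hypothetical sketch of the proxy pattern described in the walkthrough:
// forward a chat request to the doc-agents backend with a bearer token.
import { NextRequest, NextResponse } from 'next/server';

export async function POST(req: NextRequest) {
  const baseUrl = process.env.AGENT_BASE_URL;
  const token = process.env.AGENT_BEARER_TOKEN;
  if (!baseUrl) {
    return NextResponse.json({ error: 'AGENT_BASE_URL is not configured' }, { status: 500 });
  }

  const upstream = await fetch(`${baseUrl.replace(/\/+$/, '')}/api/agent_pulse`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      ...(token ? { Authorization: `Bearer ${token}` } : {}),
    },
    body: JSON.stringify(await req.json()),
  });

  // Pass the agent's (possibly streaming) response straight through to the client.
  return new NextResponse(upstream.body, {
    status: upstream.status,
    headers: {
      'Content-Type': upstream.headers.get('Content-Type') ?? 'application/json',
    },
  });
}
```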
Sequence Diagram(s)

sequenceDiagram
participant Client
participant Middleware
participant DocAgent as Doc Agent<br/>(doc-agents)
participant VectorStore
participant KVStore
Client->>Middleware: POST /api/sessions/:id/messages
Middleware->>Middleware: Proxy to backend<br/>+ Bearer Token
alt Chat Message Path
Middleware->>DocAgent: Message to AgentPulse
DocAgent->>VectorStore: Vector search (DocQA)
VectorStore-->>DocAgent: Relevant chunks
DocAgent->>DocAgent: Generate response<br/>+ Tutorial state
DocAgent-->>Middleware: SSE stream
Middleware->>KVStore: Persist message
Middleware->>KVStore: Save session
Middleware-->>Client: Stream + Session update
end
sequenceDiagram
participant User
participant Frontend as Frontend<br/>(Next.js)
participant SessionAPI as Session API<br/>(doc-agents)
participant AgentPulse as AgentPulse Agent
participant Tools
participant VectorDB as Vector DB
participant KVStore
User->>Frontend: Ask question
Frontend->>Frontend: useAgentuity hook
Frontend->>SessionAPI: POST /api/agent_pulse
SessionAPI->>AgentPulse: Invoke with message
par Streaming Loop
loop Each streaming chunk
AgentPulse->>AgentPulse: Generate text delta
AgentPulse-->>Frontend: text-delta (SSE)
Frontend->>Frontend: Update UI in real-time
end
end
alt Tool invocation (e.g., askDocsTool)
AgentPulse->>Tools: Resolve tool
Tools->>VectorDB: Vector search query
VectorDB-->>Tools: Ranked results
Tools-->>AgentPulse: Tool result
end
AgentPulse->>SessionAPI: Finish stream
SessionAPI->>KVStore: Persist session + messages
SessionAPI-->>Frontend: session-saved event
Frontend->>Frontend: Update conversation
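The streaming loop above (text-delta events pushed to the UI as they arrive, followed by a session-saved event) corresponds roughly to a client-side reader like the one below. The event names come from the diagram; the endpoint, payload shape, and SSE parsing details are assumptions, not the project's actual `useAgentuity` hook.

```typescript
// Illustrative reader for the streaming flow in the diagram. Assumes each SSE
// `data:` payload is JSON with a `type` field ('text-delta', 'session-saved').
async function streamQuestion(question: string, onDelta: (text: string) => void) {
  const res = await fetch('/api/agent_pulse', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: question }),
  });
  if (!res.ok || !res.body) throw new Error(`Agent request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // SSE events are separated by a blank line; each event carries a data line.
    const events = buffer.split('\n\n');
    buffer = events.pop() ?? '';
    for (const event of events) {
      const data = event
        .split('\n')
        .find((line) => line.startsWith('data:'))
        ?.slice('data:'.length)
        .trim();
      if (!data) continue;
      const parsed = JSON.parse(data);
      if (parsed.type === 'text-delta') onDelta(parsed.text);
      // 'session-saved' would be the point to refresh the conversation list.
    }
  }
}
```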
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ Passed checks (1 passed)
Actionable comments posted: 14
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (37)
doc-agents/.agents/skills/agentuity-cli-cloud-deployment-remove/SKILL.md-34-39 (1)
34-39: Fix inconsistency between required flag notation and option syntax. Line 38 marks `--project-id` as "Yes" (required) in the Options table, but line 25 shows it in `[options]` brackets, which conventionally indicates optional parameters. Clarify whether this flag is actually required and update either the table or the usage syntax accordingly to avoid user confusion.

doc-agents/.agents/skills/agentuity-cli-cloud-agent-list/SKILL.md-53-55 (1)
53-55: Expand the Output section with schema details. The output documentation only states the return type (`array`) without describing the structure or contents of array elements. Users need to know what fields/properties are included in each returned object. Consider expanding this section to include:
- The schema of array elements (e.g., agent name, ID, status, etc.)
- An example JSON output showing actual structure
For consistency, check other similar skill documentation files to match their output documentation style.
doc-agents/.agents/skills/agentuity-cli-cloud-session-list/SKILL.md-28-40 (1)
28-40: Change all filter options to "Required: No". The table marks all filter options (lines 31–40) as "Required: Yes," but the examples section contradicts this:
- Line 47 runs the command with zero filters
- Lines 53, 59, 71, 77, 83, 89 each use individual filters in isolation
All filter options must be optional to match the documented usage. Update the "Required" column to "No" for `--projectId`, `--deploymentId`, `--trigger`, `--env`, `--threadId`, `--agentIdentifier`, `--devmode`, `--success`, `--startAfter`, and `--startBefore`.

doc-agents/.agents/skills/agentuity-cli-cloud-scp-upload/SKILL.md-17-20 (1)
17-20: Clarify the "cloud deploy" prerequisite. Line 20 lists "cloud deploy" as a prerequisite, but it is unclear what this means:

- Is it a command to run beforehand (e.g., `agentuity cloud deploy`)?
- A requirement for an active deployment to exist?
- A service/feature name within Agentuity?
Please provide more context so users understand this dependency.
doc-agents/.agents/skills/agentuity-cli-cloud-scp-upload/SKILL.md-30-33 (1)
30-33: Provide descriptions for Arguments table. Lines 32–33 show empty descriptions (`-`) for both `<source>` and `<destination>`. Please add meaningful descriptions to help users understand what each argument does. For example:

- `<source>`: Local file path to upload
- `<destination>`: Remote destination path (defaults to home directory if omitted)

🔎 Example fix for argument descriptions

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
-| `<source>` | string | Yes | - |
-| `<destination>` | string | No | - |
+| `<source>` | string | Yes | Local file path to upload |
+| `<destination>` | string | No | Remote destination path (defaults to home directory if omitted) |

doc-agents/.agents/skills/agentuity-cli-cloud-scp-upload/SKILL.md-22-26 (1)
22-26: Update usage syntax to reflect the required `--identifier` flag. The usage syntax shows `[options]` as optional, but line 39 marks `--identifier` as Required. This should be clarified in the usage line, for example: `agentuity cloud scp upload <source> [destination] --identifier=<id>` or similar, matching your CLI convention. Additionally, the examples should be updated to consistently show `--identifier` in all cases, as several examples (lines 46, 52, 64) demonstrate the command without it, contradicting the Required designation.

doc-agents/agentuity.json-12-12 (1)
12-12: Verify the region value. The region is set to `"use"`, which does not match standard cloud region naming conventions (e.g., `us-east-1`, `us-west-2`, `eu-west-1`). Verify this is the intended value or correct it to a valid region identifier.

.github/workflows/sync-docs-full.yml-5-5 (1)
5-5: Reconsider running full docs sync on every pull request. Adding a `pull_request` trigger means this workflow will execute a full documentation sync on every PR. This raises several concerns:
- Performance: Full syncs process all files and could be slow/resource-intensive for every PR
- Data pollution: PR branch content may be synced to the vector store, potentially polluting production data
- Cost: If using OpenAI embeddings, this could incur significant costs for every PR
Consider one of these alternatives:
- Remove the `pull_request` trigger and keep only `workflow_dispatch` for manual full syncs
- Add conditions to only run on specific PR labels or paths
- Use a separate test/staging vector store for PR environments
doc-agents/.agents/skills/agentuity-cli-cloud-db-list/SKILL.md-29-32 (1)
29-32: Incorrect "Required" designation for optional flags. Both flags are marked as "Required: Yes," but boolean options like these are universally optional in CLI design. Line 31's description explicitly mentions a default behavior ("masked in terminal"). Update the "Required" column to "No" or remove it entirely if defaults apply.
doc-agents/.agents/skills/agentuity-cli-cloud-deployment-logs/SKILL.md-68-70 (1)
68-70: Document the output array structure and field types. The output section only states "Returns: `array`" without specifying:
- What each array element contains
- The structure/type of log entries (timestamps, log levels, messages, etc.)
- Example output format
Compare this to the comprehensive output documentation in other skills (e.g., `cloud-agent-get`).

🔎 Suggested improvement approach

 ## Output

-Returns: `array`
+Returns JSON array of log entries:
+
+```json
+[
+  {
+    "timestamp": "string (ISO 8601)",
+    "level": "string (INFO, WARN, ERROR, DEBUG)",
+    "message": "string",
+    "source": "string (optional)"
+  }
+]
+```
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `timestamp` | string | ISO 8601 formatted timestamp of the log entry. |
+| `level` | string | Log level: INFO, WARN, ERROR, or DEBUG. |
+| `message` | string | The log message content. |
+| `source` | string | Optional source identifier (e.g., component name). |

doc-agents/.agents/skills/agentuity-cli-cloud-apikey-get/SKILL.md-1-39 (1)
1-39: Add comprehensive Output section documentation. This skill file is missing the Output section entirely, whereas comparable skill files (e.g., `cloud-agent-get`, `cloud-deployment-logs`) document the returned JSON structure and field types. This leaves users unable to understand what data the command returns. Add an Output section that includes:
- JSON example output structure
- Field definitions and descriptions
- Data types for each field
- Whether fields are always present or conditional
🔎 Suggested improvement approach
+## Output
+
+Returns JSON object:
+
+```json
+{
+  "id": "string",
+  "value": "string",
+  "name": "string",
+  "createdAt": "string",
+  "updatedAt": "string",
+  "lastUsedAt": "string"
+}
+```
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `id` | string | Unique identifier of the API key. |
+| `value` | string | The API key value. Only returned on creation; otherwise redacted. |
+| `name` | string | User-provided name for the API key. |
+| `createdAt` | string | ISO 8601 timestamp when the key was created. |
+| `updatedAt` | string | ISO 8601 timestamp of the last update. |
+| `lastUsedAt` | string | ISO 8601 timestamp of last usage, if available. |

doc-agents/.agents/skills/agentuity-cli-cloud-agent-get/SKILL.md-47-77 (1)
47-77: Document output fields with actionable descriptions. The output schema describes multiple fields as "unknown" types and includes only "-" placeholders for field descriptions. This leaves users uncertain about the structure and purpose of the returned data.
Consider clarifying:
- What each field represents (e.g., `identifier` vs `id`, purpose of `metadata`)
- What "unknown" types actually contain (objects, any type, or undefined?)
- Whether fields are always present or conditionally returned
🔎 Suggested improvement approach
-| `description` | unknown | - |
+| `description` | string | User-provided description of the agent. May be null if not set. |
-| `deploymentId` | unknown | - |
+| `deploymentId` | string | ID of the currently active deployment, if any. |
-| `metadata` | unknown | - |
+| `metadata` | object | Custom metadata key-value pairs associated with the agent. |
+| `devmode` | boolean | Indicates whether the agent is in development mode. |

Committable suggestion skipped: line range outside the PR's diff.
doc-agents/.agents/skills/agentuity-cli-cloud-stream-get/SKILL.md-34-38 (1)
34-38: Incomplete options documentation and incorrect Required value. The `--output` option is marked as Required: Yes, but this appears to be incorrect—users should be able to run the command without downloading to a file. Additionally, the example at line 51 uses the `--json` flag, which is not documented in the Options table. Add missing options to the table and correct the Required value for `--output`.

🔎 Proposed fix for options section

 ## Options

| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
-| `--output` | string | Yes | - | download stream content to file |
+| `--output` | string | No | - | download stream content to file |
+| `--json` | boolean | No | - | output as JSON |

doc-agents/.agents/skills/agentuity-cli-cloud-stream-list/SKILL.md-29-34 (1)
29-34: Fix contradictory option requirements. All options are marked as "Required: Yes" but their descriptions mention defaults (e.g., "default: 100" for `--size`). If defaults exist, these options should be marked as "Required: No". This contradiction will confuse users.

🔎 Suggested fix

| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
-| `--size` | number | Yes | - | maximum number of streams to return (default: 100) |
-| `--offset` | number | Yes | - | number of streams to skip for pagination |
-| `--name` | string | Yes | - | filter by stream name |
-| `--metadata` | string | Yes | - | filter by metadata (format: key=value or key1=value1,key2=value2) |
+| `--size` | number | No | 100 | maximum number of streams to return |
+| `--offset` | number | No | 0 | number of streams to skip for pagination |
+| `--name` | string | No | - | filter by stream name |
+| `--metadata` | string | No | - | filter by metadata (format: key=value or key1=value1,key2=value2) |

doc-agents/.agents/skills/agentuity-cli-cloud-apikey-list/SKILL.md-30-31 (1)
30-31: Clarify required options and update prerequisites. The options table marks both `--orgId` and `--projectId` as "Required: Yes", but the Prerequisites section (line 18) only mentions authentication. If these options are truly required, the Prerequisites should reflect the need for organization and project context. If they're optional filters, mark them as "Required: No".

🔎 Suggested fix if options are optional filters

| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
-| `--orgId` | string | Yes | - | filter by organization id |
-| `--projectId` | string | Yes | - | filter by project id |
+| `--orgId` | string | No | - | filter by organization id |
+| `--projectId` | string | No | - | filter by project id |

doc-agents/app.ts-3-15 (1)
3-15: Add error handling and consider implementing setup/shutdown logic. The top-level await for `createApp` has no error handling, which means any initialization failures will result in unhandled promise rejections. Additionally, both `setup` and `shutdown` callbacks are empty placeholders—if this is intentional scaffolding, consider adding TODO comments or basic error handling structure.

🔎 Suggested improvements

+try {
 const { server, logger } = await createApp({
   setup: async () => {
     // anything you return from this will be automatically
     // available in the ctx.app. this allows you to initialize
     // global resources and make them available to routes and
     // agents in a typesafe way
+    // TODO: Add initialization logic
+    return {};
   },
   shutdown: async (_state) => {
     // the state variable will be the same value was what you
     // return from setup above. you can use this callback to
     // close any resources or other shutdown related tasks
+    // TODO: Add cleanup logic
   },
 });
 logger.debug('Running %s', server.url);
+} catch (error) {
+  console.error('Failed to start application:', error);
+  process.exit(1);
+}

Committable suggestion skipped: line range outside the PR's diff.
doc-agents/.agents/skills/agentuity-cli-cloud-db-logs/SKILL.md-34-48 (1)
34-48: Fix documentation of required options. Options on lines 38-42 are marked as "Required: Yes" (`--startDate`, `--endDate`, `--username`, `--command`, `--hasError`, `--sessionId`), but the example on line 55 shows the command working without any of these parameters: `agentuity cloud db logs my-database`. These should be marked as optional filters, not required options.

🔎 Proposed fix
Update the "Required" column for filter options to "No":
-| `--startDate` | string | Yes | - | Start date for filtering logs |
+| `--startDate` | string | No | - | Start date for filtering logs |

Apply the same change to `--endDate`, `--username`, `--command`, `--hasError`, and `--sessionId`.

Committable suggestion skipped: line range outside the PR's diff.
doc-agents/.agents/skills/agentuity-cli-cloud-db-create/SKILL.md-41-45 (1)
41-45: Clarify or remove the alternate command example. Line 44 shows `agentuity cloud db new` as an example, but the declared command is `agentuity cloud db create`. If `new` is an unsupported alias or undocumented command, this example will mislead users and agents. Either document it as an alias or replace it with a valid example.

🔎 Proposed fix (if `new` is not a real alias)
Run new command:

```bash
-agentuity cloud db new
+agentuity cloud db create --name my-db
```

doc-agents/.agents/skills/agentuity-cli-cloud-db-create/SKILL.md-29-39 (1)

29-39: Resolve contradictory required/optional status for `--name` option. Line 31 declares `--name` as required (`Required: Yes`), but line 38 shows an example (`agentuity cloud db create`) that runs without providing the option. Clarify whether `--name` is truly required, optional, or becomes required under specific conditions.

🔎 Potential fixes (choose one based on actual behavior)

Option A: If --name is optional:

```diff
| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
-| `--name` | string | Yes | - | Custom database name |
+| `--name` | string | No | - | Custom database name |
```

Option B: If --name is required:
Create new item:

```bash
-agentuity cloud db create
+agentuity cloud db create --name <database-name>
```

Committable suggestion skipped: line range outside the PR's diff.

doc-agents/.agents/skills/agentuity-cli-auth-ssh-add/SKILL.md-26-56 (1)

26-56: Clarify the optional/required nature of `--file` and document usage modes. Line 30 declares `--file` as required (`Required: Yes`), but the interactive example (lines 34-38) shows the command running without `--file`. This indicates the flag is optional when using interactive mode. The documentation needs to clarify the two usage modes:

1. Interactive mode (no flags required)
2. File-based mode (`--file` used to specify source)

Currently, the Options table doesn't reflect this optionality.

🔎 Proposed fix

```diff
| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
-| `--file` | string | Yes | - | File containing the public key |
+| `--file` | string | No | stdin or interactive | File containing the public key (omit for interactive prompt or stdin) |
```

Alternatively, add a note under Usage to clarify the modes:
## Usage

```bash
agentuity auth ssh add [options]
```

+Usage modes:
+- Interactive: Run without options to be prompted for the key
+- File-based: Use `--file` to read the public key from a file
+- Stdin: Pipe the key via stdin

doc-agents/package.json-23-23 (1)

23-23: Update langchain to at least 0.3.80 or later to address known vulnerabilities and runtime issues. LangChain 0.3.28 has a documented template-injection vulnerability in langchain-core and reported runtime/integration issues. Upgrade to 0.3.80+ or the latest stable release. The caret pinning (^0.3.28) allows patch updates, but starting from a vulnerable base version creates unnecessary risk.

doc-agents/src/api/process-docs/index.ts-2-2 (1)

2-2: Remove unused import. The `docProcessingAgent` import is not used in this file. Based on the AI summary, the actual document processing logic appears to be in the related `route.ts` file.

🔎 Proposed fix

```diff
 import { createRouter } from '@agentuity/runtime';
-import docProcessingAgent from '@agent/doc_processing/agent';

 const router = createRouter();
```

doc-agents/.agents/skills/agentuity-cli-cloud-storage-upload/SKILL.md-35-40 (1)
35-40: Fix inconsistent Required/Default designations in Options table. The Options table marks both options as "Required: Yes" and "Default: -", but their descriptions contradict this:

- `--key` (line 39): Description states "defaults to basename or 'stdin'", so it should not be marked Required.
- `--contentType` (line 40): Description states "auto-detected if not provided", so it should not be marked Required.

Update the Required and Default columns to match the actual behavior described.
doc-agents/.agents/skills/agentuity-cli-project-show/SKILL.md-58-77 (1)
58-77: Replace "unknown" type annotations with specific types. The Output schema lists `description` and `tags` fields with type "unknown" (lines 61, 62, 73, 74). This is ambiguous for API consumers or agents using this documentation.
doc-agents/.agents/skills/agentuity-cli-cloud-keyvalue-delete-namespace/SKILL.md-28-33 (1)
28-33: Document argument descriptions and clarify<confirm>behavior.The Arguments table shows empty descriptions for both arguments. Specifically:
<confirm>(line 33) lacks documentation of expected values or behavior.Looking at the Examples (lines 40, 46, 52),
<confirm>appears to support multiple forms:
- Interactive (line 40: no argument provided)
- Flag-style (lines 46, 52:
--confirm)Clarify whether
<confirm>is positional or flag-based, and update the Arguments table with specific accepted values or behavior.doc-agents/.agents/skills/agentuity-cli-cloud-env-list/SKILL.md-47-49 (1)
47-49: Complete the Output section.The Output section is incomplete compared to the structure of other skill documentation files. It should include a JSON schema example and a table describing the returned fields, similar to the format used in agentuity-cli-cloud-vector-search and agentuity-cli-cloud-env-import.
doc-agents/.agents/skills/agentuity-cli-cloud-vector-search/SKILL.md-35-41 (1)
35-41: Clarify which options are required vs. optional.The Options table marks
--limit,--similarity, and--metadataas all Required: "Yes", but the descriptions and defaults suggest they should be optional. For example,--limitincludes "(default: 10)", implying it's not required for the command to work.Correct the Required column to accurately reflect whether these options must be provided or can be omitted.
🔎 Proposed fix for the options table
| Option | Type | Required | Default | Description | |--------|------|----------|---------|-------------| - | `--limit` | number | Yes | - | maximum number of results to return (default: 10) | - | `--similarity` | number | Yes | - | minimum similarity threshold (0.0-1.0) | - | `--metadata` | string | Yes | - | filter by metadata (format: key=value or key1=value1,key2=value2) | + | `--limit` | number | No | 10 | maximum number of results to return | + | `--similarity` | number | No | - | minimum similarity threshold (0.0-1.0) | + | `--metadata` | string | No | - | filter by metadata (format: key=value or key1=value1,key2=value2) |doc-agents/.agents/skills/agentuity-cli-cloud-vector-search/SKILL.md-43-73 (1)
43-73: Inconsistent command syntax in examples.The examples use varying command prefixes that don't match the documented usage (
agentuity cloud vector search). Line 48 usesagentuity vector search, line 54 usesagentuity vector list, line 60 usesagentuity vector search, and line 72 usesagentuity vector ls embeddings—all missing the "cloud" prefix or using incorrect subcommands.Ensure all examples follow the correct command syntax:
agentuity cloud vector search.🔎 Proposed fixes for example commands
- Search for similar products: - - ```bash - agentuity vector search products "comfortable office chair" - ``` + Search for similar products: + + ```bash + agentuity cloud vector search products "comfortable office chair" + ``` - Search knowledge base: - - ```bash - agentuity vector list knowledge-base "machine learning" - ``` + Search knowledge base: + + ```bash + agentuity cloud vector search knowledge-base "machine learning" + ``` - Limit results: - - ```bash - agentuity vector search docs "API documentation" --limit 5 - ``` + Limit results: + + ```bash + agentuity cloud vector search docs "API documentation" --limit 5 + ``` - Filter by metadata: - - ```bash - agentuity vector ls embeddings "neural networks" --metadata category=ai - ``` + Filter by metadata: + + ```bash + agentuity cloud vector search embeddings "neural networks" --metadata category=ai + ```Committable suggestion skipped: line range outside the PR's diff.
doc-agents/src/api/process-docs/route.ts-7-11 (1)
7-11: Add error handling and consider webhook authentication.The endpoint directly awaits
docProcessingAgent.run()without error handling. If the agent throws an exception, it may result in unclear error responses or unhandled rejections.Additionally, since this endpoint is called from CI workflows (per the AI summary), consider adding webhook signature verification to prevent unauthorized requests.
🔎 Proposed fix with error handling
router.post('/', docProcessingAgent.validator(), async (c) => { const data = c.req.valid('json'); - const result = await docProcessingAgent.run(data); - return c.json(result); + try { + const result = await docProcessingAgent.run(data); + return c.json(result); + } catch (error) { + console.error('Doc processing failed:', error); + return c.json({ success: false, error: 'Processing failed' }, 500); + } });doc-agents/.agents/skills/agentuity-cli-cloud-storage-get/SKILL.md-38-38 (1)
38-38: Fix contradictory requirement for--showCredentials.The
--showCredentialsoption is marked asRequired: Yes, but the description indicates default behavior (masked in terminal, unmasked in JSON), which implies the option is optional. If it's optional with default behavior, changeRequiredtoNo.🔎 Proposed fix
| Option | Type | Required | Default | Description | |--------|------|----------|---------|-------------| -| `--showCredentials` | boolean | Yes | - | Show credentials in plain text (default: masked in terminal, unmasked in JSON) | +| `--showCredentials` | boolean | No | `false` | Show credentials in plain text (default: masked in terminal, unmasked in JSON) |doc-agents/.agents/skills/agentuity-cli-cloud-thread-list/SKILL.md-29-32 (1)
29-32: Fix inconsistencies in the options table.The options table has several issues:
Contradictory requirements:
--orgIdand--projectIdare marked asRequired: Yes, but the examples (lines 36-40) show the command working without these options, and lines 48-58 show them as optional filters.Naming inconsistency: The table uses camelCase (
--orgId,--projectId), but the examples use kebab-case (--project-id,--org-id). Verify the correct format and make them consistent.Invalid default value: Having
Required: YeswithDefault: -is contradictory. If they're required, they shouldn't have defaults; if they're optional, mark them asNo.🔎 Proposed fix
| Option | Type | Required | Default | Description | |--------|------|----------|---------|-------------| | `--count` | number | No | `10` | Number of threads to list (1–100) | -| `--orgId` | string | Yes | - | Filter by organization ID | -| `--projectId` | string | Yes | - | Filter by project ID | +| `--org-id` | string | No | - | Filter by organization ID | +| `--project-id` | string | No | - | Filter by project ID |doc-agents/.agents/skills/agentuity-cli-profile-create/SKILL.md-32-33 (1)
32-33: Fix contradictory requirement for--switch.The
--switchoption is marked asRequired: Yes, but:
- The description says "switch to this profile (if more than one)", which implies conditional/optional behavior
- The first example (lines 37-41) shows the command working without this flag
This should be marked as
Required: Nosince it's an optional flag.🔎 Proposed fix
| Option | Type | Required | Default | Description | |--------|------|----------|---------|-------------| -| `--switch` | boolean | Yes | - | switch to this profile (if more than one) | +| `--switch` | boolean | No | `false` | Switch to this profile after creation |doc-agents/.agents/skills/agentuity-cli-cloud-vector-upsert/SKILL.md-36-42 (1)
36-42: Fix mutually exclusive options marked as required.All options (
--document,--embeddings,--metadata,--file) are marked asRequired: Yes, but they appear to be mutually exclusive based on the examples:
- Lines 46-50: Use
--documentalone- Lines 58-62: Use
--embeddingsalone- Lines 64-74: Use
--filealoneIf these options are mutually exclusive (you use one OR another), they should be marked as
Required: Nowith a note explaining which combination is valid, or restructure the table to clarify the mutual exclusivity.🔎 Proposed fix
| Option | Type | Required | Default | Description | |--------|------|----------|---------|-------------| -| `--document` | string | Yes | - | document text to embed | -| `--embeddings` | string | Yes | - | pre-computed embeddings as JSON array | -| `--metadata` | string | Yes | - | metadata as JSON object | -| `--file` | string | Yes | - | path to JSON file containing vectors, or "-" for stdin | +| `--document` | string | No | - | Document text to embed (mutually exclusive with --embeddings and --file) | +| `--embeddings` | string | No | - | Pre-computed embeddings as JSON array (mutually exclusive with --document and --file) | +| `--metadata` | string | No | - | Metadata as JSON object (can be combined with --document or --embeddings) | +| `--file` | string | No | - | Path to JSON file containing vectors, or "-" for stdin (mutually exclusive with --document and --embeddings) |doc-agents/.agents/skills/agentuity-cli-cloud-storage-get/SKILL.md-48-52 (1)
48-52: Fix the incorrect subcommand in the example.Line 51 uses
agentuity cloud storage show my-bucket, but the documented command isagentuity cloud storage get. Changeshowtogetto match the command syntax defined in the metadata (line 9) and usage section (line 25).agentuity cloud storage get my-bucketdoc-agents/src/agent/doc_processing/chunk-mdx.ts-146-147 (1)
146-147: Handle potentially undefined frontmatter fields.
frontmatter.titleandfrontmatter.descriptionmay beundefinedif the MDX file lacks these fields, but theChunktype requires them asstring. This mismatch can cause downstream issues when processing chunks that expect string values.🔎 Proposed fix with fallback values
return { id: crypto.randomUUID(), chunkIndex: idx, totalChunks: chunks.length, contentType: chunk.metadata.contentType, heading: currentHeading, text: chunk.pageContent, - title: frontmatter.title, - description: frontmatter.description, + title: frontmatter.title ?? '', + description: frontmatter.description ?? '', createdAt: new Date().toISOString(), };Alternatively, update the
Chunktype to maketitleanddescriptionoptional (title?: string), depending on how downstream consumers handle these fields.doc-agents/.agents/skills/agentuity-cli-cloud-redis-show/SKILL.md-31-31 (1)
31-31: Clarify whether--showCredentialsis required or optional.The Options table marks
--showCredentialsas "Required: Yes" (line 31), but the basic example on lines 38-39 demonstrates the command without this option. Either the option is optional (and the table should reflect this), or the basic example is incomplete. Correct the documentation to accurately reflect the command's behavior.Also applies to: 38-39
doc-agents/.agents/skills/agentuity-cli-cloud-redis-show/SKILL.md-29-32 (1)
29-32: Fix option naming inconsistency and required status declaration.The Options table lists the option as
--showCredentials(camelCase) on line 31, but the example on line 44 uses--show-credentials(kebab-case). Additionally, the option is marked "Required: Yes" yet the first example on line 38 successfully executes the command without it. Align the option name in the table to match the examples and correct the required status to reflect actual usage.
| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| `--showCredentials` | boolean | Yes | - | Show credentials in plain text (default: masked in terminal, unmasked in JSON) |
| `--nameOnly` | boolean | Yes | - | Print the name only |
Critical: Inconsistent option naming convention between documentation and examples.
CLI documentation should consistently use kebab-case for option names, but the options table (lines 29-32) lists --showCredentials (camelCase) while the example (line 57) shows --show-credentials (kebab-case). Users won't know which syntax to use.
🔎 Proposed fix
Standardize on kebab-case throughout the documentation:
| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
-| `--showCredentials` | boolean | Yes | - | Show credentials in plain text (default: masked in terminal, unmasked in JSON) |
-| `--nameOnly` | boolean | Yes | - | Print the name only |
+| `--show-credentials` | boolean | No | false | Show credentials in plain text (default: masked in terminal, unmasked in JSON) |
+| `--name-only` | boolean | No | false | Print the name only |

Also applies to: 57-57
🤖 Prompt for AI Agents
In doc-agents/.agents/skills/agentuity-cli-cloud-db-list/SKILL.md around lines
29 to 32 (and also update line 57), the options table uses camelCase
(`--showCredentials`) while examples use kebab-case (`--show-credentials`);
standardize option naming to kebab-case across the doc by renaming
`--showCredentials` to `--show-credentials` in the table, and verify other
options (e.g., `--nameOnly`) are converted to `--name-only` so the examples and
documentation match.
doc-agents/.agents/skills/agentuity-cli-cloud-vector-delete-namespace/SKILL.md
Actionable comments posted: 6
🧹 Nitpick comments (3)
doc-agents/src/agent/doc_processing/docs-orchestrator.ts (3)
20-20: Consider aligning batch size with clearVectorDb. The limit here is 100, while `clearVectorDb` uses 1000 (line 149). Unless there's a specific reason for the smaller batch size, consider using a consistent value to simplify maintenance and potentially improve performance.
80-137: Consider parallelizing file processing for better performance. Files are currently processed sequentially, which could be slow for large batches. Since each file's processing is independent (removal, chunking, upserting are all scoped to the file's path), consider using `Promise.all` or `Promise.allSettled` to process multiple files concurrently.

🔎 Example parallel processing approach
// Process changed files in parallel with error isolation const fileResults = await Promise.allSettled( changed.map(async (file) => { const { path: logicalPath, content: base64Content } = file; // Base64-decode the content let content: string; try { const buf = Buffer.from(base64Content, 'base64'); const normalized = base64Content.replace(/\s/g, ''); if (buf.toString('base64') !== normalized) { throw new Error('Malformed base64 payload'); } content = buf.toString('utf-8'); } catch (decodeErr) { throw new Error( `Invalid base64 content for ${logicalPath}: ${decodeErr}` ); } // Remove existing vectors for this path await removeVectorsByPath(ctx, logicalPath, VECTOR_STORE_NAME); // Process the document content into chunks const chunks = await processDoc(content); // Add path metadata to all chunks const chunksWithMetadata = chunks.map((chunk) => ({ ...chunk, metadata: { ...chunk.metadata, path: logicalPath, }, })); // Batch upsert all chunks at once for efficiency const upsertResults = await ctx.vector.upsert( VECTOR_STORE_NAME, ...chunksWithMetadata ); ctx.logger.info( 'Upserted %d chunks for file: %s', upsertResults.length, logicalPath ); return { logicalPath, chunks: chunks.length }; }) ); // Process results for (const result of fileResults) { if (result.status === 'fulfilled') { processed++; ctx.logger.info( 'Successfully processed file: %s (%d chunks)', result.value.logicalPath, result.value.chunks ); } else { errors++; const filePath = changed[fileResults.indexOf(result)]?.path || 'unknown'; errorFiles.push(filePath); ctx.logger.error('Error processing file %s: %o', filePath, result.reason); } }Note: Be mindful of rate limits and resource constraints when increasing concurrency.
148-148: Inconsistent query strings across functions. This function uses `query: ' '` (space) while `removeVectorsByPath` uses `query: 'A'`. If both are arbitrary placeholders required by the API, standardize them or extract to a named constant for clarity.

// At the top of the file
const METADATA_ONLY_QUERY = 'A'; // Arbitrary query required by vector search API when filtering by metadata only

// Then use consistently:
const batch = await ctx.vector.search(VECTOR_STORE_NAME, {
  query: METADATA_ONLY_QUERY,
  limit: 1000,
});
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
doc-agents/src/agent/doc_processing/docs-orchestrator.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: sync
while (true) {
  const vectors = await ctx.vector.search(vectorStoreName, {
    query: 'A',
    limit: 100,
    metadata: { path: logicalPath },
  });

  if (vectors.length === 0) {
    break;
  }

  // Batch delete all vectors at once for efficiency
  const keys = vectors.map((v: { key: string }) => v.key);
  const deletedCount = await ctx.vector.delete(vectorStoreName, ...keys);
  totalDeleted += deletedCount;

  ctx.logger.info(
    'Deleted %d vectors (total: %d) for path: %s',
    deletedCount,
    totalDeleted,
    logicalPath
  );
}
🛠️ Refactor suggestion | 🟠 Major
Add error handling inside the deletion loop.
The while(true) loop lacks error handling for ctx.vector.search and ctx.vector.delete operations. If these operations fail mid-loop, the function will throw and leave the deletion incomplete, potentially causing inconsistent state.
🔎 Proposed fix with error handling
while (true) {
- const vectors = await ctx.vector.search(vectorStoreName, {
- query: 'A',
- limit: 100,
- metadata: { path: logicalPath },
- });
+ let vectors;
+ try {
+ vectors = await ctx.vector.search(vectorStoreName, {
+ query: 'A',
+ limit: 100,
+ metadata: { path: logicalPath },
+ });
+ } catch (searchErr) {
+ ctx.logger.error('Error searching vectors for path %s: %o', logicalPath, searchErr);
+ throw searchErr;
+ }
if (vectors.length === 0) {
break;
}
// Batch delete all vectors at once for efficiency
const keys = vectors.map((v: { key: string }) => v.key);
- const deletedCount = await ctx.vector.delete(vectorStoreName, ...keys);
+ let deletedCount;
+ try {
+ deletedCount = await ctx.vector.delete(vectorStoreName, ...keys);
+ } catch (deleteErr) {
+ ctx.logger.error('Error deleting vectors for path %s: %o', logicalPath, deleteErr);
+ throw deleteErr;
+ }
  totalDeleted += deletedCount;

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| while (true) { | |
| const vectors = await ctx.vector.search(vectorStoreName, { | |
| query: 'A', | |
| limit: 100, | |
| metadata: { path: logicalPath }, | |
| }); | |
| if (vectors.length === 0) { | |
| break; | |
| } | |
| // Batch delete all vectors at once for efficiency | |
| const keys = vectors.map((v: { key: string }) => v.key); | |
| const deletedCount = await ctx.vector.delete(vectorStoreName, ...keys); | |
| totalDeleted += deletedCount; | |
| ctx.logger.info( | |
| 'Deleted %d vectors (total: %d) for path: %s', | |
| deletedCount, | |
| totalDeleted, | |
| logicalPath | |
| ); | |
| } | |
| while (true) { | |
| let vectors; | |
| try { | |
| vectors = await ctx.vector.search(vectorStoreName, { | |
| query: 'A', | |
| limit: 100, | |
| metadata: { path: logicalPath }, | |
| }); | |
| } catch (searchErr) { | |
| ctx.logger.error('Error searching vectors for path %s: %o', logicalPath, searchErr); | |
| throw searchErr; | |
| } | |
| if (vectors.length === 0) { | |
| break; | |
| } | |
| // Batch delete all vectors at once for efficiency | |
| const keys = vectors.map((v: { key: string }) => v.key); | |
| let deletedCount; | |
| try { | |
| deletedCount = await ctx.vector.delete(vectorStoreName, ...keys); | |
| } catch (deleteErr) { | |
| ctx.logger.error('Error deleting vectors for path %s: %o', logicalPath, deleteErr); | |
| throw deleteErr; | |
| } | |
| totalDeleted += deletedCount; | |
| ctx.logger.info( | |
| 'Deleted %d vectors (total: %d) for path: %s', | |
| deletedCount, | |
| totalDeleted, | |
| logicalPath | |
| ); | |
| } |
 * Process documentation sync from embedded payload - completely filesystem-free
 */
export async function syncDocsFromPayload(
  ctx: any,
🛠️ Refactor suggestion | 🟠 Major
Replace any type with proper typing.
Using ctx: any disables type checking and loses IDE support. Define or import a proper context type to improve type safety and maintainability.
// Example: Define or import a context type
interface DocProcessingContext {
logger: Logger;
vector: VectorStore;
// ... other properties
}
export async function syncDocsFromPayload(
ctx: DocProcessingContext,
payload: SyncPayload
): Promise<SyncStats>🤖 Prompt for AI Agents
In doc-agents/src/agent/doc_processing/docs-orchestrator.ts around line 56,
replace the loose ctx: any with a concrete context type: define or import an
interface (e.g., DocProcessingContext) that lists the fields the function
actually uses (logger, vectorStore/VectorStore, db/client, config/options, and
any request or tracing metadata) and update the function signature to use that
type; ensure necessary types are imported from their modules and update any call
sites to satisfy the new type (or cast only at call boundaries), so you restore
type safety and IDE tooling without changing runtime behavior.
try {
  const buf = Buffer.from(base64Content, 'base64');
  // re-encode to verify round-trip
  if (buf.toString('base64') !== base64Content.replace(/\s/g, '')) {
    throw new Error('Malformed base64 payload');
  }
  content = buf.toString('utf-8');
} catch (decodeErr) {
  throw new Error(
    `Invalid base64 content for ${logicalPath}: ${decodeErr}`
  );
}
Fix the base64 validation logic.
The round-trip validation at line 89 has a logical flaw. The comparison buf.toString('base64') !== base64Content.replace(/\s/g, '') only removes whitespace from the original base64Content, but not from the re-encoded result. This asymmetric comparison could cause false positives or false negatives depending on how Node.js formats the base64 output.
🔎 Proposed fix for symmetric validation
let content: string;
try {
const buf = Buffer.from(base64Content, 'base64');
// re-encode to verify round-trip
- if (buf.toString('base64') !== base64Content.replace(/\s/g, '')) {
+ const normalized = base64Content.replace(/\s/g, '');
+ if (buf.toString('base64') !== normalized) {
throw new Error('Malformed base64 payload');
}
content = buf.toString('utf-8');
} catch (decodeErr) {
throw new Error(
`Invalid base64 content for ${logicalPath}: ${decodeErr}`
);
}🤖 Prompt for AI Agents
In doc-agents/src/agent/doc_processing/docs-orchestrator.ts around lines 86 to
97, the round-trip base64 validation is asymmetric because it only strips
whitespace from the original base64Content; normalize both sides before
comparing to avoid false positives/negatives. Fix by creating a normalized input
string (remove all whitespace) and a normalized output string (re-encode the
decoded buffer to base64 and remove whitespace from that result) and compare
those two normalized values; if they differ throw the malformed payload error as
before. Ensure you still decode to utf-8 for content after the validation
succeeds.
export async function clearVectorDb(ctx: any) {
  ctx.logger.info('Clearing all vectors from store: %s', VECTOR_STORE_NAME);
  while (true) {
    const batch = await ctx.vector.search(VECTOR_STORE_NAME, {
      query: ' ',
      limit: 1000,
    });
    if (batch.length === 0) break;

    // Batch delete all vectors at once for efficiency
    const keys = batch.map((v: { key: string }) => v.key);
    await ctx.vector.delete(VECTOR_STORE_NAME, ...keys);
  }
}
Add error handling and consider a safety confirmation.
This function has two concerns:
-
Missing error handling: Similar to
removeVectorsByPath, the loop lacks error handling for vector operations, which could leave the operation incomplete. -
No safety confirmation: This is a destructive operation that clears the entire vector database. Consider adding a parameter or check to confirm the operation is intentional, especially if this could be called accidentally.
🔎 Proposed improvements
-export async function clearVectorDb(ctx: any) {
+export async function clearVectorDb(ctx: any, confirm: boolean = false) {
+ if (!confirm) {
+ ctx.logger.warn('clearVectorDb called without confirmation - skipping');
+ return;
+ }
+
ctx.logger.info('Clearing all vectors from store: %s', VECTOR_STORE_NAME);
while (true) {
- const batch = await ctx.vector.search(VECTOR_STORE_NAME, {
- query: ' ',
- limit: 1000,
- });
+ let batch;
+ try {
+ batch = await ctx.vector.search(VECTOR_STORE_NAME, {
+ query: ' ',
+ limit: 1000,
+ });
+ } catch (searchErr) {
+ ctx.logger.error('Error searching vectors during clear: %o', searchErr);
+ throw searchErr;
+ }
+
if (batch.length === 0) break;
// Batch delete all vectors at once for efficiency
const keys = batch.map((v: { key: string }) => v.key);
- await ctx.vector.delete(VECTOR_STORE_NAME, ...keys);
+ try {
+ await ctx.vector.delete(VECTOR_STORE_NAME, ...keys);
+ } catch (deleteErr) {
+ ctx.logger.error('Error deleting vectors during clear: %o', deleteErr);
+ throw deleteErr;
+ }
}
+ ctx.logger.info('Successfully cleared all vectors from store: %s', VECTOR_STORE_NAME);
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| export async function clearVectorDb(ctx: any) { | |
| ctx.logger.info('Clearing all vectors from store: %s', VECTOR_STORE_NAME); | |
| while (true) { | |
| const batch = await ctx.vector.search(VECTOR_STORE_NAME, { | |
| query: ' ', | |
| limit: 1000, | |
| }); | |
| if (batch.length === 0) break; | |
| // Batch delete all vectors at once for efficiency | |
| const keys = batch.map((v: { key: string }) => v.key); | |
| await ctx.vector.delete(VECTOR_STORE_NAME, ...keys); | |
| } | |
| } | |
| export async function clearVectorDb(ctx: any, confirm: boolean = false) { | |
| if (!confirm) { | |
| ctx.logger.warn('clearVectorDb called without confirmation - skipping'); | |
| return; | |
| } | |
| ctx.logger.info('Clearing all vectors from store: %s', VECTOR_STORE_NAME); | |
| while (true) { | |
| let batch; | |
| try { | |
| batch = await ctx.vector.search(VECTOR_STORE_NAME, { | |
| query: ' ', | |
| limit: 1000, | |
| }); | |
| } catch (searchErr) { | |
| ctx.logger.error('Error searching vectors during clear: %o', searchErr); | |
| throw searchErr; | |
| } | |
| if (batch.length === 0) break; | |
| // Batch delete all vectors at once for efficiency | |
| const keys = batch.map((v: { key: string }) => v.key); | |
| try { | |
| await ctx.vector.delete(VECTOR_STORE_NAME, ...keys); | |
| } catch (deleteErr) { | |
| ctx.logger.error('Error deleting vectors during clear: %o', deleteErr); | |
| throw deleteErr; | |
| } | |
| } | |
| ctx.logger.info('Successfully cleared all vectors from store: %s', VECTOR_STORE_NAME); | |
| } |
🤖 Prompt for AI Agents
In doc-agents/src/agent/doc_processing/docs-orchestrator.ts around lines 144 to
157, the clearVectorDb function needs error handling and a safety confirmation:
wrap the vector search/delete loop in try/catch (log the error with
ctx.logger.error and rethrow or return a failure) and handle transient failures
(optionally retry or break on repeated errors) so the loop cannot silently leave
the DB in a partial state; add an explicit confirmation parameter (e.g.,
requireConfirm boolean or a required confirmation token) and check it at the
start, logging and throwing if not provided to prevent accidental full clears;
ensure deletes are still batched but guarded by the confirmation and error
handling.
- Implemented a new Documentation Q&A agent that answers questions about Agentuity documentation using Retrieval-Augmented Generation (RAG).
- Created supporting files including agent logic, prompt handling, document retrieval, and type definitions.
- Added API route for the agent to handle requests and return structured responses.
- Updated the frontend to interact with the new agent, allowing users to ask questions about documentation directly.
- Removed the 'hello' agent and its associated files from the project.
- Updated API routes to remove references to the 'hello' agent.
- Introduced a new middleware for bearer token authentication to enhance security for API routes.
- Updated TypeScript configuration to include paths for middleware.
- Changed agentBaseUrl to use a local development URL for easier testing.
- Refactored buildAgentConfig to accept an endpoint instead of an agent ID.
- Updated getAgentQaConfig and getAgentPulseConfig to use specific API endpoints.
- Improved error messages for agent configuration validation.
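For context, the endpoint-based refactor this commit describes could look roughly like the sketch below; it is reconstructed from the commit message and the env/config review notes elsewhere in this PR (the `/api/doc-qa` and `/api/agent_pulse` paths, `AGENT_BASE_URL`, `AGENT_BEARER_TOKEN`), not copied from lib/env.ts.

```typescript
export interface AgentConfig {
  url: string;
  bearerToken?: string;
}

// Sketch: build a config from the base URL plus a fixed endpoint path
// instead of an agent ID. Trailing slashes are stripped, as suggested
// later in the review.
function buildAgentConfig(endpoint: string): AgentConfig {
  const baseUrl = process.env.AGENT_BASE_URL;
  if (!baseUrl) {
    throw new Error('AGENT_BASE_URL is not set; agent endpoints cannot be resolved');
  }
  return {
    url: `${baseUrl.replace(/\/+$/, '')}${endpoint}`,
    bearerToken: process.env.AGENT_BEARER_TOKEN || undefined,
  };
}

export const getAgentQaConfig = (): AgentConfig => buildAgentConfig('/api/doc-qa');
export const getAgentPulseConfig = (): AgentConfig => buildAgentConfig('/api/agent_pulse');
```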
Actionable comments posted: 5
🧹 Nitpick comments (11)
lib/env.ts (3)
4-9: Move import statement to the top of the file. The import statement on line 9 follows the interface declaration. Standard practice is to place all imports at the top of the file before any declarations.
🔎 Proposed fix
+import { config } from '@/lib/config'; + /** * Environment variable validation and configuration utility */ export interface AgentConfig { url: string; bearerToken?: string; } - -import { config } from '@/lib/config';
24-27: Consider normalizing URL construction to handle trailing slashes. If `AGENT_BASE_URL` is set with a trailing slash (e.g., `http://example.com/`), the resulting URL will have double slashes (`http://example.com//api/doc-qa`). While most servers handle this gracefully, it's cleaner to normalize.

🔎 Proposed fix
return { - url: `${baseUrl}${endpoint}`, + url: `${baseUrl.replace(/\/+$/, '')}${endpoint}`, bearerToken: bearerToken || undefined, };
59-64: Type augmentation won't affect `process.env` typing. The `ProcessEnv` interface declared in the global scope doesn't augment `NodeJS.ProcessEnv`, so TypeScript won't recognize these properties on `process.env`.

🔎 Proposed fix
declare global { - interface ProcessEnv { - AGENT_BASE_URL?: string; - AGENT_BEARER_TOKEN?: string; + namespace NodeJS { + interface ProcessEnv { + AGENT_BASE_URL?: string; + AGENT_BEARER_TOKEN?: string; + } } }doc-agents/src/api/doc-qa/route.ts (1)
7-12: Consider adding explicit error handling for robustness.While the agent's
answerQuestionfunction has internal error handling (as shown in rag.ts), adding a try-catch block arounddocQAAgent.run(data)would improve robustness by handling unexpected errors at the route level and providing consistent error responses to clients.🔎 Proposed error handling enhancement
router.post('/', bearerTokenAuth, docQAAgent.validator(), async (c) => { - const data = c.req.valid('json'); - const result = await docQAAgent.run(data); - return c.json(result); + try { + const data = c.req.valid('json'); + const result = await docQAAgent.run(data); + return c.json(result); + } catch (error) { + c.var.logger.error('Error in doc-qa route:', error); + return c.json({ error: 'Failed to process question' }, 500); + } });doc-agents/src/web/App.tsx (4)
84-86: Consider using document content as key.Using array index as the key can cause React reconciliation issues if the documents array is reordered. While unlikely in this case, it's better practice to use a stable identifier.
🔎 Proposed fix
- {result.documents.map((doc, idx) => ( - <li key={idx}>{doc}</li> + {result.documents.map((doc) => ( + <li key={doc}>{doc}</li> ))}
162-162: Use self-closing SVG element.The
<path>element should be self-closing since it has no children, improving code consistency and following JSX best practices.🔎 Proposed fix
- <path d="M20 6 9 17l-5-5"></path> + <path d="M20 6 9 17l-5-5" />
145-147: Consider adding a type guard to avoid non-null assertions.The
filter(Boolean)removes null values but TypeScript doesn't automatically narrow the type. Using a type guard would eliminate the need for non-null assertions (step!).🔎 Proposed fix
] - .filter(Boolean) - .map((step) => ( - <div key={step!.key} className="step"> + .filter((step): step is NonNullable<typeof step> => step !== null) + .map((step) => ( + <div key={step.key} className="step"> <div className="step-icon">
176-448: Consider extracting styles to a separate CSS file.The large inline style block (270+ lines) makes the component harder to maintain and increases bundle size. Moving styles to a separate CSS file or using a CSS-in-JS solution would improve maintainability and enable better caching.
doc-agents/src/agent/doc_qa/types.ts (1)
24-44: Consider generating types from schemas to reduce duplication. The TypeScript types (lines 25-44) manually duplicate the schema definitions (lines 3-22). Many schema libraries support generating TypeScript types from schemas, which would maintain a single source of truth.
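As an illustration of the single-source-of-truth idea, here is the pattern with zod, used purely as a stand-in since `@agentuity/schema` may expose its own inference helper; the field names below are assumptions:

```typescript
import { z } from 'zod';

// Define the schema once...
const relevantDocSchema = z.object({
  path: z.string(),
  heading: z.string().optional(),
  text: z.string(),
  similarity: z.number().optional(),
});

// ...and derive the TypeScript type from it instead of maintaining a
// hand-written duplicate.
type RelevantDoc = z.infer<typeof relevantDocSchema>;

const example: RelevantDoc = {
  path: 'docs/getting-started.mdx',
  text: 'Install the CLI and run the dev server.',
};
```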
doc-agents/src/agent/doc_qa/retriever.ts (1)
167-167: Consider adding type definition for vector result. The cast to `any` to access the `similarity` property indicates missing type information. Consider defining a proper type for the vector search result to improve type safety.
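A narrow result type would remove the cast. The fields below are an assumption based on how results are used elsewhere in this PR (a `key` for deletes and a `similarity` score in the retriever), not the SDK's documented shape:

```typescript
// Assumed shape of a vector search hit, inferred from usages in this PR.
interface VectorSearchHit {
  key: string;
  similarity?: number;
  metadata?: Record<string, unknown>;
}

// With the type declared, no `as any` is needed to read the score.
function bestSimilarity(results: VectorSearchHit[]): number | undefined {
  return results
    .map((r) => r.similarity)
    .filter((s): s is number => typeof s === 'number')
    .sort((a, b) => b - a)[0];
}
```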
102-109: Consider clarifying the prompt parameter.The
promptparameter on line 106 provides general guidance that could be incorporated into the system prompt. Consider either making this more specific to the current query or removing it if the system prompt is sufficient.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
doc-agents/src/generated/app.tsis excluded by!**/generated/**doc-agents/src/generated/registry.tsis excluded by!**/generated/**doc-agents/src/generated/routes.tsis excluded by!**/generated/**
📒 Files selected for processing (13)
doc-agents/src/agent/doc_qa/agent.tsdoc-agents/src/agent/doc_qa/index.tsdoc-agents/src/agent/doc_qa/prompt.tsdoc-agents/src/agent/doc_qa/rag.tsdoc-agents/src/agent/doc_qa/retriever.tsdoc-agents/src/agent/doc_qa/types.tsdoc-agents/src/api/doc-qa/route.tsdoc-agents/src/api/index.tsdoc-agents/src/middleware/auth.tsdoc-agents/src/web/App.tsxdoc-agents/tsconfig.jsonlib/config.tslib/env.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- doc-agents/tsconfig.json
🧰 Additional context used
🧬 Code graph analysis (4)
doc-agents/src/api/doc-qa/route.ts (1)
doc-agents/src/middleware/auth.ts (1)
bearerTokenAuth(7-33)
doc-agents/src/agent/doc_qa/agent.ts (1)
doc-agents/src/agent/doc_qa/rag.ts (1)
answerQuestion(14-130)
doc-agents/src/agent/doc_qa/retriever.ts (1)
doc-agents/src/agent/doc_qa/types.ts (1)
RelevantDoc(25-31)
doc-agents/src/agent/doc_qa/prompt.ts (1)
doc-agents/src/agent/doc_qa/types.ts (1)
PromptType(38-38)
🪛 Biome (2.1.2)
doc-agents/src/web/App.tsx
[error] 162-162: JSX elements without children should be marked as self-closing. In JSX, it is valid for any element to be self-closing.
Safe fix: Use a self-closing element instead.
(lint/style/useSelfClosingElements)
[error] 85-85: Avoid using the index of an array as key property in an element.
This is the source of the key value.
The order of the items may change, and this also affects performances and component state.
Check the React documentation.
(lint/suspicious/noArrayIndexKey)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: sync
🔇 Additional comments (15)
lib/config.ts (1)
9-10: LGTM!The shift to an endpoint-based URL construction approach is clear, and the localhost default is appropriate for local development. The comment adds helpful context.
lib/env.ts (2)
30-36: LGTM!Clean wrapper functions with clear, fixed endpoint paths.
41-52: LGTM!The validation logic and error messaging are clear and helpful. Since both agent configs share the same base URL validation via
buildAgentConfig, validating one effectively validates the shared configuration.doc-agents/src/agent/doc_qa/index.ts (1)
1-1: LGTM! Clean barrel export pattern.This is a standard and idiomatic TypeScript re-export pattern that provides a clean public API surface for the DocQA agent module.
doc-agents/src/agent/doc_qa/agent.ts (1)
5-20: LGTM! Well-structured agent definition.The agent is properly configured with:
- Clear input/output schemas matching the RAG implementation
- Appropriate delegation to the
answerQuestionfunction- Consistent error handling inherited from the RAG layer
doc-agents/src/middleware/auth.ts (1)
7-33: Good security practices observed.The middleware correctly:
- Validates Authorization header presence and format
- Fails closed when
AGENT_BEARER_TOKENis not configured- Logs authentication failures without exposing sensitive data
- Returns consistent error messages that don't leak implementation details
doc-agents/src/web/App.tsx (2)
1-5: LGTM!The imports and environment variable setup are correct and follow best practices for React 19.
6-9: LGTM!The component setup uses React 19 hooks correctly and follows modern patterns.
doc-agents/src/agent/doc_qa/types.ts (1)
1-23: LGTM!The schema definitions are well-structured and properly typed. The use of
@agentuity/schemawith optional fields and enums follows best practices.doc-agents/src/agent/doc_qa/prompt.ts (2)
1-12: LGTM!The imports and Zod schema definition are clean and properly structured for AI SDK compatibility.
14-77: LGTM!The rephrasing logic is well-implemented with appropriate error handling, conservative prompting, and safe fallback behavior. The low temperature (0.1) ensures consistent results.
doc-agents/src/agent/doc_qa/rag.ts (4)
1-12: LGTM!The imports and Zod schema are properly structured and match the corresponding TypeScript types.
14-22: LGTM!The function properly chains prompt rephrasing and document retrieval, creating a logical workflow.
24-100: LGTM!The system prompt is comprehensive and well-structured, with clear rules, MDX formatting examples, and navigation conventions. The guidance for document citation with heading anchors is particularly well-documented.
110-129: LGTM!The error handling is robust with a user-friendly MDX-formatted fallback response. The logging ensures debugging visibility.
/**
 * Determines the prompt type based on the input string using LLM classification.
 * Uses specific, measurable criteria to decide between Normal and Agentic RAG.
 * @param ctx - Agent Context for logging and LLM access
 * @param input - The input string to analyze
 * @returns {Promise<PromptType>} - The determined PromptType
 */
Update JSDoc to match implementation.
The JSDoc mentions "Agentic RAG" but the actual return type is PromptType which uses the enum values "Normal" and "Thinking". Update the documentation to use consistent terminology.
🔎 Proposed fix
/**
* Determines the prompt type based on the input string using LLM classification.
- * Uses specific, measurable criteria to decide between Normal and Agentic RAG.
+ * Uses specific, measurable criteria to decide between Normal and Thinking prompt types.
* @param ctx - Agent Context for logging and LLM access
* @param input - The input string to analyze
* @returns {Promise<PromptType>} - The determined PromptType
*/📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
/**
 * Determines the prompt type based on the input string using LLM classification.
 * Uses specific, measurable criteria to decide between Normal and Thinking prompt types.
 * @param ctx - Agent Context for logging and LLM access
 * @param input - The input string to analyze
 * @returns {Promise<PromptType>} - The determined PromptType
 */
🤖 Prompt for AI Agents
In doc-agents/src/agent/doc_qa/prompt.ts around lines 79 to 85, the JSDoc
references "Agentic RAG" but the function actually returns PromptType enum
values "Normal" and "Thinking"; update the JSDoc to use the same terminology as
the implementation (mention PromptType and the enum values "Normal" and
"Thinking"), adjust the description to say it decides between Normal and
Thinking prompt types based on LLM classification, and ensure @returns documents
{Promise<PromptType>} with the correct enum value descriptions.
// Query for all chunks at once
const chunkQueries = validIndices.map((index) =>
  ctx.vector.search(VECTOR_STORE_NAME, {
    query: path,
    limit: 1,
    metadata: { path: path, chunkIndex: index },
  })
);
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, locate and read the target file
find . -name "retriever.ts" -path "*/doc_qa/*" -type fRepository: agentuity/docs
Length of output: 100
🏁 Script executed:
#!/bin/bash
# Check how vector.search is used elsewhere in the codebase
rg -n -C3 'vector\.search\(' --type=ts
Repository: agentuity/docs
Length of output: 4692
🏁 Script executed:
#!/bin/bash
# Also search for vector store initialization and type definitions
rg -n 'vector\s*[:=]' --type=ts -A2 | head -100
Repository: agentuity/docs
Length of output: 40
🏁 Script executed:
#!/bin/bash
# Read the retriever.ts file context
sed -n '1,60p' doc-agents/src/agent/doc_qa/retriever.ts
Repository: agentuity/docs
Length of output: 1668
🏁 Script executed:
#!/bin/bash
# Check the corresponding file in agent-docs to see the difference
sed -n '1,60p' agent-docs/src/agents/doc-qa/retriever.ts
Repository: agentuity/docs
Length of output: 1662
Change query: path to empty/whitespace query.
Line 51 uses the file path as the semantic search query, but should use an empty string or whitespace like the corresponding implementation in agent-docs/src/agents/doc-qa/retriever.ts (line 52 uses query: ' '). Since you're filtering by metadata (path and chunkIndex), the query parameter should be an empty/neutral value to retrieve chunks by metadata only, not by semantic similarity to the file path string.
query: ' ',
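For illustration, a metadata-only lookup in the spirit of this fix could look like the sketch below. It mirrors the shape of the ctx.vector.search call in the hunk above but parameterizes everything else, so none of the surrounding retriever internals are assumed:

```typescript
// Sketch: look chunks up purely by metadata (path + chunkIndex).
// The neutral query keeps the store from ranking by similarity to the path string.
type VectorSearch = (
  store: string,
  opts: { query: string; limit: number; metadata: Record<string, unknown> },
) => Promise<unknown>;

function buildChunkQueries(
  search: VectorSearch,
  storeName: string,
  path: string,
  validIndices: number[],
) {
  return validIndices.map((index) =>
    search(storeName, {
      query: ' ', // neutral query; the metadata filter does the selection
      limit: 1,
      metadata: { path, chunkIndex: index },
    }),
  );
}
```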
🤖 Prompt for AI Agents
In doc-agents/src/agent/doc_qa/retriever.ts around lines 48 to 55, the
vector.search call uses query: path which causes semantic search on the file
path; change the query to an empty/neutral string (e.g., a single space ' ' or
empty string '') so the search is driven only by the metadata filter (path and
chunkIndex), leaving the rest of the call intact.
const token = authHeader.slice(7);

// Validate token matches expected value
if (!expectedToken || token !== expectedToken) {
  c.var.logger.warn('Invalid bearer token');
  return c.json({ error: 'Invalid bearer token' }, 401);
}
Critical: Use constant-time comparison to prevent timing attacks.
Line 26 uses !== to compare the bearer token, which is vulnerable to timing attacks. An attacker could measure response times to guess the token byte-by-byte. Use crypto.timingSafeEqual() for secure, constant-time string comparison.
🔎 Proposed fix using constant-time comparison
import { createMiddleware } from 'hono/factory';
+import { timingSafeEqual } from 'crypto';
/**
* Bearer token authentication middleware
* Validates the Authorization header contains the correct bearer token
*/
export const bearerTokenAuth = createMiddleware(async (c, next) => {
const authHeader = c.req.header('Authorization');
const expectedToken = process.env.AGENT_BEARER_TOKEN;
// Check if Authorization header exists
if (!authHeader) {
c.var.logger.warn('Missing Authorization header');
return c.json({ error: 'Missing Authorization header' }, 401);
}
// Check if it starts with Bearer
if (!authHeader.startsWith('Bearer ')) {
c.var.logger.warn('Invalid Authorization header format');
return c.json({ error: 'Invalid Authorization header format' }, 401);
}
const token = authHeader.slice(7);
// Validate token matches expected value
- if (!expectedToken || token !== expectedToken) {
+ if (!expectedToken) {
+ c.var.logger.error('AGENT_BEARER_TOKEN not configured');
+ return c.json({ error: 'Invalid bearer token' }, 401);
+ }
+
+ // Use constant-time comparison to prevent timing attacks
+ const tokenBuffer = Buffer.from(token);
+ const expectedBuffer = Buffer.from(expectedToken);
+
+ if (tokenBuffer.length !== expectedBuffer.length || !timingSafeEqual(tokenBuffer, expectedBuffer)) {
c.var.logger.warn('Invalid bearer token');
return c.json({ error: 'Invalid bearer token' }, 401);
}
// Token is valid, proceed to next handler
await next();
});📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
import { createMiddleware } from 'hono/factory';
import { timingSafeEqual } from 'crypto';

/**
 * Bearer token authentication middleware
 * Validates the Authorization header contains the correct bearer token
 */
export const bearerTokenAuth = createMiddleware(async (c, next) => {
  const authHeader = c.req.header('Authorization');
  const expectedToken = process.env.AGENT_BEARER_TOKEN;

  // Check if Authorization header exists
  if (!authHeader) {
    c.var.logger.warn('Missing Authorization header');
    return c.json({ error: 'Missing Authorization header' }, 401);
  }

  // Check if it starts with Bearer
  if (!authHeader.startsWith('Bearer ')) {
    c.var.logger.warn('Invalid Authorization header format');
    return c.json({ error: 'Invalid Authorization header format' }, 401);
  }

  const token = authHeader.slice(7);

  // Validate token matches expected value
  if (!expectedToken) {
    c.var.logger.error('AGENT_BEARER_TOKEN not configured');
    return c.json({ error: 'Invalid bearer token' }, 401);
  }

  // Use constant-time comparison to prevent timing attacks
  const tokenBuffer = Buffer.from(token);
  const expectedBuffer = Buffer.from(expectedToken);

  if (tokenBuffer.length !== expectedBuffer.length || !timingSafeEqual(tokenBuffer, expectedBuffer)) {
    c.var.logger.warn('Invalid bearer token');
    return c.json({ error: 'Invalid bearer token' }, 401);
  }

  // Token is valid, proceed to next handler
  await next();
});
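The AI-agent prompt below also floats an alternative: hash both values to a fixed length before comparing, so even the length check reveals nothing about the expected token's size. A minimal sketch of that variant using Node's crypto module:

```typescript
import { createHash, timingSafeEqual } from 'crypto';

// Hash both inputs to 32 bytes, then compare in constant time.
// This avoids branching on the raw token length at all.
function safeEqual(candidate: string, expected: string): boolean {
  const a = createHash('sha256').update(candidate).digest();
  const b = createHash('sha256').update(expected).digest();
  return timingSafeEqual(a, b);
}
```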
🤖 Prompt for AI Agents
In doc-agents/src/middleware/auth.ts around lines 23 to 29, the token comparison
uses !== which is vulnerable to timing attacks; replace the direct equality
check with a constant-time comparison using Node's crypto.timingSafeEqual:
import crypto, ensure expectedToken exists, then compare in constant time by
converting both values to Buffers of equal length (either use
crypto.timingSafeEqual directly when buffers are same length or hash both inputs
to a fixed length like SHA-256 and compare those hashes) and use that result to
decide logging/response; keep the existing warning and 401 response path.
<svg
  aria-hidden="true"
  aria-label="Agentuity Logo"
  className="logo"
  fill="none"
  height="191"
  viewBox="0 0 220 191"
  width="220"
  xmlns="http://www.w3.org/2000/svg"
>
Remove aria-hidden="true" to fix accessibility.
The SVG has both aria-hidden="true" and aria-label="Agentuity Logo", which is contradictory. The aria-hidden attribute prevents screen readers from announcing the element, making the aria-label ineffective.
🔎 Proposed fix
<svg
- aria-hidden="true"
aria-label="Agentuity Logo"
+ role="img"
className="logo"
fill="none"
height="191"
viewBox="0 0 220 191"
width="220"
xmlns="http://www.w3.org/2000/svg"
>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
<svg
  aria-label="Agentuity Logo"
  role="img"
  className="logo"
  fill="none"
  height="191"
  viewBox="0 0 220 191"
  width="220"
  xmlns="http://www.w3.org/2000/svg"
>
🤖 Prompt for AI Agents
In doc-agents/src/web/App.tsx around lines 14 to 23, the SVG is marked with both
aria-hidden="true" and aria-label="Agentuity Logo" which is contradictory;
remove the aria-hidden="true" attribute so the aria-label can be announced by
assistive technologies (optionally ensure a role="img" is present if you want to
explicitly expose it as an image).
- Replaced instances of `agentuity` with `bunx @agentuity/cli` in various skill documentation files to standardize command execution.
- Updated the README to reflect the new total skills count and generation date.
- Removed the `agentuity-cli-upgrade` skill from the documentation as it is no longer applicable.
- Replaced environment variable validation with direct API service calls for agent configuration.
- Updated API routes to utilize new service methods for querying agents and managing tutorial states.
- Refactored session management to improve error handling and streamline KV store interactions.
- Introduced a new title generation service for session titles based on conversation history.
- Removed deprecated environment validation logic and associated files to simplify the codebase.
Actionable comments posted: 20
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
🤖 Fix all issues with AI Agents
In @app/api/sessions/[sessionId]/messages/route.ts:
- Around line 166-182: The async onTutorialData callback currently performs
await TutorialStateManager.updateTutorialProgress but may be invoked without
awaiting by callAgentPulseStreaming; wrap the body of onTutorialData in a
try-catch that catches and logs errors (including the thrown error) and ensure
callAgentPulseStreaming awaits the callback if it supports async handlers (or
otherwise handle returned Promise), referencing onTutorialData,
TutorialStateManager.updateTutorialProgress, callAgentPulseStreaming,
controller.enqueue and encoder.encode so failures in updateTutorialProgress are
caught and do not become unhandled rejections.
- Line 16: Remove the unused constant AGENT_REQUEST_TIMEOUT from this file:
locate the declaration const AGENT_REQUEST_TIMEOUT = 30000; and delete it (and
any related comments) since timeouts are handled by callAgentPulseStreaming;
ensure there are no other references to AGENT_REQUEST_TIMEOUT in this module
after removal.
In @doc-agents/.agents/skills/agentuity-cli-auth-ssh-add/SKILL.md:
- Around line 28-38: The options table incorrectly marks `--file` as required
while examples for "Add SSH key interactively" and "Add SSH key from stdin" omit
it; update SKILL.md so the table marks `--file` as optional (change "Yes" to
"No" or "Optional") and add a short note in the table or header that the command
reads from stdin when `--file` is not provided, or alternatively update the
examples to always include `--file` and show the stdin/interactive variants
explicitly (modify the "Add SSH key interactively" and "Add SSH key from stdin"
examples to either include `--file <path>` or annotate that stdin/interactive
mode is used).
In @doc-agents/.agents/skills/agentuity-cli-cloud-agent-get/SKILL.md:
- Around line 50-77: The generated SKILL.md contains placeholder types
("unknown") for fields `description`, `deploymentId`, and `metadata` (e.g., in
agentuity-cli-cloud-agent-get SKILL.md); update the upstream Agentuity CLI
schema generator to map these schema properties to their real types (e.g.,
string/object/UUID/etc.) instead of emitting "unknown": locate the generator
logic that builds property type strings (the module invoked by the CLI command
`agentuity ai skills generate`) and add explicit handlers for `description`,
`deploymentId`, and `metadata` (or a generic rule that resolves unknown JSON
Schema/$ref types properly), then run the generator to regenerate all affected
SKILL.md files (cloud-deployment-show, cloud-keyvalue-get, cloud-session-get,
cloud-thread-get, profile-show, project-show) so the docs reflect correct types.
- Line 35: Update the example description that currently reads "Get item
details:" to accurately reflect the command it documents by changing the text to
"Get agent details:" (or similar agent-specific phrasing); locate the occurrence
of the string "Get item details:" in SKILL.md and replace it so the example
description matches the agent details retrieval command.
- Around line 29-31: Replace the placeholder "-" description for the
`<agent_id>` argument in SKILL.md with a concise explanation: state that
`<agent_id>` is the unique identifier (UUID or slug) of the agent to retrieve,
explain where to get it (e.g., from the Agents list in the web UI, the agent
details page, or from the CLI/API list endpoints), and give an example format
(e.g., "e.g., 123e4567-e89b-12d3-a456-426614174000" or "agent-slug"). Ensure the
Description cell for `<agent_id>` clearly instructs users how to locate or
derive the ID before calling the skill.
In @doc-agents/.agents/skills/agentuity-cli-cloud-db-sql/SKILL.md:
- Around line 35-53: Examples 2 and 3 use the wrong subcommands ("cloud db exec"
and "cloud db query") and should match the documented canonical command
"agentuity cloud db sql"; update the example lines that currently show "bunx
@agentuity/cli cloud db exec my-database ..." and "bunx @agentuity/cli cloud db
query my-database ..." to use "bunx @agentuity/cli cloud db sql my-database ..."
(including the --json flag where present) so all examples consistently use the
`sql` subcommand, or alternatively add a note clarifying that `exec` and `query`
are aliases of `sql` if that is intended.
In @doc-agents/.agents/skills/agentuity-cli-cloud-deploy/SKILL.md:
- Around line 16-20: Remove the duplicate/malformed prerequisite entry "- auth
login" from the Prerequisites list in SKILL.md (the block containing
"Authenticated with `agentuity auth login`" and "Project context required...");
keep the existing "Authenticated with `agentuity auth login`" line intact and
delete the stray "- auth login" line so the prerequisites are not duplicated or
corrupted.
In @doc-agents/.agents/skills/agentuity-cli-cloud-deployment-rollback/SKILL.md:
- Around line 16-20: The "cloud deploy" line in the Prerequisites is ambiguous;
update the Prerequisites section under "## Prerequisites" to explicitly state
what is required (e.g., whether it refers to an Agentuity skill named
"cloud-deploy", a CLI command `agentuity cloud deploy`, or a configured cloud
deployment service) and provide the exact install/configure step or command to
satisfy it (for example: "Install the cloud-deploy skill: `agentuity skill
install cloud-deploy`" or "Ensure Cloud Deploy is configured in your project:
add X config Y"). Include the clarified phrasing in place of the bare "cloud
deploy" text so readers know the exact dependency and how to fulfill it.
In @doc-agents/.agents/skills/agentuity-cli-cloud-env-delete/SKILL.md:
- Around line 24-46: The examples in SKILL.md use a different CLI invocation
than the documented usage; update the example commands to match the documented
pattern (use "agentuity cloud env delete <key>" and "agentuity cloud env rm
<key>" instead of "bunx @agentuity/cli ..."), and replace the two identical
"Delete item:" captions with specific descriptions—e.g., "Delete using full
command:" for the delete example and "Delete using the rm alias:" for the rm
example—so the examples align with the usage shown by the "agentuity cloud env
delete" heading.
- Around line 28-32: Update the empty description for the `<key>` argument in
the Arguments table: replace the placeholder dash with a clear description such
as "Name of the environment variable to delete" (or "Key of the environment
variable to delete") so the `<key>` field in the skill SKILL.md explicitly
documents that it is the environment variable name to remove.
In @doc-agents/.agents/skills/agentuity-cli-cloud-keyvalue-delete/SKILL.md:
- Around line 28-33: Update the arguments table in SKILL.md to add clear
descriptions for `<namespace>` and `<key>`: explain that `<namespace>` is the
logical grouping or bucket for stored key-value pairs (list common namespaces or
how to query available namespaces, e.g., "defaults, user, session") and state
any format constraints (allowed characters, max length), and describe `<key>` as
the identifier within that namespace used to locate the value (mention format
expectations, uniqueness within a namespace, and any encoding rules). Ensure
both descriptions are brief and actionable so users know what values are valid
and how to discover available namespaces and keys.
- Around line 35-53: The examples show shorthand aliases but the Usage section
documents the full command 'agentuity cloud keyvalue delete'; update SKILL.md so
they match by either adding the official aliases ('kv delete' and 'kv rm') to
the Usage/Commands list and noting they map to 'agentuity cloud keyvalue
delete', or replace the example invocations with the primary form 'bunx
@agentuity/cli agentuity cloud keyvalue delete <env> <key>' (and add alias
examples only if they are official). Reference the command symbols 'agentuity
cloud keyvalue delete', 'kv delete', and 'kv rm' when editing the Usage and
Examples to ensure clarity.
In @doc-agents/.agents/skills/agentuity-cli-cloud-keyvalue-keys/SKILL.md:
- Around line 24-52: The examples for the keys skill inconsistently label
aliases: the Usage shows "agentuity cloud keyvalue keys", Examples use the
package-runner "bunx @agentuity/cli kv", and only "kv ls" is marked "(using
alias)" while "kv list" is not; update the SKILL.md examples so aliases are
documented consistently—either mark both "kv ls" and "kv list" as aliases (e.g.,
“(alias)”) or remove the “(using alias)” note entirely from the examples for
production/cache/staging so they match the pattern used across other keyvalue
skills.
In @doc-agents/.agents/skills/agentuity-cli-cloud-secret-pull/SKILL.md:
- Around line 24-25: The examples in SKILL.md are inconsistent with the usage
line: change the example invocations (currently using "bunx @agentuity/cli
secret pull") to match the documented command "agentuity cloud secret pull
[options]" or add a short note explaining that "bunx @agentuity/cli secret pull"
is an alternative invocation that maps to "agentuity cloud secret pull"; update
all example lines (the two occurrences that use "bunx @agentuity/cli secret
pull") so they either prepend the missing "cloud" keyword (e.g., "bunx
@agentuity/cli cloud secret pull") or replace them with the canonical "agentuity
cloud secret pull [options]" to remove ambiguity and keep usage/examples
consistent.
- Line 20: The prereq entry "cloud deploy" is ambiguous; update SKILL.md to
clarify that "cloud deploy" refers to the Agentuity CLI command used to deploy
the cloud runtime (i.e., run `agentuity cloud deploy`) and must be executed
before using this skill so the cloud environment and secrets backend exist;
change the bare `cloud deploy` line to a short explanatory sentence mentioning
the specific CLI command name ("agentuity cloud deploy"), its purpose (deploy
cloud runtime/secrets backend), and add a link or pointer to the CLI/docs for
usage and authentication steps.
In @doc-agents/.agents/skills/agentuity-cli-cloud-secret-set/SKILL.md:
- Line 20: The docs mention the --project-id flag as an alternative to running
from a project directory but it’s missing from the Arguments table; update
SKILL.md to either add an optional --project-id entry to the Arguments table
(include its flag name, type, description like "Override current project;
accepts project ID string", and whether it’s required) and mirror this addition
in the related section around lines 28–33, or add a cross-reference to a global
options reference if that’s preferred; ensure the flag name --project-id is
explicitly documented and consistent across the prerequisites and arguments
sections.
- Line 6: Update SKILL.md to use consistent command invocation across Metadata,
Usage, and Examples by either choosing the bare CLI invocation "agentuity cloud
secret set" for docs aimed at globally installed users or showing both forms
with guidance; specifically, change the Metadata line that currently lists
`agentuity cloud secret set` and the Usage line to match the Examples (or vice
versa), and add a short explanatory note under Usage or Examples that shows both
forms — "agentuity cloud secret set" for globally installed installs and "bunx
@agentuity/cli secret set" for running without global install — so readers know
when to use each. Ensure the example blocks (currently using `bunx
@agentuity/cli secret set`) are updated to either mirror the chosen canonical
command or retain both variants with clear context.
- Around line 30-33: Update the SKILL.md argument table to replace the
placeholder "-" entries for `<key>` and `<value>` with concrete guidance: for
`<key>` state the expected format (e.g., uppercase letters, digits and
underscores only, must start with a letter, typical max length ~64 characters)
and that it represents the environment/config key name; for `<value>` state it
accepts UTF-8 strings, recommend a practical max length (e.g., 4k characters),
note that spaces and most special characters are allowed but must be quoted or
escaped in shells, and that multiline values should be provided via proper
quoting, heredoc or encoded (e.g., base64) if the CLI does not support raw
newlines. Include a short note about encoding expectations (UTF-8) and any
recommended quoting/escaping practices so users know how to pass values safely.
- Around line 54-65: The JSON example in SKILL.md uses a string literal for the
success field ("success": "boolean") which mismatches the documented type;
update the example JSON so `success` is a real boolean value (e.g., true or
false) to match the table, leaving `key` and `path` as string examples, and
ensure the sample response block near the `success` field in the file reflects
this change.
In @doc-agents/.agents/skills/agentuity-cli-cloud-session-get/SKILL.md:
- Around line 45-71: The JSON schema in SKILL.md uses "unknown" for several
fields (start_time, end_time, duration, error, timeline, route) which is
ambiguous; update those entries to concrete types or example values—e.g., change
"start_time"/"end_time" to ISO timestamp strings or "string | null", "duration"
to a number (seconds) or "number | null", "error" to "string | null" or an
object like {"code": "...", "message": "..."}, and "timeline"/"route" to
"object" with a small example structure or "object | null"; ensure you update
the entries for start_time, end_time, duration, error, timeline, and route in
the JSON block so consumers know exact types/nullable behavior.
- Around line 29-31: The table row for the `<session_id>` argument in SKILL.md
is missing a description; update the cell for `<session_id>` to explain what
value to supply (e.g., the cloud session identifier/UUID returned by the
Agentuity cloud when a session is created), how to obtain it (from the session
creation response or the web console), and give a short example format (for
example: "123e4567-e89b-12d3-a456-426614174000"); reference the
`agentuity-cli-cloud-session-get` skill name and ensure the wording clarifies it
is required and unique per session.
In @doc-agents/.agents/skills/agentuity-cli-cloud-session-list/SKILL.md:
- Line 59: Replace the inconsistent kebab-case flags in the example invocations
of "bunx @agentuity/cli cloud session list" (the examples containing
"--project-id" and "--deployment-id") with the camelCase variants used in the
options table and other examples: change "--project-id" to "--projectId" and
"--deployment-id" to "--deploymentId" so the example flags match the CLI's
expected option names (also ensure this matches other flags shown like
"--success", "--devmode", "--trigger", "--env").
- Line 80: Replace the phrase "API triggered sessions" with the hyphenated
compound adjective "API-triggered sessions" in the SKILL.md content (the
heading/line containing "Only API triggered sessions") so the adjective
correctly modifies "sessions."
In @doc-agents/.agents/skills/agentuity-cli-cloud-storage-download/SKILL.md:
- Around line 36-66: The README shows a conflicting definition for the
`--metadata` option: the options table marks `--metadata` as Required = "Yes"
but examples use it optionally; update the documentation so the table and
examples match by changing the Required column for `--metadata` to "No" if it is
optional, or make all examples include `--metadata` if it is truly
required—specifically edit the SKILL.md options table entry for `--metadata` to
reflect the correct requirement and adjust examples accordingly to be
consistent.
In @doc-agents/.agents/skills/agentuity-cli-cloud-storage-upload/SKILL.md:
- Around line 39-40: The options table wrongly marks `--key` and `--contentType`
as required despite their descriptions saying they have defaults/are
auto-detected; update the "Required" column for both `--key` and `--contentType`
to "No" in the table (or alternatively remove the default/auto-detect wording if
you intend them to be mandatory) so the table and descriptions are consistent;
ensure the row for `--key` still mentions "defaults to basename or 'stdin' for
piped uploads" and the row for `--contentType` still notes "auto-detected if not
provided."
- Line 53: The example uses the wrong command name; update the example
invocation from "cloud storage put" to the documented command "cloud storage
upload" so it matches the rest of the docs (replace the example line that
currently reads: bunx @agentuity/cli cloud storage put my-bucket file.txt
--content-type text/plain with bunx @agentuity/cli cloud storage upload
my-bucket file.txt --content-type text/plain).
In @doc-agents/.agents/skills/agentuity-cli-cloud-vector-delete/SKILL.md:
- Around line 34-59: Examples in SKILL.md use the wrong command syntax; update
each example invocation to include the missing "cloud" subcommand so they match
the documented usage "agentuity cloud vector delete" (e.g., change "bunx
@agentuity/cli vector delete products chair-001" to use "bunx @agentuity/cli
cloud vector delete products chair-001", similarly update the "vector rm" and
"vector del" examples to "cloud vector rm" and "cloud vector del" with their
existing arguments and --confirm flag).
In
@doc-agents/.agents/skills/agentuity-cli-cloud-vector-list-namespaces/SKILL.md:
- Around line 28-45: The three example commands in SKILL.md are missing the
required "cloud" segment; update each example invocation (the examples showing
"bunx @agentuity/cli vector list-namespaces", "bunx @agentuity/cli vector
namespaces", and "bunx @agentuity/cli vector ns") to include "cloud" immediately
after the package, i.e. change them to "bunx @agentuity/cli cloud vector
list-namespaces", "bunx @agentuity/cli cloud vector namespaces", and "bunx
@agentuity/cli cloud vector ns" so they match the documented usage and command
metadata.
In @doc-agents/.agents/skills/agentuity-cli-cloud-vector-stats/SKILL.md:
- Around line 34-52: Update the example commands in SKILL.md so they match the
documented usage "agentuity cloud vector stats [name]": replace each example
invocation that currently uses "bunx @agentuity/cli vector stats ..." with "bunx
@agentuity/cli cloud vector stats ..." (e.g., update the all-namespaces,
products, and embeddings examples to include the cloud subcommand).
In @doc-agents/.agents/skills/agentuity-cli-profile-use/SKILL.md:
- Line 40: The example command uses the wrong verb "profile switch" and should
use the documented "profile use"; update the example line that currently reads
"bunx @agentuity/cli profile switch staging" to "bunx @agentuity/cli profile use
staging" so the sample matches the CLI's actual "profile use" command.
In @doc-agents/.agents/skills/agentuity-cli-project-create/SKILL.md:
- Line 66: The example shows the undocumented alias "bunx @agentuity/cli project
new --no-register" while the documented primary command is "agentuity project
create"; update SKILL.md so the example uses the primary command ("agentuity
project create --no-register") or, if "new" is an accepted alias, add a brief
note stating that "project new" is an alias of "project create" and show both
forms (e.g., primary first, alias second) so readers aren’t confused.
In @doc-agents/.agents/skills/README.md:
- Line 137: Update the skills README entry that currently says "Show project
detail" to the corrected plural "Show project details"; because this file is
auto-generated, either run the generator command agentuity ai skills generate
--output ./skills to regenerate the README or directly edit the README line
under the "repl" section to replace "detail" with "details".
In @doc-agents/package.json:
- Around line 16-21: Replace all occurrences of "latest" for the listed
dependencies ("@agentuity/react", "@agentuity/runtime", "@agentuity/schema",
"@agentuity/workbench", "@ai-sdk/openai", "ai") in package.json with explicit
semver ranges or exact versions (e.g., ^x.y.z or x.y.z) by looking up the
current desired versions in the npm registry, then run npm install to regenerate
the lockfile; ensure every dependency in that diff is pinned consistently and
verify package.json and package-lock.json (or yarn.lock) are updated before
committing.
- Around line 29-30: The devDependencies in package.json use floating "latest"
versions for @agentuity/cli and @types/bun; replace those with pinned, specific
version strings (e.g., "1.2.3" style) to ensure deterministic builds. Update the
entries for "@agentuity/cli" and "@types/bun" in package.json to concrete semver
values (or exact versions) and run npm/yarn install to regenerate lockfile so CI
uses the pinned versions.
In @doc-agents/src/agent/agent_pulse/context-builder.ts:
- Around line 37-50: In buildCurrentTutorialInfo(tutorials: Tutorial[],
tutorialState?: TutorialState) adjust the completion check to match the
state-manager by changing the condition that uses tutorialState.currentStep vs
currentTutorial.totalSteps from a strict greater-than to greater-than-or-equal;
specifically update the if in buildCurrentTutorialInfo so it treats currentStep
>= totalSteps as completed to avoid the off-by-one mismatch with the
state-manager logic.
- Around line 7-32: The tutorial list returned by getTutorialList and embedded
by buildContext is not runtime-validated or sanitized; add a Zod schema for the
Tutorial shape in the module that parses the API response (the code that
currently casts to `as Tutorial[]` in tutorial.ts), validate the parsed payload
and reject or coerce invalid entries, then ensure buildContext uses the
validated result (the value returned by getTutorialList) and sanitizes all
string fields (title, description, difficulty, etc.) before embedding—e.g.,
trim, remove control/chars that could inject prompts, and escape or redact
suspicious tokens—so update tutorial parsing to run Zod.parse/ safeParse and
update buildContext (and buildCurrentTutorialInfo if it formats fields) to call
a small sanitizeString helper on every string field prior to JSON.stringify or
interpolation, falling back to defaultFallbackContext on validation failure.
- Around line 85-87: The TOOL-USAGE RULES comment incorrectly names
startTutorialById; update the instruction to reference the actual tool
startTutorialAtStep and clarify its behavior (use when the user selects a
tutorial; for new tutorials set step=1 and validate step is between 1 and the
tutorial's totalSteps). Also ensure any mention of startTutorialById in the same
comment block or nearby logic is replaced with startTutorialAtStep to avoid
confusion for the agent.
In @doc-agents/src/api/title-generator/route.ts:
- Around line 54-102: The POST handler at router.post('/') is missing
authentication; import the existing bearerTokenAuth (symbol: bearerTokenAuth)
and apply it as middleware on the route (i.e., router.post('/', bearerTokenAuth,
async (c) => ...) so the request is authenticated before calling
generateText/openai; update the file imports to include bearerTokenAuth and
ensure the route signature still uses the async handler (router.post). Also add
or wire a rate-limiting middleware (e.g., a shared rateLimit middleware used
across OpenAI endpoints) to this route to protect against abuse.
- Around line 38-42: The sentenceCase function currently lowercases the whole
string then capitalizes the first character, which destroys intended
capitalization; update sentenceCase to preserve the original capitalization of
the rest of the string by removing the toLowerCase() step and returning
str.charAt(0).toUpperCase() + str.slice(1) (leave the empty-check intact) so
proper nouns and acronyms (e.g., "API Reference Guide") are not altered.
In @lib/api/services/agentPulse.ts:
- Around line 119-131: The final buffer handling block only parses text-delta
and finish types; update it to mirror the main SSE parsing logic by handling all
StreamingChunk types — at minimum check for 'text-delta' (call
callbacks.onTextDelta with chunk.textDelta), 'finish' (callbacks.onFinish),
'status' (callbacks.onStatus or equivalent), 'tutorial-data'
(callbacks.onTutorialData), and 'error' (callbacks.onError) — and include safe
existence checks before invoking each callback; use the existing StreamingChunk
type, buffer parsing (JSON.parse(buffer.slice(6))), and the callbacks object to
ensure consistent behavior with the main loop.
In @lib/api/services/agentQa.ts:
- Around line 12-15: The AgentQaResponse interface is currently not exported but
is intended for public use; modify the declaration of AgentQaResponse in
agentQa.ts so it is exported (e.g., add the export keyword before the interface
or export it as a type alias) so the symbol AgentQaResponse is available for the
existing re-export in lib/api/services/index.ts.
In @lib/kv-store.ts:
- Around line 27-29: The exported interface KVDeleteResult is unused because
deleteKVValue currently returns Promise<boolean>; either remove the
KVDeleteResult declaration or change deleteKVValue's signature to return
Promise<KVDeleteResult> and adapt its implementation to return an object
matching { exists: boolean } (update any call sites accordingly); reference the
KVDeleteResult interface and the deleteKVValue function to locate and apply the
change.
- Around line 55-57: Remove the debug console.log in the code that prints
serviceUrls; specifically, delete the line console.log('serviceUrls',
serviceUrls) located where getServiceUrls() is called and before returning new
KeyValueStorageService(serviceUrls.keyvalue, adapter) so the function returns
only the KeyValueStorageService instance without emitting debug output.
In @lib/tutorial/state-manager.ts:
- Around line 90-100: completeTutorial currently creates two separate Date()
instances and silently ignores failures from setKVValue; change it to create a
single timestamp variable (e.g., const now = new Date().toISOString()) and
assign that to both state.tutorials[tutorialId].completedAt and .lastAccessedAt,
then await setKVValue inside a try/catch and log any error (include error
details) instead of swallowing it—use the same logging approach used by
updateTutorialProgress and reference getUserTutorialState and getTutorialKey to
locate the surrounding code.
🟡 Minor comments (27)
doc-agents/.agents/skills/agentuity-cli-cloud-storage-upload/SKILL.md-53-53 (1)
53-53: Fix the command name in the example. Line 53 uses cloud storage put, but the command signature defined throughout the documentation is cloud storage upload. Update the example for consistency.
-bunx @agentuity/cli cloud storage put my-bucket file.txt --content-type text/plain
+bunx @agentuity/cli cloud storage upload my-bucket file.txt --content-type text/plain
doc-agents/.agents/skills/agentuity-cli-cloud-env-delete/SKILL.md-28-32 (1)
28-32: Provide a description for the <key> argument. The argument table on line 32 has an empty description for <key>. Clarify what this argument represents (e.g., "Environment variable name" or "Key of the environment variable to delete").
🔎 Proposed fix for argument description
| Argument | Type | Required | Description |
|----------|------|----------|-------------|
-| `<key>` | string | Yes | - |
+| `<key>` | string | Yes | Environment variable key to delete |
doc-agents/.agents/skills/agentuity-cli-cloud-session-list/SKILL.md-80-80 (1)
80-80: Use hyphenation for compound adjective. Line 80 reads "API triggered sessions" but should be "API-triggered sessions" to properly hyphenate the compound adjective modifying "sessions".
🔎 Proposed fix for hyphenation
-Only API triggered sessions:
+Only API-triggered sessions:
doc-agents/.agents/skills/agentuity-cli-cloud-secret-set/SKILL.md-20-20 (1)
20-20: Document the --project-id flag mentioned in prerequisites. Line 20 references the --project-id flag as an alternative to running from a project directory, but it's not included in the Arguments table. Either add it as an optional argument or provide its documentation elsewhere (e.g., in a global options reference).
Also applies to: 28-33
doc-agents/.agents/skills/agentuity-cli-cloud-secret-set/SKILL.md-6-6 (1)
6-6: Reconcile inconsistent command invocation across sections. The documented command varies across sections:
- Metadata (line 9): agentuity cloud secret set
- Usage (line 25): agentuity cloud secret set
- Examples (lines 40, 46): bunx @agentuity/cli secret set
Clarify whether users should invoke via the bare agentuity command, the bunx @agentuity/cli wrapper, or both. If both are valid, explain when each is appropriate (e.g., with/without global installation).
Also applies to: 9-9, 24-25, 40-40, 46-46
doc-agents/.agents/skills/agentuity-cli-cloud-secret-set/SKILL.md-30-33 (1)
30-33: Complete argument descriptions. Argument descriptions are empty (shown as "-"). Provide meaningful guidance on:
- What format or values <key> should take (e.g., uppercase with underscores like DATABASE_URL)?
- What constraints apply to <value> (length limits, character restrictions, encoding)?
- Whether values can contain spaces, special characters, or newlines?
This information is critical for users to invoke the command correctly.
doc-agents/.agents/skills/agentuity-cli-cloud-secret-set/SKILL.md-54-65 (1)
54-65: Fix the type mismatch in the output schema. The JSON example (line 54–59) shows "success": "boolean" (a string literal), but the field description table (line 62–63) lists the type as boolean. This is a critical inconsistency that will confuse users about the actual response format. Correct the JSON example to match the intended schema:
🔎 Proposed fix
 {
-  "success": "boolean",
+  "success": true,
   "key": "string",
   "path": "string"
 }
doc-agents/.agents/skills/agentuity-cli-cloud-secret-pull/SKILL.md-20-20 (1)
20-20: Clarify the "cloud deploy" prerequisite. Line 20 lists cloud deploy as a prerequisite but provides no explanation. It's unclear whether this refers to a deployment service, a prerequisite command, or something else. Provide additional context to help users understand this requirement.
doc-agents/.agents/skills/agentuity-cli-cloud-session-get/SKILL.md-29-31 (1)
29-31: Add description for the <session_id> argument. The argument description is missing (only a dash). Users need guidance on what values to provide or examples of a valid session ID format.
🔎 Suggested improvement
| Argument | Type | Required | Description |
|----------|------|----------|-------------|
-| `<session_id>` | string | Yes | - |
+| `<session_id>` | string | Yes | The unique identifier of the session to retrieve (e.g., `sess_abc123xyz`) |
doc-agents/.agents/skills/agentuity-cli-cloud-agent-get/SKILL.md-35-35 (1)
35-35: Fix example description to match context.The example description says "Get item details" but the command retrieves agent details. Update to maintain consistency.
🔎 Proposed fix
-Get item details:
+Get agent details:
doc-agents/.agents/skills/agentuity-cli-cloud-agent-get/SKILL.md-29-31 (1)
29-31: Add meaningful description for the <agent_id> argument. The Argument table at line 31 shows only a dash "-" for the Description. Provide a clear explanation of what <agent_id> is and how to obtain it.
🔎 Proposed fix
| Argument | Type | Required | Description |
|----------|------|----------|-------------|
-| `<agent_id>` | string | Yes | - |
+| `<agent_id>` | string | Yes | The unique identifier of the agent to retrieve |
doc-agents/.agents/skills/agentuity-cli-cloud-storage-download/SKILL.md-36-66 (1)
36-66: Resolve conflicting --metadata option definition. The options table marks --metadata as Required = "Yes" (line 40), but the examples show it being used optionally—it appears in only one of four examples (line 65). Either the option is required (and all examples should include it) or it's optional (and the table should mark Required = "No").
doc-agents/.agents/skills/agentuity-cli-cloud-deployment-rollback/SKILL.md-16-20 (1)
16-20: Clarify the prerequisite on line 19.The prerequisite "cloud deploy" is unclear. Specify whether it's a skill dependency, a configuration requirement, or something else.
doc-agents/.agents/skills/agentuity-cli-cloud-deploy/SKILL.md-16-20 (1)
16-20: Remove malformed prerequisite text on line 20. Line 20 contains only - auth login, which appears to be a duplicate or corrupted entry. The required authentication is already documented on line 18. This should be removed.
🔎 Proposed fix
 ## Prerequisites
 - Authenticated with `agentuity auth login`
 - Project context required (run from project directory or use `--project-id`)
-- auth login
doc-agents/.agents/skills/agentuity-cli-profile-use/SKILL.md-40-40 (1)
40-40: Fix incorrect command verb in example. The example on line 40 uses profile switch instead of the documented profile use command. This will fail if users copy the example as-is.
🔎 Proposed fix
Switch to the "staging" profile:
-bunx @agentuity/cli profile switch staging
+bunx @agentuity/cli profile use staging
doc-agents/.agents/skills/agentuity-cli-project-create/SKILL.md-66-66 (1)
66-66: Clarify the project new command usage. Line 66 shows bunx @agentuity/cli project new --no-register, but the documented command is agentuity project create. If new is an alias for create, this should be explicitly documented in a note or the example should use the primary create command.
doc-agents/.agents/skills/agentuity-cli-cloud-keyvalue-keys/SKILL.md-24-52 (1)
24-52: Document all command aliases consistently in the keys skill examples. The Usage section (agentuity cloud keyvalue keys) and Examples section (bunx @agentuity/cli kv) intentionally document different execution methods—the Usage section shows the canonical syntax while Examples demonstrate the package runner shorthand. This pattern is consistent across all keyvalue skills. However, within the keys skill's examples, aliases are labeled inconsistently: "kv ls" is marked "(using alias)" but "kv list" appears without any label despite being a separate alias. For clarity, either label all aliases uniformly or remove inconsistent labels to match the pattern used in other skills (where aliases may be shown without explicit labels).
doc-agents/.agents/skills/README.md-137-137 (1)
137-137: Fix grammatical issue in skills table. Line 137 shows "Show project detail" which should be "Show project details" (plural noun). Since this is an auto-generated file, regenerating via agentuity ai skills generate --output ./skills should correct this automatically.
doc-agents/.agents/skills/agentuity-cli-cloud-keyvalue-delete/SKILL.md-28-33 (1)
28-33: Add descriptions for command arguments. The arguments table lacks descriptions for both <namespace> and <key>. Provide context about what each argument represents (e.g., what namespaces are available, format expectations).
doc-agents/.agents/skills/agentuity-cli-cloud-keyvalue-delete/SKILL.md-35-53 (1)
35-53: Clarify relationship between documented command and example aliases. The usage section documents the full command agentuity cloud keyvalue delete, but all examples use shorter aliases (kv delete, kv rm). If these are official shortcuts, they should be mentioned in the Usage section. If not, examples should demonstrate the primary command form.
doc-agents/src/api/title-generator/route.ts-38-42 (1)
38-42: Sentence case may alter intentional capitalization. The sentenceCase function converts the entire string to lowercase before capitalizing the first character (line 40). This will incorrectly lowercase proper nouns, acronyms, and intentionally capitalized words (e.g., "API Reference Guide" becomes "Api reference guide").
🔎 Proposed fix to preserve existing capitalization
 function sentenceCase(str: string): string {
   if (!str) return '';
-  const lower = str.toLowerCase();
-  return lower.charAt(0).toUpperCase() + lower.slice(1);
+  // Only capitalize first letter, preserve rest
+  return str.charAt(0).toUpperCase() + str.slice(1);
 }
doc-agents/src/agent/agent_pulse/context-builder.ts-85-87 (1)
85-87: Fix tool name mismatch in instructions. Line 86 references startTutorialById but the actual tool name is startTutorialAtStep (as seen in doc-agents/src/agent/agent_pulse/tools.ts). This mismatch could confuse the AI model about which tool to use.
🔎 Proposed fix
 === TOOL-USAGE RULES (must follow) ===
-- startTutorialById must only be used when user select a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial.
+- startTutorialAtStep must only be used when user select a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial.
 - Treat askDocsAgentTool as a search helper; ignore results you judge irrelevant.
lib/kv-store.ts-27-29 (1)
27-29: KVDeleteResult interface is defined but unused. The KVDeleteResult interface is exported but deleteKVValue returns Promise<boolean> instead. Either use this interface as the return type or remove it.
lib/kv-store.ts-55-57 (1)
55-57: Remove debug console.log statement. Line 56 contains a debug log (console.log('serviceUrls', serviceUrls)) that should be removed before merging to production.
🔎 Proposed fix
 const serviceUrls = getServiceUrls();
-console.log('serviceUrls', serviceUrls);
 return new KeyValueStorageService(serviceUrls.keyvalue, adapter);
lib/api/services/agentPulse.ts-119-131 (1)
119-131: Incomplete handling of chunk types in final buffer processing. The final buffer processing only handles text-delta and finish chunk types, but the main loop handles status, tutorial-data, and error as well. If the last SSE event is one of these types, it will be silently ignored.
🔎 Proposed fix to handle all chunk types consistently
 // Process any remaining data
 if (buffer.length > 0 && buffer.startsWith('data: ')) {
   try {
     const chunk = JSON.parse(buffer.slice(6)) as StreamingChunk;
     if (chunk.type === 'text-delta' && chunk.textDelta) {
       callbacks.onTextDelta?.(chunk.textDelta);
+    } else if (chunk.type === 'status') {
+      callbacks.onStatus?.(chunk.message || '', chunk.category);
+    } else if (chunk.type === 'tutorial-data' && chunk.tutorialData) {
+      callbacks.onTutorialData?.(chunk.tutorialData);
     } else if (chunk.type === 'finish') {
       callbacks.onFinish?.();
+    } else if (chunk.type === 'error') {
+      callbacks.onError?.(chunk.error || 'Unknown error');
     }
   } catch (error) {
     console.error('Error parsing final SSE chunk:', error);
   }
 }
app/api/sessions/[sessionId]/messages/route.ts-16-16 (1)
16-16: Unused constant AGENT_REQUEST_TIMEOUT. AGENT_REQUEST_TIMEOUT is defined but never used in this file. The timeout is now handled internally by callAgentPulseStreaming. Remove this dead code.
🔎 Proposed fix
 // Constants
 const DEFAULT_CONVERSATION_HISTORY_LIMIT = 10;
-const AGENT_REQUEST_TIMEOUT = 30000; // 30 seconds
lib/tutorial/state-manager.ts-90-100 (1)
90-100: completeTutorial silently ignores KV write failures. Unlike updateTutorialProgress which logs on failure, completeTutorial doesn't check or log the result of setKVValue. Additionally, lines 94-95 create two separate Date objects when a single timestamp could be reused.
🔎 Proposed fix
 async function completeTutorial(userId: string, tutorialId: string): Promise<void> {
   const state = await getUserTutorialState(userId);
   if (state.tutorials[tutorialId]) {
-    state.tutorials[tutorialId].completedAt = new Date().toISOString();
-    state.tutorials[tutorialId].lastAccessedAt = new Date().toISOString();
+    const now = new Date().toISOString();
+    state.tutorials[tutorialId].completedAt = now;
+    state.tutorials[tutorialId].lastAccessedAt = now;
     const key = getTutorialKey(userId);
-    await setKVValue(key, state, { storeName: config.kvStoreName });
+    const success = await setKVValue(key, state, { storeName: config.kvStoreName });
+    if (!success) {
+      console.error(`Failed to complete tutorial. UserId: ${userId}, TutorialId: ${tutorialId}`);
+    }
   }
 }
🧹 Nitpick comments (28)
doc-agents/.agents/skills/agentuity-cli-cloud-thread-get/SKILL.md (1)
45-69: Clarify "unknown" type annotations in the output schema.The JSON schema and table (lines 51-52, 55, 65-66) document several fields with type
"unknown"(deleted_at,deleted_by,user_data). For API consumers, these should be more specific: isuser_dataa JSON object, string, or null? Aredeleted_atanddeleted_byalso nullable fields?Provide explicit type definitions or note when fields are optional/nullable to improve clarity.
doc-agents/.agents/skills/agentuity-cli-cloud-storage-upload/SKILL.md (1)
30-33: Add descriptions for arguments.The Arguments table for
<name>and<filename>lacks descriptions in the "Description" column. Consider adding clarifications (e.g., is<name>the bucket name? Is<filename>a local or remote path?).doc-agents/.agents/skills/agentuity-cli-cloud-apikey-get/SKILL.md (2)
33-38: Provide a concrete example ID in the example.The example uses the placeholder
<id>without showing what a real API key ID looks like. Consider showing a concrete example ID (e.g.,api_key_12345abc) to help users understand the expected format.
1-38: Add an Output section for consistency with related skills.The create skill documents the output JSON schema; consider adding an Output section here to document what the GET response looks like and clarify whether it returns the full API key object, metadata only, or another structure.
doc-agents/.agents/skills/agentuity-cli-cloud-session-get/SKILL.md (1)
73-97: Enhance field descriptions with more actionable guidance.Several descriptions are generic or merely repeat field names (e.g., "Thread ID" for
thread_id, "Error message" forerror). Adding more context—such as when fields are null, what they represent in the execution flow, or how to use them—would improve usability.🔎 Example improvements
-| `error` | unknown | Error message | +| `error` | unknown | Error message, if session failed; null if successful | -| `timeline` | unknown | Session timeline | +| `timeline` | unknown | Ordered list of session execution events with timestamps | -| `thread_id` | string | Thread ID | +| `thread_id` | string | Unique conversation thread ID for multi-turn interactions |doc-agents/.agents/skills/agentuity-cli-cloud-db-sql/SKILL.md (1)
13-13: Fix heading capitalization for consistency.The heading "# Cloud Db Sql" uses inconsistent casing. For standard Markdown conventions, it should be "# Cloud DB SQL" (acronym capitalized) or "# Cloud Database SQL".
🔎 Proposed fix
-# Cloud Db Sql +# Cloud DB SQLdoc-agents/.agents/skills/agentuity-cli-profile-create/SKILL.md (1)
37-41: Clarify example descriptions for distinct use cases.Examples on lines 37 and 49 both use the same description "Create new item:" but demonstrate different profiles (production vs. development). Rename these to reflect the profile context or use case difference for better clarity.
🔎 Proposed improvements
## Examples - Create new item: + Create production profile: ```bash bunx @agentuity/cli profile create productionUse switch option:
bunx @agentuity/cli profile create staging --switch
- Create new item:
Create development profile:
bunx @agentuity/cli profile create development</details> <!-- review_comment_end] --> <!-- [scratchpad_start] [task review_file_2/3] File 2 review complete. [scratchpad_end] --> Also applies to: 49-53 </blockquote></details> <details> <summary>doc-agents/.agents/skills/agentuity-cli-auth-login/SKILL.md (1)</summary><blockquote> `24-34`: **Clarify example descriptions and consider documenting output.** Both examples have identical descriptions ("Login to account:"), making it unclear what distinguishes them. The first uses the full command path while the second uses an alias shortcut. Additionally, consider documenting the expected output or behavior after successful login (similar to the output schema in other SKILL.md files like `agentuity-cli-cloud-keyvalue-get`). <details> <summary>💡 Suggested improvement</summary> ```diff -Login to account: +Login using full command: ```bash bunx @agentuity/cli auth login-Login to account:
+Login using shorthand alias:bunx @agentuity/cli login
+## Output
+
+Opens default browser for authentication. Upon success, credentials are stored locally and confirmed in terminal.</details> </blockquote></details> <details> <summary>doc-agents/.agents/skills/agentuity-cli-cloud-keyvalue-get/SKILL.md (1)</summary><blockquote> `30-33`: **Add descriptions for command arguments.** The Arguments table lists `<namespace>` and `<key>` as required but provides no description (shows "-"). Adding brief descriptions would improve usability. <details> <summary>📝 Suggested descriptions</summary> ```diff | Argument | Type | Required | Description | |----------|------|----------|-------------| -| `<namespace>` | string | Yes | - | -| `<key>` | string | Yes | - | +| `<namespace>` | string | Yes | The keyvalue namespace to query | +| `<key>` | string | Yes | The key to retrieve from the namespace |doc-agents/.agents/skills/agentuity-cli-cloud-keyvalue-list-namespaces/SKILL.md (1)
47-49: Consider adding output schema details.The output section only states
Returns: arraywithout describing the structure of the array elements. Adding details about what each array item contains would improve the documentation's usefulness.📋 Example enhancement
## Output -Returns: `array` +Returns an array of namespace strings: + +```json +["production", "staging", "cache"] +``` + +Each element is a string representing a keyvalue namespace name.doc-agents/.agents/skills/agentuity-cli-auth-ssh-list/SKILL.md (1)
28-38: Duplicate example section headers reduce clarity.Lines 28 and 34 both use "List items:" as section headers, but they show different command variants (primary and alias). Users reading the documentation would see identical labels for distinct examples, making it unclear which form to use.
Consider differentiation like:
- "List items using primary command:" (line 28)
- "List items using alias:" (line 34)
doc-agents/.agents/skills/agentuity-cli-auth-logout/SKILL.md (1)
24-34: Duplicate example section headers and missing output schema.Similar to the auth-ssh-list file, lines 24 and 30 both use "Logout from account:" for different command forms. Additionally, this file lacks an output schema specification (see
agentuity-cli-auth-ssh-list/SKILL.mdline 46 for comparison). Consider adding an## Outputsection describing what the logout command returns.doc-agents/.agents/skills/agentuity-cli-cloud-keyvalue-delete-namespace/SKILL.md (1)
13-15: Normalize heading formatting to match command syntax.Line 13 presents the heading as "Cloud Keyvalue Delete-namespace" (inconsistent capitalization), while the actual command syntax on line 25 and throughout uses "keyvalue delete-namespace" (lowercase). For consistency with the command structure and other documentation, consider: "# Cloud Keyvalue Delete-Namespace" or follow the exact command name format.
lib/api/services/titleGenerator.ts (2)
8-11: Consider extracting shared types to a common location.The
ConversationMessageinterface is duplicated indoc-agents/src/agent/agent_pulse/types.ts(lines 0-3). Consider extracting this to a shared types file to maintain a single source of truth and prevent drift between definitions.🔎 Suggested approach
Create a shared types file (e.g.,
lib/types/conversation.ts):export interface ConversationMessage { author: 'USER' | 'ASSISTANT'; content: string; }Then import from both locations:
-interface ConversationMessage { - author: 'USER' | 'ASSISTANT'; - content: string; -} +import type { ConversationMessage } from '@/lib/types/conversation';
33-33: Handle empty string response from API.Line 33 returns
response.title || 'New chat', which correctly handlesundefinedandnull, but an empty string''from the API would also fall back to the default. Consider explicitly checking for a meaningful title to avoid accepting whitespace-only responses.🔎 Suggested refinement
- return response.title || 'New chat'; + return response.title?.trim() || 'New chat';doc-agents/src/api/title-generator/route.ts (1)
24-24: Complex regex pattern needs validation.The markdown emphasis removal regex on Line 24 uses multiple capture groups and may not handle all edge cases (e.g., unbalanced markers, nested emphasis). Consider testing with various inputs including malformed markdown.
🔎 Test cases to verify
// Test cases for sanitizeTitle const testCases = [ '**bold** and *italic*', '**unbalanced bold', 'nested **bold *and italic* text**', '___triple underscore___', ];doc-agents/src/agent/agent_pulse/agent.ts (1)
9-16: Schema duplication: conversation history structure defined inline.The conversation history schema (lines 11-14) duplicates the structure already defined in
doc-agents/src/agent/agent_pulse/types.ts(ConversationMessage interface). This creates a third definition alongside the one inlib/api/services/titleGenerator.ts.Consider reusing the shared type definition or creating a schema from the TypeScript interface to maintain consistency.
lib/api/types.ts (1)
13-22: Remove redundant type annotation.The type annotation on Line 16 is trivially inferred from its initialization value.
🔎 Proposed fix
constructor( message: string, - public status: number = 500, + public status = 500, public details?: unknown ) {doc-agents/src/agent/agent_pulse/tools.ts (1)
26-31: Improve error message and remove unnecessary template literal.Line 30 uses a template literal without interpolation. Additionally, the error message "Error fetching tutorial information" is too generic and doesn't provide helpful context to the user.
🔎 Proposed fix
// Validate tutorial exists before starting const tutorialResponse = await getTutorialMeta(tutorialId, agentContext); if (!tutorialResponse.success || !tutorialResponse.data) { - return `Error fetching tutorial information`; + return `Unable to find tutorial "${tutorialId}". Please verify the tutorial ID is correct.`; }lib/api/services/agentQa.ts (1)
8-10: Remove unused interface.The
AgentQaRequestinterface is defined but never used. The request body on Line 28 is constructed as an inline object literal.🔎 Proposed fix
-interface AgentQaRequest { - message: string; -} - interface AgentQaResponse { answer: string; documents?: string[]; }Alternatively, if you want to maintain type safety for the request body, use it:
interface AgentQaRequest { message: string; } interface AgentQaResponse { answer: string; documents?: string[]; } const AGENT_QA_TIMEOUT = 30000; // 30 seconds export async function queryAgentQa(message: string): Promise<AgentQaResponse> { try { + const requestBody: AgentQaRequest = { message }; const response = await apiRequest<AgentQaResponse>( '/api/doc-qa', { method: 'POST', - body: { message }, + body: requestBody, timeout: AGENT_QA_TIMEOUT, }, undefined, // use default baseUrl from config process.env.AGENT_BEARER_TOKEN );doc-agents/src/agent/agent_pulse/tutorial.ts (1)
49-113: Solid error handling and type mapping.The function follows the same robust error handling pattern as
getTutorialListand correctly validates the nested response structure before accessing the data.The inline type definition (lines 75-86) is verbose but provides type safety. If this response structure is reused elsewhere, consider extracting it to a shared type.
💡 Optional: Extract response type if reused
If this response structure is used in multiple places:
interface TutorialApiResponse { id: string; metadata: { title: string; description: string; totalSteps: number; difficulty?: string; estimatedTime?: string; }; fullContent: string; steps: Array<{ stepNumber: number; title: string; estimatedTime?: string }>; } export async function getTutorialMeta(tutorialId: string, ctx: any): Promise<ApiResponse<Tutorial>> { // ... const responseData = (await response.json()) as ApiResponse<TutorialApiResponse>; // ... }lib/api/client.ts (1)
11-14: Consider normalizing base URL to handle trailing slashes.The
buildUrlfunction doesn't account for trailing slashes inbaseUrl. IfbaseUrlis"http://example.com/"andendpointis"/api/test", the result will be"http://example.com//api/test"(double slash). While most HTTP clients handle this gracefully, it's better to be explicit.🔎 Proposed fix
function buildUrl(baseUrl: string, endpoint: string): string { + const cleanBase = baseUrl.endsWith('/') ? baseUrl.slice(0, -1) : baseUrl; const cleanEndpoint = endpoint.startsWith('/') ? endpoint : `/${endpoint}`; - return `${baseUrl}${cleanEndpoint}`; + return `${cleanBase}${cleanEndpoint}`; }doc-agents/src/api/agent_pulse/route.ts (1)
55-75: Improve type safety by avoidingas anycasts.Multiple
as anycasts appear throughout this section (lines 55, 64, 70, 75), which bypass TypeScript's type checking. While this might be expedient, it reduces type safety and could hide bugs.Consider defining proper types for the state and context objects used by
createTools,buildContext, andbuildSystemPrompt.💡 Example type definitions
interface AgentContext { logger: { info: (msg: string, ...args: any[]) => void; warn: (msg: string, ...args: any[]) => void; error: (msg: string, ...args: any[]) => void; debug: (msg: string, ...args: any[]) => void; }; } interface ToolState { action: Action | null; } // Then use these types instead of 'as any' const state: ToolState = { action: null }; const agentContext: AgentContext = { logger };lib/api/services/agentPulse.ts (1)
24-40: Consider stronger typing fortutorialDatafields.The
tutorialDatafields on lines 29 and 37 useany, which bypasses TypeScript's type checking. Consider using a more specific type or at leastunknownto enforce type checking at usage sites.doc-agents/src/agent/agent_pulse/types.ts (3)
39-47: DuplicateStreamingChunktype definition with differingtutorialDatatyping.This
StreamingChunkinterface is also defined inlib/api/services/agentPulse.ts(lines 23-31), but with different typing fortutorialData:
- Here:
tutorialData?: Action(strongly typed)- There:
tutorialData?: any(loosely typed)Consider consolidating these types into a shared location to ensure consistency and prevent drift.
1-9: Consider consolidating duplicate type definitions.
ConversationMessageandTutorialStateare also defined inlib/api/services/agentPulse.ts(lines 8-16). Having parallel definitions increases maintenance burden and risks divergence. Consider sharing these types from a single source.
20-26: Consider usingunknowninstead ofanyfordetailsfield.Using
unknowninstead ofanyfor thedetailsfield would enforce explicit type checking when accessing this property, improving type safety.lib/kv-store.ts (1)
42-58: Service re-initialization on every KV operation impacts performance.
initializeKVService()creates a newKeyValueStorageServiceinstance on everyget,set, anddeletecall. This involves creating a new fetch adapter and potentially redundant setup work. Consider caching the service instance at module level with lazy initialization.🔎 Proposed refactor to cache the service instance
+let kvServiceInstance: KeyValueStorageService | null = null; + // Initialize the KV service function initializeKVService() { + if (kvServiceInstance) { + return kvServiceInstance; + } + if (!process.env.AGENTUITY_SDK_KEY) { throw new Error('AGENTUITY_SDK_KEY environment variable is required'); } const adapter = createServerFetchAdapter({ headers: { Authorization: `Bearer ${process.env.AGENTUITY_SDK_KEY}`, 'User-Agent': 'Next.js App/1.0', }, }, logger); const serviceUrls = getServiceUrls(); - console.log('serviceUrls', serviceUrls); - return new KeyValueStorageService(serviceUrls.keyvalue, adapter); + kvServiceInstance = new KeyValueStorageService(serviceUrls.keyvalue, adapter); + return kvServiceInstance; }Also applies to: 73-74, 109-111, 132-133
| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| `--file` | string | Yes | - | File containing the public key |

## Examples

Add SSH key interactively:

```bash
bunx @agentuity/cli auth ssh add
```
🧩 Analysis chain
🏁 Script executed:
fd "SKILL.md" doc-agents/.agents/skills/agentuity-cli-auth-ssh-add/ --type fRepository: agentuity/docs
Length of output: 119
🏁 Script executed:
cat -n "doc-agents/.agents/skills/agentuity-cli-auth-ssh-add/SKILL.md"Repository: agentuity/docs
Length of output: 2143
Resolve inconsistency between required --file flag and examples that omit it.
The options table marks --file as required, but multiple examples contradict this: the "Add SSH key interactively" example (line 37) and "Add SSH key from stdin" example (line 55) both omit the flag. The description on line 14 states the command "reads from file or stdin", suggesting the flag should be optional. Either mark --file as optional in the table, or update all examples to include it with clear documentation of input methods.
🤖 Prompt for AI Agents
In @doc-agents/.agents/skills/agentuity-cli-auth-ssh-add/SKILL.md around lines
28 - 38, The options table incorrectly marks `--file` as required while examples
for "Add SSH key interactively" and "Add SSH key from stdin" omit it; update
SKILL.md so the table marks `--file` as optional (change "Yes" to "No" or
"Optional") and add a short note in the table or header that the command reads
from stdin when `--file` is not provided, or alternatively update the examples
to always include `--file` and show the stdin/interactive variants explicitly
(modify the "Add SSH key interactively" and "Add SSH key from stdin" examples to
either include `--file <path>` or annotate that stdin/interactive mode is used).
|
|
||
```json
{
  "id": "string",
  "name": "string",
  "description": "unknown",
  "identifier": "string",
  "deploymentId": "unknown",
  "devmode": "boolean",
  "metadata": "unknown",
  "createdAt": "string",
  "updatedAt": "string",
  "evals": "array"
}
```

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | - |
| `name` | string | - |
| `description` | unknown | - |
| `identifier` | string | - |
| `deploymentId` | unknown | - |
| `devmode` | boolean | - |
| `metadata` | unknown | - |
| `createdAt` | string | - |
| `updatedAt` | string | - |
| `evals` | array | - |
🧩 Analysis chain
🏁 Script executed:
# First, let's check if the file exists and read it
find . -name "SKILL.md" -path "*/agentuity-cli-cloud-agent-get/*" 2>/dev/null | head -5Repository: agentuity/docs
Length of output: 124
🏁 Script executed:
# List files in the agent directory to understand structure
ls -la doc-agents/.agents/skills/agentuity-cli-cloud-agent-get/ 2>/dev/null || echo "Path not found, searching for similar directories"Repository: agentuity/docs
Length of output: 229
🏁 Script executed:
# Search for SKILL.md files to understand the pattern
fd "SKILL\.md" doc-agents/ 2>/dev/null | head -10Repository: agentuity/docs
Length of output: 40
🏁 Script executed:
cat -n doc-agents/.agents/skills/agentuity-cli-cloud-agent-get/SKILL.mdRepository: agentuity/docs
Length of output: 2132
🏁 Script executed:
# Check for auto-generation markers in the file
head -20 doc-agents/.agents/skills/agentuity-cli-cloud-agent-get/SKILL.mdRepository: agentuity/docs
Length of output: 567
🏁 Script executed:
# Search for similar SKILL.md files to see if they also have "unknown" types
fd "SKILL\.md" doc-agents/ -x grep -l "unknown" {} \; 2>/dev/nullRepository: agentuity/docs
Length of output: 40
🏁 Script executed:
# Search for all SKILL.md files and check which ones have "unknown" types
fd "SKILL\.md" doc-agents/ -x grep -c "unknown" {} + | grep -v ":0$"Repository: agentuity/docs
Length of output: 40
🏁 Script executed:
# Get a count of SKILL.md files with and without "unknown" types
echo "=== Files with 'unknown' type ===" && \
fd "SKILL\.md" doc-agents/ -x grep -l "unknown" {} \; | wc -l && \
echo "=== Total SKILL.md files ===" && \
fd "SKILL\.md" doc-agents/ | wc -lRepository: agentuity/docs
Length of output: 124
🏁 Script executed:
# Look for any generation markers or comments in multiple SKILL.md files
fd "SKILL\.md" doc-agents/ | head -5 | xargs -I {} sh -c 'echo "=== {} ===" && head -15 {}'Repository: agentuity/docs
Length of output: 40
🏁 Script executed:
# Search for any documentation or scripts that might generate these files
find doc-agents -type f \( -name "*.js" -o -name "*.ts" -o -name "*.py" -o -name "*.sh" -o -name "*.json" \) | grep -E "(gen|build|script)" | head -10Repository: agentuity/docs
Length of output: 388
🏁 Script executed:
# Let's search more carefully in the .agents directory
find doc-agents/.agents -name "SKILL.md" 2>/dev/null | wc -lRepository: agentuity/docs
Length of output: 60
🏁 Script executed:
# Count how many SKILL.md files have "unknown" type
find doc-agents/.agents -name "SKILL.md" -exec grep -l "unknown" {} \; 2>/dev/null | wc -lRepository: agentuity/docs
Length of output: 59
🏁 Script executed:
# Get total count of SKILL.md files in .agents
find doc-agents/.agents -name "SKILL.md" 2>/dev/null | wc -lRepository: agentuity/docs
Length of output: 60
🏁 Script executed:
# List a few other SKILL.md files to compare
find doc-agents/.agents -name "SKILL.md" 2>/dev/null | head -5 | xargs -I {} sh -c 'echo "File: {}" && tail -15 {}'Repository: agentuity/docs
Length of output: 1582
🏁 Script executed:
# List all SKILL.md files that have "unknown" types
find doc-agents/.agents -name "SKILL.md" -exec grep -l "unknown" {} \; 2>/dev/nullRepository: agentuity/docs
Length of output: 518
🏁 Script executed:
# Check if there's any auto-generation marker or comment pattern
find doc-agents/.agents -name "SKILL.md" -exec grep -l "unknown" {} \; | xargs head -5Repository: agentuity/docs
Length of output: 1887
🏁 Script executed:
# Compare the specific file against another one to see the pattern
echo "=== File WITH unknown ===" && \
sed -n '50,77p' doc-agents/.agents/skills/agentuity-cli-cloud-agent-get/SKILL.md && \
echo "" && echo "=== File WITHOUT unknown (auth-ssh-add) ===" && \
sed -n '50,77p' doc-agents/.agents/skills/agentuity-cli-auth-ssh-add/SKILL.mdRepository: agentuity/docs
Length of output: 1253
🏁 Script executed:
# Look for any README or documentation about schema generation
find doc-agents -type f \( -name "README*" -o -name "CONTRIBUTING*" \) | xargs grep -l "SKILL\|schema\|generate" 2>/dev/nullRepository: agentuity/docs
Length of output: 149
🏁 Script executed:
# Check the README files to understand the schema/auto-generation
cat doc-agents/.agents/skills/README.mdRepository: agentuity/docs
Length of output: 14078
🏁 Script executed:
# Also check the root README
cat doc-agents/README.md | head -50Repository: agentuity/docs
Length of output: 1511
🏁 Script executed:
# Look at one of the other files with "unknown" to understand the pattern
cat doc-agents/.agents/skills/agentuity-cli-cloud-deployment-show/SKILL.md | sed -n '50,77p'Repository: agentuity/docs
Length of output: 599
🏁 Script executed:
# Check if there's any TypeScript or code generation in the codebase
grep -r "unknown" doc-agents/src --include="*.ts" --include="*.js" 2>/dev/null | grep -i "type\|schema" | head -5Repository: agentuity/docs
Length of output: 40
🏁 Script executed:
# Verify line numbers are correct by checking exact line ranges
sed -n '50,77p' doc-agents/.agents/skills/agentuity-cli-cloud-agent-get/SKILL.md | wc -lRepository: agentuity/docs
Length of output: 60
🏁 Script executed:
# Check if there's documentation about what these "unknown" fields should actually contain
find doc-agents -type f \( -name "*.ts" -o -name "*.js" \) | xargs grep -l "deploymentId\|description.*unknown" 2>/dev/null | head -3Repository: agentuity/docs
Length of output: 40
These SKILL.md files are auto-generated; fix the upstream Agentuity CLI schema generator to provide complete type information.
The description, deploymentId, and metadata fields use the placeholder type "unknown" instead of actual types. This pattern appears in 7 SKILL.md files (agentuity-cli-cloud-agent-get, cloud-deployment-show, cloud-keyvalue-get, cloud-session-get, cloud-thread-get, profile-show, and project-show). Since these are auto-generated (per README: agentuity ai skills generate --output ./skills), the fix must be applied in the upstream Agentuity CLI's schema generation tooling, not here.
🤖 Prompt for AI Agents
In @doc-agents/.agents/skills/agentuity-cli-cloud-agent-get/SKILL.md around
lines 50 - 77, The generated SKILL.md contains placeholder types ("unknown") for
fields `description`, `deploymentId`, and `metadata` (e.g., in
agentuity-cli-cloud-agent-get SKILL.md); update the upstream Agentuity CLI
schema generator to map these schema properties to their real types (e.g.,
string/object/UUID/etc.) instead of emitting "unknown": locate the generator
logic that builds property type strings (the module invoked by the CLI command
`agentuity ai skills generate`) and add explicit handlers for `description`,
`deploymentId`, and `metadata` (or a generic rule that resolves unknown JSON
Schema/$ref types properly), then run the generator to regenerate all affected
SKILL.md files (cloud-deployment-show, cloud-keyvalue-get, cloud-session-get,
cloud-thread-get, profile-show, project-show) so the docs reflect correct types.
## Options

| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| `--name` | string | Yes | - | the name for the API key |
| `--expires-at` | string | Yes | - | expiration date as ISO 8601 (2025-12-31T23:59:59Z) or duration (1h, 2d, 30d, 1y) |
| `--confirm` | boolean | Yes | - | Skip confirmation prompts (required for non-TTY) |
Clarify the --confirm parameter requirement.
The Options table marks --confirm as Required: Yes, but the description states "required for non-TTY" (conditional), and the first two examples omit it. This creates ambiguity about whether the parameter is universally required or only for non-interactive environments.
Reconcile the table, description, and examples to reflect the actual behavior. If --confirm is conditional, consider updating the table to clarify the condition or split the requirement.
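
One way to make the conditional requirement explicit (suggested wording only, not taken from the CLI's actual help text) is to mark the flag optional and state the condition in its description:

| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| `--confirm` | boolean | No | - | Skip confirmation prompts; must be passed when running non-interactively (non-TTY) |

The non-interactive example could then include `--confirm` explicitly so the table, description, and examples all agree.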
## Examples

Execute SQL query:

```bash
bunx @agentuity/cli cloud db sql my-database "SELECT * FROM users LIMIT 10"
```

Execute query with JSON output:

```bash
bunx @agentuity/cli cloud db exec my-database "SELECT COUNT(*) FROM orders" --json
```

Query with filter:

```bash
bunx @agentuity/cli cloud db query my-database "SELECT * FROM products WHERE price > 100"
```
Fix example commands to match documented command syntax.
The documented command is agentuity cloud db sql (line 25), but examples 2 and 3 use different subcommands:
- Line 46:
cloud db exec(should becloud db sql) - Line 52:
cloud db query(should becloud db sql)
Only the first example (line 40) matches the documented syntax. This inconsistency could confuse users. Either update the examples to use sql or clarify if exec, query, and sql are aliases.
🔎 Proposed fix (if sql is the canonical command)
-Execute query with JSON output:
-
-```bash
-bunx @agentuity/cli cloud db exec my-database "SELECT COUNT(*) FROM orders" --json
-```
-
-Query with filter:
-
-```bash
-bunx @agentuity/cli cloud db query my-database "SELECT * FROM products WHERE price > 100"
+Execute query with JSON output:
+
+```bash
+bunx @agentuity/cli cloud db sql my-database "SELECT COUNT(*) FROM orders" --json
+```
+
+Query with filter:
+
+```bash
+bunx @agentuity/cli cloud db sql my-database "SELECT * FROM products WHERE price > 100"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
## Examples

Execute SQL query:

```bash
bunx @agentuity/cli cloud db sql my-database "SELECT * FROM users LIMIT 10"
```

Execute query with JSON output:

```bash
bunx @agentuity/cli cloud db exec my-database "SELECT COUNT(*) FROM orders" --json
```

Query with filter:

```bash
bunx @agentuity/cli cloud db query my-database "SELECT * FROM products WHERE price > 100"
```

## Examples

Execute SQL query:
🤖 Prompt for AI Agents
In @doc-agents/.agents/skills/agentuity-cli-cloud-db-sql/SKILL.md around lines
35 - 53, Examples 2 and 3 use the wrong subcommands ("cloud db exec" and "cloud
db query") and should match the documented canonical command "agentuity cloud db
sql"; update the example lines that currently show "bunx @agentuity/cli cloud db
exec my-database ..." and "bunx @agentuity/cli cloud db query my-database ..."
to use "bunx @agentuity/cli cloud db sql my-database ..." (including the --json
flag where present) so all examples consistently use the `sql` subcommand, or
alternatively add a note clarifying that `exec` and `query` are aliases of `sql`
if that is intended.
```bash
agentuity cloud env delete <key>
```

## Arguments

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| `<key>` | string | Yes | - |

## Examples

Delete item:

```bash
bunx @agentuity/cli env delete OLD_FEATURE_FLAG
```

Delete item:

```bash
bunx @agentuity/cli env rm PORT
```
Examples use mismatched command syntax compared to documented usage.
The documented usage (line 25) shows agentuity cloud env delete <key>, but the examples use bunx @agentuity/cli env delete and bunx @agentuity/cli env rm. These are different command patterns. Align the examples with the documented usage, or clarify if these alternate syntaxes are intentional and documented elsewhere.
Additionally, both example descriptions ("Delete item:") are identical and non-descriptive. Provide more specific descriptions for each example.
🔎 Proposed fix for example consistency
## Examples
-Delete item:
+Delete an environment variable:
```bash
-bunx @agentuity/cli env delete OLD_FEATURE_FLAG
+agentuity cloud env delete OLD_FEATURE_FLAG-Delete item:
+Delete using the rm alias:
-bunx @agentuity/cli env rm PORT
+agentuity cloud env rm PORT</details>
<!-- suggestion_start -->
<details>
<summary>📝 Committable suggestion</summary>
> ‼️ **IMPORTANT**
> Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
🤖 Prompt for AI Agents
In @doc-agents/.agents/skills/agentuity-cli-cloud-env-delete/SKILL.md around
lines 24 - 46, The examples in SKILL.md use a different CLI invocation than the
documented usage; update the example commands to match the documented pattern
(use "agentuity cloud env delete <key>" and "agentuity cloud env rm <key>"
instead of "bunx @agentuity/cli ..."), and replace the two identical "Delete
item:" captions with specific descriptions—e.g., "Delete using full command:"
for the delete example and "Delete using the rm alias:" for the rm example—so
the examples align with the usage shown by the "agentuity cloud env delete"
heading.
| "@agentuity/cli": "latest", | ||
| "@types/bun": "latest", |
🛠️ Refactor suggestion | 🟠 Major
Pin devDependency versions instead of using "latest".
The same deterministic build concerns apply to devDependencies. Pin @agentuity/cli and @types/bun to specific versions.
📌 Suggested fix
"devDependencies": {
- "@agentuity/cli": "latest",
- "@types/bun": "latest",
+ "@agentuity/cli": "^0.0.111",
+ "@types/bun": "^1.0.0",
"@types/react": "^19.2.0",📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "@agentuity/cli": "latest", | |
| "@types/bun": "latest", | |
| "@agentuity/cli": "^0.0.111", | |
| "@types/bun": "^1.0.0", |
🤖 Prompt for AI Agents
In @doc-agents/package.json around lines 29 - 30, The devDependencies in
package.json use floating "latest" versions for @agentuity/cli and @types/bun;
replace those with pinned, specific version strings (e.g., "1.2.3" style) to
ensure deterministic builds. Update the entries for "@agentuity/cli" and
"@types/bun" in package.json to concrete semver values (or exact versions) and
run npm/yarn install to regenerate lockfile so CI uses the pinned versions.
interface AgentQaResponse {
    answer: string;
    documents?: string[];
}
🛠️ Refactor suggestion | 🟠 Major
Export AgentQaResponse for external usage.
The AgentQaResponse interface is re-exported in lib/api/services/index.ts (lines 11), indicating it's part of the public API surface. It should be exported from this file.
🔎 Proposed fix
-interface AgentQaResponse {
+export interface AgentQaResponse {
answer: string;
documents?: string[];
}🤖 Prompt for AI Agents
In @lib/api/services/agentQa.ts around lines 12 - 15, The AgentQaResponse
interface is currently not exported but is intended for public use; modify the
declaration of AgentQaResponse in agentQa.ts so it is exported (e.g., add the
export keyword before the interface or export it as a type alias) so the symbol
AgentQaResponse is available for the existing re-export in
lib/api/services/index.ts.
- Added new environment variables: AGENT_BEARER_TOKEN and AGENTUITY_REGION to .env.example for improved configuration.
- Updated KV service initialization to require AGENTUITY_REGION alongside AGENTUITY_SDK_KEY for better error handling.
- Refactored logger implementation to utilize createLogger from the agentuity server package for improved logging capabilities.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @lib/kv-store.ts:
- Around line 34-48: initializeKVService() currently constructs a new
KeyValueStorageService and adapter on every call, causing unnecessary allocation
and overhead; change it to lazily cache the service instance in module scope
(e.g., a private variable) and return that cached instance on subsequent calls
so adapter and KeyValueStorageService are created only once; update
initializeKVService (or provide a new getKVService wrapper) to perform the
one-time creation using the existing createServerFetchAdapter, getServiceUrls,
and KeyValueStorageService logic and then return the cached instance for all
callers (ensure the env checks remain on first init).
- Around line 125-128: The deleteKVValue function currently returns
Promise<boolean> while a KVDeleteResult interface exists; update deleteKVValue
to return Promise<KVDeleteResult> with shape {deleted: boolean, error?: string}
(mirroring getKVValue's KVGetResult pattern) so callers can distinguish "not
found" vs "error", adjust catch to populate error and deleted=false, and update
any callers accordingly; also consider making setKVValue follow the same pattern
(Promise<KVDeleteResult>-style result or a KVSetResult) for consistency with
getKVValue and to surface error details from setKVValue.
- Around line 70-75: The catch block that currently logs the error and returns {
exists: false } (catch(error) { console.error(...); return { exists: false }; })
masks service errors as “key not found”; change the error handling in the KV
access function so it no longer conflates failures with missing keys—either
rethrow the caught error (propagate the exception) or return a richer result
shape (e.g., include an error string or code like { exists?: boolean; error?:
string }) so callers such as the [sessionId]/route.ts handler can distinguish
404 from 5xx and handle/translate errors appropriately; ensure you update the
function signature and any callers to handle the new behavior.
🧹 Nitpick comments (1)
lib/kv-store.ts (1)
27-29: Remove unusedKVDeleteResultinterface.The
KVDeleteResultinterface is defined but never used. ThedeleteKVValuefunction returnsPromise<boolean>instead (line 118). Consider either removing this unused type or updatingdeleteKVValueto returnKVDeleteResultfor consistency withKVGetResult.♻️ Proposed fix: Remove unused interface
-export interface KVDeleteResult { - exists: boolean; -} -
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
.env.example
lib/kv-store.ts
🧰 Additional context used
🧬 Code graph analysis (1)
lib/kv-store.ts (1)
lib/config.ts (1)
config(6-10)
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 7-7: [EndingBlankLine] No blank line at the end of the file
(EndingBlankLine)
[warning] 7-7: [UnorderedKey] The AGENTUITY_REGION key should go before the AGENTUITY_SDK_KEY key
(UnorderedKey)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: sync
🔇 Additional comments (5)
.env.example (2)
5-7: Fix formatting and key ordering per linter.Two static analysis issues detected:
- Missing blank line at end of file (line 7)
- Key ordering:
AGENTUITY_REGIONshould precedeAGENTUITY_SDK_KEY📋 Proposed fix
-AGENTUITY_SDK_KEY=
-AGENT_BEARER_TOKEN=
-AGENTUITY_REGION=use
+AGENTUITY_REGION=use
+AGENTUITY_SDK_KEY=
+AGENT_BEARER_TOKEN=
5-7: Update README.md to reference AGENTUITY_SDK_KEY instead of AGENTUITY_API_KEY.The README.md file (lines 37-38) still instructs users to update
AGENTUITY_API_KEY, which contradicts the .env.example file that now uses onlyAGENTUITY_SDK_KEY. Additionally, whileagent-docs/index.tsmaintains backward compatibility by accepting both keys, the README should be updated to align with the new key name to avoid confusion.lib/kv-store.ts (3)
1-11: LGTM! Clear documentation and appropriate imports.The file header clearly documents the server-side-only restriction and the imports are appropriate for the SDK-based approach.
93-96: Good: TTL validation enforces minimum threshold.The TTL validation correctly enforces a minimum of 60 seconds and throws an error immediately, preventing invalid configurations from reaching the KV service.
103-106: Verify silent error handling is intentional.The
setKVValuefunction catches all errors and returns a boolean, making it impossible for callers to distinguish between different failure scenarios (authentication failures, quota issues, invalid store names, network errors, etc.). Most callers do capture the return value, but without granular error information, they can only know that an operation failed, not why. Additionally, at least one caller (lib/tutorial/state-manager.ts:98incompleteTutorial()) ignores the return value entirely, swallowing errors silently. Consider whether the function should either throw specific error types or return a result object that includes error details (similar to howgetKVValuereturnsKVGetResultwithexists/data, but improved to include error context).
// Initialize the KV service
function initializeKVService() {
    if (!process.env.AGENTUITY_SDK_KEY || !process.env.AGENTUITY_REGION) {
        throw new Error('AGENTUITY_SDK_KEY and AGENTUITY_REGION environment variables are required');
    }

    return null;
    const adapter = createServerFetchAdapter({
        headers: {
            Authorization: `Bearer ${process.env.AGENTUITY_SDK_KEY}`
        },
    }, logger);

    const serviceUrls = getServiceUrls(process.env.AGENTUITY_REGION);
    return new KeyValueStorageService(serviceUrls.keyvalue, adapter);
}
🛠️ Refactor suggestion | 🟠 Major
Service instance created on every operation causes performance overhead.
initializeKVService() is called on every KV operation (lines 63, 99, 122), creating a new KeyValueStorageService instance each time. This introduces unnecessary overhead for service initialization, adapter creation, and object allocation.
Consider caching the service instance with lazy initialization:
♻️ Proposed refactor: Cache service instance
+// Cache the KV service instance
+let kvServiceInstance: KeyValueStorageService | null = null;
+
// Initialize the KV service
function initializeKVService() {
+ if (kvServiceInstance) {
+ return kvServiceInstance;
+ }
+
if (!process.env.AGENTUITY_SDK_KEY || !process.env.AGENTUITY_REGION) {
throw new Error('AGENTUITY_SDK_KEY and AGENTUITY_REGION environment variables are required');
}
const adapter = createServerFetchAdapter({
headers: {
Authorization: `Bearer ${process.env.AGENTUITY_SDK_KEY}`
},
}, logger);
const serviceUrls = getServiceUrls(process.env.AGENTUITY_REGION);
- return new KeyValueStorageService(serviceUrls.keyvalue, adapter);
+ kvServiceInstance = new KeyValueStorageService(serviceUrls.keyvalue, adapter);
+ return kvServiceInstance;
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Initialize the KV service
function initializeKVService() {
    if (!process.env.AGENTUITY_SDK_KEY || !process.env.AGENTUITY_REGION) {
        throw new Error('AGENTUITY_SDK_KEY and AGENTUITY_REGION environment variables are required');
    }
    return null;
    const adapter = createServerFetchAdapter({
        headers: {
            Authorization: `Bearer ${process.env.AGENTUITY_SDK_KEY}`
        },
    }, logger);
    const serviceUrls = getServiceUrls(process.env.AGENTUITY_REGION);
    return new KeyValueStorageService(serviceUrls.keyvalue, adapter);
}
// Cache the KV service instance
let kvServiceInstance: KeyValueStorageService | null = null;
// Initialize the KV service
function initializeKVService() {
    if (kvServiceInstance) {
        return kvServiceInstance;
    }
    if (!process.env.AGENTUITY_SDK_KEY || !process.env.AGENTUITY_REGION) {
        throw new Error('AGENTUITY_SDK_KEY and AGENTUITY_REGION environment variables are required');
    }
    const adapter = createServerFetchAdapter({
        headers: {
            Authorization: `Bearer ${process.env.AGENTUITY_SDK_KEY}`
        },
    }, logger);
    const serviceUrls = getServiceUrls(process.env.AGENTUITY_REGION);
    kvServiceInstance = new KeyValueStorageService(serviceUrls.keyvalue, adapter);
    return kvServiceInstance;
}
🤖 Prompt for AI Agents
In @lib/kv-store.ts around lines 34 - 48, initializeKVService() currently
constructs a new KeyValueStorageService and adapter on every call, causing
unnecessary allocation and overhead; change it to lazily cache the service
instance in module scope (e.g., a private variable) and return that cached
instance on subsequent calls so adapter and KeyValueStorageService are created
only once; update initializeKVService (or provide a new getKVService wrapper) to
perform the one-time creation using the existing createServerFetchAdapter,
getServiceUrls, and KeyValueStorageService logic and then return the cached
instance for all callers (ensure the env checks remain on first init).
    } catch (error) {
        console.error(`Failed to get KV value for key '${key}':`, error);
        return {
            success: false,
            error: error instanceof Error ? error.message : 'Unknown error occurred'
            exists: false
        };
    }
🧩 Analysis chain
🏁 Script executed:
rg -nP --type=ts -C3 'await\s+getKVValue\s*\('Repository: agentuity/docs
Length of output: 40
🏁 Script executed:
rg -nP --type=ts 'getKVValue'Repository: agentuity/docs
Length of output: 1990
🏁 Script executed:
cat -n app/api/sessions/route.ts | head -50Repository: agentuity/docs
Length of output: 2528
🏁 Script executed:
cat -n app/api/sessions/[sessionId]/route.ts | head -50Repository: agentuity/docs
Length of output: 1996
🏁 Script executed:
cat -n lib/kv-store.tsRepository: agentuity/docs
Length of output: 4334
Reconsider error handling strategy to distinguish between missing data and service errors.
All errors are silently caught and logged to console, with exists: false returned regardless of error type. This masks critical failures (network issues, auth failures, missing environment variables) as missing data. Callers treat exists: false as "key doesn't exist" — for example, [sessionId]/route.ts returns a 404 when the actual issue may be a service error that should return 5xx.
This pattern prevents callers from:
- Returning appropriate HTTP status codes (distinguishing 404 from 500)
- Implementing error-specific handling logic
- Properly diagnosing production issues beyond console logs
Consider either propagating errors or returning a richer result type (e.g., { exists?: boolean; error?: string }) so callers can handle different failure modes appropriately.
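
A minimal sketch of that richer result shape and how a caller could map it to distinct HTTP statuses; the `getKVValueSafe` name, the injected `lookup` parameter, and the exact field names are illustrative, not the current implementation:

```ts
// Hypothetical richer result: lets callers tell "missing key" apart from "service failure".
export interface KVGetResultWithError<T> {
    exists: boolean;
    data?: T;
    error?: string; // set only when the lookup itself failed
}

// `lookup` stands in for the SDK call currently made inside getKVValue.
export async function getKVValueSafe<T>(
    key: string,
    lookup: (key: string) => Promise<T | undefined>
): Promise<KVGetResultWithError<T>> {
    try {
        const value = await lookup(key);
        return value === undefined ? { exists: false } : { exists: true, data: value };
    } catch (err) {
        console.error(`KV lookup failed for key '${key}':`, err);
        return { exists: false, error: err instanceof Error ? err.message : 'Unknown error' };
    }
}

// A route handler can then distinguish 404 from 5xx:
//   const result = await getKVValueSafe<Session>(key, fetchSession);
//   if (result.error) return new Response('Storage error', { status: 500 });
//   if (!result.exists) return new Response('Not found', { status: 404 });
```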
🤖 Prompt for AI Agents
In @lib/kv-store.ts around lines 70 - 75, The catch block that currently logs
the error and returns { exists: false } (catch(error) { console.error(...);
return { exists: false }; }) masks service errors as “key not found”; change the
error handling in the KV access function so it no longer conflates failures with
missing keys—either rethrow the caught error (propagate the exception) or return
a richer result shape (e.g., include an error string or code like { exists?:
boolean; error?: string }) so callers such as the [sessionId]/route.ts handler
can distinguish 404 from 5xx and handle/translate errors appropriately; ensure
you update the function signature and any callers to handle the new behavior.
    } catch (error) {
        return {
            success: false,
            error: error instanceof Error ? error.message : 'Unknown error occurred'
        };
    }
}

/**
 * Search for keys in the KV store by keyword pattern
 * @param keyword - The keyword pattern to search for
 * @param options - Optional configuration
 * @returns Promise<KVStoreResponse<Array<{key: string, value: any, metadata?: any}>>>
 */
export async function searchKVByKeyword<T = any>(
    keyword: string,
    options: KVStoreOptions = {}
): Promise<KVStoreResponse<Array<{key: string, value: T, metadata?: any}>>> {
    const { storeName } = options;
    const finalStoreName = storeName || config.defaultStoreName;

    // Validate API key
    if (!process.env.AGENTUITY_API_KEY) {
        return {
            success: false,
            error: 'AGENTUITY_API_KEY environment variable is required'
        };
    }

    try {
        const url = `${config.baseUrl}/sdk/kv/search/${encodeURIComponent(finalStoreName)}/${encodeURIComponent(keyword)}`;

        const response = await fetch(url, {
            method: 'GET',
            headers: {
                'Authorization': `Bearer ${process.env.AGENTUITY_API_KEY}`,
                'Content-Type': 'application/json',
                'User-Agent': 'Next.js KV Client'
            }
        });

        if (!response.ok) {
            const errorText = await response.text();
            return {
                success: false,
                error: `HTTP ${response.status}: ${errorText}`,
                statusCode: response.status
            };
        }

        const data = await response.text();

        try {
            const jsonData = JSON.parse(data);
            return {
                success: true,
                data: jsonData,
                statusCode: response.status
            };
        } catch (parseError) {
            return {
                success: false,
                error: 'Failed to parse search results as JSON',
                statusCode: response.status
            };
        }

    } catch (error) {
        return {
            success: false,
            error: error instanceof Error ? error.message : 'Unknown error occurred'
        };
        console.error(`Failed to delete KV value for key '${key}':`, error);
        return false;
    }
🧩 Analysis chain
🏁 Script executed:
cat -n lib/kv-store.ts | head -150Repository: agentuity/docs
Length of output: 4334
🏁 Script executed:
rg -nP --type=ts -C3 'await\s+deleteKVValue\s*\('Repository: agentuity/docs
Length of output: 594
🏁 Script executed:
cat -n app/api/sessions/[sessionId]/route.ts | sed -n '140,160p'Repository: agentuity/docs
Length of output: 869
🏁 Script executed:
rg -nP --type=ts 'await\s+(deleteKVValue|setKVValue|getKVValue)\s*\(' --max-count=20Repository: agentuity/docs
Length of output: 1429
🏁 Script executed:
cat -n app/api/sessions/[sessionId]/messages/route.ts | sed -n '80,85p; 98,105p; 196,205p'Repository: agentuity/docs
Length of output: 1100
Consider returning KVDeleteResult from deleteKVValue for consistency with getKVValue.
The KVDeleteResult interface is defined (lines 27-29) but unused. getKVValue returns KVGetResult<T> with {exists: boolean, data?: T}, allowing callers to distinguish between "not found" and "error" states. For consistency, deleteKVValue should return Promise<KVDeleteResult> instead of Promise<boolean>. Note that setKVValue has the same issue—consider whether it should also return a result object rather than a simple boolean.
The error handling pattern (catch, log, return false) is intentional and working as designed—callers like the session deletion endpoint properly check the return value and respond with appropriate HTTP errors.
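
A short sketch of what that could look like, mirroring the `KVGetResult` pattern; the `{ deleted, error? }` field names and the injected `remove` parameter are illustrative, not the current interface:

```ts
// Illustrative result shape for deletes, so callers can tell "nothing deleted" from "delete failed".
export interface KVDeleteOutcome {
    deleted: boolean;
    error?: string; // present only when the delete operation itself failed
}

// `remove` stands in for the SDK call currently made inside deleteKVValue.
export async function deleteKVValueSafe(
    key: string,
    remove: (key: string) => Promise<boolean>
): Promise<KVDeleteOutcome> {
    try {
        return { deleted: await remove(key) };
    } catch (err) {
        console.error(`Failed to delete KV value for key '${key}':`, err);
        return { deleted: false, error: err instanceof Error ? err.message : 'Unknown error' };
    }
}
```

The same shape would also work for `setKVValue` if it is given a result object rather than a bare boolean.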
🤖 Prompt for AI Agents
In @lib/kv-store.ts around lines 125 - 128, The deleteKVValue function currently
returns Promise<boolean> while a KVDeleteResult interface exists; update
deleteKVValue to return Promise<KVDeleteResult> with shape {deleted: boolean,
error?: string} (mirroring getKVValue's KVGetResult pattern) so callers can
distinguish "not found" vs "error", adjust catch to populate error and
deleted=false, and update any callers accordingly; also consider making
setKVValue follow the same pattern (Promise<KVDeleteResult>-style result or a
KVSetResult) for consistency with getKVValue and to surface error details from
setKVValue.
- Updated the agent pulse API to streamline message handling and response streaming.
- Introduced a new state manager to handle tutorial actions and fetch complete tutorial step data.
- Refactored the streaming processor to format AI responses as SSE-compatible chunks.
- Improved error handling and logging throughout the agent pulse services.
- Added tutorial data handling to provide users with relevant content during interactions.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In @app/api/sessions/[sessionId]/messages/route.ts:
- Around line 169-185: The async onTutorialData handler can throw and is invoked
without awaiting in callAgentPulseStreaming, risking unhandled rejections;
modify the invocation to await callbacks.onTutorialData?.(data) or wrap the
handler body in a try-catch so TutorialStateManager.updateTutorialProgress
errors are caught and logged before calling controller.enqueue/encoder.encode,
ensuring any thrown error is handled and does not become an unhandled promise
rejection.
In @doc-agents/src/agent/agent_pulse/context-builder.ts:
- Around line 85-86: The system prompt rule incorrectly references
startTutorialById; update the prompt text to reference the actual tool name
startTutorialAtStep (used in tools.ts and around lines 80-81) and adjust the
rule wording so it instructs using startTutorialAtStep only when the user
selects a tutorial, defaulting the step to 1 for new tutorials and validating
that the step is between 1 and the tutorial's totalSteps.
In @doc-agents/src/agent/agent_pulse/streaming-processor.ts:
- Around line 74-75: The call site for handleTutorialState in
streaming-processor.ts passes logger directly but handleTutorialState expects a
ctx object with a logger property (it calls ctx.logger.info, etc.); fix by
passing an object containing the logger (e.g., { logger }) or the existing ctx
object instead of the bare logger so the parameter shape matches
handleTutorialState, and update any other nearby calls to handleTutorialState to
use the same ctx-shaped argument.
- Around line 9-13: The function signature uses StreamTextResult<> with an empty
type parameter which is a parse/TS error; update the signature in
createStreamingProcessor to supply an appropriate generic (e.g.,
StreamTextResult<string> or StreamTextResult<any> / StreamTextResult<unknown>
depending on expected payload) and adjust any callers or imports accordingly so
the generic is consistent across usages (ensure the import/definition of
StreamTextResult supports the chosen type).
In @doc-agents/src/agent/agent_pulse/tutorial.ts:
- Around line 124-135: The interface TutorialStepResponseData is declared but
not exported, yet it's used as the public return type of getTutorialStep; make
it available to consumers by adding an export to the declaration (export
interface TutorialStepResponseData) so external callers can import the type, and
ensure any related exports/barrel files that should re-export it include this
symbol if necessary; update references to import the exported
TutorialStepResponseData where used.
In @lib/api/services/agentPulse.ts:
- Around line 124-136: The final SSE buffer parsing only handles text-delta and
finish types; update the try block that parses StreamingChunk from buffer (the
buffer variable and JSON.parse(buffer.slice(6))) to also handle other chunk
types consistently by invoking the corresponding callbacks on the callbacks
object: call callbacks.onStatus for type === 'status', callbacks.onTutorialData
for type === 'tutorial-data' and callbacks.onError for type === 'error'
(preserving the existing callbacks.onTextDelta and callbacks.onFinish handling),
and add a default branch to log or ignore unknown chunk.type values; keep the
surrounding try/catch intact to report parse errors.
🧹 Nitpick comments (13)
app/api/sessions/[sessionId]/messages/route.ts (1)
14-16: Unused constantAGENT_REQUEST_TIMEOUT.The
AGENT_REQUEST_TIMEOUTconstant is defined but never used in this file. The timeout is now handled withincallAgentPulseStreaming(which usesAGENT_PULSE_TIMEOUT= 30000ms internally).🧹 Remove unused constant
// Constants const DEFAULT_CONVERSATION_HISTORY_LIMIT = 10; -const AGENT_REQUEST_TIMEOUT = 30000; // 30 secondsdoc-agents/src/api/agent_pulse/route.ts (2)
14-38: Potential code duplication with existing parser.This
parseAgentRequestfunction appears to duplicate logic fromagent-docs/src/agents/agent-pulse/request/parser.ts. Consider extracting to a shared utility to maintain consistency.Additionally, the author field cast on line 24 may produce invalid values (e.g.,
'SYSTEM'.toUpperCase()yields'SYSTEM'which doesn't match'USER' | 'ASSISTANT').🔧 Suggested type-safe normalization
conversationHistory = body.conversationHistory.map((msg: any) => ({ - author: (msg.author || msg.role || 'USER').toUpperCase(), + author: ((msg.author || msg.role || 'USER').toUpperCase() === 'ASSISTANT' ? 'ASSISTANT' : 'USER') as 'USER' | 'ASSISTANT', content: msg.content || '', }));
73-86: Fragile error status code determination.
Using error.message.includes('Invalid') to distinguish 400 vs 500 errors is brittle. Consider using a custom error class or error codes for more reliable classification.
// Example: Define a ValidationError class
class ValidationError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'ValidationError';
  }
}
// Then in catch block:
const statusCode = error instanceof ValidationError ? 400 : 500;
doc-agents/src/agent/agent_pulse/state-manager.ts (2)
40-46: Remove unnecessary else clause.
As indicated by static analysis, the else block is unnecessary because the if branch at line 26 ends with return (implicitly via break followed by return null). The code after if (tutorialStep.success && tutorialStep.data) can be flattened.
🧹 Suggested fix
 if (tutorialStep.success && tutorialStep.data) {
   const tutorialData: TutorialData = {
     tutorialId: action.tutorialId,
     totalSteps: action.totalSteps,
     currentStep: action.currentStep,
     tutorialStep: {
       title: tutorialStep.data.tutorialStep.title,
       mdx: tutorialStep.data.tutorialStep.mdx,
       snippets: tutorialStep.data.tutorialStep.snippets,
       totalSteps: tutorialStep.data.tutorialStep.totalSteps,
     },
   };
   ctx.logger.info('Tutorial state processed successfully');
   return tutorialData;
-  } else {
-    // Handle API errors gracefully
-    ctx.logger.error('Failed to fetch tutorial step: %s', tutorialStep.error || 'Unknown error');
-    if (tutorialStep.details) {
-      ctx.logger.error('Error details: %s', JSON.stringify(tutorialStep.details));
-    }
   }
+  // Handle API errors gracefully
+  ctx.logger.error('Failed to fetch tutorial step: %s', tutorialStep.error || 'Unknown error');
+  if (tutorialStep.details) {
+    ctx.logger.error('Error details: %s', JSON.stringify(tutorialStep.details));
+  }
 }
 break;
54-60: Consider returning null instead of re-throwing.
Re-throwing the error may cause unhandled promise rejections in the streaming context (the caller in streaming-processor.ts wraps this in try-catch, but returning null would be more consistent with the function's return signature and error handling pattern).
🛡️ Suggested change
 } catch (error) {
   ctx.logger.error(
     'Failed to handle tutorial state: %s',
     error instanceof Error ? error.message : String(error)
   );
-  throw error;
+  return null;
 }
doc-agents/src/agent/agent_pulse/context-builder.ts (1)
1-2: Unused import getTutorialMeta.
getTutorialMeta is imported but not used in this file.
🧹 Remove unused import
-import { getTutorialList, getTutorialMeta, type Tutorial } from './tutorial';
+import { getTutorialList, type Tutorial } from './tutorial';
lib/api/services/agentPulse.ts (2)
8-16: Type duplication with doc-agents package.
ConversationMessage and TutorialState interfaces duplicate definitions from doc-agents/src/agent/agent_pulse/types.ts. Consider sharing types via a common package or re-exporting to ensure consistency.
34-40: Async callbacks are not awaited.
The callbacks in AgentPulseCallbacks may be async (as used in the messages route), but callAgentPulseStreaming invokes them without await. This means any async work in callbacks becomes fire-and-forget, and errors become unhandled rejections.
Consider either:
- Documenting that callbacks must be synchronous
- Awaiting callbacks (requires making them Promise<void> typed)
🔧 Option: Await callbacks
 export interface AgentPulseCallbacks {
-  onTextDelta?: (text: string) => void;
-  onStatus?: (message: string, category?: string) => void;
-  onTutorialData?: (data: any) => void;
-  onFinish?: () => void;
-  onError?: (error: string) => void;
+  onTextDelta?: (text: string) => void | Promise<void>;
+  onStatus?: (message: string, category?: string) => void | Promise<void>;
+  onTutorialData?: (data: any) => void | Promise<void>;
+  onFinish?: () => void | Promise<void>;
+  onError?: (error: string) => void | Promise<void>;
 }
Then in the processing loop:
-  callbacks.onTextDelta?.(chunk.textDelta);
+  await callbacks.onTextDelta?.(chunk.textDelta);
doc-agents/src/agent/agent_pulse/tutorial.ts (3)
1-5: Circular dependency between tutorial.ts and types.ts.
This file imports from ./types, while types.ts imports TutorialSnippet from this file. This circular dependency can cause issues with some bundlers and module resolution. Consider moving TutorialSnippet to types.ts alongside the other shared types to break the cycle.
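A minimal sketch of that restructuring, assuming TutorialSnippet does not itself depend on anything in tutorial.ts; the field names below are placeholders, not the project's real definition:

// types.ts - now the single owner of the shared shape (fields are placeholders)
export interface TutorialSnippet {
  language: string;
  code: string;
  filename?: string;
}

// tutorial.ts - imports only from ./types, so the dependency points one way
import type { ApiResponse, TutorialSnippet } from './types';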
7-47: Consider adding request timeout for reliability.
The fetch call has no timeout, which could cause requests to hang indefinitely if the Tutorial API is unresponsive. Consider using AbortController with a timeout.
Example with timeout
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 10000); // 10s timeout
try {
  const response = await fetch(`${TUTORIAL_API_BASE_URL}/api/tutorials`, { signal: controller.signal });
  // ... rest of logic
} finally {
  clearTimeout(timeout);
}
59-59: Consider encoding tutorialId in URL path.
If tutorialId could contain special characters (e.g., /, ?, #), the URL would be malformed. Use encodeURIComponent for safety.
- const response = await fetch(`${TUTORIAL_API_BASE_URL}/api/tutorials/${tutorialId}`);
+ const response = await fetch(`${TUTORIAL_API_BASE_URL}/api/tutorials/${encodeURIComponent(tutorialId)}`);
1-1: Circular import creates potential bundling issues.
Importing TutorialSnippet from ./tutorial while tutorial.ts imports from this file creates a circular dependency. Consider defining TutorialSnippet in this types file to centralize all shared types and break the cycle.
41-52: Note: totalSteps appears at two levels.
Both TutorialData.totalSteps and TutorialData.tutorialStep.totalSteps exist. If this duplication is intentional to match the frontend's expected format, consider adding a comment clarifying they should always be in sync, or document why both are needed.
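If the duplication is kept, one way to make the invariant explicit is a doc comment on both fields. The shape below is reconstructed from the state-manager code quoted earlier in this review and is only a sketch:

export interface TutorialData {
  tutorialId: string;
  /** Total number of steps; must stay equal to tutorialStep.totalSteps. */
  totalSteps: number;
  currentStep: number;
  tutorialStep: {
    title: string;
    mdx: string;
    snippets: TutorialSnippet[];
    /** Duplicated for the frontend payload; keep equal to the outer totalSteps. */
    totalSteps: number;
  };
}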
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- app/api/sessions/[sessionId]/messages/route.ts
- doc-agents/src/agent/agent_pulse/agent.ts
- doc-agents/src/agent/agent_pulse/context-builder.ts
- doc-agents/src/agent/agent_pulse/state-manager.ts
- doc-agents/src/agent/agent_pulse/streaming-processor.ts
- doc-agents/src/agent/agent_pulse/tutorial.ts
- doc-agents/src/agent/agent_pulse/types.ts
- doc-agents/src/api/agent_pulse/route.ts
- lib/api/services/agentPulse.ts
- lib/api/services/index.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- lib/api/services/index.ts
🧰 Additional context used
🧬 Code graph analysis (8)
doc-agents/src/api/agent_pulse/route.ts (2)
doc-agents/src/agent/agent_pulse/types.ts (1)
ConversationMessage(3-6)
agent-docs/src/agents/agent-pulse/request/parser.ts (1)
parseAgentRequest(4-46)
doc-agents/src/agent/agent_pulse/agent.ts (4)
doc-agents/src/agent/agent_pulse/types.ts (2)
Action(34-39)
ConversationMessage(3-6)
doc-agents/src/agent/agent_pulse/tools.ts (1)
createTools(15-78)
doc-agents/src/agent/agent_pulse/context-builder.ts (2)
buildContext(7-32)
buildSystemPrompt(63-118)
doc-agents/src/agent/agent_pulse/streaming-processor.ts (1)
createStreamingProcessor(9-109)
doc-agents/src/agent/agent_pulse/state-manager.ts (3)
doc-agents/src/agent/agent_pulse/types.ts (2)
Action(34-39)
TutorialData(42-52)
agent-docs/src/agents/agent-pulse/state.ts (1)
ActionType(46-46)
doc-agents/src/agent/agent_pulse/tutorial.ts (1)
getTutorialStep(137-190)
lib/api/services/agentPulse.ts (3)
doc-agents/src/agent/agent_pulse/types.ts (3)
ConversationMessage(3-6)
TutorialState(8-11)
StreamingChunk(54-62)
lib/api/services/index.ts (5)
AgentPulseRequest(9-9)
StreamingChunk(9-9)
AgentPulseCallbacks(9-9)
callAgentPulseStreaming(8-8)
agentPulseService(8-8)
lib/config.ts (1)
config(6-10)
doc-agents/src/agent/agent_pulse/context-builder.ts (2)
doc-agents/src/agent/agent_pulse/types.ts (2)
TutorialState(8-11)
Tutorial(13-20)
doc-agents/src/agent/agent_pulse/tutorial.ts (2)
getTutorialList(7-47)
Tutorial(3-3)
doc-agents/src/agent/agent_pulse/types.ts (2)
doc-agents/src/agent/agent_pulse/tutorial.ts (2)
Tutorial(3-3)
TutorialSnippet(115-122)
lib/api/services/agentPulse.ts (1)
StreamingChunk(24-32)
doc-agents/src/agent/agent_pulse/tutorial.ts (1)
doc-agents/src/agent/agent_pulse/types.ts (2)
ApiResponse(22-28)
Tutorial(13-20)
app/api/sessions/[sessionId]/messages/route.ts (7)
lib/config.ts (1)
config(6-10)
app/chat/types.ts (3)
Session(54-54)
TutorialData(27-27)
Message(37-37)
lib/api/services/index.ts (2)
titleGeneratorService(6-6)
callAgentPulseStreaming(8-8)
lib/api/services/titleGenerator.ts (1)
titleGeneratorService(42-44)
lib/kv-store.ts (2)
getKVValue(56-76)
setKVValue(85-107)
lib/api/services/agentPulse.ts (1)
callAgentPulseStreaming(48-145)
app/chat/utils/dateUtils.ts (1)
getCurrentTimestamp(35-37)
🪛 Biome (2.1.2)
doc-agents/src/agent/agent_pulse/streaming-processor.ts
[error] 10-10: Expected a type parameter but instead found '>'.
Expected a type parameter here.
(parse)
doc-agents/src/agent/agent_pulse/state-manager.ts
[error] 40-46: This else clause can be omitted because previous branches break early.
Safe fix: Omit the else clause.
(lint/style/noUselessElse)
🔇 Additional comments (18)
app/api/sessions/[sessionId]/messages/route.ts (2)
57-88: LGTM - Background title generation is well-designed.
The function properly handles race conditions by re-fetching the session before updating, includes appropriate null checks, and uses a fire-and-forget pattern (void) at the call site. Error handling is in place.
248-254: LGTM - SSE response setup is correct.
Proper headers for server-sent events: Content-Type: text/event-stream, Cache-Control: no-cache, and Connection: keep-alive.
doc-agents/src/api/agent_pulse/route.ts (1)
58-72: LGTM - Streaming response setup.
The agent invocation and SSE response headers are correctly configured.
doc-agents/src/agent/agent_pulse/agent.ts (3)
10-31: LGTM - Agent schema definition.
Input schema is well-structured with proper optional fields and descriptions. Using s.any() for output is acceptable since the handler returns a ReadableStream.
63-74: LGTM - Logging callbacks for debugging.
The onStepFinish and onFinish callbacks provide comprehensive debugging information for the streaming flow.
55-56: No action needed. The model gpt-4.1 is a valid OpenAI model name (launched April 14, 2025) and will not cause runtime errors.
Likely an incorrect or invalid review comment.
doc-agents/src/agent/agent_pulse/state-manager.ts (1)
8-25: LGTM - Function signature and null checks.
The function correctly handles null state and provides detailed logging. Type annotations are appropriate.
doc-agents/src/agent/agent_pulse/streaming-processor.ts (2)
28-39: Runtime type workaround noted.
The comment on lines 30-31 indicates a runtime mismatch with AI SDK types (textDelta vs text). This defensive fallback is reasonable, but consider filing an issue with the AI SDK if the types are incorrect.
94-106: LGTM - Error handling in stream processor.
Proper error chunk emission and stream closure on failure.
doc-agents/src/agent/agent_pulse/context-builder.ts (2)
7-32: LGTM - Context building with graceful fallback.
Good error handling that returns fallback context on API failure rather than throwing.
37-50: LGTM - Tutorial info helper.
Handles edge cases well: missing tutorialId, unknown tutorial, completed tutorial, and in-progress tutorial.
lib/api/services/agentPulse.ts (1)
48-72: LGTM - Streaming setup with timeout.
Good use of AbortController with timeout, proper headers including optional Bearer token, and cleanup in the finally block.
doc-agents/src/agent/agent_pulse/tutorial.ts (2)
115-122: LGTM!
The TutorialSnippet interface is well-defined with appropriate optional fields.
137-190: LGTM!
The function follows the same consistent pattern as the other API functions with proper error handling and logging.
doc-agents/src/agent/agent_pulse/types.ts (4)
3-20: LGTM!
The ConversationMessage, TutorialState, and Tutorial interfaces are well-defined with appropriate types.
22-28: LGTM!
The generic ApiResponse<T> interface provides a consistent structure for API responses with proper error handling fields.
30-39: LGTM!
The ActionType enum and Action interface are appropriately structured. The single enum value is acceptable as a foundation for future action types.
54-62: Consolidate StreamingChunk interface to a single location.
StreamingChunk is defined in both doc-agents/src/agent/agent_pulse/types.ts and lib/api/services/agentPulse.ts. The definitions differ on the tutorialData field: the first uses TutorialData (strongly typed) while the second uses any (untyped). This duplication creates a maintenance burden and risks type drift. Move the definition to a shared location and export it from there.
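One way the consolidation could look, assuming the strongly typed doc-agents definition becomes the single source and the lib copy turns into a re-export. The union members and optional fields below are taken from the chunk types handled elsewhere in this review; the import path is illustrative and would need a real path alias or shared package:

// doc-agents/src/agent/agent_pulse/types.ts - keep the strongly typed version
export interface StreamingChunk {
  type: 'text-delta' | 'status' | 'tutorial-data' | 'finish' | 'error';
  textDelta?: string;
  message?: string;
  category?: string;
  tutorialData?: TutorialData;
  error?: string;
}

// lib/api/services/agentPulse.ts - re-export instead of redeclaring
// (the '@doc-agents/...' alias is hypothetical)
export type { StreamingChunk } from '@doc-agents/agent/agent_pulse/types';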
onTutorialData: async (data) => {
  finalTutorialData = data;

  // Update user's tutorial progress
  await TutorialStateManager.updateTutorialProgress(
    userId,
    data.tutorialId,
    data.currentStep,
    data.totalSteps
  );

  controller.enqueue(
    encoder.encode(
      `data: ${JSON.stringify({ type: 'tutorial-data', tutorialData: data })}\n\n`
    )
  );
},
Async callback error may be silently lost.
The onTutorialData callback is async but callAgentPulseStreaming invokes it via callbacks.onTutorialData?.(data) without await. If TutorialStateManager.updateTutorialProgress throws, the error becomes an unhandled promise rejection.
Consider wrapping the async operation in a try-catch:
🛡️ Suggested fix
onTutorialData: async (data) => {
finalTutorialData = data;
// Update user's tutorial progress
- await TutorialStateManager.updateTutorialProgress(
- userId,
- data.tutorialId,
- data.currentStep,
- data.totalSteps
- );
+ try {
+ await TutorialStateManager.updateTutorialProgress(
+ userId,
+ data.tutorialId,
+ data.currentStep,
+ data.totalSteps
+ );
+ } catch (err) {
+ console.error('[tutorial-progress] update failed:', err instanceof Error ? err.message : String(err));
+ }
 controller.enqueue(
🤖 Prompt for AI Agents
In @app/api/sessions/[sessionId]/messages/route.ts around lines 169 - 185, The
async onTutorialData handler can throw and is invoked without awaiting in
callAgentPulseStreaming, risking unhandled rejections; modify the invocation to
await callbacks.onTutorialData?.(data) or wrap the handler body in a try-catch
so TutorialStateManager.updateTutorialProgress errors are caught and logged
before calling controller.enqueue/encoder.encode, ensuring any thrown error is
handled and does not become an unhandled promise rejection.
=== TOOL-USAGE RULES (must follow) ===
- startTutorialById must only be used when user select a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial.
Tool name mismatch in system prompt.
The prompt references startTutorialById but the actual tool is named startTutorialAtStep (as seen in lines 80-81 and in tools.ts). This inconsistency may confuse the LLM.
🐛 Fix tool name reference
=== TOOL-USAGE RULES (must follow) ===
-- startTutorialById must only be used when user select a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial.
+- startTutorialAtStep must only be used when user select a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial.
🤖 Prompt for AI Agents
In @doc-agents/src/agent/agent_pulse/context-builder.ts around lines 85 - 86,
The system prompt rule incorrectly references startTutorialById; update the
prompt text to reference the actual tool name startTutorialAtStep (used in
tools.ts and around lines 80-81) and adjust the rule wording so it instructs
using startTutorialAtStep only when the user selects a tutorial, defaulting the
step to 1 for new tutorials and validating that the step is between 1 and the
tutorial's totalSteps.
// Process tutorial state and fetch complete tutorial data if available
const finalTutorialData = await handleTutorialState(state, logger);
Parameter mismatch for handleTutorialState.
handleTutorialState expects a ctx object with a logger property (uses ctx.logger.info, etc.), but this call passes logger directly as the second argument. This will cause runtime errors.
🐛 Suggested fix
// Process tutorial state and fetch complete tutorial data if available
- const finalTutorialData = await handleTutorialState(state, logger);
+ const finalTutorialData = await handleTutorialState(state, { logger });
🤖 Prompt for AI Agents
In @doc-agents/src/agent/agent_pulse/streaming-processor.ts around lines 74 -
75, The call site for handleTutorialState in streaming-processor.ts passes
logger directly but handleTutorialState expects a ctx object with a logger
property (it calls ctx.logger.info, etc.); fix by passing an object containing
the logger (e.g., { logger }) or the existing ctx object instead of the bare
logger so the parameter shape matches handleTutorialState, and update any other
nearby calls to handleTutorialState to use the same ctx-shaped argument.
interface TutorialStepResponseData {
  tutorialId: string;
  totalSteps: number;
  currentStep: number;
  tutorialStep: {
    title: string;
    mdx: string;
    snippets: TutorialSnippet[];
    totalSteps: number;
    estimatedTime?: string;
  };
}
Export TutorialStepResponseData for consumers.
This interface is used in the public return type of getTutorialStep (line 141) but is not exported. Consumers won't be able to properly type variables receiving this data.
-interface TutorialStepResponseData {
+export interface TutorialStepResponseData {
🤖 Prompt for AI Agents
In @doc-agents/src/agent/agent_pulse/tutorial.ts around lines 124 - 135, The
interface TutorialStepResponseData is declared but not exported, yet it's used
as the public return type of getTutorialStep; make it available to consumers by
adding an export to the declaration (export interface TutorialStepResponseData)
so external callers can import the type, and ensure any related exports/barrel
files that should re-export it include this symbol if necessary; update
references to import the exported TutorialStepResponseData where used.
// Process any remaining data
if (buffer.length > 0 && buffer.startsWith('data: ')) {
  try {
    const chunk = JSON.parse(buffer.slice(6)) as StreamingChunk;
    if (chunk.type === 'text-delta' && chunk.textDelta) {
      callbacks.onTextDelta?.(chunk.textDelta);
    } else if (chunk.type === 'finish') {
      callbacks.onFinish?.();
    }
  } catch (error) {
    console.error('Error parsing final SSE chunk:', error);
  }
}
Incomplete SSE handling for remaining buffer.
The final buffer processing only handles text-delta and finish chunk types, but other types like status, tutorial-data, and error could also be in the final buffer. Consider handling all chunk types consistently.
🛡️ Suggested fix
// Process any remaining data
if (buffer.length > 0 && buffer.startsWith('data: ')) {
try {
const chunk = JSON.parse(buffer.slice(6)) as StreamingChunk;
if (chunk.type === 'text-delta' && chunk.textDelta) {
callbacks.onTextDelta?.(chunk.textDelta);
+ } else if (chunk.type === 'status') {
+ callbacks.onStatus?.(chunk.message || '', chunk.category);
+ } else if (chunk.type === 'tutorial-data' && chunk.tutorialData) {
+ callbacks.onTutorialData?.(chunk.tutorialData);
} else if (chunk.type === 'finish') {
callbacks.onFinish?.();
+ } else if (chunk.type === 'error') {
+ callbacks.onError?.(chunk.error || 'Unknown error');
}
} catch (error) {
console.error('Error parsing final SSE chunk:', error);
}
}🤖 Prompt for AI Agents
In @lib/api/services/agentPulse.ts around lines 124 - 136, The final SSE buffer
parsing only handles text-delta and finish types; update the try block that
parses StreamingChunk from buffer (the buffer variable and
JSON.parse(buffer.slice(6))) to also handle other chunk types consistently by
invoking the corresponding callbacks on the callbacks object: call
callbacks.onStatus for type === 'status', callbacks.onTutorialData for type ===
'tutorial-data' and callbacks.onError for type === 'error' (preserving the
existing callbacks.onTextDelta and callbacks.onFinish handling), and add a
default branch to log or ignore unknown chunk.type values; keep the surrounding
try/catch intact to report parse errors.
- Introduced a new API for managing user sessions, including endpoints for creating, retrieving, updating, and deleting sessions.
- Added message handling within sessions to allow users to append messages to existing sessions.
- Implemented cookie-based authentication middleware to secure session endpoints.
- Refactored session persistence logic to streamline data storage and retrieval.
- Enhanced error handling and logging for improved debugging and user feedback.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/chat/services/sessionService.ts (1)
292-296: Inconsistent error handling for non-JSON error responses.
Unlike addMessage (lines 236-247), this code doesn't handle the case where the error response body isn't valid JSON. If the server returns an HTML error page (e.g., a 502 gateway error), response.json() will throw and bubble up to the outer catch, resulting in a confusing "Unexpected token" error instead of the HTTP status.
Proposed fix
 if (!response.ok) {
-  const errorData = await response.json();
-  callbacks.onError?.(errorData.error || `Error: ${response.status}`);
+  try {
+    const errorData = await response.json();
+    callbacks.onError?.(errorData.error || `Error: ${response.status}`);
+  } catch {
+    callbacks.onError?.(`Error: ${response.status} ${response.statusText}`);
+  }
   return;
 }
🤖 Fix all issues with AI agents
In `@app/chat/layout.tsx`:
- Around line 101-106: The Retry button element that calls setSize(1) should
include an explicit type attribute to avoid accidental form submission; update
the <button ...> in the component (the element invoking setSize(1)) to be
<button type="button" ...> so it is always a non-submit button even if later
moved inside a form.
In `@doc-agents/.agents/skills/README.md`:
- Line 135: Update the generation template/source that produces the table row
for the command entry named "agentuity-cli-project-show" so its description
string reads "Show project details" (plural) instead of "Show project detail";
do not manually edit the autogenerated README.md, change the generator/template
key that maps the command identifier agentuity-cli-project-show to its
description, and leave the REPL/“repl” text unchanged because the LanguageTool
hint is a false positive.
In `@doc-agents/src/api/agent_pulse/route.ts`:
- Around line 117-127: The current title normalization block incorrectly forces
sentence case by calling toLowerCase() on the entire title (variable title),
which destroys proper nouns and acronyms; fix it by removing the .toLowerCase()
call and instead only ensure the first character is uppercased if present while
leaving the rest of title unchanged, keeping the existing trimming,
quote-stripping, and max-length logic (i.e., sanitize quotes, slice to 60 chars,
then set title = title.charAt(0).toUpperCase() + title.slice(1) without
lowercasing).
In `@doc-agents/src/api/sessions/route.ts`:
- Around line 178-228: The PUT route handler defined in
router.put('/:sessionId', cookieAuth, async (c) => { ... }) parses request JSON
directly and lacks the POST-style validation; update this handler to use the
same request validation middleware as POST (e.g., validator({ input:
SessionSchema })) and consume the validated body instead of
c.req.json<Session>(), ensuring you reference the SessionSchema and the
validator call so malformed sessions are rejected before processing and storage.
- Around line 95-145: The session POST handler currently calls new
Date(message.timestamp).toISOString() inside the session.messages map which will
throw RangeError for invalid dates; update the mapping in the router.post(...)
handler to validate the parsed date first (e.g. const date = new
Date(message.timestamp); if (isNaN(date.getTime())) then either omit/normalize
the timestamp (set timestamp: null or delete timestamp) or fallback to a safe
value (e.g. new Date().toISOString()); only call date.toISOString() when date is
valid to prevent exceptions during session creation.
In `@doc-agents/src/middleware/auth.ts`:
- Around line 47-48: Update the TODO comment in auth middleware that currently
reads "// TODO: With clerk, we can set user apayload here sing c.set('user',
payload...)" to correct typos: change "apayload" to "a payload" (and "sing" to
"using") so the comment reads e.g. "// TODO: With clerk, we can set user a
payload here using c.set('user', payload...)".
In `@doc-agents/src/services/stream-persistence.ts`:
- Around line 61-137: In withPersistence, ensure onTutorialProgress failures
don't abort the stream: when you detect tutorial (inside the loop where
extractTutorialData is called), invoke onTutorialProgress(tutorial) inside a
try/catch (or fire-and-forget without awaiting) and on error log via
logger.error (include error details) but do not rethrow; keep
capturedTutorialData assignment and continue processing remaining chunks so the
stream forwarding is not interrupted.
- Around line 162-172: The extractTutorialData function uses the regex
/\{[^}]+\}/ which fails for nested objects; change the extraction to locate the
`"tutorialData":` token, find the first `{` after it, then walk the string
counting opening and closing braces until the matching closing `}` is found to
extract the full JSON substring, then JSON.parse that substring inside the
existing try/catch (keep function name extractTutorialData and preserve null on
parse error). This ensures nested objects are handled correctly instead of
relying on the simple regex.
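A minimal TypeScript sketch of the brace-walking extraction described in the prompt directly above; it shares the prompt's limitation that braces inside string values are not handled:

function extractTutorialData(text: string): unknown | null {
  const marker = '"tutorialData":';
  const markerIndex = text.indexOf(marker);
  if (markerIndex === -1) return null;

  const start = text.indexOf('{', markerIndex + marker.length);
  if (start === -1) return null;

  // Walk the string, counting braces until the object opened at `start` closes.
  let depth = 0;
  for (let i = start; i < text.length; i++) {
    if (text[i] === '{') depth++;
    else if (text[i] === '}') {
      depth--;
      if (depth === 0) {
        try {
          return JSON.parse(text.slice(start, i + 1));
        } catch {
          return null; // malformed JSON
        }
      }
    }
  }
  return null; // the object never closed, e.g. it was split across chunks
}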
♻️ Duplicate comments (4)
doc-agents/package.json (2)
16-20: Pin dependency versions instead of using "latest".This issue was already flagged in a previous review. Using
"latest"for@agentuity/*,@ai-sdk/openai, andaicreates non-deterministic builds.Also applies to: 22-22
30-31: Pin devDependency versions instead of using "latest".This issue was already flagged in a previous review for
@agentuity/cliand@types/bun.doc-agents/src/agent/agent_pulse/streaming-processor.ts (1)
95-96: Parameter mismatch forhandleTutorialState.
handleTutorialStateexpects actxobject with aloggerproperty (it callsctx.logger.info, etc.), but this call passesloggerdirectly as the second argument. This will cause a runtime error.🐛 Suggested fix
// Process tutorial state and fetch complete tutorial data if available - const finalTutorialData = await handleTutorialState(state, logger); + const finalTutorialData = await handleTutorialState(state, { logger });doc-agents/src/middleware/auth.ts (1)
39-45: Use constant-time comparison to prevent timing attacks.The direct
!==comparison on line 42 is vulnerable to timing attacks, allowing an attacker to guess the token byte-by-byte by measuring response times. Usecrypto.timingSafeEqual()for secure comparison.🔒 Suggested fix
+import { timingSafeEqual } from 'crypto'; + // ... const token = authHeader.slice(7); // Validate token matches expected value - if (!expectedToken || token !== expectedToken) { + if (!expectedToken) { + c.var.logger.warn('Invalid bearer token'); + return c.json({ error: 'Invalid bearer token' }, 401); + } + + const tokenBuffer = Buffer.from(token); + const expectedBuffer = Buffer.from(expectedToken); + if (tokenBuffer.length !== expectedBuffer.length || + !timingSafeEqual(tokenBuffer, expectedBuffer)) { c.var.logger.warn('Invalid bearer token'); return c.json({ error: 'Invalid bearer token' }, 401); }
🧹 Nitpick comments (9)
doc-agents/src/services/tutorial-state-manager.ts (1)
115-121: Minor: Use single timestamp for consistency.Creating two separate
Dateinstances could result in slightly different timestamps forcompletedAtandlastAccessedAt. Consider reusing a single value.✨ Suggested improvement
if (state.tutorials[tutorialId]) { + const now = new Date().toISOString(); - state.tutorials[tutorialId].completedAt = new Date().toISOString(); - state.tutorials[tutorialId].lastAccessedAt = new Date().toISOString(); + state.tutorials[tutorialId].completedAt = now; + state.tutorials[tutorialId].lastAccessedAt = now; const key = getTutorialKey(userId); await kv.set(config.kvStoreName, key, state); }app/chat/services/sessionService.ts (2)
317-366: Consider extracting duplicated SSE event handling logic.The SSE parsing logic (handling
text-delta,tutorial-data,finish,session-saved,error) is duplicated between the main streaming loop (lines 317-339) and the buffer remainder processing (lines 343-366). This violates DRY and increases maintenance burden.Proposed refactor
+ private handleSSEEvent( + data: any, + callbacks: { + onTextDelta?: (text: string) => void, + onTutorialData?: (tutorialData: TutorialData) => void, + onFinish?: (session: Session) => void, + onError?: (error: string) => void + } + ): void { + if (data.type === 'text-delta' && data.textDelta) { + callbacks.onTextDelta?.(data.textDelta); + } else if (data.type === 'tutorial-data' && data.tutorialData) { + callbacks.onTutorialData?.(data.tutorialData); + } else if (data.type === 'session-saved' && data.session) { + callbacks.onFinish?.(data.session); + } else if (data.type === 'error') { + callbacks.onError?.(data.error || 'Unknown error'); + } + }Then replace both inline blocks with:
const data = JSON.parse(line.slice(6)); this.handleSSEEvent(data, callbacks);
396-401: Consider extracting the conversation history limit as a named constant.The magic number
10in.slice(-10)represents the conversation history limit. A named constant would improve readability and make it easier to adjust consistently if used elsewhere.const CONVERSATION_HISTORY_LIMIT = 10; // ... .slice(-CONVERSATION_HISTORY_LIMIT)next.config.mjs (1)
23-34: Consider validatingAGENT_BASE_URLor providing a fallback.If
AGENT_BASE_URLis not set, the rewrite destinations will resolve toundefined/api/..., causing routing failures. Consider adding validation or a sensible default.💡 Suggested improvement
// At the top of the file const AGENT_BASE_URL = process.env.AGENT_BASE_URL || 'http://localhost:3001'; // Then use AGENT_BASE_URL in rewritesIs
AGENT_BASE_URLalways guaranteed to be set in deployment environments? If not, this could cause silent routing failures.doc-agents/src/api/agent_pulse/route.ts (1)
38-45: Author coercion may accept invalid values silently.The cast to
'USER' | 'ASSISTANT'on line 41 doesn't validate that the value is actually one of these options. Invalid values like'SYSTEM'would pass through as-is.💡 Consider adding validation
if (Array.isArray(body.conversationHistory)) { conversationHistory = body.conversationHistory.map((msg: unknown) => { const m = msg as Record<string, unknown>; + const rawAuthor = String(m.author || m.role || 'USER').toUpperCase(); + const author = rawAuthor === 'ASSISTANT' ? 'ASSISTANT' : 'USER'; return { - author: String(m.author || m.role || 'USER').toUpperCase() as 'USER' | 'ASSISTANT', + author, content: typeof m.content === 'string' ? m.content : '', }; }); }doc-agents/src/services/stream-persistence.ts (2)
143-157: Address static analysis warnings and improve robustness.The static analyzer flagged two issues:
- Line 146:
matchhas implicitanytype- Line 147: Assignment in
whileconditionAdditionally, the regex approach may miss text deltas that span chunk boundaries since you're decoding each chunk independently.
Proposed fix addressing the static analysis warnings
function extractTextDeltas(text: string): string { let result = ''; const regex = /"type":"text-delta","textDelta":"((?:[^"\\]|\\.)*)"/g; - let match; - while ((match = regex.exec(text)) !== null) { + let match: RegExpExecArray | null = null; + while ((match = regex.exec(text)) !== null) { try { // Unescape JSON string (handles \n, \t, unicode, etc.) result += JSON.parse(`"${match[1]}"`); } catch { // If parse fails, use raw match result += match[1]; } } return result; }
223-242: Consider adding a cap on MRU list size.The MRU list grows unbounded as new sessions are added. Over time, this could lead to large arrays being stored and retrieved, impacting performance.
Proposed fix to cap the MRU list
+const MAX_MRU_SIZE = 100; + async function updateMRU( kv: KVStore, storeName: string, userId: string, sessionKey: string, logger: Logger ): Promise<void> { try { const mruResult = await kv.get<string[]>(storeName, userId); const mruList = mruResult.exists && mruResult.data ? mruResult.data : []; // Remove if exists, add to front const filtered = mruList.filter((k) => k !== sessionKey); filtered.unshift(sessionKey); + + // Cap the list size + if (filtered.length > MAX_MRU_SIZE) { + filtered.length = MAX_MRU_SIZE; + } await kv.set(storeName, userId, filtered); } catch (error) { logger.error('Failed to update MRU: %s', error instanceof Error ? error.message : String(error)); } }doc-agents/src/api/sessions/route.ts (2)
12-22: Avoidanytype and consider reusing theMessageinterface.The
tutorialData?: anydefeats type safety. Additionally, thisMessageinterface appears to duplicate the one defined instream-persistence.ts. Consider importing and reusing it.Proposed fix
+import type { Message, TutorialData } from '../../services/stream-persistence'; + -interface Message { - id: string; - author: 'USER' | 'ASSISTANT'; - content: string; - timestamp: string; - tutorialData?: any; -} - interface MessageRequest { message: Message; }
265-313: Message validation is minimal - consider validating the message structure.The endpoint only checks if
messageexists but doesn't validate its structure (required fields likeid,author,content). Malformed messages could corrupt session data.Consider adding message validation
+import { MessageSchema } from '../../../../app/chat/types'; + router.post('/:sessionId/messages', cookieAuth, async (c) => { try { const userId = (c.get as (key: string) => string)('userId'); const sessionId = c.req.param('sessionId'); const sessionKey = `${userId}_${sessionId}`; const { message } = await c.req.json<MessageRequest>(); if (!message) { return c.json({ error: 'Message is required' }, { status: 400 }); } + + // Validate message structure + const parseResult = MessageSchema.safeParse(message); + if (!parseResult.success) { + return c.json({ error: 'Invalid message format' }, { status: 400 }); + }
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
doc-agents/bun.lockis excluded by!**/*.lockdoc-agents/src/generated/app.tsis excluded by!**/generated/**doc-agents/src/generated/routes.tsis excluded by!**/generated/**
📒 Files selected for processing (16)
app/api/sessions/[sessionId]/messages/route.tsapp/api/sessions/[sessionId]/route.tsapp/api/sessions/route.tsapp/chat/layout.tsxapp/chat/services/sessionService.tsdoc-agents/.agents/skills/README.mddoc-agents/config.tsdoc-agents/package.jsondoc-agents/src/agent/agent_pulse/agent.tsdoc-agents/src/agent/agent_pulse/streaming-processor.tsdoc-agents/src/api/agent_pulse/route.tsdoc-agents/src/api/sessions/route.tsdoc-agents/src/middleware/auth.tsdoc-agents/src/services/stream-persistence.tsdoc-agents/src/services/tutorial-state-manager.tsnext.config.mjs
💤 Files with no reviewable changes (3)
- app/api/sessions/route.ts
- app/api/sessions/[sessionId]/route.ts
- app/api/sessions/[sessionId]/messages/route.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- doc-agents/config.ts
🧰 Additional context used
🧬 Code graph analysis (6)
doc-agents/src/agent/agent_pulse/agent.ts (5)
doc-agents/src/agent/agent_pulse/types.ts (2)
Action(34-39)ConversationMessage(3-6)app/chat/services/sessionService.ts (1)
ConversationMessage(20-23)doc-agents/src/agent/agent_pulse/tools.ts (1)
createTools(15-78)doc-agents/src/agent/agent_pulse/context-builder.ts (2)
buildContext(7-32)buildSystemPrompt(63-118)doc-agents/src/agent/agent_pulse/streaming-processor.ts (1)
createStreamingProcessor(9-125)
app/chat/layout.tsx (3)
app/chat/components/SessionSidebarSkeleton.tsx (1)
SessionSidebarSkeleton(7-98)app/chat/components/SessionSidebar.tsx (1)
SessionSidebar(36-204)app/chat/SessionContext.tsx (1)
SessionContext(13-13)
doc-agents/src/agent/agent_pulse/streaming-processor.ts (4)
doc-agents/src/agent/agent_pulse/types.ts (2)
Action(34-39)StreamingChunk(54-62)lib/api/services/index.ts (1)
StreamingChunk(9-9)lib/api/services/agentPulse.ts (1)
StreamingChunk(24-32)doc-agents/src/agent/agent_pulse/state-manager.ts (1)
handleTutorialState(8-61)
doc-agents/src/api/sessions/route.ts (3)
doc-agents/src/middleware/auth.ts (1)
cookieAuth(9-17)doc-agents/config.ts (1)
config(5-9)app/chat/types.ts (1)
SessionSchema(47-52)
doc-agents/src/services/tutorial-state-manager.ts (1)
doc-agents/config.ts (1)
config(5-9)
doc-agents/src/services/stream-persistence.ts (1)
doc-agents/config.ts (1)
config(5-9)
🪛 Biome (2.1.2)
app/chat/layout.tsx
[error] 101-104: Provide an explicit type prop for the button element.
The default type of a button is submit, which causes the submission of a form when placed inside a form element. This is likely not the behaviour that you want inside a React application.
Allowed button types are: submit, button or reset
(lint/a11y/useButtonType)
doc-agents/src/services/stream-persistence.ts
[error] 146-146: This variable implicitly has the any type.
Variable declarations without type annotation and initialization implicitly have the any type. Declare a type or initialize the variable with some value.
(lint/suspicious/noImplicitAnyLet)
[error] 147-147: The assignment should not be in an expression.
The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.
(lint/suspicious/noAssignInExpressions)
🪛 LanguageTool
doc-agents/.agents/skills/README.md
[grammar] ~137-~137: Ensure spelling is correct
Context: ...ject show` | Show project detail | ### repl | Skill | Command | Description | |-----...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🔇 Additional comments (32)
doc-agents/src/services/tutorial-state-manager.ts (7)
4-30: LGTM!Type definitions are well-structured. The
KVServiceinterface provides a clean abstraction, and theTutorialProgressinterface correctly makescompletedAtoptional for in-progress tutorials.
32-47: LGTM!Key generation and state retrieval logic are clean. The default state initialization correctly handles both non-existent keys and null data cases.
64-71: Verify:completedAtmay be unintentionally cleared on progress update.If a tutorial was previously completed and
updateTutorialProgressis called withcurrentStep < totalSteps, the existingcompletedAtwill be lost since the new object doesn't preserve it.If this is intentional (allowing progress reset), consider documenting this behavior. If completion should be preserved, adjust the logic:
🔧 Suggested fix to preserve completedAt
state.tutorials[tutorialId] = { tutorialId, currentStep, totalSteps, startedAt: existing?.startedAt || now, lastAccessedAt: now, - ...(currentStep >= totalSteps ? { completedAt: now } : {}) + ...(currentStep >= totalSteps ? { completedAt: existing?.completedAt || now } : + existing?.completedAt ? { completedAt: existing.completedAt } : {}) };
81-99: LGTM!The logic correctly handles edge cases with early returns before the reduce operation. Finding the most recently accessed active tutorial is a sensible default for agent communication.
104-107: LGTM!Clean implementation with appropriate null fallback.
127-138: LGTM!Clean and symmetric implementations for filtering completed vs. active tutorials.
141-149: LGTM!The namespace export provides a convenient access pattern while still allowing individual function imports.
app/chat/services/sessionService.ts (3)
20-23: LGTM!The
ConversationMessageinterface is well-defined with appropriate type constraints for the author field.
32-34: LGTM!Consistent addition of
credentials: 'include'across all fetch calls ensures proper cookie-based authentication handling.
220-261: LGTM!The
addMessagemethod has robust error handling with a nested try-catch for JSON parsing failures, providing meaningful fallback error messages.doc-agents/package.json (3)
1-7: LGTM!Package metadata is well-configured. The
private: trueflag appropriately prevents accidental publication to npm.
8-14: LGTM!Scripts are well-structured and consistent with the agentuity tooling conventions.
32-35: LGTM!Type definitions are correctly pinned and match their runtime counterparts (
@types/reactmatchesreact, etc.).app/chat/layout.tsx (3)
1-11: LGTM!The
'use client'directive is correctly placed, and the new imports forAgentuityProviderandSessionSidebarSkeletonalign with the layout wrapper changes below.
14-66: LGTM!The SWR infinite pagination logic, session deduplication, and handler functions are well-structured. The
mutatewrapper correctly propagates updates across paginated data.
69-114: No action required. The official@agentuity/reactdocumentation shows thatAgentuityProvideris correctly used without any configuration props, as demonstrated in the framework's tutorial and introduction pages. The current implementation is aligned with the intended usage.Likely an incorrect or invalid review comment.
doc-agents/src/agent/agent_pulse/streaming-processor.ts (3)
1-18: LGTM!The imports and SSE formatting logic are well-structured. The generic type parameter
<TOOLS extends Record<string, Tool>>properly constrains the tool types.
20-88: LGTM!The transform logic properly handles all AI SDK chunk types with appropriate logging and error handling. The error chunks are correctly formatted for client consumption.
127-139: LGTM!The
getToolStatusMessagehelper cleanly maps tool names to user-friendly messages with a sensible default.doc-agents/src/agent/agent_pulse/agent.ts (4)
1-9: LGTM!Imports are correctly structured, pulling in the necessary runtime, schema, AI SDK, and internal module dependencies.
10-31: LGTM!The agent schema properly defines the input structure with appropriate optional fields for conversation history and tutorial state. Using
s.any()for output is acceptable for streaming responses.
32-51: LGTM!Handler initialization correctly extracts input parameters, initializes state, builds messages array, and creates tools with proper context passing.
63-79: LGTM!The
stopWhen: stepCountIs(5)properly limits agent steps, and the logging callbacks provide good observability. The streaming processor integration returns the expectedReadableStream.doc-agents/src/middleware/auth.ts (1)
1-17: LGTM!The
cookieAuthmiddleware is clean and correctly validates the cookie presence before setting the userId in context.doc-agents/src/api/agent_pulse/route.ts (3)
1-15: LGTM!Imports are well-organized, pulling in the router, authentication middleware, agent, and service dependencies correctly.
71-141: Title generation logic is well-structured.Good practices observed:
- Proper early return if title already exists
- Reasonable limits on history size and content length
- Re-fetches session before saving to avoid race conditions
- Comprehensive error handling with non-blocking execution
152-220: LGTM!The route handler is well-structured with:
- Proper input validation
- Fallback to stored tutorial state
- Clean separation of persisted vs non-persisted streams
- Appropriate error responses with status codes
doc-agents/src/services/stream-persistence.ts (2)
12-52: LGTM! Well-structured type definitions.The interfaces are clean and appropriately typed. The
PersistenceConfigprovides good flexibility with optional callbacks for tutorial progress and session saved events.
177-218: LGTM! Solid implementation with appropriate error handling.The function correctly handles the case where the session doesn't exist, and the conditional spread for
tutorialDatais clean.doc-agents/src/api/sessions/route.ts (3)
27-90: LGTM! Pagination logic is well implemented.Good use of clamping for limit and cursor values, proper handling of empty responses, and clean pagination metadata. The parallel fetching of session data with
Promise.allis efficient.
150-173: LGTM! Clean single-session retrieval.Proper 404 handling and straightforward implementation.
233-260: Address potential race condition in session deletion with atomic operations.The read-modify-write pattern on the session list (reading all sessions, filtering out the deleted one, and writing back) is susceptible to lost updates under concurrent DELETE requests. This concern applies to other endpoints that modify the shared session list as well.
Verify your KV store's capabilities: if it supports transactions, conditional writes, or compare-and-swap operations, implement one of these approaches instead of the current pattern. If atomic operations aren't available, consider using a distributed lock or alternative session storage structure.
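If the store did offer a conditional write, the retry loop might look roughly like the sketch below. getWithVersion and setIfVersion are hypothetical names, not part of the SDK shown in this PR; treat this as an illustration of the compare-and-swap approach, not a drop-in fix:

// Sketch only: assumes a hypothetical versioned KV API for optimistic concurrency.
interface VersionedKV {
  getWithVersion(store: string, key: string): Promise<{ data: string[] | null; version: string }>;
  setIfVersion(store: string, key: string, value: string[], expectedVersion: string): Promise<boolean>;
}

async function removeSessionKey(
  kv: VersionedKV,
  store: string,
  userId: string,
  sessionKey: string,
  maxAttempts = 3
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { data, version } = await kv.getWithVersion(store, userId);
    const updated = (data ?? []).filter((k) => k !== sessionKey);
    // The write succeeds only if nobody changed the list since we read it.
    if (await kv.setIfVersion(store, userId, updated, version)) return true;
  }
  return false; // repeatedly lost the race; the caller can retry later or surface an error
}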
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
<button
  onClick={() => setSize(1)}
  className="px-2 py-0.5 text-xs rounded bg-red-500/30 hover:bg-red-500/40 border border-red-500/50"
>
  Retry
</button>
Add explicit type="button" to prevent unintended form submission.
The retry button lacks an explicit type attribute. While it's not currently inside a form element, adding type="button" is a defensive best practice that prevents unintended form submissions if the component hierarchy changes.
Proposed fix
<button
+ type="button"
onClick={() => setSize(1)}
className="px-2 py-0.5 text-xs rounded bg-red-500/30 hover:bg-red-500/40 border border-red-500/50"
>
Retry
 </button>
🧰 Tools
🪛 Biome (2.1.2)
[error] 101-104: Provide an explicit type prop for the button element.
The default type of a button is submit, which causes the submission of a form when placed inside a form element. This is likely not the behaviour that you want inside a React application.
Allowed button types are: submit, button or reset
(lint/a11y/useButtonType)
🤖 Prompt for AI Agents
In `@app/chat/layout.tsx` around lines 101 - 106, The Retry button element that
calls setSize(1) should include an explicit type attribute to avoid accidental
form submission; update the <button ...> in the component (the element invoking
setSize(1)) to be <button type="button" ...> so it is always a non-submit button
even if later moved inside a form.
| [agentuity-cli-project-create](./agentuity-cli-project-create) | `agentuity project create` | Create a new project |
| [agentuity-cli-project-delete](./agentuity-cli-project-delete) | `agentuity project delete` | Delete a project |
| [agentuity-cli-project-list](./agentuity-cli-project-list) | `agentuity project list` | List all projects |
| [agentuity-cli-project-show](./agentuity-cli-project-show) | `agentuity project show` | Show project detail |
Fix grammar in description.
"Show project detail" should be "Show project details" for grammatical correctness and consistency with other command descriptions.
Since this file is auto-generated (line 158), the fix should be applied to the generation source/template rather than manually editing this file.
Note on static analysis hint: The LanguageTool hint flagging "repl" on line 137 is a false positive. REPL (Read-Eval-Print Loop) is a standard technical term and is correctly used here as a command category name.
🤖 Prompt for AI Agents
In `@doc-agents/.agents/skills/README.md` at line 135, Update the generation
template/source that produces the table row for the command entry named
"agentuity-cli-project-show" so its description string reads "Show project
details" (plural) instead of "Show project detail"; do not manually edit the
autogenerated README.md, change the generator/template key that maps the command
identifier agentuity-cli-project-show to its description, and leave the
REPL/“repl” text unchanged because the LanguageTool hint is a false positive.
let title = response.text.trim();
// Sanitize
if (
  (title.startsWith('"') && title.endsWith('"')) ||
  (title.startsWith("'") && title.endsWith("'"))
) {
  title = title.slice(1, -1).trim();
}
if (title.length > 60) title = title.slice(0, 60).trim();
title = title.charAt(0).toUpperCase() + title.slice(1).toLowerCase();
Title case transformation contradicts "sentence case" requirement.
Line 126 applies toLowerCase() to the entire title after capitalizing the first character, which doesn't produce proper sentence case. "Sentence case" should preserve capitalization of proper nouns and acronyms.
💡 Suggested fix
- title = title.charAt(0).toUpperCase() + title.slice(1).toLowerCase();
+ // Only ensure first character is uppercase; preserve rest as-is for proper nouns
+ title = title.charAt(0).toUpperCase() + title.slice(1);
For example, "Using Agentuity SDK" would become "Using agentuity sdk" with the current logic.
🤖 Prompt for AI Agents
In `@doc-agents/src/api/agent_pulse/route.ts` around lines 117 - 127, The current
title normalization block incorrectly forces sentence case by calling
toLowerCase() on the entire title (variable title), which destroys proper nouns
and acronyms; fix it by removing the .toLowerCase() call and instead only ensure
the first character is uppercased if present while leaving the rest of title
unchanged, keeping the existing trimming, quote-stripping, and max-length logic
(i.e., sanitize quotes, slice to 60 chars, then set title =
title.charAt(0).toUpperCase() + title.slice(1) without lowercasing).
doc-agents/src/api/sessions/route.ts
Outdated
router.post('/', cookieAuth, validator({ input: SessionSchema }), async (c) => {
  try {
    const userId = (c.get as (key: string) => string)('userId');

    const session = c.req.valid('json');

    // Process any messages to ensure timestamps are in ISO string format
    if (session.messages && session.messages.length > 0) {
      session.messages = session.messages.map((message) => {
        if (message.timestamp) {
          // Ensure timestamp is ISO string
          const date = new Date(message.timestamp);
          return {
            ...message,
            timestamp: date.toISOString(),
          };
        }
        return message;
      });
    }

    const sessionKey = `${userId}_${session.sessionId}`;

    // Save the session data
    await c.var.kv.set(config.kvStoreName, sessionKey, session);

    // Update the sessions list with just the session ID
    const allSessionsResponse = await c.var.kv.get<string[]>(config.kvStoreName, userId);
    const sessionIds = allSessionsResponse.exists ? allSessionsResponse.data || [] : [];

    // Add the new session ID to the beginning of the array
    const updatedSessionIds = [sessionKey, ...sessionIds.filter((id) => id !== sessionKey)];

    await c.var.kv.set(config.kvStoreName, userId, updatedSessionIds);

    return c.json({
      success: true,
      session,
      ...(session.title ? {} : { titleGeneration: 'pending' }),
    });
  } catch (error) {
    c.var.logger.error(
      'Create session failed: %s',
      error instanceof Error ? error.message : String(error)
    );
    return c.json(
      { error: error instanceof Error ? error.message : 'Unknown error occurred' },
      { status: 500 }
    );
  }
});
Timestamp normalization can produce Invalid Date.
If message.timestamp is an invalid date string, new Date(message.timestamp).toISOString() will throw RangeError: Invalid time value. Consider validating the date before conversion.
Proposed fix
if (session.messages && session.messages.length > 0) {
session.messages = session.messages.map((message) => {
if (message.timestamp) {
- // Ensure timestamp is ISO string
const date = new Date(message.timestamp);
+ if (Number.isNaN(date.getTime())) {
+ // Invalid date - keep original or use current time
+ return message;
+ }
return {
...message,
timestamp: date.toISOString(),
};
}
return message;
});
}
🤖 Prompt for AI Agents
In `@doc-agents/src/api/sessions/route.ts` around lines 95 - 145, The session POST
handler currently calls new Date(message.timestamp).toISOString() inside the
session.messages map which will throw RangeError for invalid dates; update the
mapping in the router.post(...) handler to validate the parsed date first (e.g.
const date = new Date(message.timestamp); if (isNaN(date.getTime())) then either
omit/normalize the timestamp (set timestamp: null or delete timestamp) or
fallback to a safe value (e.g. new Date().toISOString()); only call
date.toISOString() when date is valid to prevent exceptions during session
creation.
doc-agents/src/api/sessions/route.ts
router.put('/:sessionId', cookieAuth, async (c) => {
  try {
    const userId = (c.get as (key: string) => string)('userId');
    const sessionId = c.req.param('sessionId');
    const sessionKey = `${userId}_${sessionId}`;

    const session = await c.req.json<Session>();

    if (session.sessionId !== sessionId) {
      return c.json({ error: 'Session ID mismatch' }, { status: 400 });
    }

    // Process any messages to ensure timestamps are in ISO string format
    if (session.messages && session.messages.length > 0) {
      session.messages = session.messages.map((message) => {
        if (message.timestamp) {
          const date = new Date(message.timestamp);
          return {
            ...message,
            timestamp: date.toISOString(),
          };
        }
        return message;
      });
    }

    // Update the individual session
    await c.var.kv.set(config.kvStoreName, sessionKey, session);

    // Update the master list if needed (ensure the session ID is in the list)
    const allSessionsResponse = await c.var.kv.get<string[]>(config.kvStoreName, userId);
    const sessionIds = allSessionsResponse.exists ? allSessionsResponse.data || [] : [];

    // If the session ID isn't in the list, add it to the beginning
    if (!sessionIds.includes(sessionKey)) {
      const updatedSessionIds = [sessionKey, ...sessionIds];
      await c.var.kv.set(config.kvStoreName, userId, updatedSessionIds);
    }

    return c.json({ success: true, session });
  } catch (error) {
    c.var.logger.error(
      'Update session failed: %s',
      error instanceof Error ? error.message : String(error)
    );
    return c.json(
      { error: error instanceof Error ? error.message : 'Unknown error occurred' },
      { status: 500 }
    );
  }
});
PUT endpoint lacks input validation unlike POST.
The POST endpoint uses validator({ input: SessionSchema }) but the PUT endpoint directly parses JSON without validation. This inconsistency could allow malformed data to be stored.
Proposed fix to add validation
-router.put('/:sessionId', cookieAuth, async (c) => {
+router.put('/:sessionId', cookieAuth, validator({ input: SessionSchema }), async (c) => {
try {
const userId = (c.get as (key: string) => string)('userId');
const sessionId = c.req.param('sessionId');
const sessionKey = `${userId}_${sessionId}`;
- const session = await c.req.json<Session>();
+ const session = c.req.valid('json');
🤖 Prompt for AI Agents
In `@doc-agents/src/api/sessions/route.ts` around lines 178 - 228, The PUT route
handler defined in router.put('/:sessionId', cookieAuth, async (c) => { ... })
parses request JSON directly and lacks the POST-style validation; update this
handler to use the same request validation middleware as POST (e.g., validator({
input: SessionSchema })) and consume the validated body instead of
c.req.json<Session>(), ensuring you reference the SessionSchema and the
validator call so malformed sessions are rejected before processing and storage.
doc-agents/src/middleware/auth.ts
// TODO: With clerk, we can set user apayload here sing c.set('user', payload...)
// Use cookie for now
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix typo in TODO comment.
"apayload" should be "a payload".
📝 Suggested fix
- // TODO: With clerk, we can set user apayload here sing c.set('user', payload...)
+ // TODO: With Clerk, we can set user payload here using c.set('user', payload...)
🤖 Prompt for AI Agents
In `@doc-agents/src/middleware/auth.ts` around lines 47 - 48, Update the TODO
comment in auth middleware that currently reads "// TODO: With clerk, we can set
user apayload here sing c.set('user', payload...)" to correct typos: change
"apayload" to "a payload" (and "sing" to "using") so the comment reads e.g. "//
TODO: With clerk, we can set user a payload here using c.set('user',
payload...)".
export function withPersistence(
  stream: ReadableStream<Uint8Array>,
  config: PersistenceConfig
): ReadableStream<Uint8Array> {
  const { kv, userId, sessionId, kvStoreName, logger, onTutorialProgress, onSessionSaved } = config;
  const sessionKey = `${userId}_${sessionId}`;

  let accumulated = '';
  let capturedTutorialData: TutorialData | null = null;

  const decoder = new TextDecoder();
  const encoder = new TextEncoder();

  return new ReadableStream<Uint8Array>({
    async start(controller) {
      const reader = stream.getReader();

      try {
        // Phase 1: Forward all chunks, accumulate content
        while (true) {
          const { done, value } = await reader.read();
          if (done) break;

          // Forward immediately (zero latency added)
          controller.enqueue(value);

          // Accumulate in background
          const text = decoder.decode(value, { stream: true });
          accumulated += extractTextDeltas(text);

          const tutorial = extractTutorialData(text);
          if (tutorial) {
            capturedTutorialData = tutorial;
            if (onTutorialProgress) {
              await onTutorialProgress(tutorial);
            }
          }
        }

        // Phase 2: Stream ended - save to KV
        const session = await saveAssistantMessage(
          kv,
          kvStoreName,
          sessionKey,
          accumulated,
          capturedTutorialData,
          logger
        );

        if (session) {
          await updateMRU(kv, kvStoreName, userId, sessionKey, logger);

          if (onSessionSaved) {
            onSessionSaved(session);
          }

          // Emit session-saved event (with leading newline to ensure separation)
          controller.enqueue(
            encoder.encode(`\ndata: ${JSON.stringify({ type: 'session-saved', session })}\n\n`)
          );
        }
      } catch (error) {
        logger.error(
          'Stream persistence error: %s',
          error instanceof Error ? error.message : String(error)
        );
        controller.enqueue(
          encoder.encode(
            `data: ${JSON.stringify({ type: 'error', error: 'Failed to save session' })}\n\n`
          )
        );
      } finally {
        controller.close();
      }
    },
  });
}
Consider error handling for onTutorialProgress callback failures.
The await onTutorialProgress(tutorial) call on line 95 could throw an exception that would bubble up and abort the stream forwarding. If this callback fails, it would stop forwarding remaining chunks even though the stream data itself is valid.
Proposed fix to handle callback errors gracefully
if (tutorial) {
capturedTutorialData = tutorial;
if (onTutorialProgress) {
- await onTutorialProgress(tutorial);
+ try {
+ await onTutorialProgress(tutorial);
+ } catch (e) {
+ logger.warn('onTutorialProgress callback failed: %s', e instanceof Error ? e.message : String(e));
+ }
}
}🤖 Prompt for AI Agents
In `@doc-agents/src/services/stream-persistence.ts` around lines 61 - 137, In
withPersistence, ensure onTutorialProgress failures don't abort the stream: when
you detect tutorial (inside the loop where extractTutorialData is called),
invoke onTutorialProgress(tutorial) inside a try/catch (or fire-and-forget
without awaiting) and on error log via logger.error (include error details) but
do not rethrow; keep capturedTutorialData assignment and continue processing
remaining chunks so the stream forwarding is not interrupted.
function extractTutorialData(text: string): TutorialData | null {
  const match = text.match(/"type":"tutorial-data","tutorialData":(\{[^}]+\})/);
  if (match && match[1]) {
    try {
      return JSON.parse(match[1]) as TutorialData;
    } catch {
      return null;
    }
  }
  return null;
}
Regex may fail on nested objects in tutorial data.
The pattern \{[^}]+\} will not match tutorial data containing nested objects (e.g., if tutorialData includes nested braces). This could silently fail to parse valid data.
Consider using a more robust extraction approach
function extractTutorialData(text: string): TutorialData | null {
- const match = text.match(/"type":"tutorial-data","tutorialData":(\{[^}]+\})/);
- if (match && match[1]) {
- try {
- return JSON.parse(match[1]) as TutorialData;
- } catch {
- return null;
+ const marker = '"type":"tutorial-data","tutorialData":';
+ const idx = text.indexOf(marker);
+ if (idx === -1) return null;
+
+ const start = idx + marker.length;
+ try {
+ // Find the matching closing brace
+ let depth = 0;
+ let end = start;
+ for (let i = start; i < text.length; i++) {
+ if (text[i] === '{') depth++;
+ else if (text[i] === '}') depth--;
+ if (depth === 0 && text[i] === '}') {
+ end = i + 1;
+ break;
+ }
}
+ return JSON.parse(text.slice(start, end)) as TutorialData;
+ } catch {
+ return null;
}
- return null;
}🤖 Prompt for AI Agents
In `@doc-agents/src/services/stream-persistence.ts` around lines 162 - 172, The
extractTutorialData function uses the regex /\{[^}]+\}/ which fails for nested
objects; change the extraction to locate the `"tutorialData":` token, find the
first `{` after it, then walk the string counting opening and closing braces
until the matching closing `}` is found to extract the full JSON substring, then
JSON.parse that substring inside the existing try/catch (keep function name
extractTutorialData and preserve null on parse error). This ensures nested
objects are handled correctly instead of relying on the simple regex.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @.env.example:
- Around line 6-8: Remove the duplicate AGENT_BEARER_TOKEN entry (keep only one
occurrence of AGENT_BEARER_TOKEN), ensure the remaining AGENTUITY_* keys
(AGENTUITY_SDK_KEY, AGENT_BEARER_TOKEN, AGENTUITY_REGION) are present and
optionally sort them alphabetically for consistency, and add a final blank line
at the end of the file.
In `@package.json`:
- Around line 24-29: The package.json dependency for "@clerk/clerk-react"
(v5.59.2) is incompatible with the current React range (^19.0.0); update
package.json to either pin React to a compatible patch (change the React and
react-dom entries to "~19.0.3") or upgrade "@clerk/clerk-react" to a version
that explicitly supports React ^19.0.0, then run your package manager to
regenerate the lockfile and verify installs; ensure you update both "react" and
"react-dom" entries and re-run installation/tests to confirm peer dependency
warnings are resolved.
🧹 Nitpick comments (3)
app/chat/services/sessionService.ts (3)
20-23: Consider importingConversationMessagefrom the shared type definition.This interface duplicates the one in
agent-docs/src/agents/agent-pulse/request/types.ts. If both packages are in the same monorepo, importing from a shared location would prevent drift.
317-366: Extract duplicated SSE parsing logic into a helper function.The event-parsing block (lines 317-338) is duplicated verbatim in the buffer cleanup section (lines 345-365). Consider extracting to a private method to reduce maintenance burden.
♻️ Suggested refactor
+ private handleSSEEvent( + data: any, + callbacks: { + onTextDelta?: (text: string) => void, + onTutorialData?: (tutorialData: TutorialData) => void, + onFinish?: (session: Session) => void, + onError?: (error: string) => void + } + ): void { + if (data.type === 'text-delta' && data.textDelta) { + callbacks.onTextDelta?.(data.textDelta); + } else if (data.type === 'tutorial-data' && data.tutorialData) { + callbacks.onTutorialData?.(data.tutorialData); + } else if (data.type === 'finish') { + // AI done streaming - session comes in session-saved event + } else if (data.type === 'session-saved') { + if (data.session) { + callbacks.onFinish?.(data.session); + } + } else if (data.type === 'error') { + callbacks.onError?.(data.error || 'Unknown error'); + } + }Then use
this.handleSSEEvent(data, callbacks)in both parsing locations.
396-401: Consider extracting the magic number10to a named constant.The history limit of 10 messages appears as a magic number. Extracting it would clarify intent and make future adjustments easier.
+const MAX_CONVERSATION_HISTORY = 10; + // In addMessageToSessionStreaming: - const conversationHistory: ConversationMessage[] = (saveResult.data?.messages || []) - .slice(-10) + const conversationHistory: ConversationMessage[] = (saveResult.data?.messages || []) + .slice(-MAX_CONVERSATION_HISTORY)
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- .env.example
- app/api/users/tutorial-state/route.ts
- app/chat/services/sessionService.ts
- next.config.mjs
- package.json
🚧 Files skipped from review as they are similar to previous changes (2)
- app/api/users/tutorial-state/route.ts
- next.config.mjs
🧰 Additional context used
🧬 Code graph analysis (1)
app/chat/services/sessionService.ts (4)
agent-docs/src/agents/agent-pulse/request/types.ts (1)
ConversationMessage(1-4)doc-agents/src/services/stream-persistence.ts (3)
Message(13-19)Session(21-25)TutorialData(27-31)app/chat/types.ts (3)
Message(37-37)Session(54-54)TutorialData(27-27)agent-docs/src/agents/agent-pulse/streaming/types.ts (1)
TutorialData(21-31)
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 7-7: [DuplicatedKey] The AGENT_BEARER_TOKEN key is duplicated
(DuplicatedKey)
[warning] 8-8: [EndingBlankLine] No blank line at the end of the file
(EndingBlankLine)
[warning] 8-8: [UnorderedKey] The AGENTUITY_REGION key should go before the AGENTUITY_SDK_KEY key
(UnorderedKey)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: sync
🔇 Additional comments (3)
app/chat/services/sessionService.ts (3)
32-34: LGTM! Consistent addition ofcredentials: 'include'across all fetch calls.This ensures authentication cookies are properly sent with API requests. The change is applied consistently across all session-related endpoints.
220-261: LGTM! Clean implementation with robust error handling.The fallback from JSON error parsing to status text (lines 236-247) handles cases where the API returns non-JSON error responses gracefully.
326-331: Potential silent failure ifsession-savedevent is not received.The
finishevent is acknowledged but ignored, relying entirely onsession-savedto invokeonFinish. If the server fails to persist or sendsession-saved, the caller will never be notified of completion.Consider invoking
onFinishwith the current state onfinishas a fallback, or at minimum logging a warning if the stream ends withoutsession-saved.
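One way to implement that fallback, sketched here against a simplified shape of the streaming callbacks (the `buildLocalSession` helper and the exact session fields are assumptions, not part of the current sessionService code):

```ts
type SavedSession = { sessionId: string; messages: unknown[] };

type StreamCallbacks = {
  onFinish?: (session: SavedSession) => void;
  onError?: (error: string) => void;
};

// Wraps the two places the reader loop needs to touch: per-event handling and
// end-of-stream finalization with a local fallback.
function createStreamFinalizer(
  callbacks: StreamCallbacks,
  buildLocalSession: () => SavedSession // hypothetical helper assembling a session from the accumulated deltas
) {
  let sawSessionSaved = false;

  return {
    handleEvent(data: { type: string; session?: SavedSession; error?: string }) {
      if (data.type === 'session-saved' && data.session) {
        sawSessionSaved = true;
        callbacks.onFinish?.(data.session);
      } else if (data.type === 'error') {
        callbacks.onError?.(data.error ?? 'Unknown error');
      }
    },
    finalize() {
      if (!sawSessionSaved) {
        console.warn('Stream ended without a session-saved event; using locally accumulated state');
        callbacks.onFinish?.(buildLocalSession());
      }
    },
  };
}
```

Calling `finalize()` once the reader reports `done` guarantees the caller is notified even when persistence on the server side silently fails.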
| "@agentuity/auth": "^0.0.111", | ||
| "@agentuity/core": "^0.0.111", | ||
| "@agentuity/react": "^0.0.111", | ||
| "@agentuity/server": "^0.0.111", | ||
| "@clerk/backend": "^2.29.0", | ||
| "@clerk/clerk-react": "^5.59.2", |
🧩 Analysis chain
🏁 Script executed:
cd /root && find . -name "package.json" -type f | head -5Repository: agentuity/docs
Length of output: 113
🏁 Script executed:
cat -n package.json | head -50Repository: agentuity/docs
Length of output: 2140
🌐 Web query:
@clerk/clerk-react v5.59 React 19 compatibility
💡 Result:
Short answer: likely not fully—@clerk/clerk-react v5.59 requires React 18+ but does not officially include React 19 in its peerDependencies, so you may see peer-dep warnings or install blocks with React 19. (clerk.com)
Recommendation: check the package's peerDependencies in your lockfile or npm view, or upgrade to the latest Clerk v5 (or the specific Clerk package release that adds React 19 support) to avoid peer-dep issues. (clerk.com)
Citations:
- 1: https://clerk.com/docs/guides/development/upgrading/upgrade-guides/core-2/react?utm_source=openai
- 2: https://clerk.com/docs/guides/development/upgrading/upgrade-guides/core-2/react?utm_source=openai
🌐 Web query:
@clerk/backend v2.29.0 React compatibility requirements
💡 Result:
@clerk/backend v2.29.0 does not itself require React (it's a backend SDK). React compatibility requirements apply to Clerk's react-dependent SDKs — those now require React 18 or newer. [1][2]
Sources:
@clerk/backend(npm package / repo). [1]- Clerk upgrade guides noting "All react-dependent Clerk SDKs now require React 18 or higher." [2]
🌐 Web query:
@clerk/clerk-react 5.59.2 peerDependencies React requirements
💡 Result:
@clerk/clerk-react@5.59.2 requires React 18+ (peer): react and react-dom matching ^18.0.0 or compatible React 19 patch ranges used by v5.59.x. (See package.json for the v5.59.x series — example: v5.59.3 peerDependencies: "react": "^18.0.0 || ~19.0.3 || ~19.1.4 || ~19.2.3 || ~19.3.0-0", same for "react-dom".) [1][2]
Sources:
@clerk/clerk-reactpackage.json (v5.59.3 example). [1]- Clerk docs: React-dependent SDKs require React 18 or higher. [2]
🏁 Script executed:
npm view `@clerk/clerk-react`@5.59.2 peerDependencies 2>/dev/null || echo "Checking with web search"Repository: agentuity/docs
Length of output: 198
@clerk/clerk-react v5.59.2 is incompatible with React ^19.0.0—this will cause peer dependency violations.
The package.json specifies React ^19.0.0, which could install React 19.0.0, 19.0.1, or 19.0.2. However, @clerk/clerk-react v5.59.2's peerDependencies require React ^18.0.0 || ~19.0.3 || ~19.1.4 || ~19.2.3 || ~19.3.0-0, meaning it needs at least React 19.0.3 for React 19 support. This version mismatch will cause peer dependency warnings or installation failures.
Fix: Either pin React to ~19.0.3 or upgrade @clerk/clerk-react to a version that supports React 19.0.0.
🤖 Prompt for AI Agents
In `@package.json` around lines 24 - 29, The package.json dependency for
"@clerk/clerk-react" (v5.59.2) is incompatible with the current React range
(^19.0.0); update package.json to either pin React to a compatible patch (change
the React and react-dom entries to "~19.0.3") or upgrade "@clerk/clerk-react" to
a version that explicitly supports React ^19.0.0, then run your package manager
to regenerate the lockfile and verify installs; ensure you update both "react"
and "react-dom" entries and re-run installation/tests to confirm peer dependency
warnings are resolved.
- Implemented middleware to handle user session management and proxy API requests to the Agentuity backend with bearer token authentication.
- Updated session and agent pulse API routes to utilize bearer token authentication alongside cookie authentication.
- Refactored API rewrites to be managed within middleware, improving configuration clarity.
- Added cookie management for user sessions and ensured proper error handling for missing configuration.
- Updated documentation and comments for clarity on authentication mechanisms.
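As a rough illustration of the flow this commit describes (not the actual middleware.ts; the cookie name, env var names, and matched route prefixes below are assumptions), the proxy could look something like:

```ts
import { NextRequest, NextResponse } from 'next/server';

const COOKIE_NAME = 'chat_user_id';                 // assumed cookie name
const BACKEND_URL = process.env.AGENT_BACKEND_URL;  // assumed env var for the doc-agents backend
const AGENT_BEARER_TOKEN = process.env.AGENT_BEARER_TOKEN;

export async function middleware(request: NextRequest) {
  const { pathname, search } = request.nextUrl;

  // Only proxy the agent-facing API routes.
  if (!pathname.startsWith('/api/sessions') && pathname !== '/api/agent_pulse') {
    return NextResponse.next();
  }

  if (!BACKEND_URL || !AGENT_BEARER_TOKEN) {
    return NextResponse.json({ error: 'Agent backend is not configured' }, { status: 503 });
  }

  // Reuse the caller's user id cookie, or mint one for first-time visitors.
  const userId = request.cookies.get(COOKIE_NAME)?.value ?? crypto.randomUUID();

  // Forward the request to the backend with bearer token auth.
  const headers = new Headers(request.headers);
  headers.set('Authorization', `Bearer ${AGENT_BEARER_TOKEN}`);

  const body = ['GET', 'HEAD'].includes(request.method) ? undefined : await request.text();
  const upstream = await fetch(`${BACKEND_URL}${pathname}${search}`, {
    method: request.method,
    headers,
    body,
  });

  // Stream the upstream response back and make sure the session cookie is set.
  const response = new NextResponse(upstream.body, {
    status: upstream.status,
    headers: upstream.headers,
  });
  response.cookies.set(COOKIE_NAME, userId, { httpOnly: true, sameSite: 'lax' });
  return response;
}
```

Buffering the body with `request.text()` keeps the sketch simple; a production proxy would stream request bodies through instead.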
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@doc-agents/src/api/agent_pulse/route.ts`:
- Around line 38-44: The current mapping in conversationHistory unsafely casts
author to 'USER' | 'ASSISTANT'; replace that cast with an explicit validation:
normalize the raw role (e.g., String(m.author || m.role ||
'USER').trim().toUpperCase()), then set author = normalized === 'ASSISTANT' ?
'ASSISTANT' : 'USER' so only those two values are allowed (default to 'USER' for
anything else), and keep content extraction the same; update the mapping inside
the conversationHistory assignment in route.ts (the function that maps msg -> {
author, content }) to perform this check rather than using the type assertion.
In `@doc-agents/src/api/sessions/route.ts`:
- Around line 191-202: The timestamp normalization for session.messages (inside
the mapping over session.messages) can throw a RangeError for invalid
timestamps; update the mapping in route.ts to validate the parsed Date before
calling toISOString (e.g., parse timestamp into const date = new
Date(message.timestamp) and check if isNaN(date.getTime()) or otherwise handle
invalid dates) and only replace timestamp with date.toISOString() when valid,
otherwise leave the original timestamp or set a safe default; apply the same
validation logic used in the POST endpoint to the session.messages mapping to
prevent RangeError.
♻️ Duplicate comments (8)
doc-agents/src/agent/doc_processing/docs-orchestrator.ts (4)
8-50: Previously flagged issues apply to this function.The concerns raised in past reviews regarding:
- Error handling inside the deletion loop (lines 17-39)
- Using
' 'instead of'A'for the query placeholder (lines 18-22) to match codebase patternsThese still apply. Additionally, the
ctx: anytype issue flagged forsyncDocsFromPayloadequally applies here at line 9.
55-77: Good error isolation for removed files.The per-item try/catch pattern with error tracking via
errorFilesarray provides good resilience - a failure processing one file won't prevent others from being processed. The stats aggregation is clean.Note: The
ctx: anytype issue at line 56 was previously flagged.
79-138: Empty chunks guard properly implemented.The guard at line 115 (
if (chunksWithMetadata.length > 0)) correctly addresses the previously flagged concern about calling upsert with empty arrays. Good fix.The base64 validation concern at lines 86-97 was previously flagged and remains applicable.
145-158: Previously flagged concerns apply.The error handling and safety confirmation concerns were raised in past reviews.
One observation: this function correctly uses
' 'as the query placeholder (line 149), which is inconsistent withremoveVectorsByPathusing'A'(line 19). This reinforces the earlier suggestion to standardize on' 'throughout.doc-agents/src/agent/agent_pulse/context-builder.ts (1)
85-86: Tool name mismatch in system prompt.Line 81 correctly lists the tool as
startTutorialAtStep, but line 86 referencesstartTutorialById. This inconsistency may confuse the LLM when it tries to use the tool.=== TOOL-USAGE RULES (must follow) === -- startTutorialById must only be used when user select a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial. +- startTutorialAtStep must only be used when user selects a tutorial. If the user starts a new tutorial, the step number should be set to one. Valid step is between 1 and totalSteps of the specific tutorial.doc-agents/src/api/sessions/route.ts (2)
102-114: Timestamp normalization can throwRangeErrorfor invalid date strings.If
message.timestampis an invalid date string (e.g.,"not-a-date"),new Date(message.timestamp).toISOString()will throw aRangeError. Add validation before conversion.Proposed fix
if (session.messages && session.messages.length > 0) { session.messages = session.messages.map((message) => { if (message.timestamp) { - // Ensure timestamp is ISO string const date = new Date(message.timestamp); + if (Number.isNaN(date.getTime())) { + return message; // Keep original if invalid + } return { ...message, timestamp: date.toISOString(), }; } return message; }); }
178-188: PUT endpoint lacks input validation unlike POST.The POST endpoint uses
validator({ input: SessionSchema })but PUT directly parses JSON without validation. This inconsistency could allow malformed or invalid session data to be stored.Proposed fix
-router.put('/:sessionId', bearerTokenAuth, cookieAuth, async (c) => { +router.put('/:sessionId', bearerTokenAuth, cookieAuth, validator({ input: SessionSchema }), async (c) => { try { const userId = (c.get as (key: string) => string)('userId'); const sessionId = c.req.param('sessionId'); const sessionKey = `${userId}_${sessionId}`; - const session = await c.req.json<Session>(); + const session = c.req.valid('json');doc-agents/src/api/agent_pulse/route.ts (1)
117-127: Title case transformation contradicts "sentence case" requirement.Line 126 applies
.toLowerCase()to the entire title after capitalizing the first character, which destroys proper nouns and acronyms. For example, "Using Agentuity SDK" becomes "Using agentuity sdk".🔧 Suggested fix
- title = title.charAt(0).toUpperCase() + title.slice(1).toLowerCase(); + // Only ensure first character is uppercase; preserve rest as-is for proper nouns + title = title.charAt(0).toUpperCase() + title.slice(1);
🧹 Nitpick comments (8)
doc-agents/src/agent/agent_pulse/context-builder.ts (1)
1-1: Unused import:getTutorialMeta
getTutorialMetais imported but never used in this file. Remove it to keep the imports clean.-import { getTutorialList, getTutorialMeta, type Tutorial } from './tutorial'; +import { getTutorialList, type Tutorial } from './tutorial';doc-agents/.env.example (1)
1-3: Add trailing newline for POSIX compliance.The file is missing a newline at the end. Most tools expect text files to end with a newline character.
♻️ Proposed fix
TUTORIAL_API_URL=https://agentuity.dev VECTOR_STORE_NAME=agentuity-docs AGENTUITY_SDK_KEY= +Alphabetical key ordering (flagged by dotenv-linter) is purely stylistic and can be ignored here since semantic grouping may be preferred.
.env.example (1)
6-7: Add trailing newline.Missing newline at end of file. Based on static analysis hints.
♻️ Proposed fix
AGENTUITY_SDK_KEY= AGENTUITY_REGION=use +middleware.ts (2)
43-47: Remove redundant token check and consider filtering forwarded headers.The
if (AGENT_BEARER_TOKEN)check on line 45 is redundant since line 34 already ensures it's truthy.Additionally, forwarding all client headers (
new Headers(request.headers)) may unintentionally pass sensitive headers (e.g.,Cookie,Host,X-Forwarded-*) to the backend. Consider explicitly selecting which headers to forward.♻️ Proposed fix (removes redundant check)
// Create headers with bearer token const headers = new Headers(request.headers); - if (AGENT_BEARER_TOKEN) { - headers.set('Authorization', `Bearer ${AGENT_BEARER_TOKEN}`); - } + headers.set('Authorization', `Bearer ${AGENT_BEARER_TOKEN}`);If header filtering is needed, consider an allowlist approach:
const allowedHeaders = ['content-type', 'accept', 'user-agent']; const headers = new Headers(); for (const name of allowedHeaders) { const value = request.headers.get(name); if (value) headers.set(name, value); } headers.set('Authorization', `Bearer ${AGENT_BEARER_TOKEN}`);
54-61: Consider extracting cookie-setting helper to reduce duplication.The cookie configuration is duplicated across both branches. A helper function would centralize the logic.
♻️ Optional refactor
function setChatUserIdCookie(response: NextResponse, userId: string) { response.cookies.set(COOKIE_NAME, userId, { maxAge: COOKIE_MAX_AGE, httpOnly: true, secure: process.env.NODE_ENV === 'production', sameSite: 'lax', }); }Then use
if (needsCookie) setChatUserIdCookie(response, userId);in both branches.Also applies to: 70-77
doc-agents/src/api/sessions/route.ts (1)
12-22: Import theMessagetype fromapp/chat/types.tsinstead of redefining it locally.The file already imports
SessionSchemafromapp/chat/types.ts. AMessagetype is exported there with the identical structure (id, author, content, timestamp, tutorialData). Redefining it here creates duplication and risks type drift. Change line 12-22 to import the type instead.doc-agents/src/api/agent_pulse/route.ts (2)
71-76: Consider adding proper types forkvandloggerparameters.Using
anyforkvandloggerreduces type safety. Consider importing or defining proper interfaces for these dependencies.
210-218: Fragile status code determination based on error message content.Checking
error.message.includes('Invalid')to determine the HTTP status code is brittle. If error messages change or are localized, this logic may incorrectly classify errors.🔧 Suggested approach using custom error types
+// At the top of the file or in a shared errors module +class ValidationError extends Error { + constructor(message: string) { + super(message); + this.name = 'ValidationError'; + } +} // In the catch block: -const statusCode = error instanceof Error && error.message.includes('Invalid') ? 400 : 500; +const statusCode = error instanceof ValidationError ? 400 : 500;Alternatively, consider creating a custom error class hierarchy or using error codes for more reliable status mapping.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
doc-agents/bun.lockis excluded by!**/*.lockdoc-agents/src/generated/app.tsis excluded by!**/generated/**doc-agents/src/generated/routes.tsis excluded by!**/generated/**
📒 Files selected for processing (12)
.env.exampleapp/chat/utils/useStreaming.tsdoc-agents/.env.exampledoc-agents/package.jsondoc-agents/src/agent/agent_pulse/context-builder.tsdoc-agents/src/agent/doc_processing/docs-orchestrator.tsdoc-agents/src/api/agent_pulse/route.tsdoc-agents/src/api/doc-qa/route.tsdoc-agents/src/api/sessions/route.tsdoc-agents/src/middleware/auth.tsmiddleware.tsnext.config.mjs
✅ Files skipped from review due to trivial changes (1)
- app/chat/utils/useStreaming.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- doc-agents/src/api/doc-qa/route.ts
- next.config.mjs
- doc-agents/package.json
- doc-agents/src/middleware/auth.ts
🧰 Additional context used
🧬 Code graph analysis (2)
doc-agents/src/api/sessions/route.ts (3)
doc-agents/src/middleware/auth.ts (2)
bearerTokenAuth(24-46)cookieAuth(9-17)doc-agents/config.ts (1)
config(5-9)app/chat/types.ts (1)
SessionSchema(47-52)
doc-agents/src/agent/agent_pulse/context-builder.ts (2)
doc-agents/src/agent/agent_pulse/types.ts (2)
TutorialState(8-11)Tutorial(13-20)doc-agents/src/agent/agent_pulse/tutorial.ts (2)
getTutorialList(7-47)Tutorial(3-3)
🪛 dotenv-linter (4.0.0)
doc-agents/.env.example
[warning] 3-3: [EndingBlankLine] No blank line at the end of the file
(EndingBlankLine)
[warning] 3-3: [UnorderedKey] The AGENTUITY_SDK_KEY key should go before the TUTORIAL_API_URL key
(UnorderedKey)
.env.example
[warning] 7-7: [EndingBlankLine] No blank line at the end of the file
(EndingBlankLine)
[warning] 7-7: [UnorderedKey] The AGENTUITY_REGION key should go before the AGENTUITY_SDK_KEY key
(UnorderedKey)
🔇 Additional comments (17)
doc-agents/src/agent/doc_processing/docs-orchestrator.ts (2)
1-4: LGTM!The imports are clean and properly organized. Good use of
typekeyword for type-only imports.
140-143: LGTM!Clean stats aggregation and final logging for observability.
doc-agents/src/agent/agent_pulse/context-builder.ts (3)
7-32: LGTM!The function has proper error handling with fallback context for both API failures and exceptions. The logic flow is clear and robust.
37-50: LGTM!The completion check on line 46 correctly uses
>=to match the state-manager logic. All edge cases (no state, unknown tutorial, completed, in-progress) are handled appropriately.
63-118: LGTM overall.The system prompt is well-structured with clear sections for role, personality, tools, rules, and boundaries. Error handling appropriately logs and rethrows for the caller to handle.
middleware.ts (3)
34-39: Good validation with clear error response.Returning 503 Service Unavailable for missing configuration is appropriate. The error message is helpful for debugging, though you may want to make it more generic in production to avoid exposing internal details.
66-77: LGTM!The default response path correctly handles cookie setting when needed, and the existing error code redirect logic is preserved.
32-33: The asymmetry is intentional and correctly implemented./api/sessionsusesstartsWith()because the backend provides multiple sub-routes (/:sessionId,/:sessionId/messages, query parameters), while/api/agent_pulseis a single endpoint with no sub-routes, justifying the exact match. The middleware routing is correct.doc-agents/src/api/sessions/route.ts (4)
1-11: LGTM!Imports and pagination constants are appropriate. The limits are reasonable for session pagination.
27-90: LGTM!Pagination logic is well-implemented with proper clamping of
limitandcursorparameters. The parallel fetching of sessions usingPromise.allis appropriate given theMAX_SESSIONS_LIMITof 50.
150-173: LGTM!The endpoint correctly constructs a user-scoped key and handles the not-found case appropriately.
233-260: LGTM!Delete logic correctly removes both the session data and updates the user's master session list.
doc-agents/src/api/agent_pulse/route.ts (5)
1-14: LGTM!Imports are well-organized with appropriate type imports for
ConversationMessage,Session, andTutorialData.
16-21: LGTM!The interface clearly defines the expected request structure with appropriate optional fields.
143-149: LGTM!Standard SSE headers are correctly defined for streaming responses.
222-222: LGTM!Standard default export pattern for route modules.
152-152: Middleware chain cannot implement fallback authentication.The sequential middleware chain
bearerTokenAuth, cookieAuthdoes not provide fallback/OR logic. SincebearerTokenAuthreturns 401 when the Authorization header is missing or invalid (lines 24-43 ofauth.ts),cookieAuthnever executes. This makes the bearer token required, not optional.To support either bearer token OR cookie authentication, the middleware implementations need to support optional/skip behavior, or this endpoint needs a single middleware that handles both methods with proper OR logic.
conversationHistory = body.conversationHistory.map((msg: unknown) => {
  const m = msg as Record<string, unknown>;
  return {
    author: String(m.author || m.role || 'USER').toUpperCase() as 'USER' | 'ASSISTANT',
    content: typeof m.content === 'string' ? m.content : '',
  };
});
Unsafe type cast for author field.
The cast as 'USER' | 'ASSISTANT' doesn't validate the actual value. If the input contains an unexpected role like "SYSTEM" or "admin", it will pass through without validation and may cause unexpected behavior downstream.
🔧 Suggested fix with validation
conversationHistory = body.conversationHistory.map((msg: unknown) => {
const m = msg as Record<string, unknown>;
+ const rawAuthor = String(m.author || m.role || 'USER').toUpperCase();
+ const author = rawAuthor === 'ASSISTANT' ? 'ASSISTANT' : 'USER';
return {
- author: String(m.author || m.role || 'USER').toUpperCase() as 'USER' | 'ASSISTANT',
+ author,
content: typeof m.content === 'string' ? m.content : '',
};
});
🤖 Prompt for AI Agents
In `@doc-agents/src/api/agent_pulse/route.ts` around lines 38 - 44, The current
mapping in conversationHistory unsafely casts author to 'USER' | 'ASSISTANT';
replace that cast with an explicit validation: normalize the raw role (e.g.,
String(m.author || m.role || 'USER').trim().toUpperCase()), then set author =
normalized === 'ASSISTANT' ? 'ASSISTANT' : 'USER' so only those two values are
allowed (default to 'USER' for anything else), and keep content extraction the
same; update the mapping inside the conversationHistory assignment in route.ts
(the function that maps msg -> { author, content }) to perform this check rather
than using the type assertion.
| if (session.messages && session.messages.length > 0) { | ||
| session.messages = session.messages.map((message) => { | ||
| if (message.timestamp) { | ||
| const date = new Date(message.timestamp); | ||
| return { | ||
| ...message, | ||
| timestamp: date.toISOString(), | ||
| }; | ||
| } | ||
| return message; | ||
| }); | ||
| } |
Same timestamp validation issue as in POST endpoint.
The timestamp normalization here has the same RangeError risk for invalid dates. Apply the same fix as suggested for the POST endpoint.
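Concretely, both handlers could share a small helper along these lines (a sketch only; the `ChatMessage` shape is a simplified stand-in for the project's Message type):

```ts
type ChatMessage = { timestamp?: string } & Record<string, unknown>;

// Normalize timestamps defensively: skip conversion when the value is not a valid date,
// so an invalid string never triggers a RangeError from toISOString().
function normalizeTimestamps(messages: ChatMessage[]): ChatMessage[] {
  return messages.map((message) => {
    if (!message.timestamp) return message;
    const date = new Date(message.timestamp);
    if (Number.isNaN(date.getTime())) return message; // keep the original, invalid value untouched
    return { ...message, timestamp: date.toISOString() };
  });
}
```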
🤖 Prompt for AI Agents
In `@doc-agents/src/api/sessions/route.ts` around lines 191 - 202, The timestamp
normalization for session.messages (inside the mapping over session.messages)
can throw a RangeError for invalid timestamps; update the mapping in route.ts to
validate the parsed Date before calling toISOString (e.g., parse timestamp into
const date = new Date(message.timestamp) and check if isNaN(date.getTime()) or
otherwise handle invalid dates) and only replace timestamp with
date.toISOString() when valid, otherwise leave the original timestamp or set a
safe default; apply the same validation logic used in the POST endpoint to the
session.messages mapping to prevent RangeError.
router.post('/:sessionId/messages', bearerTokenAuth, cookieAuth, async (c) => {
  try {
    const userId = (c.get as (key: string) => string)('userId');
    const sessionId = c.req.param('sessionId');
    const sessionKey = `${userId}_${sessionId}`;

    const { message } = await c.req.json<MessageRequest>();

    if (!message) {
      return c.json({ error: 'Message is required' }, { status: 400 });
    }

    // Get current session
    const sessionResponse = await c.var.kv.get<Session>(config.kvStoreName, sessionKey);
    if (!sessionResponse.exists || !sessionResponse.data) {
      return c.json({ error: 'Session not found' }, { status: 404 });
    }

    const session = sessionResponse.data;
    const updatedSession: Session = {
      ...session,
      messages: [...session.messages, message],
    };
Missing input validation for message endpoint.
This endpoint parses MessageRequest without schema validation, unlike the POST endpoint for sessions. Additionally, unlike POST and PUT which normalize timestamps, messages added here aren't processed for timestamp consistency.
Suggested approach
Consider creating a MessageRequestSchema using Zod and applying validation:
import { z } from 'zod';
const MessageRequestSchema = z.object({
message: z.object({
id: z.string(),
author: z.enum(['USER', 'ASSISTANT']),
content: z.string(),
timestamp: z.string(),
tutorialData: z.any().optional(),
}),
});Then apply the validator middleware and normalize the timestamp:
-router.post('/:sessionId/messages', bearerTokenAuth, cookieAuth, async (c) => {
+router.post('/:sessionId/messages', bearerTokenAuth, cookieAuth, validator({ input: MessageRequestSchema }), async (c) => {
try {
const userId = (c.get as (key: string) => string)('userId');
const sessionId = c.req.param('sessionId');
const sessionKey = `${userId}_${sessionId}`;
- const { message } = await c.req.json<MessageRequest>();
-
- if (!message) {
- return c.json({ error: 'Message is required' }, { status: 400 });
- }
+ const { message } = c.req.valid('json');
+
+ // Normalize timestamp
+ if (message.timestamp) {
+ const date = new Date(message.timestamp);
+ if (!Number.isNaN(date.getTime())) {
+ message.timestamp = date.toISOString();
+ }
+ }
Summary by CodeRabbit
Release Notes
New Features
Chores