
chore: Update AI SDK dependencies to v2 and migrate tests to Bun#40

Merged
0xbbjoker merged 1 commit into 1.x from update/dependencies-ai-sdk on Sep 22, 2025

Conversation


@standujar standujar commented Sep 21, 2025

Summary

  • Updated AI SDK dependencies from v1 to v2 (@ai-sdk/anthropic, @ai-sdk/google, @ai-sdk/openai)
  • Migrated test suite from Vitest to Bun test framework
  • Updated @openrouter/ai-sdk-provider to v1.2.0 for compatibility

Changes

Dependencies Updated

  • @ai-sdk/anthropic: ^1.2.11 → ^2.0.17
  • @ai-sdk/google: ^1.2.18 → ^2.0.14
  • @ai-sdk/openai: ^1.3.22 → ^2.0.32
  • @openrouter/ai-sdk-provider: ^0.4.5 → ^1.2.0
  • ai: ^4.3.17 → ^5.0.48

Testing Framework

  • Replaced Vitest with Bun's built-in test framework
  • Updated all test imports and mocking patterns
  • Maintained 100% test coverage

Breaking Changes

None. The plugin API remains unchanged.

Testing

All existing tests pass with the new Bun test runner

Summary by CodeRabbit

  • New Features

    • Added the ability to list and delete memories within the knowledge service.
  • Improvements

    • More reliable fragment handling and ID generation when processing content.
    • Consistent token usage reporting across providers.
    • Safer fragment count reporting in processing responses.
    • Updated AI generation to use maxOutputTokens for clearer limits.
  • Chores

    • Bumped package version to 1.5.10 and updated AI-related dependencies.
  • Tests

    • Migrated test suite to Bun’s Jest-like runner and updated mocking/assertions accordingly.


coderabbitai bot commented Sep 21, 2025

Walkthrough

Migrates tests to Bun’s Jest-like API, updates imports to node:fs/path, refactors LLM embedding and generation options (token usage fields, maxOutputTokens), adjusts OpenRouter caching handling, modifies fragment ID generation, adds KnowledgeService getMemories/deleteMemory, and bumps package and dependencies.

Changes

Cohort / File(s): Change Summary

  • Test framework migration (__tests__/action.test.ts): Switches from Vitest to Bun's Jest-like test API; replaces vi.mock with mock.module, jest.fn, and jest.spyOn; updates assertions and mock handling; aligns fs/path mocks; maintains test behavior.
  • Version and dependencies (package.json): Bumps version to 1.5.10; upgrades AI SDK deps (@ai-sdk/*, ai, @openrouter/ai-sdk-provider, @elizaos/core); adds devDependency @types/bun.
  • Actions adjustments (src/actions.ts): Uses node:fs and node:path specifiers; safeguards fragmentCount via optional chaining; removes an unused result assignment when adding knowledge from direct text.
  • LLM refactor and token accounting (src/llm.ts): Refactors OpenAI embedding into a single embedOptions call; standardizes token usage to inputTokens/outputTokens; renames maxTokens to maxOutputTokens across providers; adjusts OpenRouter caching handling (ignores external cacheOptions; internal detection).
  • Knowledge service updates (src/service.ts): Changes fragment ID generation input to createUniqueUuid(this.runtime); adds public methods getMemories(params) and deleteMemory(memoryId); minor logging tweak.
  • Import specifier normalization (src/tests.ts): Replaces "fs"/"path" with "node:fs"/"node:path"; no behavior change.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Caller
  participant LLM as LLM Module
  participant Provider as AI Provider
  Note over LLM,Provider: Text generation (updated params & token accounting)

  Caller->>LLM: generateText(prompt, { maxOutputTokens, model, ... })
  rect rgba(230,240,255,0.5)
    LLM->>Provider: generate({ maxOutputTokens, ... })
    Provider-->>LLM: result { usage: { inputTokens, outputTokens }, text }
  end
  LLM-->>Caller: { text, usage: { inputTokens, outputTokens } }
sequenceDiagram
  autonumber
  actor Caller
  participant LLM as LLM Module
  participant OpenAI as OpenAI Embed API
  Note over LLM,OpenAI: Embedding (refactored embedOptions)

  Caller->>LLM: generateTextEmbedding(value, { model, dimensions? })
  rect rgba(230,255,230,0.5)
    LLM->>OpenAI: embed({ model, value, dimensions? })
    OpenAI-->>LLM: embedding[]
  end
  LLM-->>Caller: embedding[]
sequenceDiagram
  autonumber
  actor Client
  participant KS as KnowledgeService
  participant Store as Memory Store
  Note over KS,Store: New public methods

  Client->>KS: getMemories(params)
  KS->>Store: query(params + agent scope)
  Store-->>KS: Memory[]
  KS-->>Client: Memory[]

  Client->>KS: deleteMemory(memoryId)
  KS->>Store: delete(memoryId)
  Store-->>KS: ack
  KS-->>Client: void

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

A nibble of tests, a bundle of cheer,
I thump to new tokens: input, output clear.
Embeds hop swift with tidy options in tow,
Memories fetched, old ones let go.
IDs sprout fresh where fragments appear—
Version burrows forward. Onward, my dear! 🐇✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title succinctly and accurately describes the primary work in this changeset (upgrading AI SDK dependencies to v2 and migrating tests to Bun), matches the package.json dependency updates and test-file changes, and is concise and readable for repository history scanning.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, above the required threshold of 80.00%.

@standujar standujar requested a review from 0xbbjoker September 21, 2025 20:06

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
src/actions.ts (1)

188-188: Inconsistency with file processing: missing fragment count in response.

The file processing path handles the fragmentCount from the result (line 157), but the text processing path doesn't use the returned value from addKnowledge. This inconsistency means users won't get information about how many fragments their text was split into, unlike when processing files.

Consider capturing and using the result to provide consistent feedback:

- await service.addKnowledge(knowledgeOptions);
+ const result = await service.addKnowledge(knowledgeOptions);

  response = {
-   text: `I've added that information to my knowledge base. It has been stored and indexed for future reference.`,
+   text: `I've added that information to my knowledge base. It has been split into ${result?.fragmentCount || 1} searchable fragment${(result?.fragmentCount || 1) > 1 ? 's' : ''} and indexed for future reference.`,
  };
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between 3faf921 and 5e8a803.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • __tests__/action.test.ts (15 hunks)
  • package.json (3 hunks)
  • src/actions.ts (3 hunks)
  • src/llm.ts (11 hunks)
  • src/service.ts (2 hunks)
  • src/tests.ts (1 hunks)
🔇 Additional comments (18)
src/tests.ts (1)

16-17: LGTM! Import path migration to Node.js conventions.

The migration from fs and path to node:fs and node:path aligns with Node.js best practices for built-in module imports. This change makes it explicit that these are core Node.js modules rather than npm packages.
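
For illustration, the import form in question (a generic sketch, not the file's exact imports):

```typescript
// Before: import fs from 'fs'; import path from 'path';
// After: the node: prefix makes the builtin origin explicit and guarantees
// resolution to the core module rather than an npm package of the same name.
import fs from 'node:fs';
import path from 'node:path';

console.log(path.basename('/tmp/example.txt')); // last segment of the path
console.log(fs.existsSync('.')); // whether the current directory exists
```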

src/actions.ts (2)

10-11: LGTM! Import path migration to Node.js conventions.

The migration from fs and path to node:fs and node:path aligns with Node.js best practices for built-in module imports.


157-157: Remove unnecessary optional chaining for fragmentCount

addKnowledge's return type includes fragmentCount: number (see src/service.ts), so result.fragmentCount is always present; replace ${result?.fragmentCount || 0} with ${result.fragmentCount} (or keep || 0 only if you intentionally want a defensive display).
Location: src/actions.ts:157

Likely an incorrect or invalid review comment.

src/service.ts (3)

480-480: LGTM! Improved logging readability.

Using template literal instead of string concatenation improves code readability.


876-902: LGTM! New public API methods for memory management.

The addition of getMemories and deleteMemory methods provides essential memory management capabilities that were previously missing from the public API. The methods are well-documented and properly scoped to the agent.
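
As a hypothetical in-memory model of the two methods' contract (the real KnowledgeService delegates to the runtime's memory store; the Map-based storage and parameter names below are assumptions for illustration):

```typescript
type UUID = string;
interface Memory { id: UUID; content: string; }

class MemoryStoreSketch {
  private memories = new Map<UUID, Memory>();

  add(memory: Memory): void {
    this.memories.set(memory.id, memory);
  }

  // getMemories(params): list stored memories, optionally limited by count.
  getMemories(params: { count?: number } = {}): Memory[] {
    const all = [...this.memories.values()];
    return params.count !== undefined ? all.slice(0, params.count) : all;
  }

  // deleteMemory(memoryId): remove a single memory by its id.
  deleteMemory(memoryId: UUID): void {
    this.memories.delete(memoryId);
  }
}
```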


843-844: No change required — createUniqueUuid correctly takes runtime as first arg.
Both src/service.ts (lines 841–845) and src/routes.ts (around line 383) call createUniqueUuid(runtime, …), so passing this.runtime + fragmentIdContent is consistent with existing usage and does not change fragment-ID semantics.

Likely an incorrect or invalid review comment.

__tests__/action.test.ts (2)

1-17: LGTM! Successful migration to Bun test framework.

The migration from Vitest to Bun's built-in test framework is well executed, maintaining Jest-like syntax for familiarity.


9-17: LGTM! Proper module mocking for Node.js core modules.

The mocking setup correctly handles the node:fs and node:path imports using Bun's mock.module API.

package.json (3)

4-4: Verify version bump alignment with project versioning strategy.

The version has jumped from 1.2.3 to 1.5.10. Please confirm this aligns with the project's versioning strategy and that all intermediate versions have been properly released or documented.


52-52: LGTM! Addition of Bun types for test migration.

The addition of @types/bun as a devDependency properly supports the test framework migration.


30-36: Verify AI SDK v2 compatibility and ai v5 interoperability

package.json now depends on @ai-sdk/* (v2) and ai@^5.0.48 — these are major-version upgrades; confirm migration and compatibility.

  • Update all SDK usages to the v2 API (search for "@ai-sdk/" imports and adjust call signatures).
  • Follow each provider's v2 migration guide (Anthropic, Google, OpenAI).
  • Confirm ai@5 is compatible with the @ai-sdk v2 providers; run unit/integration tests and a full build.

Location: package.json (dependencies) — @ai-sdk/anthropic ^2.0.17, @ai-sdk/google ^2.0.14, @ai-sdk/openai ^2.0.32, ai ^5.0.48.

src/llm.ts (7)

126-151: LGTM! Improved embedding configuration structure.

The refactor to use embedOptions object with structured parameters is cleaner and more maintainable than the previous approach. The logging correctly references embedOptions.dimensions and the conditional dimension parameter handling for OpenAI models is appropriate.
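
A sketch of assembling a single options object for an embed call, with `dimensions` included only when provided. The function name and shape are assumptions; the actual embedOptions construction lives in src/llm.ts:

```typescript
function buildEmbedOptions(
  model: string,
  value: string,
  dimensions?: number,
): { model: string; value: string; dimensions?: number } {
  return {
    model,
    value,
    // Conditional spread keeps the key absent (not undefined)
    // when no dimension override is supplied.
    ...(dimensions !== undefined ? { dimensions } : {}),
  };
}
```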


281-287: Parameter rename aligns with AI SDK v2.

The change from maxTokens to maxOutputTokens and token usage field updates (inputTokens/outputTokens instead of promptTokens/completionTokens) correctly align with AI SDK v2 conventions.
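
The renames can be sketched as small adapters. The field names follow the AI SDK v2 conventions described above; the adapter functions themselves are illustrative, not code from the PR:

```typescript
type UsageV1 = { promptTokens: number; completionTokens: number };
type UsageV2 = { inputTokens: number; outputTokens: number };

// v1 usage fields map one-to-one onto the v2 names.
function usageToV2(usage: UsageV1): UsageV2 {
  return {
    inputTokens: usage.promptTokens,
    outputTokens: usage.completionTokens,
  };
}

// Call-option rename: v1 `maxTokens` becomes v2 `maxOutputTokens`.
function optionsToV2(opts: { maxTokens: number }): { maxOutputTokens: number } {
  return { maxOutputTokens: opts.maxTokens };
}
```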


337-343: Token usage tracking updated consistently.

The OpenAI text generation correctly uses the new token field names and maxOutputTokens parameter, maintaining consistency with the AI SDK v2 migration.


373-379: Google provider updates aligned with SDK changes.

The parameter and token usage field updates are consistent with the overall AI SDK v2 migration pattern seen throughout the file.


549-563: Caching implementation uses updated token fields correctly.

The Claude caching implementation correctly uses maxOutputTokens and the new token usage field names (inputTokens/outputTokens). The caching logic appears intact despite the parameter changes.


623-638: Gemini caching maintains functionality with updated parameters.

The Gemini caching implementation correctly uses the updated parameter names and token fields while preserving the implicit caching logic for Gemini 2.5 models.


658-671: Standard OpenRouter text generation updated consistently.

The standard (non-caching) OpenRouter implementation uses the correct updated parameter names and token field references, maintaining consistency with the rest of the file.

maxTokens: number,
cacheDocument?: string,
cacheOptions?: { type: 'ephemeral' },
_cacheOptions?: { type: 'ephemeral' },
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue

🧩 Analysis chain

Verify caching functionality still works as intended.

The parameter renaming to _cacheOptions (with underscore prefix) suggests it's now ignored, but the JSDoc comment on lines 208-211 still documents cacheOptions: { type: "ephemeral" } as a valid usage pattern. This could mislead users.

Run the following script to check if the caching functionality and documentation are consistent:


🏁 Script executed:

#!/bin/bash
# Verify caching parameter usage and documentation consistency

echo "=== Checking for cacheOptions usage in codebase ==="
rg -n "cacheOptions" --type=ts -C3

echo "=== Checking JSDoc examples that mention cacheOptions ==="
rg -n "cacheOptions.*ephemeral" --type=ts -A5 -B5

echo "=== Checking if _cacheOptions parameter is actually used ==="
ast-grep --pattern 'function generateOpenRouterText($$$, _cacheOptions$$$) {
  $$$
}'

Length of output: 4598


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== grep for _cacheOptions ==="
rg -n "_cacheOptions" -S || true

echo
echo "=== grep for cacheOptions (all hits) ==="
rg -n "cacheOptions" --type=ts -S -n -C2 || true

echo
echo "=== all generateText call sites (with context) ==="
rg -n --type=ts -C3 "generateText\(" || true

echo
echo "=== src/llm.ts lines 180-260 (JSDoc + nearby code) ==="
sed -n '180,260p' src/llm.ts || true

echo
echo "=== src/llm.ts lines 380-440 (function signature + body head) ==="
sed -n '380,440p' src/llm.ts || true

echo
echo "=== src/document-processor.ts lines 660-700 (call site shown earlier) ==="
sed -n '660,700p' src/document-processor.ts || true

echo
echo "=== types.ts cacheOptions type area ==="
sed -n '80,110p' src/types.ts || true

Length of output: 12384


Fix ignored cacheOptions in generateOpenRouterText

The parameter _cacheOptions?: { type: 'ephemeral' } in src/llm.ts (generateOpenRouterText — line ~409) is never referenced; generateText passes overrideConfig?.cacheOptions (src/llm.ts ~lines 239–243) and callers (src/document-processor.ts:676–679) supply cacheOptions, so the option is effectively ignored. Either remove the underscore and actually consume cacheOptions (implement ephemeral behavior) or stop accepting/passing it and update the JSDoc/types to avoid misleading users.

Offending line (still present):
_cacheOptions?: { type: 'ephemeral' },

🤖 Prompt for AI Agents
In src/llm.ts around line 409, the parameter `_cacheOptions?: { type:
'ephemeral' }` is declared but never used; either consume it or stop accepting
it — choose to remove it to avoid misleading API: delete the unused
`_cacheOptions` parameter from generateOpenRouterText, update the function's
JSDoc/type signature to remove cacheOptions, remove any passing of this arg from
callers (e.g., update calls that supply cacheOptions so they no longer pass it),
and run/adjust any affected typings or docs so the option is no longer
advertised; if you prefer the other approach instead, rename `_cacheOptions` to
`cacheOptions`, implement handling for `{ type: 'ephemeral' }` within
generateOpenRouterText (apply ephemeral behavior to caching logic) and ensure
callers pass the option and types/JSDoc reflect the implemented behavior.
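
One of the two fixes suggested above, sketched: actually consume cacheOptions rather than ignoring it. The ephemeral handling shown is a placeholder assumption, not the plugin's real caching logic:

```typescript
type CacheOptions = { type: 'ephemeral' };

// Translate a caller-supplied cacheOptions value into provider metadata;
// an absent option yields an empty object so callers can spread the result.
function resolveCacheControl(cacheOptions?: CacheOptions): Record<string, unknown> {
  if (cacheOptions?.type === 'ephemeral') {
    return { cacheControl: { type: 'ephemeral' } };
  }
  return {};
}
```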


@0xbbjoker 0xbbjoker left a comment


LGTM

@0xbbjoker 0xbbjoker merged commit 279f21e into 1.x Sep 22, 2025
2 checks passed
@0xbbjoker 0xbbjoker deleted the update/dependencies-ai-sdk branch September 22, 2025 05:44