feat: add Mistral AI plugin with LLM support #1226
Conversation
🦋 Changeset detected. Latest commit: 2f93d8d. The changes in this PR will be included in the next version bump. This PR includes changesets to release 23 packages.
- Implements LLM and LLMStream using the official @mistralai/mistralai SDK
- Supports streaming, tool calling, parallel tool calls, and tool choice
- Handles all ChatContext item types: messages, function calls, and outputs
- Includes a test suite using the shared @livekit/agents-plugins-test helpers
- Adds a MistralChatModels type for all supported model identifiers
@codex review and check the plugin package format against the other plugins. Review the WebSocket patterns compared with other LLM plugins and check for alignment or discrepancies.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2f93d8d64e
```ts
throw new APIConnectionError({
  message: `Mistral LLM: connection error - ${err.message ?? 'unknown error'}`,
  options: { retryable: true },
```
Stop retrying after emitting partial stream output
This catch path always throws APIConnectionError with retryable: true, even when chunks from the current stream attempt have already been queued. If a network drop happens mid-response, the base retry loop starts a fresh request and appends a second completion to the already-emitted text and tool calls, which can corrupt downstream tool execution and user-visible output. Track a retryable flag that flips to false after the first emitted chunk or usage event, and reuse it in the thrown API errors.
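The suggested fix can be sketched as a stream wrapper that flips a retryable flag once any output has reached the consumer. This is a simplified, hypothetical sketch: the `APIConnectionError` class here is a stand-in for the framework's real error type, and `guardedStream` is an illustrative name, not part of the plugin's actual API.

```typescript
// Simplified stand-in for the framework's APIConnectionError (assumption:
// the real class takes a { message, options } object; this sketch flattens it).
class APIConnectionError extends Error {
  constructor(
    message: string,
    public readonly retryable: boolean,
  ) {
    super(message);
  }
}

// Wraps an upstream chunk stream. Errors raised before any chunk is emitted
// are safe to retry; errors raised after partial output are not, because a
// retry would append a second completion to text the consumer already saw.
async function* guardedStream(
  source: AsyncIterable<string>,
): AsyncGenerator<string> {
  let retryable = true;
  try {
    for await (const chunk of source) {
      retryable = false; // output has been emitted; stop marking errors retryable
      yield chunk;
    }
  } catch (err) {
    const message = err instanceof Error ? err.message : 'unknown error';
    throw new APIConnectionError(
      `Mistral LLM: connection error - ${message}`,
      retryable,
    );
  }
}
```

The same flag could also be flipped on usage events, as the review suggests, so any signal that the consumer observed this attempt disables retrying.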
```json
"scripts": {
  "build": "tsup --onSuccess \"pnpm build:types\"",
  "build:types": "tsc --declaration --emitDeclarationOnly && node ../../scripts/copyDeclarationOutput.js",
  "clean": "rm -rf dist",
  "clean:build": "pnpm clean && pnpm build",
  "lint": "eslint -f unix \"src/**/*.{ts,js}\"",
  "test": "vitest"
},
```
Add API check scripts for the new plugin package
The new plugin package diverges from the other published plugins by omitting the api:check and api:update scripts, so turbo run api:check will skip this package and public API regressions in its generated declarations can ship unnoticed. Since the package already includes api-extractor.json, wiring up the standard scripts (and the matching dev dependency) keeps it aligned with the release validation used by the existing LLM plugins.
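A minimal sketch of the missing entries, assuming the package follows the same API Extractor convention as its sibling plugins (the exact invocation and flags should be copied from an existing plugin's package.json rather than from this sketch):

```json
"scripts": {
  "api:check": "api-extractor run",
  "api:update": "api-extractor run --local"
},
"devDependencies": {
  "@microsoft/api-extractor": "^7.0.0"
}
```

With these in place, turbo run api:check picks the package up and fails the pipeline when the generated declarations drift from the committed API report.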
Description
Changes Made
Pre-Review Checklist
Testing
- restaurant_agent.ts and realtime_agent.ts work properly (for major changes)
Additional Notes
Note to reviewers: Please ensure the pre-review checklist is completed before starting your review.
restaurant_agent.ts and realtime_agent.ts were not tested because no changes to the core code were applied.