Adapter/groq #289
base: main
Conversation
Introduce a new Groq adapter to enable fast LLM inference via Groq's API. Includes TypeScript configuration and Vite build setup for consistent tooling across the AI packages.
Introduce the Groq SDK package and add utility functions for creating a Groq client, retrieving the API key from environment variables, and generating prefixed IDs. This also updates the lockfile to include required Groq SDK dependencies.
…ck#278) * feat: opus 4.6 model & additional config for provider clients * fix: issue with gemini adapter
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* fix: anthropic tool call issues * fixing pnpm lock * ci: apply automated fixes * reworking model to uimessage conversions * simplifying the message conversion handling * ci: apply automated fixes * more small fixups * simplifying the message conversion handling * small test fixups * ci: apply automated fixes --------- Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
…e types, utilities, and text adapter.
…ck#278) * feat: opus 4.6 model & additional config for provider clients * fix: issue with gemini adapter
* ci: Version Packages * fix version numbers * fix changelogs * ci: apply automated fixes --------- Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Jack Herrington <jherr@pobox.com> Co-authored-by: Alem Tuzlak <t.zlak@hotmail.com> Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Walkthrough

Adds a Groq adapter, breaks types by making stream deltas required, refactors stream processing for lazy assistant-message creation, preserves interleaved message ordering, enhances Anthropic/Gemini adapters (merge/dedupe/tool handling), expands model metadata (Claude Opus 4.6), and adds extensive tests and docs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ChatClient
    participant StreamProcessor
    participant Adapter
    participant ProviderAPI
    Client->>ChatClient: start chat stream
    ChatClient->>Adapter: request streaming
    Adapter->>ProviderAPI: open provider stream
    ProviderAPI-->>Adapter: RUN_STARTED / TEXT_MESSAGE_START
    Adapter->>StreamProcessor: emit TEXT_MESSAGE_START
    StreamProcessor->>StreamProcessor: prepareAssistantMessage() (defer creation)
    ProviderAPI-->>Adapter: TEXT_MESSAGE_CONTENT (deltas)
    Adapter->>StreamProcessor: emit TEXT_MESSAGE_CONTENT
    StreamProcessor->>StreamProcessor: ensureAssistantMessage() -> create message, emit messageAppended
    ProviderAPI-->>Adapter: TOOL_CALL_START / TOOL_CALL_ARGS / TOOL_CALL_END
    Adapter->>StreamProcessor: emit TOOL_CALL_* events (tool lifecycle)
    StreamProcessor->>ChatClient: queue/wait for tool execution results
    ProviderAPI-->>Adapter: RUN_FINISHED
    Adapter->>StreamProcessor: emit RUN_FINISHED
    StreamProcessor->>ChatClient: finalizeStream() -> return final UIMessage
    ChatClient-->>Client: resolve UIMessage (or null)
```

```mermaid
sequenceDiagram
    participant UI
    participant uiMessageToModelMessages
    participant ModelMessages
    UI->>uiMessageToModelMessages: pass UIMessage array
    uiMessageToModelMessages->>uiMessageToModelMessages: collapse non-assistant parts
    uiMessageToModelMessages->>uiMessageToModelMessages: buildAssistantMessages to preserve interleaving
    uiMessageToModelMessages-->>ModelMessages: return ordered ModelMessages preserving text/tool-call/tool-result order
```
Estimated code review effort: 5 (Critical), ~120 minutes.
Pre-merge checks: 2 passed, 1 failed (inconclusive).
Tip: You can configure your own custom pre-merge checks in the settings.

No actionable comments were generated in the recent review.
Actionable comments posted: 15
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.

Outside diff range comments (3)
packages/typescript/ai/src/activities/chat/tools/tool-calls.ts (1)

296-306: Potential issue | Major: Inconsistent "null" handling between the two argument-parsing paths.

ToolCallManager.executeTools (line 134) explicitly normalizes "null" to "{}" before parsing, but this path in executeToolCalls does not. If a provider (e.g., Anthropic for empty tool_use blocks) sends "null" as tool arguments, this function will pass null to tool.execute() and schema validation, while the other path passes {}. This divergence can cause downstream TypeErrors or schema-validation failures depending on which code path is taken.

Additionally, the if (argsStr) guard on line 299 is always true because line 298's || '{}' fallback guarantees a non-empty string, making the conditional dead code.

Proposed fix: align with ToolCallManager and remove the dead branch:

```diff
   // Parse arguments, throwing error if invalid JSON
   let input: unknown = {}
   const argsStr = toolCall.function.arguments.trim() || '{}'
-  if (argsStr) {
-    try {
-      input = JSON.parse(argsStr)
-    } catch (parseError) {
-      // If parsing fails, throw error to fail fast
-      throw new Error(`Failed to parse tool arguments as JSON: ${argsStr}`)
-    }
+  try {
+    input = JSON.parse(argsStr === 'null' ? '{}' : argsStr)
+  } catch (parseError) {
+    // If parsing fails, throw error to fail fast
+    throw new Error(`Failed to parse tool arguments as JSON: ${argsStr}`)
   }
```

packages/typescript/ai-anthropic/src/text/text-provider-options.ts (1)

147-156: Potential issue | Major: Type intersection makes type: 'adaptive' unreachable.

ExternalTextProviderOptions intersects AnthropicThinkingOptions (line 152, which defines thinking with variants 'enabled' | 'disabled') with Partial<AnthropicAdaptiveThinkingOptions> (line 155, which adds the 'adaptive' variant). In TypeScript, intersecting two properties with the same key intersects their value types; the 'adaptive' discriminant doesn't overlap with 'enabled' | 'disabled', so it gets eliminated. Users will never be able to pass { thinking: { type: 'adaptive' } } through this type.

Consider replacing AnthropicThinkingOptions with AnthropicAdaptiveThinkingOptions directly (which is a superset), instead of intersecting both:

```diff
 export type ExternalTextProviderOptions = AnthropicContainerOptions &
   AnthropicContextManagementOptions &
   AnthropicMCPOptions &
   AnthropicServiceTierOptions &
   AnthropicStopSequencesOptions &
-  AnthropicThinkingOptions &
+  AnthropicAdaptiveThinkingOptions &
   AnthropicToolChoiceOptions &
-  AnthropicSamplingOptions &
-  Partial<AnthropicAdaptiveThinkingOptions> &
-  Partial<AnthropicEffortOptions>
+  AnthropicSamplingOptions &
+  AnthropicEffortOptions
```

packages/typescript/ai-gemini/tests/gemini-adapter.test.ts (1)

55-56: Potential issue | Major: createSummarizeAdapter passes a raw string instead of a config object.

The GeminiSummarizeAdapter constructor expects a GeminiSummarizeConfig object (with an apiKey field), but line 56 passes a plain string. Compare with createTextAdapter on lines 53-54, which correctly uses { apiKey: 'test-key' }.

Proposed fix:

```diff
 const createSummarizeAdapter = () =>
-  new GeminiSummarizeAdapter('test-key', 'gemini-2.0-flash')
+  new GeminiSummarizeAdapter({ apiKey: 'test-key' }, 'gemini-2.0-flash')
```
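The "null"-argument normalization discussed in the tool-calls.ts comment above can be sketched in isolation. This is a hypothetical standalone helper (the real logic lives inside the tool-call manager), showing why `JSON.parse` needs the `'null'` guard:

```typescript
// Hypothetical sketch of the normalization described above. Providers such as
// Anthropic may send the literal string "null" for empty tool_use blocks, so
// both parsing paths should coerce it to an empty object before JSON.parse.
function parseToolArguments(raw: string): unknown {
  const argsStr = raw.trim() || '{}'
  const normalized = argsStr === 'null' ? '{}' : argsStr
  try {
    return JSON.parse(normalized)
  } catch {
    // Fail fast on genuinely malformed JSON instead of passing garbage downstream.
    throw new Error(`Failed to parse tool arguments as JSON: ${argsStr}`)
  }
}
```

Without the guard, `JSON.parse('null')` succeeds and returns `null`, which then reaches `tool.execute()` and schema validation instead of `{}`.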
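The intersection behavior flagged for ExternalTextProviderOptions can be reproduced with minimal illustrative types (these are not the real package types):

```typescript
// Illustrative only: intersecting two properties with the same key intersects
// their value types, so a discriminant present on only one side disappears.
interface EnabledOnly {
  thinking: { type: 'enabled' | 'disabled' }
}
interface WithAdaptive {
  thinking: { type: 'enabled' | 'disabled' | 'adaptive' }
}

// ('enabled' | 'disabled') & ('enabled' | 'disabled' | 'adaptive')
// collapses to 'enabled' | 'disabled': 'adaptive' cannot be expressed.
type Combined = EnabledOnly & WithAdaptive

const ok: Combined = { thinking: { type: 'enabled' } } // compiles
// const bad: Combined = { thinking: { type: 'adaptive' } } // type error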
Fix all issues with AI agents
In @.changeset/slimy-ways-wave.md:
- Around line 1-7: The changeset forgot to include `@tanstack/ai` even though
types.ts made the delta field required on TextMessageContentEvent and
StepFinishedEvent; update the .changeset to add '@tanstack/ai' with an
appropriate version bump (at least minor, or major if your semver policy treats
type-level breaking changes as major) so downstream consumers are alerted, and
mention in the changeset message that delta was made required on
TextMessageContentEvent and StepFinishedEvent to clarify the breaking change.
In `@packages/typescript/ai-anthropic/src/model-meta.ts`:
- Around line 51-79: The CLAUDE_OPUS_4_6 model metadata is missing the
adaptive_thinking flag in its supports object; update the supports block inside
the CLAUDE_OPUS_4_6 constant to include adaptive_thinking: true (alongside
input, extended_thinking, priority_tier) so the supports object accurately
reflects the model capabilities.
In `@packages/typescript/ai-groq/package.json`:
- Around line 45-48: The package.json is missing runtime dependency "groq-sdk"
(used by src/utils/client.ts and src/adapters/text.ts) and should also mirror
other adapters by moving or adding "@tanstack/ai" to devDependencies as
workspace:*. Update package.json to add a dependencies section containing
"groq-sdk" with an appropriate semver (e.g., ^0.x.x) and add a devDependencies
section that includes "@tanstack/ai": "workspace:*" along with the existing dev
deps (e.g., "@vitest/coverage-v8" and "vite") so the adapter's imports resolve
correctly at runtime and during development.
In `@packages/typescript/ai-groq/src/adapters/text.ts`:
- Around line 385-428: mapTextOptionsToGroq is dropping provider-specific
options because modelOptions is validated but never merged into the returned
request; update the return object in mapTextOptionsToGroq to spread modelOptions
first (e.g., ...modelOptions) so GroqTextProviderOptions (frequency_penalty,
presence_penalty, stop, seed, response_format, reasoning_effort, logprobs, etc.)
are included, while keeping explicit fields (model, messages, temperature,
max_tokens/maxTokens, top_p/topP, tools, stream) afterwards so they take
precedence; locate modelOptions, validateTextProviderOptions,
convertToolsToProviderFormat and adjust the returned
ChatCompletionCreateParamsStreaming object to include the spread.
- Around line 349-362: The RUN_FINISHED yield reads usage only from
chunk.x_groq?.usage and thus drops usage when the SDK populates chunk.usage;
create a fallback local variable (e.g., const usageData = chunk.x_groq?.usage ??
chunk.usage) and then reference usageData when constructing the usage object
(prompt_tokens, completion_tokens, total_tokens) in the yield block instead of
chunk.x_groq.usage so both legacy x_groq and standard chunk.usage are supported.
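The fallback described in this prompt can be sketched as follows (the chunk and usage shapes here are simplified assumptions, not the Groq SDK's actual types):

```typescript
// Simplified shapes for illustration: usage may arrive under the legacy
// x_groq envelope or at the top level, depending on SDK version.
interface UsageInfo {
  prompt_tokens: number
  completion_tokens: number
  total_tokens: number
}
interface StreamChunk {
  usage?: UsageInfo
  x_groq?: { usage?: UsageInfo }
}

function extractUsage(chunk: StreamChunk): UsageInfo | undefined {
  // Prefer the legacy envelope, then fall back to the standard location.
  return chunk.x_groq?.usage ?? chunk.usage
}
```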
In `@packages/typescript/ai-groq/src/message-types.ts`:
- Around line 68-73: ChatCompletionNamedToolChoice currently uses a capitalized
Function property and lacks the required type field; update the interface
ChatCompletionNamedToolChoice to have a lowercase property named function and
add a type property typed to the literal "function" (i.e., { type: "function";
function: { name: string } }) so it matches the Groq API schema and will
serialize correctly for tool_choice named tool choices.
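A minimal sketch of the corrected shape (the tool name below is illustrative; the structure mirrors the OpenAI-compatible schema this prompt references):

```typescript
// Named tool choice as the review describes it: a lowercase `function`
// property plus a literal `type` discriminant.
interface ChatCompletionNamedToolChoice {
  type: 'function'
  function: { name: string }
}

// Illustrative value; 'get_weather' is a made-up tool name.
const toolChoice: ChatCompletionNamedToolChoice = {
  type: 'function',
  function: { name: 'get_weather' },
}

// Serializes with both fields present, as the API expects.
const payload = JSON.stringify(toolChoice)
```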
In `@packages/typescript/ai-groq/src/model-meta.ts`:
- Around line 308-321: GROQ_CHAT_MODELS is currently typed as a widened string[]
which makes (typeof GROQ_CHAT_MODELS)[number] resolve to string and breaks
per-model type narrowing for GroqChatModels, ResolveProviderOptions<TModel>,
ResolveInputModalities<TModel>, and GroqTextAdapter<TModel>; fix it by marking
the exported array literal GROQ_CHAT_MODELS with as const so the element types
remain string literals (preserving per-model type safety) and update any related
usages if needed to accept the readonly tuple type.
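The widening problem described in this prompt is easy to demonstrate with a toy array (illustrative model names, not the real GROQ_CHAT_MODELS list):

```typescript
// Illustrative sketch: without `as const`, an array literal widens to
// string[], so (typeof ARR)[number] is just `string`. With `as const` the
// elements stay literal types and per-model narrowing is preserved.
const EXAMPLE_MODELS = ['model-a', 'model-b'] as const

// Resolves to the union 'model-a' | 'model-b' instead of string.
type ModelName = (typeof EXAMPLE_MODELS)[number]

// A generic bounded by the literal union keeps the specific model type.
function pickModel<M extends ModelName>(model: M): M {
  return model
}

const chosen = pickModel('model-a') // typed as 'model-a', not string
```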
In `@packages/typescript/ai-groq/src/utils/client.ts`:
- Around line 1-6: The module imports Groq_SDK (import Groq_SDK and the
ClientOptions-based GroqClientConfig) but the package.json lacks groq-sdk in
dependencies, so add "groq-sdk" to this package's dependencies in package.json
(use an appropriate semver like ^latest-or-matched-version for the repo), then
reinstall/update lockfile so the runtime import resolves correctly.
In `@packages/typescript/ai-groq/src/utils/schema-converter.ts`:
- Around line 12-33: The function transformNullsToUndefined currently drops
object keys when the transformed value is undefined, which removes keys that
were originally undefined; in the for-loop inside transformNullsToUndefined
check the original value (the variable value) to decide whether to keep the key:
if transformed !== undefined OR value === undefined then assign result[key] =
transformed (which may be undefined) so only nulls get converted to undefined
but pre-existing undefined properties are preserved; update the conditional in
transformNullsToUndefined accordingly.
- Around line 57-86: The loop in makeGroqStructuredOutputCompatible currently
recurses into object/array props but uses else-if so optional object/array
fields never get 'null' added; after you recurse for prop.type === 'object' or
'array' (the blocks that set properties[propName]) check wasOptional and then
ensure properties[propName].type includes 'null' (if type is a string wrap into
[type,'null'], or if array push 'null' when missing), similar to the existing
wasOptional handling for primitives; update the logic in the for-loop around
propName/prop to always apply the optional-null union after recursion.
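Both schema-converter fixes above can be sketched as standalone helpers. These are hypothetical simplified versions (assuming plain JSON-schema-like objects), not the package's actual implementation:

```typescript
// (1) Convert null values to undefined while preserving keys that were
// *originally* undefined; keys that held null are dropped entirely.
function transformNullsToUndefined(value: unknown): unknown {
  if (value === null) return undefined
  if (Array.isArray(value)) return value.map(transformNullsToUndefined)
  if (typeof value === 'object') {
    const input = value as Record<string, unknown>
    const result: Record<string, unknown> = {}
    for (const key of Object.keys(input)) {
      const original = input[key]
      const transformed = transformNullsToUndefined(original)
      // Keep the key when the transform produced a value, or when it was
      // explicitly undefined in the input.
      if (transformed !== undefined || original === undefined) {
        result[key] = transformed
      }
    }
    return result
  }
  return value
}

// (2) Widen a JSON-schema `type` to include 'null' for optional properties,
// whether it is a single string or already an array of types.
function addNullToType(type: string | Array<string>): Array<string> {
  const types = Array.isArray(type) ? type : [type]
  return types.includes('null') ? types : [...types, 'null']
}
```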
In `@packages/typescript/ai-groq/tests/groq-adapter.test.ts`:
- Around line 136-141: The mock test chunks place usage at the top level but the
adapter reads it from chunk.x_groq?.usage (see usage access in text.ts around
the adapter logic), so update each mock chunk in groq-adapter.test.ts to nest
the usage object under x_groq (e.g., x_groq: { usage: { prompt_tokens: 5,
completion_tokens: 1, total_tokens: 6 } }) so that runFinishedChunk.usage
assertions (and other expectations at the noted ranges) receive the expected
shape; apply this same nesting fix to all mentioned mock entries (lines
corresponding to the other occurrences).
In `@packages/typescript/ai-svelte/CHANGELOG.md`:
- Line 7: Remove the duplicated commit reference in the CHANGELOG entry by
deleting the repeated `5d98472` token so the line reads with each commit hash
only once; edit the line that currently contains "Updated dependencies
[[`5d98472`](https://github.com/TanStack/ai/commit/5d984722e1f84725e3cfda834fbda3d0341ecedd),
[`5d98472`](https://github.com/TanStack/ai/commit/5d984722e1f84725e3cfda834fbda3d0341ecedd)]"
to contain a single reference to that commit (or list distinct commits if
intended).
In `@packages/typescript/ai/tests/stream-processor.test.ts`:
- Line 1: The import mixes a type-only symbol with value imports; remove Mock
from the value import and add a separate type-only import for it (i.e. keep
describe, expect, it, vi in the existing import and add "import type { Mock }
from 'vitest'") so Mock is imported only as a type and ESLint's prefer top-level
type-only import rule is satisfied.
In `@packages/typescript/smoke-tests/e2e/CHANGELOG.md`:
- Around line 18-19: The changelog entry includes an empty commit reference list
"- Updated dependencies []:" which looks incomplete; either remove the empty
brackets so the line reads "- Updated dependencies:" or replace "[]" with the
appropriate commit link(s) referencing the update (e.g., include the commit hash
or PR URL) associated with "@tanstack/tests-adapters@0.1.11" so the entry is no
longer empty.
In `@testing/panel/package.json`:
- Around line 24-27: Remove the unused dependency `@tanstack/start` from
testing/panel's package.json: delete the "@tanstack/start": "^1.120.20" entry,
then run the project's package manager (npm/yarn/pnpm) to update node_modules
and the lockfile (e.g., npm install or yarn install) to keep lockfiles in sync;
verify no import or build errors referencing `@tanstack/start` remain (search for
"@tanstack/start" and confirm zero usages).
Nitpick comments (27)
packages/typescript/smoke-tests/e2e/CHANGELOG.md (1)

7-12: Remove the duplicate commit reference.

Line 7 lists the same commit twice; keep a single link to avoid confusion.

Suggested fix:

```diff
-- Updated dependencies [[`5d98472`](https://github.com/TanStack/ai/commit/5d984722e1f84725e3cfda834fbda3d0341ecedd), [`5d98472`](https://github.com/TanStack/ai/commit/5d984722e1f84725e3cfda834fbda3d0341ecedd)]:
+- Updated dependencies [[`5d98472`](https://github.com/TanStack/ai/commit/5d984722e1f84725e3cfda834fbda3d0341ecedd)]:
```

packages/typescript/ai/tests/tool-call-manager.test.ts (2)
249-251: Avoid eval() even in test code.

Using eval here works but is a code smell that linters will flag, and it could mask issues if the expression string ever comes from test fixtures or parameterized inputs. A simple hardcoded return would be safer and clearer.

Suggested fix:

```diff
 execute: vi.fn((args: any) => {
-  return JSON.stringify({ result: eval(args.expression) })
+  return JSON.stringify({ result: 8 }) // 5+3
 }),
```

398-405: Consider clearing mock state between tests.

serverToolWithApproval.execute is a shared vi.fn() that isn't reset between tests. Currently only one test invokes it, but if more tests are added, stale call counts and arguments will leak across tests.

Add a beforeEach reset:

```diff
 describe('executeToolCalls', () => {
+  beforeEach(() => {
+    vi.clearAllMocks()
+  })
+
   // Client tool (no execute function) with needsApproval
```

packages/typescript/ai-react/CHANGELOG.md (1)
7-7: Consider deduplicating the commit reference for clarity.

The same commit hash 5d98472 appears twice in the dependency update list. While this may be auto-generated by changesets, it's redundant since both dependency updates came from the same commit. For consistency with other changelog entries (e.g., line 15), consider showing the commit hash once.

Suggested simplification:

```diff
-- Updated dependencies [[`5d98472`](https://github.com/TanStack/ai/commit/5d984722e1f84725e3cfda834fbda3d0341ecedd), [`5d98472`](https://github.com/TanStack/ai/commit/5d984722e1f84725e3cfda834fbda3d0341ecedd)]:
+- Updated dependencies [[`5d98472`](https://github.com/TanStack/ai/commit/5d984722e1f84725e3cfda834fbda3d0341ecedd)]:
   - `@tanstack/ai`@0.5.0
   - `@tanstack/ai-client`@0.4.4
```

packages/typescript/ai/src/activities/chat/tools/tool-calls.ts (1)
419-424: result || null loses valid falsy tool results like 0, false, or "".

Both result-mapping expressions use result || null, which coerces any falsy return value (e.g., 0, false, "") to null. This is pre-existing and not introduced by this PR, but it's a latent correctness issue in the tool execution flow worth tracking.

A nullish-coalescing operator (result ?? null) would preserve intentional falsy values.

Proposed fix:

```diff
-  typeof result === 'string'
-    ? JSON.parse(result)
-    : result || null,
+  typeof result === 'string'
+    ? JSON.parse(result)
+    : result ?? null,
```

Apply the same change at both locations (lines 421-424 and 479-480).

Also applies to: 476-481
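The coercion difference called out above can be demonstrated in isolation (standalone sketch, not the package's code):

```typescript
// || replaces *any* falsy value with null, so legitimate tool results like
// 0, false, or "" are silently lost.
function withOr(result: unknown): unknown {
  return result || null
}

// ?? only replaces null/undefined, preserving intentional falsy values.
function withNullish(result: unknown): unknown {
  return result ?? null
}
```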
packages/typescript/smoke-tests/adapters/src/tests/apr-approval-flow.ts (1)

117-122: hasHammerInResponse is computed but no longer contributes to the pass/fail condition.

It's still included in debugData.summary (line 129), which is useful for debugging, so this is fine as-is. If the intent is to fully decouple from response content checks, you could consider removing the computation on line 117 as well as the summary reference on line 129, but keeping it for diagnostics is reasonable.

testing/panel/src/lib/model-selection.ts (1)
83-87: Consider a more consistent label style.

The label "Grok - Grok 4 - slow thinking" uses informal language that differs from the convention of other entries (e.g., "Grok - Grok 3 Mini", "OpenAI - GPT-4o"). Something like "Grok - Grok 4 (Reasoning)" would be more consistent with the rest of the list, but this is a testing panel so it's not critical.

packages/typescript/ai/src/activities/chat/index.ts (1)
794-805: Correct fix for the approval-execution bypass (issue #225).

The guard properly skips synthetic pendingExecution tool messages so they don't pollute clientToolResults and prevent actual client-side execution after approval.

One observation: this relies on a magic pendingExecution property in serialized tool content as a signaling mechanism between uiMessageToModelMessages and this consumer. Consider defining a shared constant or type for this marker to avoid silent breakage if the upstream shape changes.

packages/typescript/ai-groq/src/tools/tool-converter.ts (1)
9-15: Nit: simplify the map callback.

The callback can be written point-free for conciseness, consistent with the OpenRouter adapter's implementation.

```diff
 export function convertToolsToProviderFormat(
   tools: Array<Tool>,
 ): Array<FunctionTool> {
-  return tools.map((tool) => {
-    return convertFunctionToolToAdapterFormat(tool)
-  })
+  return tools.map((tool) => convertFunctionToolToAdapterFormat(tool))
 }
```

packages/typescript/ai-anthropic/src/text/text-provider-options.ts (1)
207-217: validateThinking doesn't handle the new 'adaptive' type.

The validator only checks type === 'enabled' but doesn't account for type === 'adaptive'. If 'adaptive' thinking is meant to skip budget_tokens validation, this is fine. But it may be worth adding an explicit guard or comment to clarify that 'adaptive' requires no validation.

packages/typescript/ai-groq/src/message-types.ts (1)
313-350: Replace empty interfaces with type aliases per linter recommendation.

Biome flags these four empty interfaces (GroqDocumentMetadata, GroqTextMetadata, GroqAudioMetadata, GroqVideoMetadata). Empty interfaces are equivalent to {}. If declaration merging isn't needed, type aliases are preferred.

However, if you want to preserve the pattern used in other adapter packages (e.g., Gemini), keeping interfaces is fine for consistency and future extensibility; just suppress the lint rule.

Option: convert to type aliases:

```diff
-export interface GroqDocumentMetadata { }
+export type GroqDocumentMetadata = Record<string, never>
-export interface GroqTextMetadata { }
+export type GroqTextMetadata = Record<string, never>
-export interface GroqAudioMetadata { }
+export type GroqAudioMetadata = Record<string, never>
-export interface GroqVideoMetadata { }
+export type GroqVideoMetadata = Record<string, never>
```

scripts/fix-version-bump.ts (2)
77-77: Fix ESLint array-type errors: use Array<T> instead of T[].

The project's ESLint config enforces Array<T> syntax. Two violations:

```diff
-  const packagesToFix: PackageToFix[] = []
+  const packagesToFix: Array<PackageToFix> = []
-  const errors: string[] = []
+  const errors: Array<string> = []
```

Also applies to: 110-110

55-57: newVersion in the replacement string could be affected by $ patterns.

String.prototype.replace interprets $&, $', etc. in the replacement string. If newVersion ever contains a $ character, the replacement would be corrupted. For a developer-only script with semver inputs this is very low risk, but using a function replacer would be safer:

```diff
-  const updated = content.replace(
-    /"version":\s*"1\.0\.0"/,
-    `"version": "${newVersion}"`,
-  )
+  const updated = content.replace(
+    /"version":\s*"1\.0\.0"/,
+    () => `"version": "${newVersion}"`,
+  )
```

The same applies to fixChangelog on line 63.

packages/typescript/ai/src/activities/chat/messages.ts (1)
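The difference between a string replacement and a function replacer can be shown in isolation (illustrative helper mirroring the suggested fix for fix-version-bump.ts):

```typescript
// A function replacer returns its result verbatim, so $-sequences in the new
// version string are emitted literally instead of being interpreted by
// String.prototype.replace.
function bumpVersion(content: string, newVersion: string): string {
  return content.replace(
    /"version":\s*"1\.0\.0"/,
    () => `"version": "${newVersion}"`,
  )
}
```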
33-45: collapseContentParts accesses .content without narrowing to TextPart.

On line 40, parts.map((p) => p.content) accesses .content on ContentPart, but only TextPart has a content: string property. The allText guard ensures this is safe at runtime, but TypeScript's every() doesn't narrow the type inside the subsequent map(). If this compiles without error, the project likely has a content field on the union or TS is lenient here; just flagging in case a future refactor changes the union shape.

packages/typescript/ai-anthropic/src/adapters/text.ts (1)
696-708: RUN_FINISHED from message_stop lacks usage data.

When message_delta doesn't fire (edge case), the fallback RUN_FINISHED at lines 701-707 has no usage field. If downstream consumers expect usage data on all RUN_FINISHED events, this could cause issues. Consider adding a zero-valued usage fallback for consistency.

packages/typescript/ai-anthropic/src/model-meta.ts (2)
22-22: adaptive_thinking added to ModelMeta.supports but unused.

The adaptive_thinking?: boolean field is declared but not set on any model definition. This is fine as a forward-compatible addition, just noting it for tracking.

350-393: Significant block of commented-out code remains.

These commented-out type definitions and the ANTHROPIC_MODEL_META map appear to be from an earlier iteration. Consider removing them or adding a TODO explaining why they're retained. Stale commented-out code adds noise to the file.

packages/typescript/ai/src/activities/chat/stream/processor.ts (1)
1080-1089: Clarify intent: isWhitespaceOnlyMessage returns false for zero-part messages.

The parts.length === 0 guard means an assistant message with no parts at all is not considered "whitespace-only" and will survive the cleanup in finalizeStream. Since Array.prototype.every returns true for an empty array, without this guard, empty-parts messages would be removed. If the intent is to keep empty messages alive (e.g., for error state), consider adding a brief inline comment explaining why zero-part messages are explicitly excluded.

Suggested documentation improvement:

```diff
 private isWhitespaceOnlyMessage(message: UIMessage): boolean {
-  if (message.parts.length === 0) return false
+  // Empty-parts messages are NOT treated as whitespace-only - they may
+  // exist for error state rendering (RUN_ERROR creates parts: []).
+  if (message.parts.length === 0) return false
   return message.parts.every(
     (part) => part.type === 'text' && part.content.trim() === '',
   )
 }
```

packages/typescript/ai/tests/stream-processor.test.ts (1)
15-92: Duplicate test helpers across chat.test.ts and stream-processor.test.ts.

The chunk() factory, ev shorthand builders, and streamOf utility are duplicated between this file and chat.test.ts. Consider extracting shared test utilities into a common tests/helpers.ts module to reduce duplication. This is a nice-to-have and doesn't block merging.

packages/typescript/ai-groq/src/utils/client.ts (1)
40-42: generateId could produce very short suffixes in rare cases.

Math.random().toString(36).substring(7) yields a variable-length suffix; extremely rarely it could be very short (e.g., if Math.random() returns 0.5, the base-36 representation "0.i" gives substring(7) as ""). This is the same pattern used in chat-client.ts and is fine for non-cryptographic correlation IDs, but worth noting.

A more robust alternative would be to use substring(2, 10) for a consistent 8-character suffix:

```diff
 export function generateId(prefix: string): string {
-  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
+  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(2, 10)}`
 }
```

packages/typescript/ai-groq/tests/groq-adapter.test.ts (2)
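A standalone sketch of the suggested change for the client utility above (same shape as the snippet in the comment, shown with the stable-width suffix):

```typescript
// substring(2, 10) takes a fixed window after the "0." prefix of the base-36
// fraction, avoiding the variable-length tail that substring(7) produces.
function generateId(prefix: string): string {
  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(2, 10)}`
}
```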
39-48: nonStreamResponse parameter is unused across all tests.

The setupMockSdkClient function accepts a nonStreamResponse parameter, but no test ever provides it. Consider removing it to keep the test helpers lean, or add a test for the non-streaming path (e.g., structuredOutput).

159-162: if guards around assertions can silently pass; use type narrowing with assert or expect instead.

Multiple assertions are wrapped in if (chunk?.type === '...') guards. If the chunk type doesn't match (e.g., due to a bug in the adapter), the assertions are silently skipped and the test passes. The earlier expect(...).toBeDefined() catches existence but not the property values.

Consider using a stricter pattern:

```ts
// Instead of:
if (chunks[0]?.type === 'RUN_STARTED') {
  expect(chunks[0].runId).toBeDefined()
}

// Use:
expect(chunks[0]?.type).toBe('RUN_STARTED')
expect((chunks[0] as any).runId).toBeDefined()
```

Or use Vitest's assert to narrow the type and fail fast:

```ts
assert(chunks[0]?.type === 'RUN_STARTED', 'Expected RUN_STARTED')
expect(chunks[0].runId).toBeDefined()
```

Also applies to: 220-223, 271-273, 366-369, 376-379, 384-386, 436-437, 504-507
packages/typescript/ai-groq/src/text/text-provider-options.ts (1)
188-225: InternalTextProviderOptions requires messages, but the validation call site never provides it.

InternalTextProviderOptions has a required messages field (line 194), but in text.ts (lines 396-399), the call validateTextProviderOptions({ ...modelOptions, model: options.model }) doesn't include messages. Since validateTextProviderOptions is a no-op stub, this isn't a runtime issue, but the as cast in text.ts (lines 388-393) misrepresents the actual shape: options.modelOptions is user-provided GroqTextProviderOptions, which has neither messages nor model.

Consider either making messages optional in InternalTextProviderOptions or splitting the validation parameter type so it accurately reflects what's actually passed.

packages/typescript/ai-groq/src/adapters/text.ts (3)
118-121: Remove or downgrade debug console.error/console.log statements.

Lines 118-121 and 181-182 use console.error with >>> markers, and line 367 uses console.log. These look like development debug artifacts. In a library consumed by others, these will be noisy. Consider removing them or gating behind a debug flag/structured logger.

Also applies to: 181-182, 367-367

137-185: mapTextOptionsToGroq return type is ChatCompletionCreateParamsStreaming but structuredOutput overrides stream: false.

mapTextOptionsToGroq (line 387) returns ChatCompletionCreateParamsStreaming and hardcodes stream: true. But structuredOutput (line 151) spreads it and overrides with stream: false, creating a type-level inconsistency. Consider returning a more general params type or extracting a shared builder.

503-503: as any cast on multimodal content array.

parts as any bypasses type safety for the Groq message content. This is a common workaround for SDK type mismatches but could hide future breakage. Consider defining a more specific union type or using the Groq SDK's content part types directly if available.

packages/typescript/ai-groq/src/model-meta.ts (1)
236-262: Minor: inconsistent numeric separator formatting.

Line 239 uses 65536 while other models use underscore separators (e.g., 65_536 on line 180, 131_072 throughout). Consider aligning for readability.

Fix:

```diff
-  max_completion_tokens: 65536,
+  max_completion_tokens: 65_536,
```
.changeset/slimy-ways-wave.md:

```
---
'@tanstack/ai-anthropic': patch
'@tanstack/ai-gemini': patch
'@tanstack/ai-grok': patch
---

Add in opus 4.6 and enhance acceptable config options by providers
```
Missing @tanstack/ai from the changeset: the delta field was made required in TextMessageContentEvent and StepFinishedEvent.
The types.ts change in @tanstack/ai makes delta a required field on two public event interfaces (TextMessageContentEvent and StepFinishedEvent). This is a breaking change for any downstream consumer that constructs these event objects without delta. This changeset only bumps the adapter packages (ai-anthropic, ai-gemini, ai-grok) but omits @tanstack/ai itself.
Consider adding @tanstack/ai to this changeset (at minimum as a minor bump, or major depending on the project's semver policy for type-level breaking changes).
Prompt for AI Agents
In @.changeset/slimy-ways-wave.md around lines 1 - 7, The changeset forgot to
include `@tanstack/ai` even though types.ts made the delta field required on
TextMessageContentEvent and StepFinishedEvent; update the .changeset to add
'@tanstack/ai' with an appropriate version bump (at least minor, or major if
your semver policy treats type-level breaking changes as major) so downstream
consumers are alerted, and mention in the changeset message that delta was made
required on TextMessageContentEvent and StepFinishedEvent to clarify the
breaking change.
 const CLAUDE_OPUS_4_6 = {
   name: 'claude-opus-4-6',
   id: 'claude-opus-4-6',
   context_window: 200_000,
-  max_output_tokens: 64_000,
-  knowledge_cutoff: '2025-09-29',
+  max_output_tokens: 128_000,
+  knowledge_cutoff: '2025-05-01',
   pricing: {
     input: {
-      normal: 3,
+      normal: 5,
     },
     output: {
       normal: 25,
     },
   },
   supports: {
     input: ['text', 'image', 'document'],
     extended_thinking: true,
     priority_tier: true,
   },
 } as const satisfies ModelMeta<
   AnthropicContainerOptions &
     AnthropicContextManagementOptions &
     AnthropicMCPOptions &
     AnthropicServiceTierOptions &
     AnthropicStopSequencesOptions &
     AnthropicThinkingOptions &
     AnthropicToolChoiceOptions &
     AnthropicSamplingOptions
 >
Analysis chain

Web query: Anthropic claude-opus-4-6 official model documentation specifications

Result:

Claude Opus 4.6 (API model: `claude-opus-4-6`) — official specs (Anthropic)

Model identifiers / where it's available
- Claude API ID / alias: `claude-opus-4-6` [1]
- AWS Bedrock ID: `anthropic.claude-opus-4-6-v1` [1]
- GCP Vertex AI ID: `claude-opus-4-6` [1]

Modalities / general capabilities
- Supports text + image input and text output; includes vision and multilingual support [1].

Context + output limits
- Context window: 200K tokens, with 1M tokens (beta) available using the `context-1m-2025-08-07` beta header (long-context pricing applies above 200K) [1].
- Max output: 128K tokens [1].

"Thinking" / agent features
- Extended thinking: Yes [1]
- Adaptive thinking: Yes [1]
- Priority Tier: Yes [1]

Knowledge/training cutoffs (as documented)
- Reliable knowledge cutoff: May 2025 [1]
- Training data cutoff: Aug 2025 [1]

Pricing (Claude API, USD)
- Base input: $5 / MTok
- Output: $25 / MTok
- (Docs also list prompt-caching rates, batch discounts, and feature-specific modifiers.) [2]

Official "list models" endpoint (to verify availability in your account): `GET /v1/models` (Models API) [3]

Primary official docs to reference
- Models overview (includes Opus 4.6 comparison table and beta header note) [1]
- Pricing (includes Opus 4.6 rates and modifiers) [2]
- Models API: List models (`/v1/models`) [3]

Sources: [1] [2] [3]
Add adaptive_thinking: true to CLAUDE_OPUS_4_6 supports object.
Official Anthropic documentation confirms all metadata values (context window, max output tokens, knowledge cutoff, pricing, extended thinking, priority tier) are accurate. However, the documentation explicitly states that claude-opus-4-6 supports adaptive thinking, which is missing from the supports object:
supports: {
input: ['text', 'image', 'document'],
extended_thinking: true,
adaptive_thinking: true, // <- Add this
priority_tier: true,
}
Prompt for AI Agents
In `@packages/typescript/ai-anthropic/src/model-meta.ts` around lines 51 - 79, The
CLAUDE_OPUS_4_6 model metadata is missing the adaptive_thinking flag in its
supports object; update the supports block inside the CLAUDE_OPUS_4_6 constant
to include adaptive_thinking: true (alongside input, extended_thinking,
priority_tier) so the supports object accurately reflects the model
capabilities.
  "peerDependencies": {
    "@tanstack/ai": "workspace:^",
    "zod": "^4.0.0"
  },
Analysis chain

Script executed:

#!/bin/bash
# Check if groq-sdk is imported anywhere in the ai-groq package
echo "=== Checking for groq-sdk imports in ai-groq ==="
rg -n 'groq-sdk' packages/typescript/ai-groq/
# Check what other adapter packages use for `@tanstack/ai` peer dependency
echo "=== Checking `@tanstack/ai` versions in adapter packages ==="
rg '"@tanstack/ai"' packages/typescript/ai-*/package.json
# Check if groq-sdk is in any dependency section of ai-groq
echo "=== Full dependency check in ai-groq/package.json ==="
rg 'groq' packages/typescript/ai-groq/package.json
# Check exports in ai-groq/package.json
echo "=== Checking exports configuration in ai-groq ==="
rg -A 10 '"exports"' packages/typescript/ai-groq/package.json

Length of output: 2836

Script executed:

#!/bin/bash
# Get full ai-groq/package.json to see all dependencies
echo "=== Full ai-groq/package.json ==="
cat packages/typescript/ai-groq/package.json
# Compare with ai-openai structure for consistency
echo -e "\n=== ai-openai/package.json for comparison ==="
cat packages/typescript/ai-openai/package.json
# Check ai-anthropic as well
echo -e "\n=== ai-anthropic/package.json for comparison ==="
cat packages/typescript/ai-anthropic/package.json

Length of output: 3679
Add groq-sdk to dependencies and include @tanstack/ai in devDependencies.
The adapter imports groq-sdk in src/utils/client.ts and src/adapters/text.ts, but the package.json does not declare it in dependencies. This matches the pattern used in other adapters (e.g., ai-openai has "openai" in dependencies, ai-anthropic has "@anthropic-ai/sdk").
Add a dependencies section with the groq-sdk package and include @tanstack/ai in devDependencies with workspace:* to match the structure of other adapter packages:
{
"dependencies": {
"groq-sdk": "^0.x.x"
},
"devDependencies": {
"@tanstack/ai": "workspace:*",
"@vitest/coverage-v8": "4.0.14",
"vite": "^7.2.7"
}
}

Prompt for AI Agents
In `@packages/typescript/ai-groq/package.json` around lines 45 - 48, The
package.json is missing runtime dependency "groq-sdk" (used by
src/utils/client.ts and src/adapters/text.ts) and should also mirror other
adapters by moving or adding "@tanstack/ai" to devDependencies as workspace:*.
Update package.json to add a dependencies section containing "groq-sdk" with an
appropriate semver (e.g., ^0.x.x) and add a devDependencies section that
includes "@tanstack/ai": "workspace:*" along with the existing dev deps (e.g.,
"@vitest/coverage-v8" and "vite") so the adapter's imports resolve correctly at
runtime and during development.
yield {
  type: 'RUN_FINISHED',
  runId: aguiState.runId,
  model: chunk.model || options.model,
  timestamp,
  usage: chunk.x_groq?.usage
    ? {
        promptTokens: chunk.x_groq.usage.prompt_tokens || 0,
        completionTokens: chunk.x_groq.usage.completion_tokens || 0,
        totalTokens: chunk.x_groq.usage.total_tokens || 0,
      }
    : undefined,
  finishReason: computedFinishReason,
}
Analysis chain

Web query: Groq SDK ChatCompletionChunk usage field documentation

Result:

In the Groq TypeScript SDK (groq-sdk), `ChatCompletionChunk["usage"]` is meant for streaming token-usage reporting and follows the same pattern as the OpenAI streaming spec:

- You only get a meaningful `usage` value in the stream if you set `stream_options: { include_usage: true }` in your request. When enabled, all streamed chunks include `usage: null`, and then an extra final chunk is sent before `[DONE]` where `choices` is empty and `usage` contains the total token usage for the whole request. [1]
- The non-streaming `chat.completion` response has a `usage` object with fields like `prompt_tokens`, `completion_tokens`, `total_tokens`, and Groq timing fields such as `queue_time`, `prompt_time`, `completion_time`, `total_time`. [2]

Note: historically (and in some integrations) Groq streaming usage has been observed under `x_groq.usage` in the last chunk rather than `usage`; if you're not seeing `chunk.usage`, check `chunk.x_groq?.usage` as a fallback. [3]

Sources
[1] https://github.com/conneroisu/groq-go (streaming docs showing stream_options.include_usage and usage behavior)
[2] https://console.groq.com/docs/api-reference (Chat Completions response usage fields)
[3] vercel/ai#1651 (report of streaming usage under x_groq.usage)
Add fallback to standard chunk.usage field when reading usage data.
The code only reads from chunk.x_groq?.usage, but the Groq SDK provides usage via the standard chunk.usage field (with stream_options: { include_usage: true }). Checking only x_groq.usage means usage data will be lost when the API populates the standard field, which is the primary way the SDK exposes this data. The test mocks confirm usage is provided as a standard field, not under x_groq.
Add a fallback:

const usageData = chunk.x_groq?.usage ?? chunk.usage

Then reference `usageData` in the yield statement instead of `chunk.x_groq.usage`.
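A minimal sketch of the proposed fallback — the chunk shape is simplified here, with field names following the Groq/OpenAI streaming convention:

```typescript
interface UsageShape {
  prompt_tokens: number
  completion_tokens: number
  total_tokens: number
}

interface Chunk {
  usage?: UsageShape
  x_groq?: { usage?: UsageShape }
}

// Prefer the legacy x_groq location, then fall back to the standard field.
function readUsage(chunk: Chunk): UsageShape | undefined {
  return chunk.x_groq?.usage ?? chunk.usage
}

const legacyChunk: Chunk = {
  x_groq: { usage: { prompt_tokens: 1, completion_tokens: 2, total_tokens: 3 } },
}
const standardChunk: Chunk = {
  usage: { prompt_tokens: 5, completion_tokens: 1, total_tokens: 6 },
}

console.log(readUsage(legacyChunk)?.total_tokens) // 3
console.log(readUsage(standardChunk)?.total_tokens) // 6
console.log(readUsage({})) // undefined
```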
Prompt for AI Agents
In `@packages/typescript/ai-groq/src/adapters/text.ts` around lines 349 - 362, The
RUN_FINISHED yield reads usage only from chunk.x_groq?.usage and thus drops
usage when the SDK populates chunk.usage; create a fallback local variable
(e.g., const usageData = chunk.x_groq?.usage ?? chunk.usage) and then reference
usageData when constructing the usage object (prompt_tokens, completion_tokens,
total_tokens) in the yield block instead of chunk.x_groq.usage so both legacy
x_groq and standard chunk.usage are supported.
private mapTextOptionsToGroq(
  options: TextOptions,
): ChatCompletionCreateParamsStreaming {
  const modelOptions = options.modelOptions as
    | Omit<
        InternalTextProviderOptions,
        'max_tokens' | 'tools' | 'temperature' | 'input' | 'top_p'
      >
    | undefined

  if (modelOptions) {
    validateTextProviderOptions({
      ...modelOptions,
      model: options.model,
    })
  }

  const tools = options.tools
    ? convertToolsToProviderFormat(options.tools)
    : undefined

  const messages: Array<ChatCompletionMessageParam> = []

  if (options.systemPrompts && options.systemPrompts.length > 0) {
    messages.push({
      role: 'system',
      content: options.systemPrompts.join('\n'),
    })
  }

  for (const message of options.messages) {
    messages.push(this.convertMessageToGroq(message))
  }

  return {
    model: options.model,
    messages,
    temperature: options.temperature,
    max_tokens: options.maxTokens,
    top_p: options.topP,
    tools: tools as Array<ChatCompletionTool>,
    stream: true,
  }
}
Provider options (modelOptions) are never passed through to the Groq API request.
mapTextOptionsToGroq builds the request from standard TextOptions fields (model, messages, temperature, maxTokens, topP, tools) but never spreads modelOptions into the request body. This means all Groq-specific provider options defined in GroqTextProviderOptions β frequency_penalty, presence_penalty, stop, seed, response_format, reasoning_effort, logprobs, etc. β are silently dropped.
Proposed fix — spread modelOptions into the request
   return {
+    ...modelOptions,
     model: options.model,
     messages,
     temperature: options.temperature,
     max_tokens: options.maxTokens,
     top_p: options.topP,
     tools: tools as Array<ChatCompletionTool>,
     stream: true,
   }

Note: place the spread first so that the explicit fields (model, temperature, etc.) take precedence over any duplicates in modelOptions.
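The precedence note can be verified in isolation — in a TypeScript object literal, later keys override earlier ones:

```typescript
// Spreading modelOptions FIRST means any key repeated by an explicit
// field below is overwritten by that explicit field.
const modelOptions = { temperature: 0.2, seed: 42 }

const request = {
  ...modelOptions,
  temperature: 0.7, // explicit field wins over the spread value
}

console.log(request.temperature) // 0.7
console.log(request.seed) // 42
```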
Prompt for AI Agents
In `@packages/typescript/ai-groq/src/adapters/text.ts` around lines 385 - 428,
mapTextOptionsToGroq is dropping provider-specific options because modelOptions
is validated but never merged into the returned request; update the return
object in mapTextOptionsToGroq to spread modelOptions first (e.g.,
...modelOptions) so GroqTextProviderOptions (frequency_penalty,
presence_penalty, stop, seed, response_format, reasoning_effort, logprobs, etc.)
are included, while keeping explicit fields (model, messages, temperature,
max_tokens/maxTokens, top_p/topP, tools, stream) afterwards so they take
precedence; locate modelOptions, validateTextProviderOptions,
convertToolsToProviderFormat and adjust the returned
ChatCompletionCreateParamsStreaming object to include the spread.
usage: {
  prompt_tokens: 5,
  completion_tokens: 1,
  total_tokens: 6,
},
},
Fix mock data structure to match what the adapter reads.
The adapter reads usage from `chunk.x_groq?.usage` (text.ts line 354), but the mock chunks place usage at the top level. Since `chunk.x_groq` is undefined in the mock, the expression evaluates to `undefined`, causing the test assertion `expect(runFinishedChunk.usage).toMatchObject(…)` (line 280) to fail.
Fix: Nest usage under x_groq
- usage: {
- prompt_tokens: 5,
- completion_tokens: 1,
- total_tokens: 6,
- },
+ x_groq: {
+ usage: {
+ prompt_tokens: 5,
+ completion_tokens: 1,
+ total_tokens: 6,
+ },
+ },Also applies to: 186-191, 247-252, 340-345, 462-467, 541-546
Prompt for AI Agents
In `@packages/typescript/ai-groq/tests/groq-adapter.test.ts` around lines 136 -
141, The mock test chunks place usage at the top level but the adapter reads it
from chunk.x_groq?.usage (see usage access in text.ts around the adapter logic),
so update each mock chunk in groq-adapter.test.ts to nest the usage object under
x_groq (e.g., x_groq: { usage: { prompt_tokens: 5, completion_tokens: 1,
total_tokens: 6 } }) so that runFinishedChunk.usage assertions (and other
expectations at the noted ranges) receive the expected shape; apply this same
nesting fix to all mentioned mock entries (lines corresponding to the other
occurrences).
### Patch Changes

- Updated dependencies [[`5d98472`](https://github.com/TanStack/ai/commit/5d984722e1f84725e3cfda834fbda3d0341ecedd), [`5d98472`](https://github.com/TanStack/ai/commit/5d984722e1f84725e3cfda834fbda3d0341ecedd)]:
Remove duplicate commit reference.
Line 7 lists the same commit hash twice; consider de-duplicating it for clarity.
π€ Prompt for AI Agents
In `@packages/typescript/ai-svelte/CHANGELOG.md` at line 7, Remove the duplicated
commit reference in the CHANGELOG entry by deleting the repeated `5d98472` token
so the line reads with each commit hash only once; edit the line that currently
contains "Updated dependencies
[[`5d98472`](https://github.com/TanStack/ai/commit/5d984722e1f84725e3cfda834fbda3d0341ecedd),
[`5d98472`](https://github.com/TanStack/ai/commit/5d984722e1f84725e3cfda834fbda3d0341ecedd)]"
to contain a single reference to that commit (or list distinct commits if
intended).
import { describe, expect, it } from 'vitest'
import { StreamProcessor } from '../src/activities/chat/stream/processor'
import type { StreamChunk } from '../src/types'
import { type Mock, describe, expect, it, vi } from 'vitest'
ESLint: prefer top-level type-only import for Mock.
Static analysis flagged this line for mixing type and value imports. Since Mock is used only as a type, it should be a separate type-only import.
π§ Proposed fix
-import { type Mock, describe, expect, it, vi } from 'vitest'
+import { describe, expect, it, vi } from 'vitest'
+import type { Mock } from 'vitest'

π§° Tools
πͺ ESLint
[error] 1-1: Prefer using a top-level type-only import instead of inline type specifiers.
(import/consistent-type-specifier-style)
π€ Prompt for AI Agents
In `@packages/typescript/ai/tests/stream-processor.test.ts` at line 1, The import
mixes a type-only symbol with value imports; remove Mock from the value import
and add a separate type-only import for it (i.e. keep describe, expect, it, vi
in the existing import and add "import type { Mock } from 'vitest'") so Mock is
imported only as a type and ESLint's prefer top-level type-only import rule is
satisfied.
- Updated dependencies []:
  - @tanstack/tests-adapters@0.1.11
Fix the empty commit reference list.
The empty [] makes the changelog entry look incomplete. Either add the appropriate commit link(s) or drop the bracketed list.
Suggested fix (if no commit links are available)
-- Updated dependencies []:
+- Updated dependencies:

π Committable suggestion
βΌοΈ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- Updated dependencies:
  - @tanstack/tests-adapters@0.1.11
π€ Prompt for AI Agents
In `@packages/typescript/smoke-tests/e2e/CHANGELOG.md` around lines 18 - 19, The
changelog entry includes an empty commit reference list "- Updated dependencies
[]:" which looks incomplete; either remove the empty brackets so the line reads
"- Updated dependencies:" or replace "[]" with the appropriate commit link(s)
referencing the update (e.g., include the commit hash or PR URL) associated with
"@tanstack/tests-adapters@0.1.11" so the entry is no longer empty.
| "@tanstack/nitro-v2-vite-plugin": "^1.154.7", | ||
| "@tanstack/react-router": "^1.158.4", | ||
| "@tanstack/react-start": "^1.159.0", | ||
| "@tanstack/start": "^1.120.20", |
π§© Analysis chain
π Script executed:
#!/bin/bash
# Check all instances of `@tanstack/start` in package.json files
echo "=== All `@tanstack/start` instances ==="
rg '"@tanstack/start"' --type=json -n
echo -e "\n=== All `@tanstack/react-start` instances ==="
rg '"@tanstack/react-start"' --type=json -n
echo -e "\n=== All `@tanstack/react-router` instances ==="
rg '"@tanstack/react-router"' --type=json -nRepository: TanStack/ai
Length of output: 901
π Script executed:
#!/bin/bash
# Check the lock file for testing/panel to see if there are constraints
echo "=== testing/panel/package.json context ==="
cat -n testing/panel/package.json | sed -n '23,28p'
echo -e "\n=== Check if package-lock.json or yarn.lock exists ==="
ls -la testing/panel/ | grep -E 'lock|yarn'

Repository: TanStack/ai
Length of output: 440
π Script executed:
#!/bin/bash
# Check recent changes to testing/panel/package.json
echo "=== Recent git changes to testing/panel/package.json ==="
git log --oneline -20 testing/panel/package.json 2>/dev/null || echo "Git history not available"
# Check if there are any peer dependencies or compatibility notes
echo -e "\n=== Search for `@tanstack/start` usage in testing/panel ==="
rg '@tanstack/start' testing/panel/ --type=ts --type=tsx --type=json -n
# Check the full package.json to see all `@tanstack` dependencies
echo -e "\n=== All `@tanstack` dependencies in testing/panel/package.json ==="
rg '@tanstack' testing/panel/package.json -nRepository: TanStack/ai
Length of output: 980
π Script executed:
#!/bin/bash
# Check if `@tanstack/start` is actually imported anywhere in testing/panel
echo "=== Search for `@tanstack/start` imports in testing/panel ==="
rg 'from.*@tanstack/start|import.*@tanstack/start' testing/panel/ -n
echo -e "\n=== Check testing/panel source files ==="
find testing/panel -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \) | head -20Repository: TanStack/ai
Length of output: 927
π Script executed:
#!/bin/bash
# Check router.tsx and app configuration for `@tanstack/start` usage
echo "=== Check router.tsx ==="
head -30 testing/panel/src/router.tsx
echo -e "\n=== Check app.config.ts ==="
cat testing/panel/app.config.ts
echo -e "\n=== Check __root.tsx ==="
head -40 testing/panel/src/routes/__root.tsx

Repository: TanStack/ai
Length of output: 1557
π Script executed:
#!/bin/bash
# Verify `@tanstack/start` is truly unused by checking if it's listed in imports elsewhere
echo "=== Comprehensive search for `@tanstack/start` in testing/panel ==="
rg '@tanstack/start' testing/panel/ -n
echo -e "\n=== Check vite.config or build config files ==="
find testing/panel -name "vite.config.*" -o -name "tsconfig.*" -o -name "*.config.*" 2>/dev/null | xargs -I {} echo "File: {}" 2>/dev/null | head -10Repository: TanStack/ai
Length of output: 379
π Script executed:
#!/bin/bash
# Check vite.config.ts for `@tanstack/start` usage
echo "=== vite.config.ts content ==="
cat testing/panel/vite.config.ts

Repository: TanStack/ai
Length of output: 613
π Script executed:
#!/bin/bash
# Check if `@tanstack/start` might be a transitive dependency
echo "=== Package.json dependencies to understand relationship ==="
grep -A 5 '"dependencies"' testing/panel/package.json | head -20
echo -e "\n=== Verify `@tanstack/start` is ONLY in package.json, not in any code ==="
find testing/panel -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \) -exec grep -l "tanstack/start" {} \;
echo -e "\n=== Double-check: search entire testing/panel for any 'start' import pattern ==="
rg 'from.*start|import.*start' testing/panel/src --type=ts --type=js -i | grep -v react-start | grep -v react-routerRepository: TanStack/ai
Length of output: 585
Remove unused @tanstack/start dependency from testing/panel.
@tanstack/start at version ^1.120.20 is listed in package.json but is never imported or used anywhere in the codebase. All other TanStack packages (@tanstack/react-start, @tanstack/react-router) are actively used in the configuration and source files, but @tanstack/start has no usages. This appears to be an unused dependency that should be removed.
π€ Prompt for AI Agents
In `@testing/panel/package.json` around lines 24 - 27, Remove the unused
dependency `@tanstack/start` from testing/panel's package.json: delete the
"@tanstack/start": "^1.120.20" entry, then run the project's package manager
(npm/yarn/pnpm) to update node_modules and the lockfile (e.g., npm install or
yarn install) to keep lockfiles in sync; verify no import or build errors
referencing `@tanstack/start` remain (search for "@tanstack/start" and confirm
zero usages).
π― Changes
Add the `@tanstack/ai-groq` adapter package β a new, tree-shakeable provider adapter for Groq's Chat Completions API.

What's included

- `src/adapters/text.ts` β `GroqTextAdapter` class extending `BaseTextAdapter` with full streaming (`chatStream`) and structured output support, including Groq-specific request mapping and response parsing.
- `src/model-meta.ts` β comprehensive model registry (`GROQ_CHAT_MODELS`) covering Groq's available models (llama, gpt-oss, kimi, qwen, etc.) with per-model capabilities, pricing, context windows, and type-level provider option resolution.
- `src/message-types.ts` β Groq-specific Chat Completions API type definitions (content parts, tool calls, response formats, compound models, search settings, etc.) to decouple the adapter from the Groq SDK's exported types.
- `src/text/text-provider-options.ts` β typed Groq-specific options (search, reasoning, response format, tool choice, etc.) with a runtime validation helper.
- `src/tools/` β converts TanStack AI `Tool` definitions to Groq's `ChatCompletionTool` format with strict-mode schema transformations (required arrays, nullable optionals, `additionalProperties: false`).
- `src/utils/` β Groq SDK client factory, API key resolution, ID generation, and a recursive schema converter (`makeGroqStructuredOutputCompatible` / `transformNullsToUndefined`).
- `tests/groq-adapter.test.ts` β 580+ lines of unit tests covering streaming, tool calls, structured output, multi-modal message conversion, provider options pass-through, error handling, and edge cases.

Design notes

- Modeled on the existing `@tanstack/ai-grok` package.
- Public API is exported from `src/index.ts`.

β Checklist

- Tests pass via `pnpm run test:pr`.

π Release Impact
Note
This is my first ever open-source contribution. The implementation works for the covered cases and is tested, but it is not fully complete yet and may have gaps or rough edges. I'm very open to feedback and guidance on how to improve or finish it properly.
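The strict-mode schema transformation mentioned for the tools converter can be sketched roughly like this. The function and type names below are illustrative, not the package's actual exports; the sketch only shows the general shape of the transform: all keys become required, previously optional keys become nullable, and `additionalProperties` is forced to `false`.

```typescript
// Illustrative strict-mode JSON Schema transform (not the package's real code).
type Schema = {
  type: string | string[]
  properties?: Record<string, Schema>
  required?: string[]
  additionalProperties?: boolean
}

function toStrictSchema(schema: Schema): Schema {
  if (schema.type !== 'object' || !schema.properties) return schema
  const required = new Set(schema.required ?? [])
  const properties: Record<string, Schema> = {}
  for (const [key, child] of Object.entries(schema.properties)) {
    const strictChild = toStrictSchema(child)
    // Optional fields become nullable so the model can still "omit" them.
    properties[key] = required.has(key)
      ? strictChild
      : { ...strictChild, type: [strictChild.type, 'null'].flat() }
  }
  return {
    ...schema,
    properties,
    required: Object.keys(schema.properties), // strict mode: every key required
    additionalProperties: false,
  }
}
```

For example, a schema with `city` required and `unit` optional would come out with both keys in `required`, `unit` typed `['string', 'null']`, and `additionalProperties: false`.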
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Documentation