
Fix AI provider exclusivity, provider-aware models, and Gemini streaming#50

Open
shettydev wants to merge 1 commit into main from codex/fix-ai-provider-streaming

Conversation

@shettydev
Owner

Summary

  • Add a persistent preferences.activeProvider setting (openrouter or gemini) and return it from /ai/settings.
  • Enforce a single active provider by clearing the opposite BYOK key on save and normalizing the active model/provider state.
  • Make /ai/models provider-aware with the contract { provider, mode, models } and add Gemini model support.
  • Route the conversation send flow through the active provider and add a Gemini processing path in the queue worker.
  • Align Gemini SSE emissions with OpenRouter (processing, progress, user and assistant message events, complete).
  • Add a Redis-backed stream fanout bridge in StreamService so SSE delivery survives the worker/API process separation.
  • Add fallback polling in the web client after a successful send to prevent a stale UI when SSE is delayed.
  • Update the settings UI, store, and API typings for unambiguous active provider/model behavior.
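The exclusivity rule in the second bullet can be sketched as a pure settings-normalization step. This is a minimal illustration, not the PR's actual code: the `ProviderSettings` shape and `normalizeOnSave` name are assumptions.

```typescript
type Provider = "openrouter" | "gemini";

// Hypothetical settings shape; field names are illustrative.
interface ProviderSettings {
  activeProvider: Provider;
  openrouterKey: string | null;
  geminiKey: string | null;
}

// On every save, merge the incoming changes and then clear the BYOK key
// of the non-active provider, so only one provider ever holds credentials.
function normalizeOnSave(
  current: ProviderSettings,
  next: Partial<ProviderSettings>,
): ProviderSettings {
  const merged: ProviderSettings = { ...current, ...next };
  if (merged.activeProvider === "openrouter") {
    merged.geminiKey = null;
  } else {
    merged.openrouterKey = null;
  }
  return merged;
}
```

Centralizing this in one normalization step means the API response, the persisted preferences, and the UI all see the same invariant: exactly one provider is active and only its key is stored.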

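The SSE alignment described in the summary could look roughly like the wrapper below. The event names (processing, progress, message, complete) come from the PR summary; the generator itself and the `SseEvent` type are a hypothetical sketch, assuming both providers expose their output as an async token stream.

```typescript
// Shared event contract: event names taken from the PR summary,
// the payload shapes are assumptions for illustration.
type SseEvent =
  | { type: "processing" }
  | { type: "progress"; delta: string }
  | { type: "message"; role: "user" | "assistant"; content: string }
  | { type: "complete" };

// Wrap any provider's token stream (OpenRouter or Gemini) so both emit
// the same event sequence: processing, user message, progress deltas,
// assistant message, complete.
async function* toSseEvents(
  userContent: string,
  tokens: AsyncIterable<string>,
): AsyncGenerator<SseEvent> {
  yield { type: "processing" };
  yield { type: "message", role: "user", content: userContent };
  let assistant = "";
  for await (const delta of tokens) {
    assistant += delta;
    yield { type: "progress", delta };
  }
  yield { type: "message", role: "assistant", content: assistant };
  yield { type: "complete" };
}
```

With a shared wrapper like this, the web client needs no provider-specific branches: it consumes one event vocabulary regardless of which backend produced the tokens.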
Tests

  • bun run test -- src/modules/ai/__tests__/ai.controller.spec.ts src/modules/conversations/services/__tests__/queue.gemini-stream.spec.ts src/modules/conversations/services/__tests__/queue.service.spec.ts src/modules/conversations/__tests__/conversation.controller.spec.ts (in packages/mukti-api)
  • bun run test -- src/lib/hooks/__tests__/use-conversations.spec.ts (in packages/mukti-web)
