shubhammahure/MultiModelChat

MultiModelChat (MultiChatBot)

MultiModelChat lets you compare multiple Large Language Models (LLMs) side‑by‑side, tune them, and synthesize the best answer. The app is a React front‑end that talks to the Semoss platform for authentication, cataloging available engines, and running Pixel commands against configured LLM backends.

What problems it solves

  • Rapidly evaluate and compare multiple LLM engines on the same prompt.
  • Capture per‑model temperature settings to balance creativity and factuality.
  • Produce an aggregated “best response” that resolves contradictions across engines.
  • Reduce friction with built‑in Semoss authentication and engine discovery.

Key features

  • Multi‑model selection and per‑model temperature controls.
  • Parallel, non‑blocking generation across all selected engines.
  • One‑click “Generate Best Response” synthesizer using an aggregator engine.
  • Copy / re‑run per model; stop all generations.
  • Auth flows (native + Microsoft OAuth via Semoss config).
  • Built with Material UI, so the layout is responsive and readable.
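The per-model temperature controls and copy/re-run/stop actions imply some per-panel state. A minimal sketch of one plausible shape (these types are illustrative, not the repo's actual state model):

```typescript
// Hypothetical per-model panel state; field names are assumptions,
// not taken from the repository.
interface ModelPanel {
  engineId: string;
  temperature: number; // 0 = deterministic, higher = more creative
  status: "idle" | "loading" | "done" | "error";
  response: string;
}

// Clamp to the 0–2 range most LLM providers accept for temperature.
function setTemperature(panel: ModelPanel, t: number): ModelPanel {
  return { ...panel, temperature: Math.min(2, Math.max(0, t)) };
}

const panel: ModelPanel = {
  engineId: "gpt-engine",
  temperature: 0.7,
  status: "idle",
  response: "",
};
const hot = setTemperature(panel, 5); // clamped to 2
```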

Architecture overview

  • Frontend: React + TypeScript, Material UI, @semoss/sdk/@semoss/sdk-react.
  • Platform services (Semoss): authentication, engine catalog (MyEngines), and execution of Pixel commands (LLM(...)) against configured LLM providers.
  • Backends for models: whichever engines you configure in Semoss (OpenAI, HuggingFace, local, etc.). This repo does not include those services; it consumes them via the SDK.
(Project overview diagram)
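The architecture above centers on sending Pixel commands of the form `LLM(...)` to Semoss. The exact Pixel syntax and the SDK entry point vary by deployment, so the following is only a sketch of building such a command string; check your Semoss Pixel reference for the real signature:

```typescript
// Sketch: assembling an LLM(...) Pixel command string.
// NOTE: the parameter names (engine, command, paramValues) are assumptions
// based on the README's mention of `LLM(...)`, not a verified Pixel signature.
interface LlmCall {
  engineId: string;
  prompt: string;
  temperature: number;
}

function buildLlmPixel({ engineId, prompt, temperature }: LlmCall): string {
  // Escape double quotes so the prompt survives embedding in the Pixel string.
  const escaped = prompt.replace(/"/g, '\\"');
  return `LLM(engine="${engineId}", command="${escaped}", paramValues=[{"temperature": ${temperature}}]);`;
}

const pixel = buildLlmPixel({
  engineId: "model-1234",
  prompt: 'Explain "temperature" in one sentence.',
  temperature: 0.2,
});
// The resulting string would be handed to the SDK's pixel runner.
```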

Detailed Component Analysis

Frontend: React Application and Routing

  • App Initialization: Sets up the Semoss SDK environment and wraps the app with theme and router.
  • Router and Protected Routes: Hash-based routing with authentication guard; redirects unauthenticated users to login.
  • Main Layout: Provides a consistent footer and disclaimer across pages.
  • Login Page: Supports native and OAuth login flows via the Semoss SDK.
(App start diagram)

Multi-Model Chat Experience

  • Model Catalog Discovery: Queries Semoss for available LLM engines and initializes selections.
  • Parallel Execution: Sends identical prompts to multiple selected models concurrently with per-model temperature control.
  • Real-Time Rendering: Displays loading states, individual responses, and errors per model panel.
  • Best Response Synthesis: Aggregates completed responses into a single synthesis prompt and generates a consolidated answer.
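The fan-out and synthesis steps above can be sketched as plain orchestration code. `AskModel` is a hypothetical stand-in for the real SDK call; only the concurrency pattern (`Promise.allSettled`, so one failing engine does not sink the round) and the aggregation prompt are the point:

```typescript
// Hypothetical types; the real app calls the Semoss SDK instead of `ask`.
interface ModelConfig { engineId: string; temperature: number; }
type AskModel = (cfg: ModelConfig, prompt: string) => Promise<string>;
interface ModelAnswer { engineId: string; ok: boolean; text: string; }

// Send the same prompt to every selected model concurrently.
async function fanOut(models: ModelConfig[], prompt: string, ask: AskModel): Promise<ModelAnswer[]> {
  const settled = await Promise.allSettled(models.map((m) => ask(m, prompt)));
  return settled.map((r, i) => ({
    engineId: models[i].engineId,
    ok: r.status === "fulfilled",
    text: r.status === "fulfilled" ? r.value : String(r.reason),
  }));
}

// Build one synthesis prompt from the answers that completed successfully.
function buildSynthesisPrompt(prompt: string, answers: ModelAnswer[]): string {
  const completed = answers.filter((a) => a.ok);
  return [
    `Question: ${prompt}`,
    "Candidate answers:",
    ...completed.map((a, i) => `${i + 1}. [${a.engineId}] ${a.text}`),
    "Combine these into a single best answer, resolving any contradictions.",
  ].join("\n");
}
```

The synthesis prompt would then be sent to the aggregator engine as one more generation request.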
(Multi-model chat diagram)

Authentication System

  • Native and OAuth Providers: Supports native username/password and Microsoft OAuth via the Semoss SDK.
  • Protected Routes: Unauthenticated users are redirected to the login page.
  • Authorization State: Uses SDK-provided hooks to enforce route protection.

(Authentication flow diagram)

Running locally

Prerequisites: Node 18+, pnpm, access to a Semoss deployment with engines configured.

  1. Install dependencies
cd client
pnpm install
  2. Configure environment
    Create client/.env (or set env vars) with your Semoss details:
MODULE=<your module id>
ACCESS_KEY=<your access key>
SECRET_KEY=<your secret key>

These values must match an existing project in your Semoss environment.

  3. Start the dev server
cd client
pnpm run dev

Visit http://localhost:3000
