
Chat Agent Platform for Developers

The Chat Agent SDK provides developers with a plug-and-play solution to embed domain-specific chat agents powered by Contentstack data.

The platform is built on two core components:

  • Chat SDK → React-based interface for easy frontend integration
  • LLM Model API → Manages communication with multiple LLM providers (OpenAI, Gemini, Groq, etc.)

With the Contentstack MCP Integration, content is automatically fetched from your Contentstack instance and used by the agent to answer queries.

Demo Preview:

(Video: Demo - Made with Clipchamp.mp4)

Important Links

Detailed SDK Documentation: View Here

NPM Package: View Here

Package Code: View Here

LLM API Model Code: View Here

Blog Website: View Here


Table of Contents

  1. Introduction

  2. Getting Started

  3. Usage Examples

  4. Platform Architecture

  5. Technology Stack

  6. Customization

  7. Model Toggle

  8. Unique Features

  9. FAQ


1. Introduction

Why use the Chat Agent SDK Platform?

Traditionally, developers need to manually configure frontend, backend, and API integrations. With this SDK, you only need to:

  1. Install the package
  2. Import it
  3. Configure it

That’s it! The chat agent is instantly ready for your website.


2. Getting Started

Prerequisites

Since the SDK is powered by Contentstack and integrates with the Contentstack MCP Server, you’ll need the following credentials:

CONTENTSTACK_API_KEY=your_api_key
CONTENTSTACK_LAUNCH_PROJECT_ID=your_launch_id
CONTENTSTACK_DELIVERY_TOKEN=your_token
CONTENTSTACK_ENVIRONMENT=your_environment   # e.g. preview
CONTENTSTACK_REGION=your_region             # e.g. eu
CONTENTSTACK_MANAGEMENT_TOKEN=your_management_token

Note: If you don’t have these credentials, refer to the guide: Get your Contentstack credentials.
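
For reference, a backend process would read these values from process.env. Here is a minimal TypeScript sketch of that step (the ContentstackConfig shape is illustrative, not an SDK type):

interface ContentstackConfig {
  apiKey: string;
  launchProjectId: string;
  deliveryToken: string;
  environment: string; // e.g. "preview"
  region: string;      // e.g. "eu"
  managementToken: string;
}

function loadContentstackConfig(): ContentstackConfig {
  // Fail fast if a credential is missing, rather than at the first request
  const required = (name: string): string => {
    const value = process.env[name];
    if (!value) throw new Error(`Missing environment variable: ${name}`);
    return value;
  };

  return {
    apiKey: required("CONTENTSTACK_API_KEY"),
    launchProjectId: required("CONTENTSTACK_LAUNCH_PROJECT_ID"),
    deliveryToken: required("CONTENTSTACK_DELIVERY_TOKEN"),
    environment: required("CONTENTSTACK_ENVIRONMENT"),
    region: required("CONTENTSTACK_REGION"),
    managementToken: required("CONTENTSTACK_MANAGEMENT_TOKEN"),
  };
}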


Set Up the Chat Agent SDK

Step 1: Install the SDK

Using npm:

npm install @yashchavanweb/cms-chat-agent-sdk

Using Yarn:

yarn add @yashchavanweb/cms-chat-agent-sdk

Step 2: Add Tailwind CSS (via CDN)

The SDK uses Tailwind CSS for styling. Add this to your <head> in index.html:

<script src="https://cdn.jsdelivr.net/npm/@tailwindcss/browser@4"></script>
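
For example, a minimal index.html with the script in place (assuming a Vite-style entry point, consistent with the VITE_ environment variables used later in this guide):

<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>My App</title>
    <!-- Tailwind CSS browser build, used by the Chat Agent SDK styles -->
    <script src="https://cdn.jsdelivr.net/npm/@tailwindcss/browser@4"></script>
  </head>
  <body>
    <div id="root"></div>
    <script type="module" src="/src/main.tsx"></script>
  </body>
</html>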

Step 3: Configure the Chat Agent

Import the required libraries:

import {
  ChatAgent,
  ChatAgentProvider,
  darkChatConfig,
  lightChatConfig,
} from "@yashchavanweb/cms-chat-agent-sdk";


Wrap your application with the ChatAgentProvider:

const App = () => {
  return (
    <ChatAgentProvider config={chatConfig}>
      {/* chatConfig is defined in the next step */}
      {/* Child Components */}
    </ChatAgentProvider>
  );
};

export default App;


Configure and add the Chat Agent:

const App = () => {
  const chatConfig = {
    ...darkChatConfig,
    apiKey: "your_api_key",
  };

  return (
    <ChatAgentProvider config={chatConfig}>
      <ChatAgent config={chatConfig} /> {/* Chat Agent Component */}
    </ChatAgentProvider>
  );
};

export default App;


Run your application:

npm run dev

You’ll now see a Chat Agent on your website.

Chat Icon:

chat-icon

Chat Agent View:

chat-view


Set Up LLM Model API

Once the frontend is configured, you’ll need to set up the backend for the Chat Agent to respond.

Navigate to the Environment Configuration Page.

config-page

Enter the required information and click Save Configuration.

A confirmation popup will appear → Confirm to proceed.

popup

You’ll be provided with a generated API key.

api-key

Use the API Key in your frontend

Add the key to your .env file:

VITE_CHAT_AGENT_API_KEY=your_api_key

Update your App.tsx:

const App = () => {
  const chatConfig = {
    ...darkChatConfig,
    apiKey: import.meta.env.VITE_CHAT_AGENT_API_KEY,
  };

  return (
    <ChatAgentProvider config={chatConfig}>
      <ChatAgent config={chatConfig} />
    </ChatAgentProvider>
  );
};

export default App;

Now test your Chat Agent:

chat-working


3. Usage Examples

  • Example 1:

    example1

  • Example 2:

    example2

  • Example 3:

    example3


4. Platform Architecture

Overview

architecture

  • Frontend SDK → React-based chat interface
  • Middleware → Validates API keys, ensuring secure access (see the sketch after this list)
  • Backend Server → Processes validated requests
  • Contentstack (MCP Server) → Content management and delivery
  • LLM Services → OpenAI, Gemini, Groq, Hugging Face, etc.
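
For illustration, the API-key validation step mentioned above might look like the following Express sketch. The header name, key format, and route are assumptions, not the actual platform code:

import express, { NextFunction, Request, Response } from "express";

const app = express();

// Illustrative API-key check; the real backend's header name and key
// store are assumptions here.
function validateApiKey(req: Request, res: Response, next: NextFunction) {
  const apiKey = req.header("x-api-key"); // assumed header name
  if (!apiKey || !isKnownKey(apiKey)) {
    res.status(401).json({ error: "Invalid or missing API key" });
    return;
  }
  next(); // key is valid, forward the request to the backend server
}

// Placeholder lookup. In the real backend this would check keys
// generated on the Environment Configuration Page.
function isKnownKey(key: string): boolean {
  return key.startsWith("ca_"); // hypothetical key prefix
}

app.use("/chat", validateApiKey);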

Workflow

workflow

  1. Frontend Request → User sends query & conversation history

  2. Backend Processing → Request validated & routed to LLM

  3. Intent Detection

    • Data Query:

      • Cache Hit → Response served ~4–5s faster
      • Cache Miss → Data retrieved via Contentstack MCP
    • Conversational: Response generated directly by LLM

  4. Streaming Response → Real-time response streamed to frontend
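
For illustration, here is how a client could consume the streamed response in step 4 (a minimal sketch; the endpoint URL and payload shape are assumptions, since the SDK performs this step for you):

// Sketch of consuming a streamed chat response (illustrative only)
async function streamChat(apiKey: string, query: string, history: string[]) {
  const response = await fetch("https://llm-api.example.com/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-api-key": apiKey },
    body: JSON.stringify({ query, history }),
  });
  if (!response.ok || !response.body) throw new Error("Chat request failed");

  // Read the body incrementally and render each chunk as it arrives
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value) console.log(decoder.decode(value, { stream: true }));
  }
}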


5. Technology Stack

Frontend

TypeScript, React, Tailwind CSS

Backend & AI/ML

Node.js, Express, Redis

DevOps

Linux, Git, GitHub, Postman


6. Customization

  • The SDK comes with light and dark themes (lightChatConfig, darkChatConfig) and supports advanced customization.

  • Below are examples of further customization options:

Examples

1. Dimensions

const chatConfig = {
  ...lightChatConfig,
  width: "400px",
  height: "500px",
};

Example:

dimensions

2. Borders

const chatConfig = {
  ...lightChatConfig,
  borderRadius: "4rem",
};

Example:

borders

3. Shadows

const chatConfig = {
  ...lightChatConfig,
  boxShadow: "0 25px 50px 50px rgba(1, 1, 1, 1)",
};

Example:

shadows

4. Agent Metadata

const chatConfig = {
  ...lightChatConfig,
  botName: "Yash Website Chat Agent",
  botAvatarUrl: "https://cdn-icons-png.flaticon.com/512/4944/4944377.png",
  userAvatarUrl: "https://shorturl.at/xh1PO",
};

Example:

metadata

Note: There are even more customization options, which you can check out in the detailed documentation.


7. Model Toggle

model-toggle

  • The provider and model are passed as query parameters to the LLM Model API.
  • The SDK automatically switches to the correct LLM service.
  • Developers don’t need to implement custom logic.

The developer just has to add the provider and model to the chat config:

const chatConfig = {
  ...lightChatConfig,
  borderRadius: "4rem",
  provider: "openai",
  model: "gpt-5",
};
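
Under the hood, these values are sent as query parameters to the LLM Model API, conceptually like this (illustrative request; the actual endpoint is managed by the SDK):

POST /chat?provider=openai&model=gpt-5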

8. Unique Features

  • 🎙️ Voice input & output support
  • 💾 Save chat agent responses
  • ⚡ Choose between streaming or REST responses
  • 🚦 Built-in rate limiting per user
  • 🔀 Toggle between multiple providers & LLM models seamlessly

9. FAQ

Q: Do I need a backend?
Ans: No. The platform hosts the backend (the LLM Model API) for you; you just need to configure your credentials and follow the import steps.

Q: Can I use this with frameworks other than React?
Ans: Currently, the SDK is optimized for React and Next.js. Support for more frameworks is planned.

Q: How fast are responses?
Ans: With cache hits, responses are typically 7-8 seconds faster compared to fresh queries.
For vague questions, it may take up to 12-15 seconds.

