The Chat Agent SDK provides developers with a plug-and-play solution to embed domain-specific chat agents powered by Contentstack data.
The platform is built on two core components:
- Chat SDK → React-based interface for easy frontend integration
- LLM Model API → Manages communication with multiple LLM providers (OpenAI, Gemini, Groq, etc.)
With the Contentstack MCP integration, content is automatically fetched from your Contentstack instance and used by the chat agent.
[Demo video: Demo - Made with Clipchamp.mp4]
Detailed SDK Documentation: View Here
NPM Package: View Here
Package Code: View Here
LLM API Model Code: View Here
Blog Website: View Here
Traditionally, developers need to manually configure frontend, backend, and API integrations. With this SDK, you only need to:
- Install the package
- Import it
- Configure it
That’s it! The chat agent is instantly ready for your website.
Since the SDK is powered by Contentstack and integrates with the Contentstack MCP Server, you’ll need the following credentials:
```env
CONTENTSTACK_API_KEY=your_api_key
CONTENTSTACK_LAUNCH_PROJECT_ID=your_launch_id
CONTENTSTACK_DELIVERY_TOKEN=your_token
CONTENTSTACK_ENVIRONMENT=your_environment # e.g. preview
CONTENTSTACK_REGION=your_region # e.g. eu
CONTENTSTACK_MANAGEMENT_TOKEN=your_management_token
```

Note: If you don’t have these credentials, refer to the guide: Get your Contentstack credentials.
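For reference, here is a minimal sketch of how a server-side consumer might validate these values at startup. The key names match the .env file above; the actual wiring lives in the LLM Model API / Contentstack MCP server and is not shown here.

```ts
// Illustrative only: fail fast if any Contentstack credential is missing.
const requiredKeys = [
  "CONTENTSTACK_API_KEY",
  "CONTENTSTACK_LAUNCH_PROJECT_ID",
  "CONTENTSTACK_DELIVERY_TOKEN",
  "CONTENTSTACK_ENVIRONMENT",
  "CONTENTSTACK_REGION",
  "CONTENTSTACK_MANAGEMENT_TOKEN",
] as const;

for (const key of requiredKeys) {
  if (!process.env[key]) {
    throw new Error(`Missing Contentstack credential: ${key}`);
  }
}
```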
Using npm:

```bash
npm install @yashchavanweb/cms-chat-agent-sdk
```

Using Yarn:

```bash
yarn add @yashchavanweb/cms-chat-agent-sdk
```

The SDK uses Tailwind CSS for styling. Add this to the <head> of your index.html:

```html
<script src="https://cdn.jsdelivr.net/npm/@tailwindcss/browser@4"></script>
```

Import the required libraries:

```tsx
import {
  ChatAgent,
  ChatAgentProvider,
  darkChatConfig,
  lightChatConfig,
} from "@yashchavanweb/cms-chat-agent-sdk";
```
Wrap your application with the ChatAgentProvider:

```tsx
// chatConfig is defined in the next step
const App = () => {
  return (
    <ChatAgentProvider config={chatConfig}>
      {/* Child Components */}
    </ChatAgentProvider>
  );
};

export default App;
```
Configure and add the Chat Agent:

```tsx
const App = () => {
  const chatConfig = {
    ...darkChatConfig,
    apiKey: "your_api_key",
  };

  return (
    <ChatAgentProvider config={chatConfig}>
      <ChatAgent config={chatConfig} /> {/* Chat Agent Component */}
    </ChatAgentProvider>
  );
};

export default App;
```
Run your application:

```bash
npm run dev
```

You’ll now see a Chat Agent on your website.
Once the frontend is configured, you’ll need to set up the backend for the Chat Agent to respond.
Navigate to the Environment Configuration Page.

Enter the required information and Save Configuration.
A confirmation popup will appear → Confirm to proceed.

You’ll be provided with a generated API key.

Add the key to your .env file:
```env
VITE_CHAT_AGENT_API_KEY=your_api_key
```

Update your App.tsx:

```tsx
const App = () => {
  const chatConfig = {
    ...darkChatConfig,
    apiKey: import.meta.env.VITE_CHAT_AGENT_API_KEY,
  };

  return (
    <ChatAgentProvider config={chatConfig}>
      <ChatAgent config={chatConfig} />
    </ChatAgentProvider>
  );
};

export default App;
```

Under the hood, the SDK is composed of the following layers (a sketch of the API-key middleware follows this list):

- Frontend SDK → React-based chat interface
- Middleware → Validates API keys, ensuring secure access
- Backend Server → Processes validated requests
- Contentstack (MCP Server) → Content management and delivery
- LLM Services → OpenAI, Gemini, Groq, Hugging Face, etc.
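For illustration, here is a hedged sketch of the API-key validation middleware described above, written as an Express-style handler. The header name (`x-api-key`), the error shape, and `isValidKey` are assumptions made for the sketch, not the SDK's actual contract.

```ts
import express from "express";

const app = express();

// Hypothetical stand-in for the real key check; the SDK's actual
// validation logic is not shown here.
declare function isValidKey(key: string): boolean;

app.use((req, res, next) => {
  const key = req.header("x-api-key"); // assumed header name
  if (!key || !isValidKey(key)) {
    // Reject unvalidated requests before they reach the backend server
    return res.status(401).json({ error: "Invalid or missing API key" });
  }
  next(); // validated requests continue to the backend server
});
```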
Request flow:

- Frontend Request → User sends query & conversation history
- Backend Processing → Request validated & routed to LLM
- Intent Detection →
  - Data Query:
    - Cache Hit → Response served ~4–5s faster
    - Cache Miss → Data retrieved via Contentstack MCP
  - Conversational: Response generated directly by LLM
- Streaming Response → Real-time response streamed to frontend
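To make that flow concrete, here is a hedged TypeScript sketch of the backend side. Every name in it (`detectIntent`, `cache`, `fetchFromMCP`, `llm`) is a hypothetical stand-in; the real logic lives in the LLM Model API.

```ts
// Hypothetical stand-ins, declared so the sketch type-checks; the real
// implementations live in the LLM Model API, not in this snippet.
declare function detectIntent(
  query: string,
  history: string[]
): Promise<"data-query" | "conversational">;
declare const cache: {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
};
declare function fetchFromMCP(query: string): Promise<string>;
declare const llm: {
  stream(req: { query: string; history: string[]; context?: string }): Promise<ReadableStream>;
};

async function handleChat(query: string, history: string[]): Promise<ReadableStream> {
  const intent = await detectIntent(query, history);

  if (intent === "data-query") {
    // Cache hit avoids the Contentstack round trip (~4–5s faster).
    let context = await cache.get(query);
    if (context === null) {
      // Cache miss: retrieve content via the Contentstack MCP server.
      context = await fetchFromMCP(query);
      await cache.set(query, context);
    }
    return llm.stream({ query, history, context });
  }

  // Conversational intent: the LLM answers directly, no content lookup.
  return llm.stream({ query, history });
}
```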
The SDK comes with light and dark themes (`lightChatConfig`, `darkChatConfig`) and supports advanced customization. Below are examples of other customization options:
```tsx
// Custom size
const chatConfig = {
  ...lightChatConfig,
  width: "400px",
  height: "500px",
};
```

```tsx
// Custom border radius
const chatConfig = {
  ...lightChatConfig,
  borderRadius: "4rem",
};
```

```tsx
// Custom box shadow
const chatConfig = {
  ...lightChatConfig,
  boxShadow: "0 25px 50px 50px rgba(1, 1, 1, 1)",
};
```

```tsx
// Custom bot name and avatars
const chatConfig = {
  ...lightChatConfig,
  botName: "Yash Website Chat Agent",
  botAvatarUrl: "https://cdn-icons-png.flaticon.com/512/4944/4944377.png",
  userAvatarUrl: "https://shorturl.at/xh1PO",
};
```

Note: There are even more customization options, which you can check out in the detailed documentation.
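Taken together, the examples above imply a config object roughly shaped like the interface below. This is an inferred, non-exhaustive sketch, not the SDK's exported type; see the detailed documentation for the authoritative list of options.

```ts
// Inferred from the examples in this post — illustrative only.
interface ChatConfigSketch {
  apiKey: string;        // generated on the Environment Configuration Page
  width?: string;        // e.g. "400px"
  height?: string;       // e.g. "500px"
  borderRadius?: string; // e.g. "4rem"
  boxShadow?: string;    // any CSS box-shadow value
  botName?: string;
  botAvatarUrl?: string;
  userAvatarUrl?: string;
  provider?: string;     // e.g. "openai"
  model?: string;        // e.g. "gpt-5"
}
```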
- The provider and model are passed in query parameters.
- The SDK automatically switches to the correct LLM service (sketched after this list).
- Developers don’t need to implement custom logic.
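As a rough illustration of that switching, the backend can read `provider` and `model` from the query string and dispatch to the matching client. All names below are hypothetical stand-ins, not the SDK's internals.

```ts
// Hypothetical provider clients; the LLM Model API supplies the real ones.
type LLMCall = (model: string, prompt: string) => Promise<string>;

declare function callOpenAI(model: string, prompt: string): Promise<string>;
declare function callGemini(model: string, prompt: string): Promise<string>;
declare function callGroq(model: string, prompt: string): Promise<string>;

const providers: Record<string, LLMCall> = {
  openai: callOpenAI,
  gemini: callGemini,
  groq: callGroq,
};

// The provider and model arrive as query parameters on the request URL.
function route(url: URL, prompt: string): Promise<string> {
  const provider = url.searchParams.get("provider") ?? "openai";
  const model = url.searchParams.get("model") ?? "gpt-5";
  const call = providers[provider];
  if (!call) throw new Error(`Unsupported provider: ${provider}`);
  return call(model, prompt);
}
```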
The developer just has to add the provider and model in the chat config:
```tsx
const chatConfig = {
  ...lightChatConfig,
  borderRadius: "4rem",
  provider: "openai",
  model: "gpt-5",
};
```

Additional features:

- 🎙️ Voice input & output support
- 💾 Save chat agent responses
- ⚡ Choose between streaming or REST responses
- 🚦 Built-in rate limiting per user
- 🔀 Toggle between multiple providers & LLM models seamlessly
Q: Do I need a backend?
Ans: No, the SDK handles it for you. You just need to configure your credentials and follow the necessary import steps.
Q: Can I use this with frameworks other than React?
Ans: Currently, the SDK is optimized for React and Next.js. Support for more frameworks is planned.
Q: How fast are responses?
Ans: With cache hits, responses are typically 7–8 seconds faster compared to fresh queries.
For vague questions, it may take up to 12–15 seconds.