Welcome to the Conversational Customer Support Agent project! This repository contains the code for a customer support chatbot built using Retrieval-Augmented Generation (RAG). The chatbot leverages FAISS vector search, OpenAI embeddings, and LangChain conversational chains to provide accurate, context-aware responses to user queries based on product documentation.
- Retrieval-Augmented Generation (RAG): Combines retrieval of relevant documents with generative AI for effective query resolution.
- FAISS Vector Search: Efficient similarity search over product documentation and FAQs.
- OpenAI Embeddings: Uses OpenAI's `text-embedding-ada-002` model to generate vector embeddings.
- Multi-turn Memory: Maintains conversation history for coherent multi-turn interactions.
- Escalation Logic: Automatically escalates complex queries to human support agents when necessary.
- Modular Design: Easily extendable and customizable for different use cases.
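Conceptually, the vector-search feature above reduces to nearest-neighbor search over embedding vectors. FAISS does this efficiently at scale; the pure-Python sketch below is only an illustration of the idea, not the library's API:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec: list[float], doc_vecs: list[list[float]], k: int = 2) -> list[int]:
    """Return the indices of the k document vectors most similar to the query."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]
```

In the real pipeline, the vectors come from the OpenAI embedding model and FAISS replaces the brute-force sort with an optimized index.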
Create and activate a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

Install the required Python libraries with pip:

```bash
pip install -r requirements.txt
```

Since the project uses OpenAI embeddings, ensure your API key is set in your `.bash_profile` or equivalent shell configuration file.
Add the following line to your .bash_profile (or .zshrc, .bashrc, etc.):
```bash
export OPENAI_API_KEY="your-openai-api-key-here"
```

Then reload your shell configuration:

```bash
source ~/.bash_profile  # Or: source ~/.zshrc, depending on your shell
```

Run the `knowledge_base.py` script to generate the FAISS vector store from your product documentation:
```bash
python knowledge_base.py
```

This script reads text files from the `product_docs/` directory, splits them into chunks, generates embeddings using OpenAI's embedding model, and saves the FAISS index locally.
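The chunking step can be approximated by the following sketch; the chunk size and overlap shown here are illustrative defaults, not necessarily the values used in `knowledge_base.py`:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so context isn't lost at chunk boundaries."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
        # Step forward by less than a full chunk so consecutive chunks overlap.
        start += chunk_size - overlap
    return chunks
```

Overlapping chunks help retrieval because a sentence cut at a boundary still appears intact in the neighboring chunk.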
Run the `rag_model.py` script to interact with the chatbot:

```bash
python rag_model.py
```

You can test queries like:
- "What is your return policy?"
- "How do I reset my password?"
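Behind the scenes, the multi-turn memory behaves like a rolling buffer of role/message pairs that is prepended to each new query. A minimal stdlib sketch of the idea (the class and method names are illustrative, not the repository's actual API):

```python
class ConversationMemory:
    """Keep the most recent exchanges so follow-up questions stay in context."""

    def __init__(self, max_turns: int = 5):
        self.max_turns = max_turns
        self.turns: list[tuple[str, str]] = []

    def add(self, role: str, message: str) -> None:
        self.turns.append((role, message))
        # Keep at most max_turns user/assistant exchanges (2 messages each).
        self.turns = self.turns[-2 * self.max_turns:]

    def as_prompt(self) -> str:
        """Render the history as plain text for inclusion in the next prompt."""
        return "\n".join(f"{role}: {msg}" for role, msg in self.turns)
```

LangChain provides equivalent memory components; this sketch just shows why a bounded history keeps prompts short while preserving recent context.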
If you want a web-based interface, run `app.py` using Streamlit:

```bash
streamlit run app.py
```

Add or update text files in the `product_docs/` directory to include your own FAQs or product details.
To use a different OpenAI embedding model, update the `model` parameter in both `knowledge_base.py` and `rag_model.py`.
Modify the `_should_escalate()` method in `rag_model.py` to customize escalation triggers.
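As a rough illustration, escalation checks often key off sensitive keywords or low retrieval confidence. The keyword list and score threshold below are assumptions for the sketch, not the actual logic in `rag_model.py`:

```python
# Hypothetical trigger words; tune these for your own support domain.
ESCALATION_KEYWORDS = {"refund", "complaint", "lawyer", "urgent", "human"}

def should_escalate(query: str, retrieval_score: float, threshold: float = 0.3) -> bool:
    """Escalate when the query contains a sensitive keyword or the best
    retrieved document scored below the confidence threshold."""
    words = set(query.lower().split())
    if words & ESCALATION_KEYWORDS:
        return True
    return retrieval_score < threshold
```

A production version would also normalize punctuation and might use a classifier instead of a keyword set, but the two-signal structure (topic sensitivity plus retrieval confidence) is a common starting point.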