Med-Buddy is a Retrieval-Augmented Generation (RAG) chatbot. It retrieves relevant documents from an indexed vector database and passes them to a generative model to produce detailed, contextually grounded responses.
The Med-Buddy application is hosted on HuggingFace Spaces, where anyone can try it out.

- Document Indexing: Efficiently index documents for retrieval. Create multiple chatbots, each backed by its own category of documents, and chat with them anytime without re-uploading the documents.
- Contextual Responses: Generate answers grounded in the retrieved documents. Using the Llama-7B model and Pinecone as the vector database, get contextual, well-formed answers to your queries.
- Modular Design: Components are cleanly separated, making it easy to extend the project and integrate additional data sources and use cases in the future.
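At query time, a RAG chatbot embeds the user's question, retrieves the most similar indexed documents, and feeds them to the generative model as context. The retrieval half of that loop can be sketched with a toy bag-of-words embedder (Med-Buddy itself uses Sentence-Transformers embeddings and Pinecone; everything below is illustrative only):

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" standing in for a real
    # sentence-transformers model (illustrative only).
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    # Rank indexed documents by similarity to the query and
    # return the top-k to use as context for the generator.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Aspirin is commonly used to reduce fever and relieve pain.",
    "Pinecone stores dense vectors for fast similarity search.",
    "Ibuprofen is an anti-inflammatory drug used for pain relief.",
]
context = retrieve("what relieves pain", docs, k=2)
```

In the real application, the retrieved context is prepended to the prompt sent to Llama-7B, which is what makes the answers document-grounded rather than purely generative.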
- Pinecone
- Streamlit
- HuggingFace
- Python
- Llama-7B (Groq API)
- Sentence-Transformers (embeddings)
Clone the repository:

```bash
git clone https://github.com/pmp438/RAG-Chatbot.git
cd RAG-Chatbot
```

Install the required dependencies:

```bash
pip install -r requirements.txt
```

- Index Documents: Use `index.py` to index your documents, or index them directly through the client interface.
- Start Chatbot: Run `app.py` to start the chatbot server:

```bash
streamlit run app.py
```

- Interact: Use a client to interact with the chatbot.
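The exact logic of `index.py` is not reproduced here, but a common pre-indexing step is to split each document into overlapping chunks so that every chunk fits the embedding model's input while preserving neighbouring context. A minimal sketch (the size and overlap values are illustrative assumptions, not the project's actual settings):

```python
def chunk_text(text, size=200, overlap=50):
    # Split a document into overlapping character windows.
    # Each chunk would then be embedded and upserted into the
    # vector database (Pinecone, in Med-Buddy's case).
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

doc = ("word " * 100).strip()  # 499-character stand-in document
pieces = chunk_text(doc, size=200, overlap=50)
```

Overlapping chunks mean a sentence falling on a chunk boundary still appears intact in at least one chunk, which improves retrieval quality.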
- `README.md`: This file.
- `app.py`: Entry point for running the chatbot server.
- `chatbot.py`: Core logic for the chatbot.
- `index.py`: Script for indexing documents.
- `indexes.csv`: Per-chatbot index details for your Pinecone indexes.
- `requirements.txt`: Dependencies for the project.
This project is licensed under the MIT License.
For any queries, please open an issue on the repository.
