AI Assistant

This AI Assistant helps with code debugging, auditing, code quality, coding, writing documents, and more. It queries multiple LLMs with the same prompt to improve answers and reduce hallucinations. This example uses Gemini 1.5 Flash and OpenAI GPT-4o, but the models can be swapped. Because these models offer wide context windows, reliance on RAG can be reduced. Special attention has been paid to letting the user modify the system and user prompts and use prompt templates.
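The repository's internals are not shown here, but the multi-model idea can be sketched as follows. All names in this snippet are illustrative, not the app's actual code; in the real app the callables would wrap the Gemini and OpenAI SDKs.

```python
# Sketch of the multi-model idea: send the same system/user prompt to
# several model backends and collect every answer for side-by-side review.
# Function names and structure here are illustrative assumptions.

def ask_all_models(system_prompt, user_prompt, backends):
    """backends maps a model name to a callable(system_prompt, user_prompt) -> str."""
    answers = {}
    for name, ask in backends.items():
        answers[name] = ask(system_prompt, user_prompt)
    return answers

# Plain functions stand in for the real SDK calls in this sketch.
backends = {
    "gemini-1.5-flash": lambda sys, usr: f"[Gemini] {usr}",
    "gpt-4o": lambda sys, usr: f"[GPT-4o] {usr}",
}
answers = ask_all_models("You are a helpful assistant.", "Explain RAG.", backends)
```

Showing every model's answer side by side is what lets the user spot disagreements, which is the hallucination-reduction mechanism the description refers to.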

The application is a simple Python and Streamlit app and can easily be extended to other use cases, such as code generation.

Usage follows these steps:

  1. Select the LLM models and define their parameters.

  2. Check the system prompt or define your own custom prompt.

  3. Optionally define a user prompt using a prompt template.

  4. Chat with your documents and review Gemini's answer.

  5. Review GPT-4o's answer.
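The prompt-template step (step 3) can be sketched like this: a template with named placeholders is filled in to produce the final user prompt. The template text and field names below are made up for illustration and are not taken from the repository.

```python
# Hypothetical prompt template with named placeholders; the app's real
# templates may look different.
TEMPLATE = "Review the following {language} code and report {focus}:\n{code}"

def render_user_prompt(template, **fields):
    # str.format fills each {placeholder} with the matching keyword argument.
    return template.format(**fields)

prompt = render_user_prompt(
    TEMPLATE,
    language="Python",
    focus="potential bugs",
    code="def add(a, b): return a - b",
)
```

Templates like this let a user keep a reusable task description and only swap in the changing parts, such as the code snippet under review.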

Setup

The following variables need to be defined in a .env file (you can copy them from .env.sample):

GEMINI_PROJECT=
GEMINI_LOCATION=
GEMINI_MODEL="gemini-1.5-flash"
OPENAI_API_KEY=
GPT_MODEL="gpt-4o-2024-05-13"
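How app.py actually consumes these settings is not shown here; a common pattern (e.g. after python-dotenv's load_dotenv() has populated the environment) is to read them with os.getenv, falling back to the sample defaults. The function below is a hedged sketch of that pattern, not the repository's code.

```python
import os

def load_settings():
    # Read each setting from the environment; the defaults mirror the
    # sample values in .env.sample. This helper is illustrative only.
    return {
        "GEMINI_PROJECT": os.getenv("GEMINI_PROJECT", ""),
        "GEMINI_LOCATION": os.getenv("GEMINI_LOCATION", ""),
        "GEMINI_MODEL": os.getenv("GEMINI_MODEL", "gemini-1.5-flash"),
        "OPENAI_API_KEY": os.getenv("OPENAI_API_KEY", ""),
        "GPT_MODEL": os.getenv("GPT_MODEL", "gpt-4o-2024-05-13"),
    }
```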

Local environment

Clone the repository

git clone https://github.com/MLConvexAI/AI-Assistant.git
cd AI-Assistant

Create a new virtual environment and install the dependencies

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

You can then run the application locally with

streamlit run app.py

The app is then available at

Local URL: http://localhost:8501

Cloud

The solution can also be easily deployed to the cloud using a container and a Dockerfile.
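The repository's actual Dockerfile is not reproduced here; a minimal container for a Streamlit app typically looks like the following sketch (base image, port, and file names are assumptions).

```dockerfile
# Hypothetical minimal Dockerfile for this Streamlit app.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8501
CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
```

Build and run with, for example, `docker build -t ai-assistant .` followed by `docker run --env-file .env -p 8501:8501 ai-assistant`, passing the .env file so the API keys reach the container.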
