sanidhya2803/LLM_Chatbot
1️⃣ Import the library

from openai import OpenAI

What this means

This line imports the OpenAI Python SDK into your Python program.

Why we need it

This library allows Python to communicate with an AI API server.

Without this library you would need to manually send HTTP requests.

What happens internally

Python loads the module that contains functions to:

send API requests

authenticate with API keys

handle JSON responses

manage errors

2️⃣ Create the API client

client = OpenAI(

What this means

You are creating an API client object.

Think of it like opening a connection tool that will send requests to the AI server.

Internally

Python creates an object called client.

Later we will use it like this:

client.chat.completions.create()

3️⃣ API Key

api_key="gsk_XXXXXXXXXXXXXXXX",

What this means

This is your authentication key from Groq.

Why we need it

APIs need authentication to:

identify the user

track usage

enforce rate limits

prevent unauthorized access

Internally

When a request is sent, the key is placed in the HTTP header like this:

Authorization: Bearer API_KEY

4️⃣ Base URL

base_url="https://api.groq.com/openai/v1"

What this means

This tells the SDK which server to send requests to.

Normally the OpenAI library sends requests to:

https://api.openai.com/v1

But here we changed it to:

https://api.groq.com/openai/v1

So the request goes to Groq servers.

Why this works

Groq uses an OpenAI-compatible API format, so the OpenAI library can still communicate with it.
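Putting sections 2-5 together, a minimal sketch of the client setup looks like the following. Reading the key from an environment variable (the name GROQ_API_KEY is an assumption, not from the original script) avoids hard-coding secrets; the placeholder key is kept only as a fallback for illustration.

```python
import os

# Hypothetical: pull the key from the environment instead of hard-coding it.
# GROQ_API_KEY is an assumed variable name; the fallback is a placeholder.
settings = {
    "api_key": os.environ.get("GROQ_API_KEY", "gsk_XXXXXXXXXXXXXXXX"),
    "base_url": "https://api.groq.com/openai/v1",
}

# With the openai package installed, the client would then be created as:
# client = OpenAI(api_key=settings["api_key"], base_url=settings["base_url"])
print(settings["base_url"])
```

Because only the base URL changes, the rest of the script is identical to one written against OpenAI's own servers.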

5️⃣ Close the client creation

)

This finishes creating the client object.

Now your program has a configured API connection.

6️⃣ Print welcome message

print("Simple AI Chatbot")

What it does

Displays a message in the terminal.

Output example:

Simple AI Chatbot

7️⃣ Print instructions

print("Type 'exit' to stop")

This tells the user how to close the chatbot.

Output:

Type 'exit' to stop

8️⃣ Infinite loop

while True:

What it means

This creates a loop that runs forever.

The chatbot will keep running until we manually stop it.

Without this loop the chatbot would answer only one question.

9️⃣ Take user input

user_input = input("You: ")

What happens

Python shows this in terminal:

You:

The user types something.

Example:

You: What is AI?

The text is stored in a variable called user_input.

Now:

user_input = "What is AI?"

🔟 Exit condition

if user_input.lower() == "exit":

What happens

If the user types:

exit

the chatbot stops.

Why .lower() is used

This converts text to lowercase.

So these all work:

EXIT, Exit, exit
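A quick way to convince yourself that the normalization works is to run the comparison over a few capitalizations:

```python
# .lower() normalizes the input, so any capitalization of "exit" matches.
for typed in ["EXIT", "Exit", "exit", "eXiT"]:
    assert typed.lower() == "exit"

print("all variants match")
```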

1️⃣1️⃣ Print goodbye message

print("Goodbye!")

If the user exits, the terminal shows:

Goodbye!

1️⃣2️⃣ Break the loop

break

This stops the while True loop.

The program ends.

1️⃣3️⃣ Send request to LLM

response = client.chat.completions.create(

What this does

This sends a request to the AI model.

You are calling the chat completion API.

Internally this happens

Your program sends an HTTP POST request to:

https://api.groq.com/openai/v1/chat/completions

With JSON data.
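To make the "internally" part concrete, here is a sketch of the request the SDK assembles on your behalf. Nothing is sent here; the key is the same placeholder used above, and actually sending it would require an HTTP library such as urllib or requests.

```python
import json

# The URL, headers, and JSON body the SDK builds for a chat completion.
url = "https://api.groq.com/openai/v1/chat/completions"
headers = {
    "Authorization": "Bearer gsk_XXXXXXXXXXXXXXXX",  # placeholder key
    "Content-Type": "application/json",
}
body = json.dumps({
    "model": "llama-3.1-8b-instant",
    "messages": [{"role": "user", "content": "What is AI?"}],
})

print(body)
```

This is exactly the manual work the SDK saves you from: without it, you would build and send this request yourself.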

1️⃣4️⃣ Select the model

model="llama-3.1-8b-instant",

What this means

You are telling Groq which model to use.

The model here is:

Llama 3.1 (8B, "instant" variant)

Created by Meta and hosted by Groq.

What the model does

It receives the prompt and generates text as a response.

1️⃣5️⃣ Send conversation message

messages=[

The API expects conversation in chat format.

Example structure:

messages = [{"role": "user", "content": user_input}]

1️⃣6️⃣ Define user role

{"role": "user", "content": user_input}

What this means

You are sending a message to the AI like this:

User: What is AI?

Role types can be:

Role      | Meaning
----------|-----------------------
user      | human message
assistant | AI response
system    | instructions to the AI
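The roles above combine into a single list for multi-turn conversations. This example is illustrative (the original script sends only one user message per request):

```python
# A multi-turn conversation in chat format; each entry's role is one of
# "system", "user", or "assistant".
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is AI?"},
    {"role": "assistant", "content": "Artificial Intelligence is..."},
    {"role": "user", "content": "Give me an example."},
]

roles = [m["role"] for m in messages]
print(roles)
```

Sending the earlier assistant replies back with each request is how a chatbot gains "memory" of the conversation.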

1️⃣7️⃣ Close messages list

]

This finishes defining the messages list.

1️⃣8️⃣ Close API call

)

The request is now sent to the server.

Groq processes it and returns a response.

1️⃣9️⃣ Extract the AI reply

answer = response.choices[0].message.content

Why this is needed


The API returns a large JSON response.

Example:

{
  "choices": [
    {
      "message": {
        "content": "Artificial Intelligence is..."
      }
    }
  ]
}

So we extract only the actual text response.
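The SDK exposes that JSON as Python objects, but the navigation is the same either way. Here is a sketch using a trimmed-down sample of the response (values are illustrative):

```python
import json

# A minimal sample of the JSON the API returns.
raw = '{"choices": [{"message": {"content": "Artificial Intelligence is..."}}]}'
response = json.loads(raw)

# Same path the SDK attribute access follows: response.choices[0].message.content
answer = response["choices"][0]["message"]["content"]
print(answer)
```

choices is a list because the API can return several alternative completions; the script always takes the first one.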

2️⃣0️⃣ Print the AI response

print("Bot:", answer)

Example output:

You: What is AI?
Bot: Artificial Intelligence is the simulation of human intelligence by machines.

🔁 Loop repeats

After printing the answer, the program goes back to:

while True

And asks the next question.

🧠 Full Architecture of Your Chatbot

User Input
  ↓
Python Script
  ↓
OpenAI SDK
  ↓
Groq API Server
  ↓
Llama 3.1 Model
  ↓
Generated Response
  ↓
Printed in Terminal
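The whole loop can be sketched with the API call factored out into a function argument, so the control flow (input, exit check, reply, repeat) can be exercised without network access. The real bot would pass a function wrapping client.chat.completions.create(); the stand-ins below are hypothetical.

```python
# Sketch of the chatbot's control flow. read_input stands in for input(),
# and send stands in for the Groq API call.
def chat_loop(read_input, send):
    transcript = []
    while True:
        user_input = read_input()
        if user_input.lower() == "exit":
            transcript.append("Goodbye!")
            break
        answer = send(user_input)
        transcript.append(f"Bot: {answer}")
    return transcript

# Fake stand-ins for the terminal and the model:
inputs = iter(["What is AI?", "exit"])
log = chat_loop(lambda: next(inputs), lambda q: f"(reply to: {q})")
print(log)  # → ['Bot: (reply to: What is AI?)', 'Goodbye!']
```

Separating the loop from the API call like this also makes the exit logic easy to test.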

Company      | Provides an API?                  | Own models?  | Example models                | Hosts other models?
-------------|-----------------------------------|--------------|-------------------------------|--------------------
OpenAI       | ✅ Yes                            | ✅ Yes       | GPT-4o, GPT-4                 | ❌ No
Anthropic    | ✅ Yes                            | ✅ Yes       | Claude models                 | ❌ No
Google       | ✅ Yes                            | ✅ Yes       | Gemini, Gemma                 | ❌ Mostly no
Meta         | ❌ Usually no official public API | ✅ Yes       | Llama 3, Llama 3.1            | ❌ No
Mistral AI   | ✅ Yes                            | ✅ Yes       | Mistral 7B, Mixtral 8x7B      | ❌ No
Groq         | ✅ Yes                            | ❌ Mostly no | Uses Llama, Mixtral, Gemma    | ✅ Yes
Together AI  | ✅ Yes                            | ❌ No        | Hosts Llama, Mixtral, Qwen    | ✅ Yes
OpenRouter   | ✅ Yes                            | ❌ No        | Gateway to GPT, Claude, Llama | ✅ Yes
Hugging Face | ✅ Yes                            | ❌ Mostly no | Hosts thousands of models     | ✅ Yes

About

Basic chatbot using an LLM API: the Groq API with Llama 3.1 as the model.
