Workshop Duration: 6:00 PM – 7:30 PM
Format: Hybrid (In-Person & Remote)
Hosted by: Versent
Welcome to our hands-on workshop! In this session, you'll learn how to build a simple command-line chatbot using Python, powered by the open-source LLaMA 3.3 model hosted on Azure AI Foundry.
By the end of this workshop, you will:
- Understand the basics of Large Language Models (LLMs) and LLaMA 3.3.
- Set up and configure a Python environment for API interactions.
- Build a CLI-based chatbot that communicates with the LLaMA 3.3 model.
- Learn how to structure prompts and handle responses effectively.
Before we begin, ensure you have the following:
- Python 3.7+ installed on your machine.
- A code editor (e.g., VS Code, PyCharm) or access to an online IDE like Replit.
- Internet connectivity to access the hosted LLaMA 3.3 model.
- Your unique API Key and Endpoint URL provided by the instructor.
```
llama_chatbot/
├── chatbot.py
├── .env
└── requirements.txt
```
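For reference, the `requirements.txt` for a chatbot like this usually lists just an HTTP client and a `.env` loader. The exact contents of the repository's file may differ; this is a typical minimal set:

```
requests
python-dotenv
```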
Clone the Repository

If using Git:

```
git clone https://github.com/dylan-mccarthy/llama-hackathon
```

Or download the ZIP file and extract it.
Create and Activate a Virtual Environment (Optional but Recommended)

```
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
Install Dependencies

```
cd llama_chatbot
pip install -r requirements.txt
```
Configure Environment Variables

Create a `.env` file in the project root directory:

```
LLAMA_API_URL=https://your-endpoint-url.com/v1/chat/completions
LLAMA_API_KEY=your-api-key-here
```

Replace `https://your-endpoint-url.com/v1/chat/completions` and `your-api-key-here` with the credentials provided.
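If you would rather not install a dependency such as python-dotenv, a minimal stdlib loader can read the `.env` file into the process environment. This is a sketch, assuming simple `KEY=VALUE` lines with optional `#` comments; the workshop code itself may load the file differently:

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: parse KEY=VALUE lines, skipping blanks and comments."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    os.environ.update(values)  # make the values visible via os.environ
    return values
```

After calling `load_env()`, the chatbot can read `os.environ["LLAMA_API_URL"]` and `os.environ["LLAMA_API_KEY"]` as usual.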
Execute the chatbot script:

```
python chatbot.py
```

You should see:

```
🤖 Welcome to the LLaMA 3.3 Chatbot! Type 'exit' to quit.
You:
```

Start typing your messages, and the chatbot will respond accordingly.
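To make the flow concrete, here is a sketch of what a `chatbot.py` like this looks like, using only the standard library. It assumes the endpoint speaks the OpenAI-style chat-completions protocol (`messages` in, `choices[0].message.content` out); the repository's actual implementation may differ in details:

```python
import json
import os
import sys
import urllib.request

API_URL = os.environ.get("LLAMA_API_URL", "")
API_KEY = os.environ.get("LLAMA_API_KEY", "")

def build_payload(messages, temperature=0.7, max_tokens=512):
    """Assemble an OpenAI-style chat-completions request body."""
    return {
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def ask(messages):
    """Send the conversation so far and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(messages)).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

def main():
    # The system prompt seeds the model's behavior for the whole session.
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    print("🤖 Welcome to the LLaMA 3.3 Chatbot! Type 'exit' to quit.")
    while True:
        user = input("You: ")
        if user.strip().lower() == "exit":
            break
        messages.append({"role": "user", "content": user})
        reply = ask(messages)
        messages.append({"role": "assistant", "content": reply})
        print(f"Bot: {reply}")

if __name__ == "__main__" and sys.stdin.isatty():
    main()  # only start the interactive loop when run from a terminal
```

Keeping the full `messages` list and re-sending it on every turn is what gives the chatbot conversational memory.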
System Prompt: Modify the initial behavior of the chatbot by changing the system prompt in the code.

```python
messages = [{"role": "system", "content": "You are a helpful assistant."}]
```
Adjust Parameters: Tweak `temperature` and `max_tokens` in the payload to control response creativity and length.
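As an illustration, a payload tuned for short, deterministic answers might look like this (the key names assume an OpenAI-style chat-completions request body):

```python
messages = [{"role": "system", "content": "You are a helpful assistant."}]

payload = {
    "messages": messages,
    "temperature": 0.2,  # lower values give more focused, repeatable replies
    "max_tokens": 150,   # hard cap on the length of each response
}
```

Raising `temperature` toward 1.0 makes replies more varied and creative, at the cost of consistency.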
Enhance CLI: Add features like command history, colored text, or even integrate with speech-to-text libraries for voice input.
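Colored output is the quickest of these wins. A small helper using ANSI escape codes (no extra dependencies; works in most modern terminals, though classic Windows consoles may need colorama) could look like:

```python
def colorize(text, code="36"):
    """Wrap text in an ANSI color escape sequence (36 = cyan)."""
    return f"\033[{code}m{text}\033[0m"

# Example: print the bot's replies in cyan and errors in red (31).
# print(colorize("Bot: Hello!"))
# print(colorize("Error: request failed", code="31"))
```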
No Response / Errors:

- Ensure your API URL and Key are correctly set in the `.env` file.
- Check your internet connection.
- Verify that the LLaMA 3.3 model is deployed and accessible.
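A quick sanity check for the first of these is to confirm both variables are actually visible to Python. This sketch only checks that the values are set and non-empty, not that they are valid:

```python
import os

REQUIRED = ("LLAMA_API_URL", "LLAMA_API_KEY")

def check_env(env=os.environ):
    """Report whether each required variable is set to a non-empty value."""
    return {var: bool(env.get(var)) for var in REQUIRED}

if __name__ == "__main__":
    for var, ok in check_env().items():
        print(f"{var}: {'set' if ok else 'MISSING'}")
```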
Module Not Found:

- Ensure all dependencies are installed:

```
pip install -r requirements.txt
```
This workshop is proudly presented by Versent, aiming to inspire and empower the next generation of engineers and technologists.
Feel free to reach out if you have any questions or need further assistance during the workshop. Happy coding! 🚀