speths/basic_llm_inference
This basic template lets you run LLMs from Together AI and OpenAI with LangChain.


Installation guide

1. Create a new project and conda environment in PyCharm

2. Clone the repo

git clone https://github.com/speths/basic_llm_inference.git

3. Install the following packages in the terminal with pip

  • pip install python-dotenv
  • pip install langchain-openai
  • pip install langchain-together
  • pip install langchain-core

4. Add your API keys to .env
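A minimal `.env` might look like the following. The variable names are the defaults that langchain-openai and langchain-together read from the environment; the values are placeholders for your own keys:

```
OPENAI_API_KEY=your-openai-key
TOGETHER_API_KEY=your-together-key
```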


Further information

Tutorials

If you would like to understand the code more deeply, have a look at the following links:

  1. Watch the first three videos of this playlist
  2. Work through this tutorial from the LangChain series

Models

The links below list all the models you can run; just change the model name in the code accordingly. At the moment, only instruct models (chat models) are supported.

  1. Together AI
  2. OpenAI

About

Template for running basic inference on LLMs with LangChain.
