Add support for Azure, Palm, Anthropic, Cohere, Hugging Face Llama2 70b Models - using litellm #331
ishaan-jaff wants to merge 1 commit into code-kern-ai:main from
Conversation
@LeonardPuettmann @SvenjaKern can you please take a look at this PR? Will add to .md files + tests if this initial commit looks good.
We're rolling out support for the top chat LLMs on Hugging Face - are there any you'd like me to add support/examples for here?
@ishaan-jaff Super awesome! Thank you for your contribution. 👍 Really like this! A few questions: How would I configure things like the temperature for the GPT model? Can I also do that with the litellm package? And are the Llama models hosted on Hugging Face? Code looks great, but I would suggest that this deserves its own brick module. Something like a "general_llm_brick" or "lite_llm_brick"?
@ishaan-jaff Just following up to see if you are still interested in implementing this. :)
Hi @LeonardPuettmannKern, yes.
Temperature is one of the input params, exactly like the OpenAI Chat Completion API.
You can use Llama from any of the providers we support: SageMaker, Together AI, Replicate, Deep Infra, etc.
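A minimal sketch of how a parameter like temperature would be passed through (the `build_request` helper, the default value, and the model strings are illustrative, not part of this PR); litellm's `completion()` accepts the same keyword arguments as OpenAI's Chat Completion API:

```python
def build_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Assemble kwargs in the OpenAI Chat Completion shape.

    The same dict can be splatted into litellm.completion(**kwargs)
    regardless of which provider hosts the model.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

# The model string selects the provider, e.g. a Replicate-hosted Llama 2:
req = build_request("replicate/llama-2-70b-chat", "Hello!", temperature=0.2)
```

Because the request shape never changes, only the model string needs to change to switch providers.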
How do I create a brick?
This PR adds support for models from all of the above-mentioned providers using https://github.com/BerriAI/litellm/
All LLM API models are guaranteed to have the same input/output interface.
Here's a sample of how it's used:
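The original sample isn't reproduced here; a minimal sketch of the call shape follows (the helper name and prompt are placeholders, and real use requires `pip install litellm` plus provider API keys in the environment):

```python
def run_inference(model: str, prompt: str) -> str:
    """Send one chat message via litellm and return the reply text.

    Because every provider is normalized to the OpenAI response format,
    the same parsing below works for GPT, Claude, Cohere, Llama 2, etc.
    """
    from litellm import completion  # lazy import; needs `pip install litellm`

    response = completion(
        model=model,  # e.g. "gpt-3.5-turbo", "claude-2", "command-nightly"
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]
```

Provider credentials (e.g. `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`) are read from the environment by litellm at call time.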
PR checklist: