gpt-tag is a library for building applications that rely on LLMs. It's designed to be easily composable within your existing application.
Use one of the LLM-specific packages, such as @gpt-tag/openai, to get started.
Install @gpt-tag/openai, and make sure the openai package is installed as well:

```sh
npm install @gpt-tag/openai openai
```

or

```sh
yarn add @gpt-tag/openai openai
```

gpt-tag libraries use a fluent interface to provide maximum composability. Simply import `openai` from @gpt-tag/openai to start composing.
```js
import { openai } from "@gpt-tag/openai";

const factual = openai.temperature(0);
const president = factual`Who was president of the United States in 1997?`;
const result = await president.get();
// Bill Clinton
```

You can embed the result of one tag inside another tag to create powerful compositions. No LLM requests are made until you call `.get()` on a tag; at that point, the necessary calls are resolved sequentially to produce a final answer.
```js
import { openai } from "@gpt-tag/openai";

const factual = openai.temperature(0);
const opinion = openai.temperature(1);

const president = factual`Who was president of the United States in 1997? Respond with only their name`;
const height = factual`What is ${president}'s height? Respond with only the height. Format: D'D"`;
const iceCream = opinion`Which flavor of ice cream would be preferred by ${president}? Choose only one. Guess if you don't know. Format: <flavor>`;

const [heightAnswer, iceCreamAnswer] = await Promise.all([
  height.get(),
  iceCream.get(),
]);
// [ 6'2", mango ]
```

See the examples for more ways to compose tags together.
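Under the hood, this laziness can be pictured as a tagged template that records its pieces and defers all work until `.get()` is called. The sketch below is a simplified model for illustration only; `lazyTag` and `fakeLLM` are hypothetical names, not part of @gpt-tag/openai:

```js
// Illustrative sketch only: lazyTag and fakeLLM are hypothetical names,
// not part of @gpt-tag/openai.

// Stand-in for a model call: it just wraps the prompt it receives.
async function fakeLLM(prompt) {
  return `answer(${prompt})`;
}

// A tagged template that records its pieces and does no work until get().
function lazyTag(strings, ...values) {
  return {
    async get() {
      const parts = [];
      for (let i = 0; i < strings.length; i++) {
        parts.push(strings[i]);
        if (i < values.length) {
          const value = values[i];
          // Embedded tags are resolved first, sequentially.
          parts.push(
            value && typeof value.get === "function"
              ? await value.get()
              : String(value)
          );
        }
      }
      return fakeLLM(parts.join(""));
    },
  };
}

const inner = lazyTag`inner question`;
const outer = lazyTag`outer question using ${inner}`;

// Nothing runs until get() is called; then inner resolves before outer.
outer.get().then(console.log);
// answer(outer question using answer(inner question))
```

The key point is that constructing a tag is free: embedding `inner` inside `outer` only stores a reference, and the nested resolution happens once, on demand.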
Variables make your tags reusable and composable. Use the `variable` function to create a variable, then reference it in your tags.
```js
import { openai, variable } from "@gpt-tag/openai";

const factual = openai.temperature(0);
const president =
  await factual`Who was president of ${variable("country")} in ${variable("year")}? Respond with only their name`.get(
    {
      variables: {
        country: "Argentina",
        year: 2023,
      },
    },
  );
// Javier Milei
```

Most methods return the GPTTag instance itself, so calls can be fluently chained.
`.get()` returns a promise that resolves any LLM calls used to compose this tag:

```js
await openai.get();
```

Use `.temperature()` to control randomness:

```js
openai.temperature(0.5);
```

Use `.model()` to override the model:

```js
openai.model("gpt-4");
```

Use `.stream()` to request a streaming response:

```js
openai.stream(true);
```

Use `.transform()` to transform the result of a call before it is returned:

```js
openai.transform((result) => result.trim());
```
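One way to picture this fluent API is as an immutable builder: each call returns a new tag, which is why a shared base like `factual` above can be branched without side effects. The sketch below is hypothetical and not the library's actual implementation:

```js
// Hypothetical sketch of fluent chaining, not @gpt-tag/openai's internals:
// each method copies the configuration, so earlier tags are never mutated.
function makeTag(config = {}) {
  return {
    config,
    temperature: (t) => makeTag({ ...config, temperature: t }),
    model: (m) => makeTag({ ...config, model: m }),
    stream: (s) => makeTag({ ...config, stream: s }),
    transform: (fn) => makeTag({ ...config, transform: fn }),
  };
}

const base = makeTag();
const factual = base.temperature(0);
const strictFactual = factual.model("gpt-4");

// Branching strictFactual off factual leaves factual untouched.
console.log(factual.config); // { temperature: 0 }
console.log(strictFactual.config); // { temperature: 0, model: 'gpt-4' }
```

Because nothing is mutated in place, a single base configuration can safely feed many differently tuned tags across an application.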