OpenTelemetry-based observability SDK for LLM applications.
## Installation

```bash
pip install curestry
```

## Quick Start

```python
from curestry import Curestry

# Initialize client
curestry = Curestry(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://cloud.curestry.com",  # or your self-hosted instance
)

# Create a trace
trace = curestry.trace(name="my-trace")

# Create a generation (LLM call)
generation = trace.generation(
    name="my-generation",
    model="gpt-4",
    input="What is 2+2?",
    output="4",
)

# Flush at the end
curestry.flush()
```

## Features

- OpenTelemetry-based tracing
- OpenAI instrumentation
- LangChain integration via CallbackHandler
- Dataset management
- Prompt management
- Async support with automatic batching
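"Async support with automatic batching" typically means events are buffered and exported in groups rather than sent one network call at a time, with a final `flush()` draining whatever remains. A minimal sketch of that pattern in plain Python — this illustrates the general technique, not the SDK's actual internals, and `BatchBuffer` is an illustrative name:

```python
class BatchBuffer:
    """Collects events and exports them in fixed-size batches (illustrative sketch)."""

    def __init__(self, export, batch_size=3):
        self._export = export        # callable that receives a list of events
        self._batch_size = batch_size
        self._buffer = []

    def add(self, event):
        self._buffer.append(event)
        if len(self._buffer) >= self._batch_size:
            self.flush()             # export automatically when the batch is full

    def flush(self):
        if self._buffer:
            self._export(list(self._buffer))
            self._buffer.clear()


batches = []
buf = BatchBuffer(batches.append, batch_size=3)
for i in range(7):
    buf.add(i)
buf.flush()  # drain the remainder, analogous to curestry.flush()
# batches == [[0, 1, 2], [3, 4, 5], [6]]
```

This is why the Quick Start calls `curestry.flush()` before exit: without it, events still sitting in the buffer would be dropped when the process terminates.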
## OpenAI Integration

```python
import openai

from curestry.openai import observe_openai

client = observe_openai(openai.OpenAI())

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
```

## LangChain Integration

```python
from curestry.langchain import CallbackHandler

handler = CallbackHandler(
    public_key="pk-...",
    secret_key="sk-...",
)

# Use with LangChain
chain.invoke(input, config={"callbacks": [handler]})
```

## Environment Variables

| Variable | Description | Default |
|---|---|---|
| `CURESTRY_PUBLIC_KEY` | API public key | - |
| `CURESTRY_SECRET_KEY` | API secret key | - |
| `CURESTRY_HOST` | Platform URL | `https://cloud.curestry.com` |
| `CURESTRY_DEBUG` | Enable debug logging | `false` |
| `CURESTRY_TRACING_ENABLED` | Enable/disable tracing | `true` |
| `CURESTRY_SAMPLE_RATE` | Sampling rate for traces | `1.0` |
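The table above maps onto a config dict with the documented defaults. A sketch of how these variables might be read and coerced to their expected types — the `load_config` helper is illustrative, not part of the SDK:

```python
import os


def load_config(env=None):
    """Read Curestry settings from the environment (illustrative sketch)."""
    env = os.environ if env is None else env
    return {
        "public_key": env.get("CURESTRY_PUBLIC_KEY"),            # no default
        "secret_key": env.get("CURESTRY_SECRET_KEY"),            # no default
        "host": env.get("CURESTRY_HOST", "https://cloud.curestry.com"),
        # booleans arrive as strings; compare case-insensitively
        "debug": env.get("CURESTRY_DEBUG", "false").lower() == "true",
        "tracing_enabled": env.get("CURESTRY_TRACING_ENABLED", "true").lower() == "true",
        # sample rate is a float in [0.0, 1.0]
        "sample_rate": float(env.get("CURESTRY_SAMPLE_RATE", "1.0")),
    }


config = load_config({"CURESTRY_PUBLIC_KEY": "pk-test", "CURESTRY_DEBUG": "true"})
# config["debug"] is True; unset variables fall back to the documented defaults
```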
## Development

```bash
# Install dependencies
poetry install --all-extras

# Run tests
poetry run pytest -v

# Format code
poetry run ruff format .

# Lint
poetry run ruff check .
```