Paid is the all-in-one, drop-in Business Engine for AI Agents that handles your pricing, subscriptions, margins, billing, and renewals with just 5 lines of code. The Paid Python library provides convenient access to the Paid API from Python applications.
See the full API docs here
You can install the package using pip:
pip install paid-python

The client needs to be configured with your account's API key, which is available in the Paid dashboard.
from paid import Paid
client = Paid(token="API_KEY")
client.customers.create_customer(
    name="name"
)

The SDK provides Python classes for all request and response types. These are automatically handled when making API calls.
# Example of creating a customer
response = client.customers.create_customer(
    name="John Doe",
)
# Access response data
print(response.name)
print(response.email)

When the API returns a non-success status code (4xx or 5xx response), the SDK will raise an appropriate error.
from paid import BadRequestError, NotFoundError
from paid.core.api_error import ApiError
try:
    client.customers.create_customer(name="John Doe")
except BadRequestError as e:
    print(e.status_code)  # 400
    print(e.body)  # ErrorResponse with error details
except NotFoundError as e:
    print(e.status_code)  # 404
    print(e.body)
except ApiError as e:
    # Catch-all for other API errors
    print(e.status_code)
    print(e.body)

Supported log levels are DEBUG, INFO, WARNING, ERROR, and CRITICAL.
For example, to set the log level to debug, you can set the environment variable:
export PAID_LOG_LEVEL=DEBUG

Defaults to ERROR.
The Paid SDK supports the following environment variables for configuration:
Your Paid API key for authentication. This is used as a fallback when you don't explicitly pass the token parameter to the Paid() client or initialize_tracing().
export PAID_API_KEY="your_api_key_here"

Controls whether Paid tracing is enabled. Set to false (case-insensitive) to disable all tracing functionality.
export PAID_ENABLED=false

This is useful for:
- Development/testing environments where tracing isn't needed
- Temporarily disabling tracing without modifying code
- Feature flagging in different deployment environments
Defaults to true if not set.
Sets the logging level for Paid SDK operations. See the Logging section for details.
Overrides the default OpenTelemetry collector endpoint URL. Only needed if you want to route traces to a custom endpoint.
export PAID_OTEL_COLLECTOR_ENDPOINT="https://your-custom-endpoint.com:4318/v1/traces"

Defaults to https://collector.agentpaid.io:4318/v1/traces.
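As a concrete illustration, an application could resolve these variables at startup roughly as follows. This is a sketch of the documented behavior, not the SDK's actual implementation, and the function name is hypothetical:

```python
import os

def resolve_paid_config(explicit_token=None):
    """Illustrative resolution of the documented environment variables."""
    # PAID_API_KEY is a fallback when no token is passed explicitly
    token = explicit_token or os.environ.get("PAID_API_KEY")
    # PAID_ENABLED: tracing is disabled only by a case-insensitive "false"
    enabled = os.environ.get("PAID_ENABLED", "true").lower() != "false"
    # PAID_LOG_LEVEL defaults to ERROR
    log_level = os.environ.get("PAID_LOG_LEVEL", "ERROR")
    # PAID_OTEL_COLLECTOR_ENDPOINT overrides the default collector
    endpoint = os.environ.get(
        "PAID_OTEL_COLLECTOR_ENDPOINT",
        "https://collector.agentpaid.io:4318/v1/traces",
    )
    return {"token": token, "enabled": enabled,
            "log_level": log_level, "endpoint": endpoint}

os.environ["PAID_ENABLED"] = "FALSE"  # case-insensitive
config = resolve_paid_config(explicit_token="API_KEY")
print(config["enabled"])  # False
print(config["endpoint"])
```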
The easiest way to add cost tracking is using the @paid_tracing decorator or context manager:
Important: Always call initialize_tracing() and paid_autoinstrument() once at startup before using paid_tracing. initialize_tracing also accepts optional arguments, such as the OTEL collector endpoint and API key, if you want to route your tracing somewhere else.
from paid.tracing import paid_tracing, initialize_tracing, paid_autoinstrument
initialize_tracing()
paid_autoinstrument() # instruments all supported libraries by default
@paid_tracing("<external_customer_id>", external_product_id="<optional_external_product_id>")
def some_agent_workflow():  # your function
    # Your logic - use any AI providers and send signals with signal().
    # This function is typically an event processor that should lead to AI calls
    # or to events emitted as Paid signals.
    pass

You can also use paid_tracing as a context manager in a with statement:
from paid.tracing import paid_tracing, initialize_tracing, paid_autoinstrument
initialize_tracing()
paid_autoinstrument() # instruments all supported libraries by default
# Synchronous
with paid_tracing("customer_123", external_product_id="product_456"):
    result = workflow()

# Asynchronous
async with paid_tracing("customer_123", external_product_id="product_456"):
    result = await workflow()

Both approaches:
- Handle both sync and async functions/code blocks
- Gracefully fall back to normal execution if tracing fails
- Support the same parameters:
external_customer_id, external_product_id, tracing_token, store_prompt, metadata
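The dual decorator/context-manager behavior and the graceful fallback can be illustrated with the standard library's contextlib.ContextDecorator. This is a simplified sketch of the pattern only, not paid_tracing itself (the real implementation also handles async code, which this sketch omits):

```python
from contextlib import ContextDecorator

class tracing_sketch(ContextDecorator):
    """Usable both as @tracing_sketch(...) and as `with tracing_sketch(...):`."""

    def __init__(self, external_customer_id, external_product_id=None):
        self.external_customer_id = external_customer_id
        self.external_product_id = external_product_id

    def __enter__(self):
        try:
            # The real SDK would start an OpenTelemetry span here.
            print(f"start trace for {self.external_customer_id}")
        except Exception:
            pass  # graceful fallback: a tracing failure never breaks the workflow
        return self

    def __exit__(self, exc_type, exc, tb):
        print("end trace")
        return False  # never swallow exceptions raised by the traced code

@tracing_sketch("customer_123", external_product_id="product_456")
def workflow():
    return "done"

workflow()                            # decorator form
with tracing_sketch("customer_123"):  # context-manager form
    pass
```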
After calling initialize_tracing() and paid_autoinstrument(), use your AI provider SDKs directly — no wrapper classes needed:
from openai import OpenAI
from paid.tracing import paid_tracing, initialize_tracing, paid_autoinstrument
initialize_tracing()
paid_autoinstrument(libraries=["openai"])
openai_client = OpenAI(api_key="<OPENAI_API_KEY>")
@paid_tracing("your_external_customer_id", external_product_id="your_external_product_id")
def image_generate():
    response = openai_client.images.generate(
        model="dall-e-3",
        prompt="A sunset over mountains",
        size="1024x1024",
        quality="hd",
        style="vivid",
        n=1
    )
    return response

image_generate()

You can attach custom metadata to your traces by passing a metadata dictionary to the paid_tracing() decorator or context manager. This metadata will be stored with the trace and can be used to filter and query traces later.
Python - Decorator
from paid.tracing import paid_tracing, signal, initialize_tracing, paid_autoinstrument
from openai import OpenAI
initialize_tracing()
paid_autoinstrument(libraries=["openai"])
openai_client = OpenAI(api_key="<OPENAI_API_KEY>")
@paid_tracing(
    "customer_123",
    external_product_id="product_123",
    metadata={
        "campaign_id": "campaign_456",
        "environment": "production",
        "user_tier": "enterprise"
    }
)
def process_event(event):
    """Process event with custom metadata"""
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": event.content}]
    )
    signal("event_processed", enable_cost_tracing=True)
    return response

process_event(incoming_event)

Python - Context Manager
from paid.tracing import paid_tracing, signal, initialize_tracing, paid_autoinstrument
from openai import OpenAI
initialize_tracing()
paid_autoinstrument(libraries=["openai"])
openai_client = OpenAI(api_key="<OPENAI_API_KEY>")
def process_event(event):
    """Process event with custom metadata"""
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": event.content}]
    )
    signal("event_processed", enable_cost_tracing=True)
    return response

# Pass metadata to context manager
with paid_tracing(
    "customer_123",
    external_product_id="product_123",
    metadata={
        "campaign_id": "campaign_456",
        "environment": "production",
        "user_tier": "enterprise"
    }
):
    process_event(incoming_event)

Once you've added metadata to your traces, you can filter traces using the metadata parameter in the traces API endpoint:
# Filter by single metadata field
curl -G "https://api.paid.ai/api/organizations/{orgId}/traces" \
--data-urlencode 'metadata={"campaign_id":"campaign_456"}' \
-H "Authorization: Bearer YOUR_API_KEY"
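The metadata parameter is a URL-encoded JSON object. If you are calling the endpoint from Python rather than curl, the query string can be built with the standard library (the org ID and API key are placeholders, exactly as in the curl examples):

```python
import json
from urllib.parse import urlencode

base = "https://api.paid.ai/api/organizations/{orgId}/traces"
# Encode the metadata filter as URL-encoded JSON, mirroring the curl example
params = urlencode({"metadata": json.dumps({"campaign_id": "campaign_456"})})
url = f"{base}?{params}"
print(url)
# The request itself can then be sent with any HTTP client,
# passing the header: Authorization: Bearer YOUR_API_KEY
```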
# Filter by multiple metadata fields (all must match)
curl -G "https://api.paid.ai/api/organizations/{orgId}/traces" \
--data-urlencode 'metadata={"campaign_id":"campaign_456","environment":"production"}' \
-H "Authorization: Bearer YOUR_API_KEY"

For maximum convenience, you can use OpenTelemetry auto-instrumentation to automatically track costs without modifying your AI library calls. This approach uses official OpenTelemetry instrumentors for supported AI libraries.
from paid import Paid
from paid.tracing import paid_autoinstrument, initialize_tracing, paid_tracing
from openai import OpenAI
# Initialize Paid SDK
client = Paid(token="PAID_API_KEY")
initialize_tracing()
paid_autoinstrument(libraries=["openai"])
# Now all OpenAI calls will be automatically traced
openai_client = OpenAI(api_key="<OPENAI_API_KEY>")
@paid_tracing("your_external_customer_id", external_product_id="your_external_product_id")
def chat_with_gpt():
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    return response

chat_with_gpt()  # Costs are automatically tracked!

Auto-instrumentation supports the following AI libraries:
- anthropic - Anthropic SDK
- gemini - Google Generative AI (google-generativeai)
- openai - OpenAI Python SDK
- openai-agents - OpenAI Agents SDK
- claude-agent-sdk - Claude Agent SDK
- bedrock - AWS Bedrock (boto3)
- langchain - LangChain framework
- instructor - Instructor
If you only want to instrument specific libraries, pass them to paid_autoinstrument():
from paid.tracing import paid_autoinstrument
# Instrument only Anthropic and OpenAI
paid_autoinstrument(libraries=["anthropic", "openai"])

- Auto-instrumentation uses official OpenTelemetry instrumentors for each AI library
- It automatically wraps library calls without requiring you to use Paid wrapper classes
- Works seamlessly with the @paid_tracing() decorator or context manager
- Costs are tracked in the same way as when using manual wrappers
- Should be called once during application startup, typically before creating AI client instances
Signals allow you to emit events within your tracing context. They have access to all tracing information, so you need fewer arguments compared to manual API calls.
Use the signal() function which must be called within an active @paid_tracing() context (decorator or context manager).
Here's an example of how to use it:
from paid.tracing import paid_tracing, signal, initialize_tracing, paid_autoinstrument
initialize_tracing()
paid_autoinstrument()
@paid_tracing("your_external_customer_id", external_product_id="your_external_product_id")
def do_work():
    # ...do some work...
    signal(
        event_name="<your_signal_name>",
        data={}  # optional data (e.g. manual cost tracking data)
    )

do_work()

Same approach with a context manager:
from paid.tracing import paid_tracing, signal, initialize_tracing, paid_autoinstrument
initialize_tracing()
paid_autoinstrument()
def do_work():
    # ...do some work...
    signal(
        event_name="<your_signal_name>",
        data={}  # optional data (e.g. manual cost tracking data)
    )

# Use context manager instead
with paid_tracing("your_external_customer_id", external_product_id="your_external_product_id"):
    do_work()

If you want a signal to carry information about costs, the signal should be sent from the same tracing context as the wrappers and hooks that recorded those costs.
This will look something like this:
from paid.tracing import paid_tracing, signal, initialize_tracing, paid_autoinstrument
initialize_tracing()
paid_autoinstrument()
@paid_tracing("your_external_customer_id", external_product_id="your_external_product_id")
def do_work():
    # ... your workflow logic
    # ... your AI calls (auto-instrumented)
    signal(
        event_name="<your_signal_name>",
        data={},  # optional data (e.g. manual cost tracking data)
        enable_cost_tracing=True,  # set this flag to associate the signal with costs
    )
    # ... your workflow logic
    # ... your AI calls (auto-instrumented, can be sent after the signal too)

do_work()

Then all of the costs traced in the @paid_tracing() context are related to that signal.
Sometimes your agent workflow cannot fit into a single traceable function as above, because it is split into disjoint parts, possibly even running across different machines.
For such cases, you can pass a tracing token directly to @paid_tracing() or context manager to link distributed traces together.
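Handing the tracing token from one process to another requires some shared storage. The save_to_storage/load_from_storage helpers used in the examples below are hypothetical; a minimal in-memory stand-in (a real deployment would back this with Redis, a database, or a message queue so other machines can read the token) could look like:

```python
# Hypothetical stand-ins for the storage helpers used in the examples below.
_token_store: dict[str, str] = {}

def save_to_storage(key: str, token: str) -> None:
    # In production, write to Redis, a database, or a message queue instead
    _token_store[key] = token

def load_from_storage(key: str) -> str:
    return _token_store[key]

save_to_storage("workflow_123", "example-token")
assert load_from_storage("workflow_123") == "example-token"
```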
The simplest way to implement distributed tracing is to pass the token directly to the decorator or context manager:
from paid.tracing import paid_tracing, signal, generate_tracing_token, initialize_tracing, paid_autoinstrument
from openai import OpenAI
initialize_tracing()
paid_autoinstrument(libraries=["openai"])
openai_client = OpenAI(api_key="<OPENAI_API_KEY>")
# Process 1: Generate token and do initial work
token = generate_tracing_token()
print(f"Tracing token: {token}")
# Store token for other processes (e.g., in Redis, database, message queue)
save_to_storage("workflow_123", token)
@paid_tracing("customer_123", tracing_token=token, external_product_id="product_123")
def process_part_1():
    # AI calls here will be traced
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Analyze data"}]
    )
    # Signal without cost tracing
    signal("part_1_complete", enable_cost_tracing=False)

process_part_1()
# Process 2 (different machine/process): Retrieve and use token
token = load_from_storage("workflow_123")
@paid_tracing("customer_123", tracing_token=token, external_product_id="product_123")
def process_part_2():
    # AI calls here will be linked to the same trace
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Generate response"}]
    )
    # Signal WITH cost tracing - links all costs from both processes
    signal("workflow_complete", enable_cost_tracing=True)

process_part_2()
# No cleanup needed - token is scoped to the decorated function

Using a context manager instead of the decorator:
from paid.tracing import paid_tracing, signal, generate_tracing_token, initialize_tracing, paid_autoinstrument
from openai import OpenAI
initialize_tracing()
paid_autoinstrument(libraries=["openai"])
openai_client = OpenAI(api_key="<OPENAI_API_KEY>")
# Process 1: Generate token and do initial work
token = generate_tracing_token()
save_to_storage("workflow_123", token)
with paid_tracing("customer_123", external_product_id="product_123", tracing_token=token):
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Analyze data"}]
    )
    signal("part_1_complete", enable_cost_tracing=False)

# Process 2: Retrieve and use the same token
token = load_from_storage("workflow_123")
with paid_tracing("customer_123", external_product_id="product_123", tracing_token=token):
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Generate response"}]
    )
    signal("workflow_complete", enable_cost_tracing=True)

If you would prefer not to have Paid track your costs automatically and instead want to send the costs yourself, you can use the manual cost tracking mechanism. Just attach the cost information in the following format to a signal payload:
from paid import Paid, Signal, CustomerByExternalId, ProductByExternalId
client = Paid(token="<PAID_API_KEY>")
signal = Signal(
    event_name="<your_signal_name>",
    customer=CustomerByExternalId(external_customer_id="<your_external_customer_id>"),
    attribution=ProductByExternalId(external_product_id="<your_external_product_id>"),
    data={
        "costData": {
            "vendor": "<any_vendor_name>",  # can be anything; traces are grouped by vendor in the UI
            "cost": {
                "amount": 0.002,
                "currency": "USD"
            },
            "gen_ai.response.model": "<ai_model_name>",  # optional, but will be displayed in the UI
            "start_time": "2024-01-01T11:45:00.000Z",  # optional; affects where the trace appears on the timeline
        }
    }
)

client.signals.create_signals(signals=[signal])

Alternatively, the same costData payload can be passed to the OTLP signaling mechanism:
from paid.tracing import paid_tracing, signal, initialize_tracing, paid_autoinstrument
initialize_tracing()
paid_autoinstrument()
@paid_tracing("your_external_customer_id", external_product_id="your_external_product_id")
def do_work():
    # ...do some work...
    signal(
        event_name="<your_signal_name>",
        data={
            "costData": {
                "vendor": "<any_vendor_name>",  # can be anything; traces are grouped by vendor in the UI
                "cost": {
                    "amount": 0.002,
                    "currency": "USD"
                },
                "gen_ai.response.model": "<ai_model_name>",  # optional, but will be displayed in the UI
                "start_time": "2024-01-01T11:45:00.000Z",  # optional; affects where the trace appears on the timeline
            }
        }
    )

do_work()

If you would prefer to send us raw usage manually (without wrappers) and have us compute the cost, you can attach usage data in the following format:
from paid import Paid, Signal, CustomerByExternalId, ProductByExternalId
client = Paid(token="<PAID_API_KEY>")
signal = Signal(
    event_name="<your_signal_name>",
    customer=CustomerByExternalId(external_customer_id="<your_external_customer_id>"),
    attribution=ProductByExternalId(external_product_id="<your_external_product_id>"),
    data={
        "costData": {
            "vendor": "<any_vendor_name>",  # can be anything; traces are grouped by vendor in the UI
            "attributes": {
                "gen_ai.response.model": "gpt-4.1-mini",
                "gen_ai.usage.input_tokens": 100,
                "gen_ai.usage.output_tokens": 300,
                "gen_ai.usage.cached_input_tokens": 600,
                "gen_ai.usage.cache_creation_input_tokens": 200,
            },
        }
    }
)

client.signals.create_signals(signals=[signal])

The same, but via OTEL signaling:
from paid.tracing import paid_tracing, signal, initialize_tracing, paid_autoinstrument
initialize_tracing()
paid_autoinstrument()
@paid_tracing("your_external_customer_id", external_product_id="your_external_product_id")
def do_work():
    # ...do some work...
    signal(
        event_name="<your_signal_name>",
        data={
            "costData": {
                "vendor": "<any_vendor_name>",  # can be anything; traces are grouped by vendor in the UI
                "attributes": {
                    "gen_ai.response.model": "gpt-4.1-mini",
                    "gen_ai.usage.input_tokens": 100,
                    "gen_ai.usage.output_tokens": 300,
                    "gen_ai.usage.cached_input_tokens": 600,
                    "gen_ai.usage.cache_creation_input_tokens": 200,
                },
            }
        }
    )

do_work()

All of the functionality above is also available in an async flavor.
Use AsyncPaid instead of Paid for async operations:
from paid import AsyncPaid
client = AsyncPaid(token="API_KEY")
# Async API calls
customer = await client.customers.create_customer(name="John Doe")

The @paid_tracing decorator automatically handles both sync and async functions:
from openai import AsyncOpenAI
from paid.tracing import paid_tracing, initialize_tracing, paid_autoinstrument
initialize_tracing()
paid_autoinstrument(libraries=["openai"])
openai_client = AsyncOpenAI(api_key="<OPENAI_API_KEY>")
@paid_tracing("your_external_customer_id", external_product_id="your_external_product_id")
async def generate_image():
    response = await openai_client.images.generate(
        model="dall-e-3",
        prompt="A sunset over mountains",
        size="1024x1024",
        quality="hd",
        n=1
    )
    return response

# Call the async function
await generate_image()

The signal() function works seamlessly in async contexts:
from paid.tracing import paid_tracing, signal, initialize_tracing, paid_autoinstrument
from openai import AsyncOpenAI
initialize_tracing()
paid_autoinstrument(libraries=["openai"])
openai_client = AsyncOpenAI(api_key="<OPENAI_API_KEY>")
@paid_tracing("your_external_customer_id", external_product_id="your_external_product_id")
async def do_work():
    # Perform async AI operations
    response = await openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    # Send signal (works in async context)
    signal(
        event_name="<your_signal_name>",
        enable_cost_tracing=True  # Associate with traced costs
    )
    return response

# Execute
await do_work()

If you would like to use the Paid OTEL tracer provider:
from paid.tracing import get_paid_tracer_provider
paid_tracer_provider = get_paid_tracer_provider()

While we value open-source contributions to this SDK, this library is generated programmatically. Additions made directly to this library would have to be moved over to our generation code; otherwise they would be overwritten upon the next generated release. Feel free to open a PR as a proof of concept, but know that we will not be able to merge it as-is. We suggest opening an issue first to discuss it with us!
On the other hand, contributions to the README are always very welcome!