SealVera Python SDK

Tamper-evident audit trails for AI agents — compliance-ready in minutes.


SealVera gives every AI decision a cryptographically-sealed, immutable audit log — so you can prove what your agent decided, why it decided it, and that the record hasn't been touched. Built for teams shipping AI in finance, healthcare, legal, and any regulated industry that needs to answer to auditors, regulators, or customers.

EU AI Act · SOC 2 · HIPAA · GDPR · ISO 42001 — SealVera logs are designed to satisfy the explainability and auditability requirements of major AI compliance frameworks.


Why SealVera?

  • Tamper-evident logs — every decision is cryptographically hashed and chained; any tampering is detectable
  • 2-line integration — works as a decorator, callback, or context manager
  • Explainability built-in — captures inputs, outputs, reasoning, confidence scores, and model used
  • Real-time dashboard — search, filter, and export your full AI decision history at app.sealvera.com
  • Drift detection — get alerted when agent behaviour deviates from its baseline
  • Works with any LLM — OpenAI, Anthropic Claude, Google Gemini, Ollama, LangChain, CrewAI, AutoGen, and more
  • Zero hard dependencies — stdlib only, no bloat, no vendor lock-in
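
The tamper-evident property above rests on a familiar principle: each log entry's hash incorporates the previous entry's hash, so editing any record invalidates every hash after it. Here is a minimal stdlib-only sketch of that idea — purely illustrative of the concept, not SealVera's actual implementation:

```python
import hashlib
import json

def seal(entry: dict, prev_hash: str) -> dict:
    """Chain an entry to its predecessor by hashing entry content + prev_hash."""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return {**entry, "prev_hash": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain: list) -> bool:
    """Recompute every hash; any edit to any entry breaks verification."""
    prev = "0" * 64  # genesis value for the first entry
    for sealed in chain:
        entry = {k: v for k, v in sealed.items() if k not in ("hash", "prev_hash")}
        payload = json.dumps(entry, sort_keys=True) + prev
        if sealed["prev_hash"] != prev or \
           sealed["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = sealed["hash"]
    return True

chain = []
prev = "0" * 64
for decision in ("APPROVED", "FLAGGED"):
    sealed = seal({"decision": decision}, prev)
    chain.append(sealed)
    prev = sealed["hash"]

assert verify(chain)
chain[0]["decision"] = "REJECTED"  # tamper with the first record
assert not verify(chain)           # detected: the hash no longer matches
```

Any single-record edit is caught because the stored hash can no longer be reproduced from the record's contents.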

Installation

pip install sealvera

Quick Start

from sealvera import init, log

# Initialize once (e.g. in your app entry point)
init(
    endpoint="https://app.sealvera.com",
    api_key="sv_your_api_key_here",
    agent="payment-agent",
)

# Log a decision
log(
    action="approve_payment",
    decision="APPROVED",
    input={"amount": 5000, "currency": "USD", "customer_id": "c_123"},
    output={"approved": True, "reason": "Low risk score, verified merchant"},
    reasoning=[
        {"factor": "risk_score", "value": "0.12", "signal": "safe", "explanation": "Below 0.3 threshold"},
        {"factor": "merchant", "value": "verified", "signal": "safe", "explanation": "KYC passed"},
    ],
)
# Decision logged, hashed, and stored in your tamper-evident audit trail

Get your API key at app.sealvera.com.


LangChain Integration

The easiest way to audit LangChain agents — drop in the callback handler:

from sealvera import init
from sealvera.callbacks import SealVeraCallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

init(endpoint="https://app.sealvera.com", api_key="sv_your_key", agent="langchain-agent")

model = ChatOpenAI(
    model="gpt-4o",
    callbacks=[SealVeraCallbackHandler(agent="langchain-agent")],
)

response = model.invoke([HumanMessage(content="Should I approve this loan application?")])
# Full chain logged — every LLM call, tool invocation, and final answer

Decorator Usage

Wrap any agent function with a single decorator:

from sealvera import init, trace

init(endpoint="https://app.sealvera.com", api_key="sv_your_key")

@trace(agent="fraud-detector", action="evaluate_transaction")
def evaluate_transaction(transaction: dict) -> dict:
    # Your agent logic here — LLM call, rules engine, ML model, anything
    risk_score = run_fraud_model(transaction)
    decision = "FLAGGED" if risk_score > 0.8 else "APPROVED"
    return {"decision": decision, "risk_score": risk_score, "reason": "..."}

result = evaluate_transaction({"amount": 9800, "merchant": "Unknown Corp"})
# Input, output, decision, and timing logged automatically

Context Manager

from sealvera import init, SealVeraClient

init(endpoint="https://app.sealvera.com", api_key="sv_your_key")
client = SealVeraClient()

with client.trace(agent="underwriting-agent", action="evaluate_loan") as ctx:
    ctx.set_input(application)
    result = run_underwriting_model(application)
    ctx.set_output(result)
    ctx.set_decision(result["decision"])
# Logged on context exit, even if an exception occurs
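
The "even if an exception occurs" guarantee follows from standard context-manager semantics: the exit logic runs whether or not the body raises. A stdlib sketch of the pattern — illustrative only, SealVera's own implementation will differ:

```python
from contextlib import contextmanager

events = []  # stand-in for the audit trail

@contextmanager
def trace(agent: str, action: str):
    ctx = {"agent": agent, "action": action, "decision": None}
    try:
        yield ctx
    except Exception as exc:
        ctx["decision"] = "FAILED"
        ctx["error"] = repr(exc)
        raise                   # re-raise after recording the failure
    finally:
        events.append(ctx)      # runs on every exit path

try:
    with trace(agent="underwriting-agent", action="evaluate_loan") as ctx:
        raise RuntimeError("model unavailable")
except RuntimeError:
    pass

assert events[0]["decision"] == "FAILED"
```

The exception still propagates to the caller; the audit record is written first.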

Autoload (Zero-Code Integration)

Add SealVera to any Python app without modifying source code:

python -m sealvera.autoload your_app.py

Or configure via environment variables and import the module at the top of your entry point:

export SEALVERA_ENDPOINT=https://app.sealvera.com
export SEALVERA_API_KEY=sv_your_key_here
export SEALVERA_AGENT=my-agent

import sealvera.autoload  # patches LLM clients automatically

Async Support

Full async/await support for FastAPI, async LangChain, and any async agent framework:

from sealvera import init, log_async

init(endpoint="https://app.sealvera.com", api_key="sv_your_key")

async def process_claim(claim: dict) -> dict:
    result = await run_claims_model(claim)
    await log_async(
        action="evaluate_claim",
        decision=result["decision"],
        input=claim,
        output=result,
        reasoning=result.get("reasoning", []),
    )
    return result

Decision Vocabulary

Decision    Meaning                                   Use for
APPROVED    Request approved / allowed                Payments, loans, access grants
REJECTED    Request blocked / denied                  Fraud blocks, denials
FLAGGED     Needs human review                        Borderline cases, escalations
COMPLETED   Task finished successfully                General agent tasks
FAILED      Task failed                               Error paths
ESCALATED   Handed to a human or higher-level agent   Human-in-the-loop
PASSED      Test / check passed                       CI, health checks
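
A common pattern is to map a numeric model output onto this vocabulary with explicit thresholds. The 0.3 and 0.8 cutoffs below are illustrative choices, not values prescribed by SealVera:

```python
def to_decision(risk_score: float) -> str:
    """Map a fraud-model risk score onto the SealVera decision vocabulary."""
    if risk_score >= 0.8:
        return "REJECTED"   # high confidence: block outright
    if risk_score >= 0.3:
        return "FLAGGED"    # borderline: route to human review
    return "APPROVED"       # low risk: allow

print(to_decision(0.12))  # APPROVED
print(to_decision(0.55))  # FLAGGED
print(to_decision(0.91))  # REJECTED
```

Keeping the thresholds explicit in code means the reasoning behind each decision can be logged alongside it.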

Use Cases

  • Financial services & fintech — log every credit decision, fraud flag, and payment approval for regulatory review (FINRA, OCC, CFPB)
  • Healthcare AI — tamper-evident audit trail for clinical decision support (HIPAA-aligned)
  • Legal tech — record document review, contract analysis, and compliance risk assessments
  • Insurance — log claims triage, underwriting decisions, and anomaly flags
  • HR / hiring tools — demonstrate fair, explainable AI decisions to avoid bias liability
  • Any agentic AI system — multi-step reasoning chains, tool calls, and autonomous decisions

Environment Variables

Variable            Description                      Default
SEALVERA_ENDPOINT   SealVera server URL              https://app.sealvera.com
SEALVERA_API_KEY    Your API key (starts with sv_)   (required, no default)
SEALVERA_AGENT      Default agent name               default
SEALVERA_DEBUG      Enable debug logging             false
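
If you drive configuration from the environment, the usual resolution pattern is `os.getenv` with the documented defaults. This is a sketch of how such resolution typically works, not the SDK's internal code:

```python
import os

def resolve_config() -> dict:
    """Resolve SealVera-style settings from the environment, with defaults."""
    api_key = os.getenv("SEALVERA_API_KEY")
    if not api_key:
        raise RuntimeError("SEALVERA_API_KEY is required (starts with sv_)")
    return {
        "endpoint": os.getenv("SEALVERA_ENDPOINT", "https://app.sealvera.com"),
        "api_key": api_key,
        "agent": os.getenv("SEALVERA_AGENT", "default"),
        "debug": os.getenv("SEALVERA_DEBUG", "false").lower() == "true",
    }

os.environ["SEALVERA_API_KEY"] = "sv_example"
cfg = resolve_config()
print(cfg["endpoint"])  # https://app.sealvera.com
print(cfg["agent"])     # default
```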

JavaScript / Node.js SDK

Looking for the Node.js version? → npmjs.com/package/sealvera


License

MIT
