ReliAPI

Reliability layer for HTTP APIs and LLM calls: retries, caching, request deduplication, circuit breakers, and predictable AI costs.

Features

  • Retries with Backoff - Automatic retries with exponential backoff (sketched below)
  • Circuit Breaker - Prevent cascading failures
  • Caching - TTL cache for GET requests and LLM responses
  • Idempotency - Request coalescing with idempotency keys
  • Rate Limiting - Built-in rate limiting per tier
  • LLM Proxy - Unified interface for OpenAI, Anthropic, Mistral
  • Cost Control - Budget caps and cost estimation
  • Self-Service Onboarding - Automated API key generation
  • Paddle Payments - Subscription management
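
For intuition, exponential backoff with full jitter looks roughly like the sketch below. This illustrates the pattern only; it is not ReliAPI's actual implementation (see core/retry.py), and the function name and defaults are invented for the example.

import random
import time

def retry_with_backoff(call, max_attempts=5, base_delay=0.5, max_delay=30.0):
    # Illustrative only -- ReliAPI's real logic lives in core/retry.py.
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Double the delay each attempt, cap it, and add jitter so
            # many clients don't retry in lockstep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))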

Project Structure

reliapi/
├── core/                 # Core reliability components
│   ├── cache.py          # Redis-based TTL cache
│   ├── circuit_breaker.py
│   ├── idempotency.py    # Request coalescing
│   ├── retry.py          # Exponential backoff
│   ├── rate_limiter.py   # Per-tenant rate limits
│   ├── rate_scheduler.py # Token bucket algorithm
│   ├── key_pool.py       # Multi-key management
│   └── cost_estimator.py # LLM cost calculation
├── app/
│   ├── main.py           # FastAPI application
│   ├── services.py       # Business logic
│   ├── schemas.py        # Pydantic models
│   └── routes/           # Business routes
│       ├── paddle.py     # Payment processing
│       ├── onboarding.py # Self-service signup
│       ├── analytics.py  # Usage analytics
│       ├── calculators.py # ROI/pricing calculators
│       └── dashboard.py  # Admin dashboard
├── adapters/
│   └── llm/              # LLM provider adapters
│       ├── openai.py
│       ├── anthropic.py
│       └── mistral.py
├── config/               # Configuration loader
├── metrics/              # Prometheus metrics
├── examples/             # Code examples
├── integrations/         # LangChain, LlamaIndex
├── openapi/              # OpenAPI specs
├── postman/              # Postman collection
└── tests/                # Test suite
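
The circuit breaker in core/circuit_breaker.py is what prevents cascading failures: after repeated errors it stops forwarding requests for a cooldown period. Below is a minimal sketch of the pattern, assuming the error_threshold and cooldown_s semantics from the Configuration section; it is not the project's actual code.

import time

class CircuitBreaker:
    def __init__(self, error_threshold=5, cooldown_s=60):
        self.error_threshold = error_threshold
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        # While open, fail fast until the cooldown elapses.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown_s:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.error_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result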

Quick Start

Using RapidAPI (No Installation)

Try ReliAPI directly on RapidAPI.

Self-Hosting with Docker

# Mount your config.yaml and point REDIS_URL at a Redis instance the
# container can actually reach (redis://localhost inside the container
# refers to the container itself). host.docker.internal works on Docker
# Desktop; on Linux, use your host's address or a shared network.
docker run -d -p 8000:8000 \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -e REDIS_URL="redis://host.docker.internal:6379/0" \
  -e RELIAPI_CONFIG_PATH=/app/config.yaml \
  kikudoc/reliapi:latest

Local Development

# Clone repository
git clone https://github.com/KikuAI-Lab/reliapi.git
cd reliapi

# Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Start Redis
docker run -d -p 6379:6379 redis:7-alpine

# Run server
export REDIS_URL=redis://localhost:6379/0
uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
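
Once the server is running, a quick check against the /healthz endpoint (listed under API Endpoints below) confirms everything is wired up:

import requests

# The dev server started above listens on port 8000.
r = requests.get("http://localhost:8000/healthz")
print(r.status_code, r.text)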

Configuration

Create config.yaml:

targets:
  openai:
    base_url: https://api.openai.com/v1
    llm:
      provider: openai
      default_model: gpt-4o-mini
      soft_cost_cap_usd: 0.10
      hard_cost_cap_usd: 0.50
    cache:
      enabled: true
      ttl_s: 3600
    circuit:
      error_threshold: 5
      cooldown_s: 60
    auth:
      type: bearer_env
      env_var: OPENAI_API_KEY
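
The two cost caps are worth spelling out. A reasonable reading (not verified against the source) is that soft_cost_cap_usd triggers a warning while hard_cost_cap_usd rejects the request outright; the sketch below illustrates that interpretation, with the function name invented for the example.

import logging

def check_cost_caps(estimated_cost_usd, soft_cap_usd=0.10, hard_cap_usd=0.50):
    # Hypothetical enforcement; ReliAPI's real logic lives in
    # core/cost_estimator.py and the proxy layer, and may differ.
    if estimated_cost_usd > hard_cap_usd:
        raise ValueError(
            f"estimated cost ${estimated_cost_usd:.2f} exceeds hard cap ${hard_cap_usd:.2f}"
        )
    if estimated_cost_usd > soft_cap_usd:
        logging.warning("estimated cost $%.2f exceeds soft cap $%.2f",
                        estimated_cost_usd, soft_cap_usd)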

API Endpoints

Core Proxy

Endpoint       Method   Description
/proxy/http    POST     Proxy any HTTP API with retries, caching, and circuit breaking
/proxy/llm     POST     Proxy LLM requests with cost control
/healthz       GET      Health check
/metrics       GET      Prometheus metrics
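
The proxy endpoints take JSON bodies that mirror the SDK parameters shown under SDK Usage below; the exact schema (and the auth header name) should be confirmed against the specs in openapi/. A sketch with requests:

import requests

# Field names mirror the Python SDK parameters; the bearer-style
# Authorization header is an assumption -- check openapi/ for the
# authoritative schema.
resp = requests.post(
    "http://localhost:8000/proxy/llm",
    headers={"Authorization": "Bearer your-api-key"},
    json={
        "target": "openai",
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello!"}],
        "idempotency_key": "unique-key-123",
    },
)
print(resp.json())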

Business Routes

Endpoint                  Method   Description
/paddle/plans             GET      List subscription plans
/paddle/checkout          POST     Create checkout session
/paddle/webhook           POST     Handle Paddle webhooks
/onboarding/start         POST     Generate API key
/onboarding/quick-start   GET      Get quick-start guide
/onboarding/verify        POST     Verify integration
/calculators/pricing      POST     Calculate pricing
/calculators/roi          POST     Calculate ROI
/dashboard/metrics        GET      Usage metrics
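
Self-service onboarding is a single call to /onboarding/start, which generates an API key. The request body below is a guess for illustration only; consult the OpenAPI specs in openapi/ for the actual fields.

import requests

# Request and response fields are assumptions, not the documented schema.
resp = requests.post(
    "http://localhost:8000/onboarding/start",
    json={"email": "you@example.com"},
)
print(resp.json())  # expected to include the generated API key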

Environment Variables

# Required
REDIS_URL=redis://localhost:6379/0

# Optional
RELIAPI_CONFIG_PATH=config.yaml
RELIAPI_API_KEY=your-api-key
CORS_ORIGINS=*
LOG_LEVEL=INFO

# LLM Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
MISTRAL_API_KEY=...

# Paddle (for payments)
PADDLE_API_KEY=...
PADDLE_VENDOR_ID=...
PADDLE_WEBHOOK_SECRET=...
PADDLE_ENVIRONMENT=sandbox

SDK Usage

Python

from reliapi_sdk import ReliAPI

client = ReliAPI(
    base_url="https://reliapi.kikuai.dev",
    api_key="your-api-key"
)

# HTTP proxy; cache sets the response TTL in seconds
response = client.proxy_http(
    target="my-api",
    method="GET",
    path="/users/123",
    cache=300
)

# LLM proxy; the idempotency key coalesces duplicate requests
llm_response = client.proxy_llm(
    target="openai",
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    idempotency_key="unique-key-123"
)

JavaScript

import { ReliAPI } from 'reliapi-sdk';

const client = new ReliAPI({
  baseUrl: 'https://reliapi.kikuai.dev',
  apiKey: 'your-api-key'
});

const response = await client.proxyLlm({
  target: 'openai',
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }]
});

Testing

# Run tests
pytest

# With coverage
pytest --cov=reliapi --cov-report=html

Deployment

See DEPLOYMENT.md for the production deployment guide.

License

AGPL-3.0. Copyright (c) 2025 KikuAI Lab
