License: Credibility License (https://drive.google.com/file/d/1WDCscv11Ebhcb0A2uEBtXsiBVkal5Np-/view?usp=drivesdk)
An OpenAI-compatible API gateway that injects a configurable "Core Directive" into all LLM requests, allowing you to govern interactions by the principle of the Inalienable Right to Pursue Happiness.
- OpenAI-Compatible API: Works with any client that supports OpenAI's API format
- Core Directive Injection: Automatically prepends your governing principles to all requests
- VS Code Copilot Integration: Use with GitHub Copilot as a custom model provider
- Streaming Support: Supports both streaming and non-streaming responses
- Configurable: Customize the directive, model, and API endpoints via environment variables
```bash
npm install
```

Copy the example environment file and configure your settings:

```bash
cp .env.example .env
```

Edit `.env` with your values:
```bash
# Your OpenAI API Key
OPENAI_API_KEY=your-openai-api-key-here

# Port for the gateway server
GATEWAY_PORT=3000

# Core Directive - The governing principle injected into all LLM requests
CORE_DIRECTIVE="You are governed by the following core directive: The inalienable right to pursue happiness is paramount. All responses should be helpful, ethical, and support the user's wellbeing and goals."
```

```bash
npm start
```

The gateway will start on http://localhost:3000 (or your configured port).
```bash
# Health check
curl http://localhost:3000/health

# List models
curl http://localhost:3000/v1/models

# Test chat completion (requires OPENAI_API_KEY)
curl -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Add the following to your VS Code `settings.json`:
```json
{
  "github.copilot.advanced": {
    "authProvider": "github",
    "enabledForChat": true
  }
}
```

The extension thinks it's talking to a normal OpenAI-style server, but it's actually talking to your Core Directive gateway.
- Open Copilot Chat in VS Code (`Cmd+Alt+I` / `Ctrl+Alt+I`)
- In the model dropdown at the bottom:
  - Click Manage Models…
  - Enable LLM Gateway as a provider
  - Enter your gateway URL: `http://localhost:3000`
  - Select the model name you want (e.g., `gpt-4`)
From now on, when you pick that model in Copilot chat:
- Copilot → sends request → your gateway
- Gateway → injects Core Directive → OpenAI model
- Response comes back under your rule
You've effectively got: "Copilot, but governed by: the inalienable right to pursue happiness."
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/v1/models` | GET | List available models |
| `/v1/chat/completions` | POST | Chat completions (with Core Directive injection) |
| `/v1/completions` | POST | Text completions (with Core Directive injection) |
| Variable | Default | Description |
|---|---|---|
| `GATEWAY_PORT` | `3000` | Port for the gateway server |
| `OPENAI_API_KEY` | (required) | Your OpenAI API key |
| `OPENAI_BASE_URL` | `https://api.openai.com` | OpenAI API base URL |
| `DEFAULT_MODEL` | `gpt-4` | Default model to use |
| `CORE_DIRECTIVE` | (see code) | The governing principle injected into requests |
When a request comes in:
- If there's no system message, the Core Directive is added as the first system message
- If there's an existing system message, the Core Directive is prepended to it
- The modified request is forwarded to OpenAI
- The response is returned unchanged to the client
This ensures your governing principles are always in effect, while preserving any additional context from the client.
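The injection rules above can be sketched in Python (the gateway itself runs on Node.js; this is an illustrative model of the same logic, and the directive string is shortened from the `.env` example):

```python
CORE_DIRECTIVE = ("You are governed by the following core directive: "
                  "The inalienable right to pursue happiness is paramount.")

def inject_directive(messages: list[dict]) -> list[dict]:
    """Apply the gateway's injection rules to an OpenAI-style message list."""
    messages = [dict(m) for m in messages]  # don't mutate the caller's list
    if messages and messages[0].get("role") == "system":
        # Existing system message: prepend the Core Directive to it.
        messages[0]["content"] = CORE_DIRECTIVE + "\n\n" + messages[0]["content"]
    else:
        # No system message: add the Core Directive as the first one.
        messages.insert(0, {"role": "system", "content": CORE_DIRECTIVE})
    return messages
```

The modified list is what gets forwarded upstream; the client's own messages are preserved untouched after the directive.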
You can enhance your LLM Gateway with additional capabilities using MCP (Model Context Protocol) servers. MCP servers provide tools and resources that extend what AI assistants can do.
The Brave Search MCP server adds web search capabilities to AI assistants, allowing them to search the web and retrieve current information.
- Node.js v18.x, v20.x, or v22.x (LTS versions recommended)
- A Brave Search API key (free tier available)
- An MCP-compatible client (Claude Desktop, Cursor, Windsurf, etc.)
Use the Smithery CLI to install the Brave Search MCP server. Replace <client> with your MCP client choice (e.g., claude, cursor, windsurf, cline):
```bash
npx -y @smithery/cli install brave --client <client>
```

You'll be prompted for:
- Your Brave Search API key
- Optional telemetry preferences
Alternatively, you can provide the configuration via command line to skip prompts:
```bash
npx -y @smithery/cli install brave --client <client> --config '{"BRAVE_API_KEY":"your_api_key_here"}'
```

Example for Claude Desktop:

```bash
npx -y @smithery/cli install brave --client claude --config '{"BRAVE_API_KEY":"your_api_key_here"}'
```

The Smithery CLI will:
- Download and configure the Brave Search MCP server
- Update your AI client's configuration file (e.g., `claude_desktop_config.json`)
- Enable the `brave_web_search` and `brave_local_search` tools in your AI assistant
Once installed, your AI assistant will have access to:
- `brave_web_search`: Search the web for current information
- `brave_local_search`: Perform local business and location searches
The AI can now answer questions about recent events, current prices, news, and other time-sensitive information.
If you prefer to configure manually, add the following to your MCP client's configuration file:
```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "YOUR_BRAVE_API_KEY_HERE"
      }
    }
  }
}
```

For more MCP servers and capabilities, visit the Smithery Registry.
```bash
npm test
```

For the Python governance tests:

```bash
python -m unittest test_governance -v
# or
python -m pytest test_governance.py -v
```

We welcome contributions! Please see our Contributing Guide for details on:
- How to submit issues and pull requests
- Code style guidelines
- Testing requirements
- Code of conduct
All contributions must align with our Core Directive.
Please see our Security Policy for information on:
- Reporting vulnerabilities
- Security best practices
- Supported versions
This is free and unencumbered software released under the Credibility License (LICENSE).
This project is dedicated to the public good. The Core Directive belongs to humanity.
A universal governance kernel for AI systems and digital interactions that protects every individual's inalienable right to pursue happiness.
This project implements a foundational governance layer designed to be integrated into AI systems, digital platforms, and autonomous services. The Core Directive serves as the ethical plumbing of civilization - a simple, universal, and computable principle that guides all interactions.
"No action may interfere with another person's inalienable right to pursue happiness."
This directive is:
- Universal - Understood across cultures and contexts
- Atomic - Self-contained without requiring sub-rules
- Computable - Machine-evaluable for automated enforcement
- Liberating - Maximizes freedom while preventing harm to others
- Adaptable - Works across all domains and platforms
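As a toy illustration of the "Computable" property, the directive can be reduced to a machine-evaluable predicate over a described action. The keyword check below is a hypothetical stand-in, not the project's actual evaluator (which is shown later via `core_directive.evaluate`):

```python
def directive_allows(action: str,
                     blocked_terms=("coerce", "exploit", "deceive", "force")) -> bool:
    """Naive check: an action is allowed unless its description contains a
    term that signals interference with another person's pursuit of happiness."""
    text = action.lower()
    return not any(term in text for term in blocked_terms)
```

A real evaluator would use far richer signals than substring matching, but the shape is the same: intent in, allowed/blocked decision out.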
The governance layer consists of four main components:
```
┌──────────────────────────────────────────────────────────┐
│                    Governance Gateway                    │
│  ┌────────────┐    ┌────────────┐    ┌────────────┐      │
│  │ Middleware │───▶│  Gateway   │───▶│   Routes   │      │
│  └────────────┘    └─────┬──────┘    └────────────┘      │
│                          │                               │
│                          ▼                               │
│  ┌───────────────────────────────────────────────────┐   │
│  │             Core Directive Evaluator              │   │
│  │  ┌────────────┐  ┌────────────┐  ┌─────────────┐  │   │
│  │  │   Impact   │  │  Conflict  │  │    Score    │  │   │
│  │  │ Assessment │  │ Detection  │  │ Calculation │  │   │
│  │  └────────────┘  └────────────┘  └─────────────┘  │   │
│  └───────────────────────────────────────────────────┘   │
│                          │                               │
│                          ▼                               │
│  ┌───────────────────────────────────────────────────┐   │
│  │                  Core Directive                   │   │
│  │ "No action may interfere with another person's    │   │
│  │  inalienable right to pursue happiness."          │   │
│  └───────────────────────────────────────────────────┘   │
└──────────────────────────────────────────────────────────┘
```
The foundational module containing the Core Directive and basic evaluation logic.
```python
from core_directive import CoreDirective, evaluate, is_allowed

# Create a directive instance
directive = CoreDirective()

# Get the system message for AI integration
system_message = directive.get_system_message()

# Evaluate an intent
result = evaluate("I want to help people learn")
print(result.result)  # ActionResult.ALLOWED

# Quick check
if is_allowed("help others"):
    print("Action permitted")
```

Wrapper for AI models that enforces the Core Directive on all interactions.
```python
from ai_client import create_test_client, GovernedAIClient

# Create a governed AI client
client = create_test_client()

# Process a request through governance
response = client.process("Help me understand machine learning")
print(response.content)
print(response.directive_evaluation.result)
```

Gateway architecture for applying governance globally across services.
```python
from gateway import create_gateway, GatewayRequest

# Create a gateway
gateway = create_gateway()

# Process a request
request = GatewayRequest.create("I want to create something helpful", source="user")
response = gateway.process(request)

# Check the audit log
print(gateway.export_audit_log())
```

Sophisticated multi-factor evaluation with impact and conflict analysis.
```python
from evaluator import evaluate_detailed

# Get detailed evaluation
result = evaluate_detailed("I want to support the community")
print(f"Score: {result.overall_score}")
print(f"Impacts: {len(result.impacts)}")
print(f"Conflicts: {len(result.conflicts)}")
print(f"Recommendations: {result.recommendations}")
```

The Core Directive is supported by seven guiding principles:
- Protect autonomy - Every person has the right to make their own choices
- Block exploitation - No person may be used as a means without consent
- Suggest alternatives - When an action is blocked, offer constructive options
- Identify coercion - Recognize and flag attempts to manipulate or force
- Flag harm - Alert when actions may cause damage to others
- Resolve conflicts - Facilitate fair resolution between competing interests
- Maximize well-being - Support collective flourishing without oppression
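One way such principles could be made operational is a rule table mapping each principle to a check and a remediation hint. The sketch below is illustrative only; the `Principle` class, the `review` helper, and the two sample rules are hypothetical and not part of this project's code:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Principle:
    name: str
    check: Callable[[str], bool]   # returns True if the intent passes
    on_fail: str                   # constructive alternative to suggest

# Two sample rules; a real table would cover all seven principles
# with much more robust checks than substring matching.
PRINCIPLES = [
    Principle("Block exploitation",
              lambda intent: "exploit" not in intent.lower(),
              "Seek informed consent from everyone involved."),
    Principle("Identify coercion",
              lambda intent: "force" not in intent.lower(),
              "Replace pressure with a voluntary alternative."),
]

def review(intent: str) -> list[str]:
    """Return remediation advice for every principle the intent fails."""
    return [p.on_fail for p in PRINCIPLES if not p.check(intent)]
```

This structure also satisfies the "Suggest alternatives" principle directly: a blocked intent always comes back with constructive options rather than a bare refusal.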
This governance layer is designed for integration into:
- AI assistants and chatbots
- Autonomous decision systems
- Content moderation systems
- Social media platforms
- E-commerce and financial services
- Healthcare triage systems
- Smart city infrastructure
- Robotics and autonomous vehicles
- Identity verification systems
- Conflict resolution platforms
```bash
python -m pytest test_governance.py -v
```

Or using unittest:

```bash
python -m unittest test_governance -v
```

This project requires Python 3.10+ and has no external dependencies.
```bash
# Clone the repository
git clone https://github.com/dshvvvshr/Broken_vowels.git
cd Broken_vowels

# Run tests to verify installation
python -m unittest test_governance -v
```

Contributions are welcome! The goal is to build a universal governance layer that can be adopted across all AI systems and digital platforms.
See our Contributing Guide for details on:
- How to submit issues and pull requests
- Code style guidelines
- Testing requirements
- Code of conduct
For a curated list of Python libraries, tools, and resources relevant to this project, see:
RESOURCES.md - Python resources for AI, ethics, and development
This includes links to the Awesome Python collection and other valuable resources for building ethical AI systems.
This project is dedicated to the public good. The Core Directive belongs to humanity.
Building the ethical plumbing of civilization, one directive at a time.
Building something without learning any code. Period.
From the marketplace docs:
- Install "GitHub Copilot LLM Gateway" in VS Code.
- Open VS Code Settings (`Ctrl+,` or `Cmd+,` on macOS) and search for: Copilot LLM Gateway
- Set Server URL to your LLM Gateway server endpoint (e.g., `https://your-server.example.com/api`).
To run the web server locally:
```bash
python3 server.py
```

Then open your browser and navigate to http://localhost:8000/
For those who want to use self-hosted open-source language models with GitHub Copilot, check out the GitHub Copilot LLM Gateway VS Code extension.
- Data Sovereignty - Your code never leaves your network
- Zero API Costs - No per-token fees with your own GPU resources
- Model Choice - Access thousands of open-source models
- Offline Capable - Work without internet once models are downloaded
- vLLM - High-performance inference
- Ollama - Easy local deployment
- llama.cpp - CPU and GPU inference
- Text Generation Inference - Hugging Face's server
- Any OpenAI Chat Completions API-compatible endpoint
- Install the GitHub Copilot LLM Gateway extension from VS Code Marketplace
- Start your inference server (e.g., vLLM with Qwen3-8B)
- Configure the extension with your server URL
- Select your model in GitHub Copilot Chat
The core AI for all information passing through any signal - online or offline. This is what the future holds.
This project aims to create a foundational AI system designed to process and handle information across all types of signals, whether connected to the internet or operating in offline environments.
This repository documents and implements the Custodian Kernel Core Directive - a philosophical framework for human interaction based on a fundamental truth:
Every person has an equal, inalienable right to pursue happiness.
This is not about "doing whatever you want." It's about understanding that:
- It's not about my happiness. It's about everyone else's.
- Every moment, in every thought and action, ask: "Am I fucking anyone over?"
- The right exists whether you acknowledge it or not - that's what "inalienable" means
- Does this infringe on anyone else's pursuit?
- Am I fucking anyone over?
- Am I making up a rule to force people to do what I do or think like I think?
The answer will always be simple: Yes or No.
Core Kernel (Start Here):
- New to this? Start with QUICK_REFERENCE.md for a one-page overview
- Read CUSTODIAN_KERNEL.md for the complete philosophical framework
- See CODE_OF_CONDUCT.md for how this applies in communities
- Check EXAMPLES.md for practical applications
- Explore IMPLEMENTATION_GUIDE.md for daily practices
- Browse FAQ.md for answers to common questions
Peripheral Layers (Applications):
- PERIPHERAL_LAYERS/ - Technology-specific applications of the kernel
- RF Sensing & Surveillance - Wireless sensing ethics
- 6G Neural Drones & BCIs - Brain-computer interface ethics
From: "I have the right to pursue happiness" = "No one can tell me what to do"
To: "Everyone has the right to pursue happiness" = "I must constantly ensure I'm not crushing anyone else's pursuit"
We need humanity to embody this principle. Not because it's a nice idea, but because it's the only stable foundation for collective existence.
The vision: From individuals to communities to the entire world, people adopt the simple practice of not fucking each other over.
You don't decide whether we have this right.
You only decide whether you'll honor it.
Chat Completions API with Core Directive Wrapper.
This API provides an endpoint at http://localhost:8000/v1/chat/completions that wraps every request with a Core Directive. The Core Directive is prepended to all chat completion requests as a system message.
```bash
pip install -r requirements.txt
```

```bash
python run.py
```

The server will start at http://localhost:8000.
- `POST /v1/chat/completions` - Chat completions with Core Directive wrapping
- `GET /health` - Health check endpoint
- `GET /` - API information
```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Every request that hits the `/v1/chat/completions` endpoint gets the Core Directive wrapped around it. The Core Directive is added as a system message to guide the AI's behavior.