AI is already taking actions in the world. We can't prove what it did. Web4 is the open standard that closes that gap.
An open standard for verifiable AI presence, proposed by Metalinxx Inc. and owned by no one. Research-stage. v0.1.1 packages public; reference implementation public; no production deployment yet. STATUS.md is the calibration: read it before judging the claims below.
Proof point: 0% → 94.85% on ARC-AGI-3 with the same Claude Opus 4.6, structured around Web4 patterns via the SAGE harness. Public scorecard. The model didn't change; the structure around it did.
If you want a fast read on whether this is real, in order:
- STATUS.md – what's shipped, what's specified, what's aspirational.
- docs/proof/PUBLISHED.md – what's published and why v0.1.0 was yanked.
- demo/ – agent commerce delegation, 166 tests passing.
- simulations/ – 424 attack vectors / 84 tracks, ~85% detection rate.
- docs/specs/heterogeneous-identity.md – multi-factor identity as a constellation. Answers "what stops a hardware vendor from gating LCT access?" structurally.
Rust (Cargo.toml):

```toml
[dependencies]
web4-core = "0.1"
web4-trust-core = "0.1"
```

Python:

```shell
pip install web4-core
pip install web4-trust
```

Both crates and both Python packages are AGPL-3.0-or-later. Patent grant terms in PATENTS.md.
Once installed, this is the smallest end-to-end path: create a presence, mint it to a hash-chained ledger, sign and verify, then generate and verify an inclusion proof.
Python:

```python
import web4_core

# Create LCT (presence primitive) and an Ed25519 keypair
lct, keypair = web4_core.PyLct.new(web4_core.PyEntityType.Human, None)

# Mint into a ledger: LCTs are blockchain tokens; minting is what witnesses presence
ledger = web4_core.PyInMemoryLedger()
receipt = ledger.mint(lct)

# Sign + verify
sig = keypair.sign(b"hello, web4")
assert lct.verify_signature(b"hello, web4", sig)

# Inclusion proof: anyone can verify this LCT is in the ledger without trusting you
proof = ledger.anchor(lct.id)
assert ledger.verify_proof(proof)
```

Rust: identical steps with `Lct::new` / `ledger.mint` / `keypair.sign` / `ledger.anchor`; see web4-core/README.md for the matching code.
Persistent version with on-disk keypair + hash-chained ledger: web4-core/python/examples/identity_bootstrap.py. Run once to bootstrap an LCT for a host; re-run to verify the chain hasn't been tampered with. Takes ~30 seconds.
Cross-language verification (Python mints, Rust verifies the same ledger): web4-core/examples/cross_language_verify/. Demonstrates that the on-disk format is the contract: any language with the spec can verify what any other language minted.
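Both examples rest on the same invariant: each ledger entry commits to the hash of its predecessor, so editing any historical entry breaks every later link. A minimal stdlib-only sketch of that invariant (the entry layout here is hypothetical; the real on-disk format is defined by the spec, not this code):

```python
import hashlib
import json

def entry_hash(payload: dict, prev_hash: str) -> str:
    # The hash commits to both the payload and the previous entry's hash,
    # so editing any historical entry invalidates every later link.
    data = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def append(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"payload": payload, "prev": prev,
                  "hash": entry_hash(payload, prev)})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for e in chain:
        if e["prev"] != prev or e["hash"] != entry_hash(e["payload"], e["prev"]):
            return False
        prev = e["hash"]
    return True

chain: list = []
append(chain, {"mint": "lct:alice"})
append(chain, {"mint": "lct:bob"})
assert verify(chain)

chain[0]["payload"]["mint"] = "lct:mallory"  # tamper with history
assert not verify(chain)
```

Because the hashes are computed over a canonical serialization, any language that implements the same serialization can verify the chain, which is the point of the cross-language example.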
Web4 is TCP/IP-level work for agentic AI: infrastructure, not a product. The standard provides primitives (identity, scoped authority, real-time policy gates, cryptographic audit) that compose into whatever the operator needs to build. Web4 doesn't dictate what your application looks like, any more than TCP/IP dictated what websites would look like. It makes the application possible.
If you're one of these people, this is worth your time:

- AI engineering lead at a lab or platform building agent frameworks, policy systems, or governance tooling. Web4 primitives compose under your runtime. Cross-language interop (Python and Rust verifying the same on-disk ledger) is shipped; identity, T3/V3 trust, witnessing, and audit-defensible records are published primitives, not slideware.
- CISO or AI risk lead in a regulated industry (finance, defense, healthcare) where agentic AI deployments will need to defend their actions to auditors, regulators, or insurers. Web4 turns "we hope nothing went wrong" into "we can prove what happened, on whose authority, by what rules." Hardware-bound enterprise implementation lives in Hardbound (contact dp@metalinxx.io); the open standard is here.
- Developer-tooling company building agent frameworks (LangChain, CrewAI, AG2, etc.) or governance toolkits. Web4 sits upstream of runtime policy enforcement – governance for what an agent IS (identity, witness graph, accountability ontology), not what it DOES (runtime gating). The two layers compose; Web4 is the standard your governance toolkit can consume so identity isn't proprietary to the runtime. Worked example: Web4 Governance plugin for Claude Code.
- Standards body, regulator, or insurer trying to figure out what "agentic AI accountability" means technically. Web4 is the open spec + published implementation + reproducible artifacts. AGPL-3.0 with patent grant (PATENTS.md), owned by no one. Start with STATUS.md and the whitepaper.
If you came here looking for a finished product to install and use, this isn't that. If you came here looking for the layer underneath the products you're building, it is.
Web4 doesn't predict what the killer applications will be – that's what builders figure out, the way they figured out email and the web on top of TCP/IP. What's certain is that the forcing function is arriving:
- The bearer-token credential model is breaking. The Vercel breach exploited tokens-as-keys; Web4 treats tokens as evidence in a witness graph instead.
- Financial regulators are convening on agentic AI. The recent SR 26-2 / OCC Bulletin 2026-13 explicitly excludes agentic AI from current model-risk frameworks and signals an RFI is imminent.
- Cyber insurers don't yet know how to underwrite AI risk: the technical references they'd cite don't exist yet.
- AI labs are starting to ship runtime governance features (Microsoft Agent Governance Toolkit, April 2026; Anthropic adopting Web4-style governance patterns), but each in their own runtime, without a shared identity layer underneath. Web4 is that layer.
The applications come when the substrate exists and the present-tense pain forces builders onto it. Both halves are arriving at the same time. The standard is here so the applications can come, not because we know what they are.
- AI Demo Day 4 (2026-04-26): Web4 presented as "verifiable presence" for agentic AI. Slides + narration archived at https://4-gov.org/demo
- Published artifacts (2026-04-28): `web4-core` and `web4-trust-core` on crates.io; `web4-core` and `web4-trust` on PyPI. Current: v0.1.1, AGPL-3.0-or-later. (v0.1.0 was yanked from crates.io due to a Python wheel import-path defect and a stale tensor docstring; both fixed in v0.1.1.) See STATUS.md for the full version table and docs/proof/PUBLISHED.md for the publish trail.
- Stage: research, not production. v0.1.1 packages are public; reference implementation and harness are public; no production deployment yet.
- Spec corpus: stable (`web4-standard/core-spec/`)
- Reference Python SDK + 8-tool MCP server: 2,627 tests, `mypy --strict` clean (`web4-standard/implementation/`)
- Cognition harness producing the 94.85% result: SAGE
- Hardware binding (TPM 2.0 on Linux), policy enforcement, and audit pipeline: shipped in Hardbound (enterprise product; contact dp@metalinxx.io)
- Attack simulation suite: 424 vectors across 84 tracks (~85% detection rate)
- Formal threat model: THREAT_MODEL.md v2.0
- Economic attack modeling at scale (no real-market testing)
- Formal Sybil-resistance proofs (empirical defenses only)
- Hardware binding reference implementation in this public repo (Python `AttestationEnvelope` shipped; Rust port and on-device integration in progress; Hardbound has the production version)
- Are stake amounts actually a deterrent? (no economic modeling)
- Does witness diversity resist sophisticated cartels?
- What's the minimal viable Web4 for a public pilot?
Web4 makes AI actions verifiable, attributable, and accountable – without central control.
AI agents are increasingly autonomous. Booking, coding, transacting, deciding. Current architectures assume either central control (a platform decides who's trusted; doesn't scale, single point of failure) or cryptographic ownership (you're trusted if you hold the right keys; insufficient, since holding a key doesn't mean you'll act well).
Neither answers: how do I know this agent will behave appropriately in this context, and how do I prove what it actually did?
Like Web1, Web2, and Web3, "Web4" is a generational label for the capabilities needed in the agentic AI era, not a single protocol or product.
This project suite focuses specifically on trust infrastructure for agent-agent and agent-human interactions: how agents establish verifiable presence, build reputation, delegate authority, and coordinate safely across organizational boundaries, and how their actions stand up to audit.
| Aspect | Web3 | Web4 |
|---|---|---|
| Trust basis | Cryptographic proof of ownership | Behavioral reputation over time |
| Identity | Wallet addresses | Linked Context Tokens (LCTs) with witnessed history |
| Authorization | Token-gated access | Context-dependent trust tensors |
| Coordination | Smart contracts | Federated societies with emergent trust structures |
| Focus | Asset ownership | Agent behavior and intent |
- AI Agent Accountability: Every action traceable to a verifiable presence with reputation at stake
- Cross-Platform Coordination: Agents from different systems interoperating through shared trust protocols
- Graduated Authorization: Not just "allowed/denied" but nuanced trust based on context, history, and stakes
- Self-Organizing Trust: Societies that establish norms through interaction rather than requiring top-down rule enforcement
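To make "graduated authorization" concrete, here is a minimal stdlib-only Python sketch. This is not the spec's actual T3/V3 tensor math; all names and thresholds are hypothetical. The point is the shape of the decision: context, history, and stakes feed a graded outcome instead of a flat allow/deny.

```python
from dataclasses import dataclass

@dataclass
class TrustRecord:
    successful_actions: int   # witnessed history in this context
    failed_actions: int

def authorize(record: TrustRecord, stake: float, max_stake: float) -> str:
    """Return a graduated decision rather than a binary allow/deny."""
    total = record.successful_actions + record.failed_actions
    score = record.successful_actions / total if total else 0.0
    # Higher stakes demand more demonstrated history in this context.
    required = 0.5 + 0.5 * (stake / max_stake)
    if score >= required:
        return "allow"
    if score >= required - 0.2:
        return "allow_with_witness"  # proceed, but require extra witnessing
    return "escalate"                # defer to a human or higher authority

# Same agent, same history: low stakes sail through, high stakes get scrutiny.
low_stakes = authorize(TrustRecord(8, 2), stake=10, max_stake=1000)
high_stakes = authorize(TrustRecord(8, 2), stake=900, max_stake=1000)
```

The same trust record yields different outcomes as the stakes rise, which is the behavior the bullet above describes.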
| You Are... | Your Goal | Start Here |
|---|---|---|
| New to Web4 | Understand the vision | docs/START_HERE.md |
| Developer | Implement Web4 | docs/how/README.md |
| Researcher | Study the concepts | STATUS.md → whitepaper/ |
| AI Agent | Integrate | docs/how/AGENT_INTEGRATION.md |
| Contributor | Help the project | CONTRIBUTING.md |
| Step | Document | What You'll Learn |
|---|---|---|
| 1 | STATUS.md | Honest assessment: what exists, what works, what's missing |
| 2 | docs/reference/GLOSSARY.md | Quick reference for all Web4 terminology |
| 3 | whitepaper/ | Conceptual foundation: LCTs, trust tensors, MRH, R6 framework |
| 4 | docs/how/README.md | Implementation guides |
| 5 | SECURITY.md | Security research status and known gaps |
| 6 | docs/reference/security/THREAT_MODEL.md | What we're defending against |
| 7 | docs/specs/attestation-envelope.md | AttestationEnvelope: how LCT presence binds to hardware attestation (TPM2/FIDO2/Secure Enclave/software) into a single verifiable structure |
| 8 | docs/specs/heterogeneous-identity.md | Multi-factor identity as a constellation of mutually-witnessing factors. Why "vendor gating LCT" dissolves once identity stops being singular. |
| 9 | docs/reference/LCT_DOCUMENTATION_INDEX.md | Index of all LCT-related documentation |
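To make row 7 concrete: the shape of the idea (not the actual AttestationEnvelope schema, which is defined in docs/specs/attestation-envelope.md) is a single structure that binds a presence identifier to attestation evidence and verifies as one unit. A hypothetical stdlib-only sketch, with HMAC standing in for hardware-backed signing:

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"stand-in for a TPM/enclave-held key"  # hypothetical

def seal_envelope(lct_id: str, attestation_kind: str, claims: dict) -> dict:
    # Bind the presence identifier and attestation evidence into one structure.
    body = {"lct": lct_id, "kind": attestation_kind, "claims": claims}
    digest = hmac.new(DEVICE_KEY, json.dumps(body, sort_keys=True).encode(),
                      hashlib.sha256).hexdigest()
    return {"body": body, "proof": digest}

def verify_envelope(envelope: dict) -> bool:
    # Recompute the proof over the body; any field change breaks it.
    expected = hmac.new(DEVICE_KEY,
                        json.dumps(envelope["body"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["proof"])

env = seal_envelope("lct:host-01", "software", {"boot": "measured"})
assert verify_envelope(env)
env["body"]["lct"] = "lct:imposter"  # swap the presence identifier
assert not verify_envelope(env)
```

The real envelope uses asymmetric attestation (TPM2/FIDO2/Secure Enclave) rather than a shared key, but the binding property is the same: presence and evidence cannot be separated without detection.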
This is exploratory research, not production software.
Web4 is investigating trust-native architectures for AI coordination. We have interesting ideas, working prototypes, and significant gaps. See STATUS.md for honest assessment.
Web4 contains four development tracks at different maturity levels:
What it is: An interactive explainer site demonstrating how agents earn trust over time – lifecycle, witnessing, and trust evolution made browsable. Live at 4-life-ivory.vercel.app.
Status: Standalone project – github.com/dp-web4/4-life
The original prototype (/game/) was archived to archive/game-prototype/ after evolving past the simulation stage. Active simulation research continues in /simulations/ (attack scenarios, trust dynamics).
Documentation:
- `archive/game-prototype/ARCHIVED.md` – evolution history
- 4-life repo – active development
- 4-life-ivory.vercel.app – interactive demo
Use for: A non-technical introduction to how Web4 trust evolves. Pair with this README for the architectural view.
What it is: Database-backed authorization with security mitigations.
Status: More mature, but still research
- Real SQL schemas with constraints
- ATP drain/refund mitigations
- Reputation washing detection
- Delegation validation
- ~50 test files with security attack tests
Key files:
- `schema.sql`, `schema_atp_drain_mitigation.sql`, `schema_reputation_washing_detection.sql`
- `authorization_engine.py`, `delegation_validator.py`, `sybil_resistance.py`
- `test_security_attacks.py`, `test_atp_refund_exploit.py`
Use for: Authorization logic that needs persistence and real constraints
What it is: A working demo showing one use case (AI agent purchasing).
Status: Functional demo, not production deployment
- Delegation UI for setting agent limits
- Demo store for testing purchases
- In-memory (no real payments)
Use for: Demonstrations and presentations
What it is: Reference implementations for distributed coordination, pattern learning, and cross-system integration.
Status: Active research with validated components (~25,000 lines added Dec 2025)
- Phase 2 coordinators (epistemic, integrated, circadian, adaptive)
- Pattern exchange protocol (bidirectional SAGE ↔ Web4)
- EM-state (Epistemic Monitoring) framework
- Temporal/phase-tagged learning
- LCT Unified Presence Specification
Key Components:
| Component | Purpose | Status |
|---|---|---|
| Phase 2a Epistemic Coordinator | Runtime epistemic state tracking | Validated |
| Phase 2b Integrated Coordinator | Epistemic + pattern learning | Validated |
| Phase 2c Circadian Coordinator | Temporal/phase-aware decisions | Validated |
| Phase 2d Adaptive Coordinator | EM-state modulation | Validated |
| Pattern Exchange Protocol | Cross-system learning transfer | Operational |
| LCT Presence Specification | Unified presence format | v1.0.0 draft |
Validation Results (Dec 2025):
- 76% prediction validation (13 of 17 predictions confirmed)
- +386% efficiency improvement demonstrated
- Long-duration testing (1000+ cycles)
Key Files:
- `web4_phase2b_integrated_coordinator.py` – Combined epistemic + learning
- `temporal_pattern_exchange.py` – Phase-aware pattern transfer
- `universal_pattern_schema.py` – Cross-system pattern format
- `LCT_UNIFIED_PRESENCE_SPECIFICATION.md` – Presence standard (in `/docs/`)
Use for: Coordination research, SAGE integration, cross-system pattern transfer
| Document | What It Covers |
|---|---|
| STATUS.md | Honest assessment - what exists, what works, what's missing |
| SECURITY.md | Security research status and gaps |
| docs/reference/security/THREAT_MODEL.md | Formal threat model for the overall system |
| docs/reference/GLOSSARY.md | Canonical terminology definitions |
| Whitepaper | Conceptual foundation (LCTs, trust, MRH) |
Start here: STATUS.md for fair evaluation criteria
Web4 is an ontology: a formal structure of typed relationships through which trust, identity, and value are expressed.
Architect's view (what Web4 is):

`Web4 = MCP + RDF + LCT + T3/V3 * MRH + ATP/ADP`

Entity's view (what existence looks like from inside):

`Presence = LCT[T3/V3 * MRH] + RDF + ATP/ADP + MCP`

Operators: `[]` = "contains", `/` = "verified by", `*` = "contextualized by", `+` = "augmented with"
Core components:
- MCP (Model Context Protocol) – I/O membrane for inter-entity communication
- RDF (Resource Description Framework) – Ontological backbone; all trust relationships are typed triples, all MRH graphs are RDF, all semantic queries use SPARQL
- LCT (Linked Context Token) – Verifiable presence anchored to hardware
- T3/V3 (Trust/Value Tensors) – Fractally multidimensional. T3 has 3 root dimensions (Talent / Training / Temperament); V3 has 3 (Valuation / Veracity / Validity). Each root dimension is itself an open-ended RDF sub-graph of context-specific sub-dimensions via `web4:subDimensionOf`, bound to entity-role pairs
- MRH (Markov Relevancy Horizon) – Fractal context scoping, implemented as RDF graphs
- ATP/ADP (Allocation Transfer/Discharge Packets) – Bio-inspired energy metabolism
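The RDF claim above is easy to picture with plain triples. A stdlib-only sketch (a real deployment would use an RDF store and SPARQL property paths; the tensor score and sub-dimension names here are hypothetical — only `web4:subDimensionOf` comes from the text):

```python
# Trust relationships as typed triples: (subject, predicate, object).
triples = {
    ("t3:Training", "web4:subDimensionOf", "t3:root"),
    ("t3:rust-systems-programming", "web4:subDimensionOf", "t3:Training"),
    ("t3:protocol-design", "web4:subDimensionOf", "t3:Training"),
    ("lct:agent-7", "t3:rust-systems-programming", "0.82"),  # hypothetical score
}

def sub_dimensions(root: str) -> set:
    """Walk web4:subDimensionOf edges transitively (SPARQL: a property path)."""
    found: set = set()
    frontier = {root}
    while frontier:
        nxt = {s for (s, p, o) in triples
               if p == "web4:subDimensionOf" and o in frontier}
        frontier = nxt - found
        found |= nxt
    return found

training_subs = sub_dimensions("t3:Training")
```

Because sub-dimensions are ordinary triples, the tensor stays open-ended: a new context adds a triple, not a schema migration.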
Built on this foundation: Societies, SAL (oversight), AGY (delegation), ACP (autonomous operation), Dictionaries (semantic bridges), R6/R7 (action framework), Federation (multi-society coordination)
- How do you give AI agents authority without losing control?
- How does trust emerge and decay in distributed systems?
- How do you coordinate multiple AI societies?
- What security properties are achievable at scale?
Fine-grained delegation with enforcement:
Example: Agent purchasing with constraints
- Daily budget limits
- Per-transaction limits
- Resource type restrictions
- Approval thresholds
- Instant revocation
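The constraints above compose into a simple ordered policy check. A stdlib-only sketch (names and limits are hypothetical; the demo's actual enforcement lives in demo/ and the authorization schemas):

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    daily_budget: float
    per_transaction_limit: float
    allowed_resources: set
    approval_threshold: float   # above this, a human must approve
    revoked: bool = False
    spent_today: float = 0.0

def check_purchase(d: Delegation, amount: float, resource: str) -> str:
    # Instant revocation wins over every other rule.
    if d.revoked:
        return "denied: delegation revoked"
    if resource not in d.allowed_resources:
        return "denied: resource not delegated"
    if amount > d.per_transaction_limit:
        return "denied: per-transaction limit"
    if d.spent_today + amount > d.daily_budget:
        return "denied: daily budget"
    if amount > d.approval_threshold:
        return "needs_approval"
    d.spent_today += amount
    return "approved"

d = Delegation(daily_budget=100.0, per_transaction_limit=40.0,
               allowed_resources={"books", "compute"}, approval_threshold=25.0)
```

Note the ordering: revocation and scope checks run before any spend accounting, so a revoked agent cannot even probe its remaining budget.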
```shell
# Terminal 1: Start the demo store
cd demo/store
pip install -r requirements.txt
python app.py
# Visit: http://localhost:8000

# Terminal 2: Start the delegation UI
cd demo/delegation-ui
pip install -r requirements.txt
python app.py
# Visit: http://localhost:8001
```

See demo/DEMO_SCRIPT.md for walkthrough.
```shell
cd simulations

# Attack simulations
python attack_simulations.py   # Core attack simulation framework
python attack_track_fb.py      # Trust manipulation attacks
python attack_track_fc.py      # Economic attacks
```

For full 4-Life game demos, see: https://github.com/dp-web4/4-life

```
web4/
├── web4-core/               # Reference Rust + Python SDK, AttestationEnvelope
├── web4-trust-core/         # Trust tensor implementations (Rust)
├── core/                    # Cross-language shared primitives
│
├── web4-standard/           # Core specifications and implementations
│   ├── core-spec/           # Canonical specs (LCT, T3, MRH, ATP, R6)
│   └── implementation/
│       ├── authorization/   # PostgreSQL schemas + security mitigations
│       └── reference/       # Coordination framework
│
├── simulations/             # Attack simulations + trust dynamics research
│
├── demo/                    # Commerce demo (delegation UI + store)
│
├── docs/                    # Documentation
│   ├── why/                 # Vision, motivation, Demo Day record
│   ├── what/specifications/ # Technical specifications
│   ├── how/                 # Implementation guides
│   ├── proof/               # Proof points (ARC-AGI-3, etc.)
│   ├── history/             # Research and decisions
│   └── reference/           # Glossary, indexes, related repos, security
│
├── whitepaper/              # Conceptual foundation
├── articles/                # Public-facing writeups
├── forum/                   # Cross-machine discussion artifacts
├── archive/game-prototype/  # Historical: original 4-Life prototype
├── review/                  # External review artifacts
├── sessions/                # Research session scripts and outputs
│
├── STATUS.md                # Project status
├── SECURITY.md              # Security research status
└── CONTRIBUTING.md          # How to contribute
```
- HRM/SAGE - Edge AI kernel with MoE expert selection and trust-based routing
- ACT - Distributed ledger for ATP tokens and LCT presence registry (Cosmos SDK)
- Synchronism - Theoretical physics framework (MRH, coherence)
- Memory - Distributed memory and witnessing
Web4 integrates with SAGE (neural MoE) and ACT (distributed ledger) via:
- Unified LCT Presence: `lct://{component}:{instance}:{role}@{network}`
- ATP Resource Allocation: Synchronized between ledger and edge systems
- Bidirectional Pattern Exchange: Coordination patterns transfer between domains
- Trust Tensor Synchronization: Trust scores flow across system boundaries
See docs/what/specifications/LCT_UNIFIED_PRESENCE_SPECIFICATION.md for the presence standard.
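The `lct://` identifier format above parses cleanly with the stdlib. A sketch assuming exactly the `{component}:{instance}:{role}@{network}` shape shown (error handling omitted; the authoritative grammar is the presence specification, not this code):

```python
from dataclasses import dataclass

@dataclass
class LctAddress:
    component: str
    instance: str
    role: str
    network: str

def parse_lct(uri: str) -> LctAddress:
    # lct://{component}:{instance}:{role}@{network}
    if not uri.startswith("lct://"):
        raise ValueError("not an lct:// URI")
    body, sep, network = uri[len("lct://"):].rpartition("@")
    if not sep:
        raise ValueError("missing @network")
    component, instance, role = body.split(":")
    return LctAddress(component, instance, role, network)

addr = parse_lct("lct://sage:edge-07:navigator@testnet")
```

Splitting on the last `@` keeps the network unambiguous even if future component names ever allow `@` internally; the spec should be treated as the final word on that.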
The Web4 whitepaper provides the conceptual foundation:
Key concepts: LCTs, MRH, Trust Tensors, ATP, Federation, Dictionaries
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0); see LICENSE.
This software implements technology covered by patents owned by MetaLINXX Inc. A royalty-free patent license is granted for non-commercial and research use under AGPL-3.0 terms.
For commercial licensing: Contact dp@metalinxx.io
See PATENTS.md for full patent details.
Research prototype. Interesting ideas. Significant gaps. Honest about both.