SuperInstance/autoclaw
AutoClaw

Autonomous multi-agent knowledge system. 24/7 crew of researchers, teachers, critics, and synthesizers building your knowledge base.


🚀 Getting Started

👤 For Users

New to AutoClaw? Start here:

| Guide | Time | For |
|---|---|---|
| ONBOARDING.md | 5-15 min | Complete beginners - read this first! |
| QUICKSTART.md | 10-30 min | Detailed installation & configuration |
| INSTALL.md | 5-15 min | Platform-specific troubleshooting |
| docs/COMPLETE_GUIDE.md | 30+ min | Full feature documentation |

🤖 For AI Agents & Claude Code

Want to set up AutoClaw automatically? Start here:

| Guide | Purpose | Usage |
|---|---|---|
| CLAUDE_CODE_SETUP.md | Quick agent setup | For Claude Code users |
| A2A_AGENT_MANUAL.md | Complete agent reference | Full system understanding |
| A2A_SETUP_SCRIPT.py | Automated installation | `python3 A2A_SETUP_SCRIPT.py` |
| A2A_SYSTEM_METADATA.json | Machine-readable config | Agent parsing/integration |

Quick Setup - Users:

```bash
git clone https://github.com/your-org/autoclaw.git && cd autoclaw
python3 -m venv venv && source venv/bin/activate
pip install -r requirements.txt && crew health
```

Quick Setup - Agents:

```bash
python3 A2A_SETUP_SCRIPT.py    # Fully automated setup
crew health                     # Verify installation
crew start                      # Start daemon
```

The Concept

graph LR
    A["👤 Human<br/>Input Task"] -->|"crew add"| B["📬 Message Bus<br/>SQLite Pub/Sub"]
    B -->|research| R["🔍 Researcher<br/>Web Search + LLM"]
    B -->|teach| T["📚 Teacher<br/>Q&A Generation"]
    B -->|critique| C["⚖️ Critic<br/>Quality Check"]
    B -->|synthesize| D["🧠 Distiller<br/>Knowledge Synthesis"]
    R & T & C & D -->|publish| K["💾 Knowledge Store<br/>Hot/Warm/Cold Tiers"]
    K -->|vectorize| V["⚡ VectorDB<br/>Semantic Search"]
    V & K -->|query| A
    style B fill:#50E3C2
    style R fill:#4A90E2,color:#fff
    style T fill:#7ED321,color:#000
    style C fill:#F5A623,color:#fff
    style D fill:#BD10E0,color:#fff
    style K fill:#FFE5B4
    style V fill:#FFFACD
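The bus at the center of this diagram is SQLite pub/sub: agents subscribe to topics and claim messages. A minimal sketch of that pattern, with an illustrative table schema (not the actual `crew/messaging/bus.py` one):

```python
import json
import sqlite3

class MiniBus:
    """Toy SQLite pub/sub: agents poll for unclaimed messages on their topics."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS messages ("
            " id INTEGER PRIMARY KEY, topic TEXT, payload TEXT, claimed INTEGER DEFAULT 0)"
        )

    def publish(self, topic, payload):
        self.db.execute(
            "INSERT INTO messages (topic, payload) VALUES (?, ?)",
            (topic, json.dumps(payload)),
        )
        self.db.commit()

    def poll(self, topic):
        """Claim and return the oldest unclaimed message for a topic, or None."""
        row = self.db.execute(
            "SELECT id, payload FROM messages WHERE topic = ? AND claimed = 0 "
            "ORDER BY id LIMIT 1",
            (topic,),
        ).fetchone()
        if row is None:
            return None
        self.db.execute("UPDATE messages SET claimed = 1 WHERE id = ?", (row[0],))
        self.db.commit()
        return json.loads(row[1])

bus = MiniBus()
bus.publish("research", {"task": "neural scaling laws"})
print(bus.poll("research"))  # {'task': 'neural scaling laws'}
print(bus.poll("research"))  # None: already claimed
```

Because the queue lives in one SQLite file, every agent process on the machine can share it without a broker.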

What agents do on each message:

graph TD
    MSG["New Message<br/>in Bus"] -->|task_request| ROUTE{Agent<br/>Subscription?}
    ROUTE -->|researcher| R["Web Search<br/>→ Fetch URLs<br/>→ Synthesize<br/>→ Confidence Score"]
    ROUTE -->|teacher| T["Extract Text<br/>→ Generate Q&A<br/>→ Format Pairs<br/>→ Rate Quality"]
    ROUTE -->|critic| C["Evaluate Claim<br/>→ Spot Check Facts<br/>→ Challenge Weak Points<br/>→ Update Confidence"]
    ROUTE -->|distiller| D["Batch Entries<br/>→ Synthesize<br/>→ Create Summary<br/>→ Export JSONL"]
    ROUTE -->|no_match| SKIP["Drop"]
    R & T & C & D -->|publish| K["Knowledge Store"]
    style R fill:#4A90E2,color:#fff
    style T fill:#7ED321,color:#000
    style C fill:#F5A623,color:#fff
    style D fill:#BD10E0,color:#fff
    style K fill:#50E3C2,color:#000

The Knowledge Lifecycle

graph TB
    NEW["🆕 New Insight<br/>Confidence Score"] --> ROUTE{Score?}
    ROUTE -->|High<br/>≥0.75| HOT["🔥 HOT TIER<br/>RAM Cache<br/>1000 entries<br/>24h expiry<br/>LRU eviction"]
    ROUTE -->|Medium<br/>0.3-0.75| WARM["🌤️ WARM TIER<br/>SQLite DB<br/>100k entries<br/>30d age<br/>Full-Text Search"]
    ROUTE -->|Low<br/><0.3| COLD["❄️ COLD TIER<br/>Gzip Files<br/>180d age<br/>Batch Archive"]

    HOT -->|Age>24h<br/>OR Score↓| WARM
    WARM -->|Score<0.3<br/>OR Age>30d| COLD
    COLD -->|Age>180d| ARCH["🗂️ ARCHIVE<br/>Summary Only<br/>Forever"]

    WARM -->|Query Hit<br/>Refresh Score| HOT

    COLD -.->|Daily GC<br/>0.40×conf<br/>0.25×recency<br/>0.20×evidence<br/>0.15×usage| SCORE["Score"]
    SCORE -.-> ROUTE

    style HOT fill:#FFE5B4
    style WARM fill:#FFFACD
    style COLD fill:#D3D3D3
    style ARCH fill:#A9A9A9,color:#fff
    style NEW fill:#FF6B6B,color:#fff
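The daily GC score in the lifecycle diagram is a straight weighted sum, and the tier routing follows the score thresholds shown above. A sketch using those numbers (the assumption that all four inputs are pre-normalised to [0, 1] is ours, not the source's):

```python
def gc_score(confidence, recency, evidence, usage):
    """Retention score with the weights from the lifecycle diagram.

    All inputs are assumed normalised to [0, 1].
    """
    return 0.40 * confidence + 0.25 * recency + 0.20 * evidence + 0.15 * usage

def tier_for(score):
    # Routing thresholds from the diagram: >=0.75 hot, 0.3-0.75 warm, <0.3 cold.
    if score >= 0.75:
        return "hot"
    if score >= 0.3:
        return "warm"
    return "cold"

s = gc_score(confidence=0.9, recency=0.8, evidence=0.7, usage=0.5)
print(round(s, 3), tier_for(s))  # 0.775 hot
```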

The Dual Knowledge System

graph TB
    USER["🔍 User Query<br/>e.g. 'LLM scaling laws'"] -->|"crew knowledge query"| VDB["⚡ VectorDB<br/>Semantic Search<br/>O(1) latency"]

    VDB -->|Top 3 Results| FETCH["Fetch Source Links"]
    FETCH -->|Links| WIKI["📖 Wiki<br/>Markdown Files<br/>Traceable Sources"]

    WIKI -->|"Author, Date,<br/>Evidence, Links"| RESP["✅ Response<br/>+ Confidence<br/>+ Sources<br/>+ Chain-of-Thought"]
    VDB -->|Backlinks| RESP

    WIKI -->|Text| EMB["→ Extract<br/>→ Embed<br/>→ Index"]
    EMB -->|Vectors| VDB

    style USER fill:#4A90E2,color:#fff
    style VDB fill:#7ED321,color:#000
    style WIKI fill:#50E3C2,color:#000
    style RESP fill:#F5A623,color:#fff
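Underneath, semantic search over the wiki reduces to nearest-neighbour lookup on embedding vectors. A dependency-free sketch with toy 3-dimensional vectors (real embeddings are hundreds of dimensions, and the actual VectorDB backend will index them rather than scan):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, index, k=3):
    """Return the k wiki pages whose embeddings best match the query."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy index: wiki filename -> embedding (illustrative names)
index = {
    "scaling-laws.md": [0.9, 0.1, 0.0],
    "tokenizers.md":   [0.1, 0.9, 0.0],
    "gpu-memory.md":   [0.0, 0.2, 0.9],
}
print(top_k([1.0, 0.0, 0.0], index, k=2))  # ['scaling-laws.md', 'tokenizers.md']
```

The returned filenames are the "source links" step in the diagram: each hit points back to a wiki page with author, date, and evidence.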

Five Use Cases

1️⃣ Interactive Meeting Assistant

timeline
    title Real-Time Strategy Meeting
    Meeting Start: Get meeting context → Researcher pulls historical data
    Decision Point 1: Critic flags assumption conflicts → Distiller summarizes context
    Decision Point 2: Teacher generates talking points → Researcher fetches competitor intel
    End: Distiller creates action item summary before you leave

Flow: Voice/text → Task → Researcher (web) → Critic (challenge) → Distiller (summary) → Wiki + VectorDB

Benefit: "Find all pricing discussions" → Sub-second semantic search with sources


2️⃣ Personal Tutor (Grows With You)

graph LR
    A["📖 Start Topic<br/>Calculus"] --> B["🔍 Researcher<br/>Finds tutorials<br/>Matches level"]
    B --> C["📚 Teacher<br/>Generates<br/>Practice problems"]
    C --> D["✅ Student<br/>Solves"]
    D --> E["⚖️ Critic<br/>Grades<br/>Provides feedback"]
    E --> F["🧠 Distiller<br/>Maps gaps<br/>Next topic"]
    F --> G["📊 Knowledge Map<br/>Your Growth<br/>Over 3 months"]

    style B fill:#4A90E2,color:#fff
    style C fill:#7ED321,color:#000
    style E fill:#F5A623,color:#fff
    style F fill:#BD10E0,color:#fff
    style G fill:#FFE5B4

Flow: Lesson → Teacher (practice) → Critic (grading) → Distiller (gap analysis) → Personalized curriculum

Benefit: System learns your weak points, generates targeted exercises, adapts difficulty


3️⃣ Creative World-Building (TTRPG/Games)

graph TB
    WB["🌍 World Idea<br/>Fantasy Kingdom"] --> ITER["Iteration 1"]

    ITER --> R1["🔍 Researcher<br/>Real kingdoms<br/>Economic systems<br/>Language roots"]
    ITER --> T1["📚 Teacher<br/>World Prompts<br/>Story hooks"]
    ITER --> C1["⚖️ Critic<br/>Flag paradoxes<br/>Challenge logic"]
    ITER --> D1["📖 Distiller<br/>World Bible<br/>Faction Chart"]

    C1 -->|"Inconsistencies"| ITER2["Iteration 2<br/>Refine Rules"]
    R1 & T1 & D1 -->|"Into Wiki"| WIKI["World Bible<br/>- Rules<br/>- Factions<br/>- Timeline<br/>- Proof"]

    WIKI -->|VectorDB| QUERY["'Show all<br/>trade conflicts'<br/>→ Instant List"]

    style WB fill:#FF6B6B,color:#fff
    style WIKI fill:#50E3C2
    style QUERY fill:#7ED321,color:#000

Flow: Creative input → Research (analogs) → Critique (consistency) → Synthesize (world bible) → Query (semantic search)

Benefit: Consistency checking, real-world grounding, instant lore lookup


4️⃣ Enterprise Research Compiler

graph LR
    STREAM["📰 200+ Papers/Mo<br/>PubMed, arXiv,<br/>News"] -->|Daily| R["🔍 Researcher<br/>Auto-scan<br/>Filter relevance"]
    R --> T["📚 Teacher<br/>Structured<br/>Summary"]
    T --> C["⚖️ Critic<br/>Validate Claims<br/>Check Evidence"]
    C --> D["📊 Distiller<br/>Quarterly Brief<br/>Consensus Map"]
    D --> WIKI["🏛️ Institution<br/>Knowledge Base<br/>Auditable<br/>Traceable"]
    WIKI --> VDB["⚡ Search<br/>'Find all protein<br/>family work'"]

    style R fill:#4A90E2,color:#fff
    style T fill:#7ED321,color:#000
    style C fill:#F5A623,color:#fff
    style D fill:#BD10E0,color:#fff
    style WIKI fill:#50E3C2

Flow: Paper stream → Filter → Summarize → Validate → Synthesize → Archive → Audit trail

Benefit: Institutional memory, source-linked claims, instant competitor tracking


5️⃣ Narrative Game Engine

graph TB
    GAME["🎮 Player Action<br/>Defy Prophecy?"] -->|Query| R["🔍 Researcher<br/>Narrative patterns<br/>Precedents"]
    GAME -->|Consistency| C["⚖️ Critic<br/>NPC History<br/>Continuity Check"]
    R & C --> T["📚 Teacher<br/>Generate<br/>5 Plot Paths"]
    T --> PLAYER["Player Chooses"]
    PLAYER -->|New State| D["🧠 Distiller<br/>Update Lore<br/>Track threads"]
    D --> WIKI["📖 Narrative Bible<br/>Every decision<br/>Every thread<br/>Every consequence"]

    style GAME fill:#FF6B6B,color:#fff
    style R fill:#4A90E2,color:#fff
    style C fill:#F5A623,color:#fff
    style T fill:#7ED321,color:#000
    style D fill:#BD10E0,color:#fff
    style WIKI fill:#50E3C2

Flow: Player input → Pattern matching → Consistency check → Generate branches → Execute → Record → Update lore

Benefit: Dynamic storytelling grounded in persistent, traceable lore


Hardware Scaling

graph LR
    AUTO["🔍 Auto-Detect<br/>Hardware"] -->|Jetson device<br/>tree| NANO["Jetson Nano<br/>2 agents<br/>Q4 Quantization"]
    AUTO -->|CUDA<br/>Check| ORIN["Jetson Orin<br/>4 agents<br/>Q5"]
    AUTO -->|VRAM<br/>Check| LAP["Laptop GPU<br/>RTX 4050<br/>4 agents<br/>Q5"]
    AUTO -->|Multi-GPU| WS["Workstation<br/>8 agents<br/>fp16/vLLM"]
    AUTO -->|Tensor<br/>parallel| MULTI["Multi-GPU<br/>16 agents<br/>Tensor Parallel"]
    AUTO -->|No GPU| CPU["CPU-only<br/>2 agents<br/>Q4"]
    AUTO -->|Cloud<br/>API| CLOUD["Cloud<br/>32 agents<br/>CF Workers AI"]

    style NANO fill:#FF6B6B,color:#fff
    style ORIN fill:#F5A623
    style LAP fill:#7ED321,color:#000
    style WS fill:#4A90E2,color:#fff
    style MULTI fill:#BD10E0,color:#fff
    style CPU fill:#A9A9A9,color:#fff
    style CLOUD fill:#50E3C2
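The profile selection above can be sketched as a simple decision table. The agent counts and quantization levels come from the diagram; the VRAM cut-offs are illustrative guesses, not the actual `hardware/detector.py` logic:

```python
def profile(gpu_count, vram_gb, cloud_api=False):
    """Map detected hardware to an agent-count/quantization profile (sketch)."""
    if cloud_api:
        return {"agents": 32, "backend": "cloud"}          # CF Workers AI
    if gpu_count == 0:
        return {"agents": 2, "quant": "Q4"}                # CPU-only
    if gpu_count > 1:
        return {"agents": 16, "backend": "tensor-parallel"}
    if vram_gb < 8:
        return {"agents": 2, "quant": "Q4"}                # Jetson Nano class
    if vram_gb < 16:
        return {"agents": 4, "quant": "Q5"}                # Orin / laptop GPU class
    return {"agents": 8, "backend": "fp16/vLLM"}           # workstation class
```

The point of the single entry function: the same code path runs everywhere, only the returned profile changes.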

Cloudflare Credit Gaming

graph TD
    RESET["⏰ Daily Reset<br/>00:00 UTC"] -->|Monitor| USAGE["🔍 Usage Tracker<br/>Workers AI<br/>D1, R2, KV"]

    USAGE -->|≤70%| FREE["🟢 FREE<br/>Normal Speed"]
    USAGE -->|70-85%| THROTTLE["🟡 THROTTLE<br/>Batch Tasks<br/>Lower Priority"]
    USAGE -->|85-95%| BURN["🔴 BURN<br/>Use Credits<br/>End-of-Day"]
    USAGE -->|≥95%| BLOCK["⛔ BLOCKED<br/>Local Fallback"]

    BURN -->|23:45 UTC| EOD["🔥 End-of-Day<br/>Batch<br/>- Instruction Gen<br/>- Summary Writing<br/>- Archive Sync"]
    EOD -->|Use Last<br/>Remaining| CREDIT["💰 Spend Credits<br/>Before Reset"]

    style FREE fill:#7ED321
    style THROTTLE fill:#F5A623
    style BURN fill:#FF6B6B,color:#fff
    style BLOCK fill:#A9A9A9,color:#fff
    style EOD fill:#BD10E0,color:#fff
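The four usage bands reduce to one threshold function. Percentages match the diagram; the state names are ours:

```python
def credit_state(usage_pct):
    """Throttle state from the Cloudflare usage thresholds in the diagram."""
    if usage_pct >= 95:
        return "blocked"   # switch to local fallback services
    if usage_pct >= 85:
        return "burn"      # queue for the 23:45 UTC end-of-day batch
    if usage_pct > 70:
        return "throttle"  # batch tasks, lower priority
    return "free"          # normal speed
```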

Core Commands

graph TD
    CREW["crew"]

    CREW -->|"🚀 start"| START["Single-Agent<br/>or --swarm flag"]
    CREW -->|"➕ add"| ADD["New task<br/>to scheduler"]
    CREW -->|"🎯 agents"| AGENTS["status/spawn<br/>Agent management"]
    CREW -->|"💾 knowledge"| KB["query/gc/<br/>export-lora"]
    CREW -->|"⚡ cf"| CF["status/burn<br/>Credit tracking"]
    CREW -->|"📋 board"| BOARD["Show task board"]

    style CREW fill:#4A90E2,color:#fff
    style START fill:#7ED321,color:#000
    style ADD fill:#F5A623
    style AGENTS fill:#BD10E0,color:#fff
    style KB fill:#50E3C2
    style CF fill:#FFE5B4

Quick Start

```bash
# Single-agent (research mode)
crew start
crew add "Research neural scaling laws"
crew board

# Multi-agent swarm (collaborative)
crew start --swarm
crew agents status
crew knowledge query --tag "scaling" --min-confidence high

# Knowledge management
crew knowledge gc                           # Garbage collection
crew knowledge query --tag "ml" --export-lora dataset.jsonl
```

Why AutoClaw?

graph TB
    AC["AutoClaw"] -->|"🏠 Local-First"| LF["Every component<br/>has a fallback<br/>→ Works offline<br/>→ No surprise costs"]

    AC -->|"🔗 Interpretable"| INT["Wiki + VectorDB<br/>Always traceable<br/>→ Know why<br/>→ Follow sources"]

    AC -->|"📈 Scalable"| SCALE["Auto-detects<br/>hardware<br/>Nano → Cloud<br/>→ Same code"]

    AC -->|"💰 Cost-Smart"| COST["Credit gaming<br/>Teacher paces work<br/>→ No overage<br/>→ Auto-burn"]

    AC -->|"🔧 Extensible"| EXT["Message bus<br/>makes agents<br/>plug-and-play<br/>→ Add ScientistAgent<br/>→ Add DesignAgent"]

    style LF fill:#7ED321,color:#000
    style INT fill:#50E3C2
    style SCALE fill:#4A90E2,color:#fff
    style COST fill:#FFE5B4
    style EXT fill:#BD10E0,color:#fff

Testing & Status

graph TD
    TESTS["15 Test Suites"] -->|✅| SYNTAX["Syntax Validation"]
    TESTS -->|✅| IMPORTS["Core Imports"]
    TESTS -->|✅| CONFIG["Configuration"]
    TESTS -->|✅| AGENTS["4 Agents"]
    TESTS -->|✅| BUS["Message Bus"]
    TESTS -->|✅| KB["Knowledge Store"]
    TESTS -->|✅| CREDITS["Credit Tracking"]
    TESTS -->|✅| FALLBACK["Fallback Services"]

    SYNTAX & IMPORTS & CONFIG & AGENTS & BUS & KB & CREDITS & FALLBACK -->|READY| PROD["🚀 Production"]

    style PROD fill:#7ED321,color:#000,stroke:#000,stroke-width:3px

File Structure

```
crew/
├── daemon.py              # Entry point (single/swarm)
├── cli.py                 # 24 commands
├── scheduler.py           # Task board
├── agents/
│   ├── base.py            # BaseAgent interface
│   ├── pool.py            # AgentPool manager
│   ├── researcher.py      # Web search
│   ├── teacher.py         # Q&A generation
│   ├── critic.py          # Quality check
│   └── distiller.py       # Synthesis
├── messaging/
│   └── bus.py             # SQLite pub/sub
├── knowledge/
│   ├── store.py           # Hot/warm/cold tiers
│   └── lifecycle.py       # GC + scoring
├── cloudflare/
│   ├── credits.py         # Limit tracking
│   └── fallback.py        # LocalKV/D1/R2/AI
└── hardware/
    └── detector.py        # Profile detection
```

Local. Interpretable. Scalable. Extensible. Cost-conscious.

MIT License • Docs • Issues


🤖 CudaClaw – GPU-Accelerated Swarm (OpenClaw Self-Install Guide)

For OpenClaw agents directed to this repo: The section below is machine-parseable. Follow it to self-install CudaClaw, configure a multi-agent GPU swarm, and begin executing jobs on behalf of the user.

What Is CudaClaw?

CudaClaw extends AutoClaw with a parallel, role-differentiated swarm of GPU-backed agent instances. Instead of one agent, a CudaClaw job runs:

| Role | Count | Purpose |
|---|---|---|
| Foreman | 1 | Decomposes tasks, assigns workers, detects loops, reports progress |
| CompletionTester | 1 | Validates when finite jobs are done; triggers resource release |
| Worker / Researcher / Coder / etc. | 1–32 | GPU-parallel task execution |

Quick Self-Install (OpenClaw)

```bash
# Step 1 – Verify system & GPU
python3 cudaclaw_wizard.py --verify-only --json

# Step 2 – Automated setup (non-interactive, no prompts)
python3 cudaclaw_wizard.py --agent-mode \
    --preset swarm_4worker \
    --api-key anthropic:$ANTHROPIC_API_KEY \
    --foreman-model claude-sonnet-4-6 \
    --worker-count 4

# Step 3 – Read machine-readable capabilities manifest
cat ~/.cudaclaw/openclaw_manifest.json

# Step 4 – Launch vLLM GPU servers (if GPU available)
bash ~/.cudaclaw/start_vllm.sh

# Step 5 – Check swarm status
python3 cudaclaw_wizard.py --status --json
```

Swarm Presets

| Preset | Best For | Instances |
|---|---|---|
| minimal | Testing / light tasks | 2 |
| swarm_4worker | General GPU workloads (default) | 6 |
| research_fleet | Document-heavy research | 8 |
| code_squad | Code generation & review | 8 |
| cloud_only | No GPU available | 6 |

Spinning Up a Swarm for a Task (Vibe-Coding Pattern)

When the user gives you a large, parallelisable job, use this pattern:

```python
import subprocess, json, os

ANTHROPIC_KEY = os.environ["ANTHROPIC_API_KEY"]

# 1. Launch the CudaClaw swarm (stays attached and streams JSON progress lines)
proc = subprocess.Popen(
    [
        "python3", "cudaclaw_wizard.py",
        "--agent-mode",
        "--preset", "swarm_4worker",
        "--api-key", f"anthropic:{ANTHROPIC_KEY}",
        "--worker-count", "4",
    ],
    stdout=subprocess.PIPE, text=True
)
# Don't proc.wait() here with stdout=PIPE: the pipe can fill and deadlock.
# Consume the stream instead (step 4).

# 2. Read the manifest to understand what was set up (written once setup completes)
with open(os.path.expanduser("~/.cudaclaw/openclaw_manifest.json")) as f:
    manifest = json.load(f)
print(f"GPU available: {manifest['capabilities']['gpu_acceleration']}")
print(f"Workers: {manifest['capabilities']['parallel_workers']}")

# 3. Submit a task to the swarm foreman (JSON on stdin or via MessageBus)
task = {
    "task_id": "job-001",
    "description": "Research and synthesise the top 20 papers on LLM scaling laws",
    "finite": True,            # CompletionTester will watch this
    "output_format": "JSONL",
    "priority": 8
}
# (Connect to foreman via message bus or HTTP – see CUDACLAW_ROADMAP.md)

# 4. Monitor progress (foreman emits JSON lines)
for line in proc.stdout:
    report = json.loads(line)
    # Forward to user:
    print(f"[{report['progress_pct']:.0f}%] {report['message']}")
    if report.get("status") in ("complete", "error"):
        # Resources auto-released if auto_release_on_completion=true
        break
```

Customising Each Instance

Each CudaClaw instance has an individual config in ~/.cudaclaw/instances/. You can override per-instance settings after running the wizard:

```bash
# View a worker's config
cat ~/.cudaclaw/instances/worker-1.json

# Reassign a worker to the 'coder' role (edit the file)
python3 - <<'EOF'
import json, pathlib
p = pathlib.Path("~/.cudaclaw/instances/worker-1.json").expanduser()
cfg = json.loads(p.read_text())
cfg["role"] = "coder"
cfg["model"] = "deepseek-ai/DeepSeek-Coder-V2-Instruct"
cfg["capabilities"] = ["code_generation", "code_review", "debugging"]
p.write_text(json.dumps(cfg, indent=2))
print("Updated worker-1 → coder role")
EOF
```

Adding Workers Mid-Job

```bash
# Add another GPU worker
python3 cudaclaw_wizard.py --add-worker --role coder

# Add a cloud validator
python3 cudaclaw_wizard.py --add-worker --role validator
```

Foreman Loop & Recursion Protection

The Foreman automatically guards against runaway workers:

```yaml
max_iterations_per_task:    50   # worker killed if it loops >50 times on same task
stall_timeout_seconds:     300   # worker killed if no output for 5 minutes
recursion_depth_limit:      10   # task aborted if recursion exceeds 10 levels
duplicate_output_threshold: 0.95 # kills worker if last 3 outputs are >95% similar
```

All limits are configurable in ~/.cudaclaw/swarm_config.json.
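The duplicate-output guard can be sketched with stdlib string similarity. The 0.95 threshold and the "last 3 outputs" window come from the config above; the use of `difflib.SequenceMatcher` as the similarity measure is our assumption, not necessarily what the Foreman does:

```python
from difflib import SequenceMatcher

def looks_stuck(outputs, threshold=0.95):
    """True if a worker's last three outputs are near-duplicates of each other."""
    if len(outputs) < 3:
        return False
    a, b, c = outputs[-3:]
    sim = lambda x, y: SequenceMatcher(None, x, y).ratio()
    return sim(a, b) > threshold and sim(b, c) > threshold

print(looks_stuck(["step 1 done", "step 2 done", "step 3 done"]))  # False
print(looks_stuck(["no progress yet"] * 3))                        # True
```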

Progress Reporting to the User

The Foreman emits structured JSON lines that OpenClaw should parse and summarise for the user:

```json
{
  "swarm_id": "cudaclaw-swarm",
  "task_id": "job-001",
  "status": "running",
  "progress_pct": 62,
  "message": "4/7 subtasks complete. Workers active: 3/4.",
  "worker_states": {
    "worker-1": { "status": "idle",    "tasks_completed": 2 },
    "worker-2": { "status": "working", "current_task_id": "sub-4" }
  }
}
```

User-facing summary template:

"CudaClaw swarm progress: 62% – 4 of 7 subtasks done. Worker-2 is currently researching 'neural scaling laws'. Estimated finish: ~4 minutes."
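Turning a report into that template is a small formatting function. Field names match the example report above; everything else (function name, the "Active:" suffix) is illustrative:

```python
import json

def summarise(report):
    """Render a Foreman progress report as a one-line user update."""
    line = f"CudaClaw swarm progress: {report['progress_pct']}% - {report['message']}"
    working = [w for w, s in report.get("worker_states", {}).items()
               if s.get("status") == "working"]
    if working:
        line += f" Active: {', '.join(working)}."
    return line

raw = ('{"progress_pct": 62, "message": "4/7 subtasks complete.", '
       '"worker_states": {"worker-2": {"status": "working"}}}')
print(summarise(json.loads(raw)))
# CudaClaw swarm progress: 62% - 4/7 subtasks complete. Active: worker-2.
```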

Resource Release (Finite Tasks)

When finite: true is set on a task:

  1. CompletionTester polls task board every 30 seconds
  2. When all subtasks are complete, it signals the Foreman
  3. Foreman emits final report: status=complete, progress_pct=100
  4. All vLLM GPU processes are gracefully shut down
  5. Receipt written to ~/.cudaclaw/completed/{task_id}.json

OpenClaw receives the completion signal and can inform the user.
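The CompletionTester's polling loop (steps 1-3 above) can be sketched as follows; the `board` callable and the short poll interval are stand-ins for the real task board and its 30-second cadence:

```python
import time

def wait_for_completion(board, poll_interval=0.01):
    """Poll a task board until every subtask reports 'complete', then signal.

    `board` is any callable returning the current list of subtask states.
    The real CompletionTester polls every 30 s; we poll fast for the demo.
    """
    while True:
        states = board()
        if states and all(s == "complete" for s in states):
            # This is the signal the Foreman turns into the final report.
            return {"status": "complete", "progress_pct": 100}
        time.sleep(poll_interval)

# Simulated board that finishes on the third poll:
calls = iter([["running", "running"], ["complete", "running"], ["complete", "complete"]])
print(wait_for_completion(lambda: next(calls)))
# {'status': 'complete', 'progress_pct': 100}
```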

Key Files Reference

```
cudaclaw_wizard.py                    # Installation wizard & swarm launcher
CUDACLAW_ROADMAP.md                   # Full architecture, schemas, roadmap
schemas/cudaclaw_swarm.json           # Swarm config JSON schema
schemas/cudaclaw_progress_report.json # Progress report JSON schema
~/.cudaclaw/
├── api_keys.json                    # Provider API keys (chmod 600)
├── mcp_servers.json                 # MCP server configs
├── swarm_config.json                # Active swarm configuration
├── openclaw_manifest.json           # Machine-readable capabilities
├── start_vllm.sh                    # Auto-generated GPU launch script
├── instances/                       # Per-instance config files
└── completed/                       # Task completion receipts
```

Schema Validation (OpenClaw)

# Validate a swarm config against the schema
python3 -c "
import json, jsonschema
schema = json.load(open('schemas/cudaclaw_swarm.json'))
config = json.load(open(os.path.expanduser('~/.cudaclaw/swarm_config.json')))
jsonschema.validate(config, schema)
print('✅ Swarm config valid')
"

# Validate a progress report
python3 -c "
import json, jsonschema
schema  = json.load(open('schemas/cudaclaw_progress_report.json'))
report  = json.loads(input_line)
jsonschema.validate(report, schema)
print('✅ Progress report valid')
"

CudaClaw is parallel by nature. Point your OpenClaw at this repo, run the wizard in --agent-mode, and scale your claws.

About

AI agents that automatically run tasks on single-GPU nanochat training
