
Move evolution examples out of runtime directory#25

Merged
CalebisGross merged 1 commit into main from fix/sdk-refactor-13-issues
Mar 1, 2026

Conversation

@CalebisGross
Collaborator

Summary

  • Moves sdk/agent/evolution/examples/ to sdk/agent/evolution_examples/ so the evolution/ directory is purely runtime data
  • The examples/ subdirectory inside evolution/ was confusing the agent for new users — that directory should contain only files created by the agent at runtime

Test plan

  • 77/77 tests pass
  • New user clone has clean evolution/ directory with no stale example files

🤖 Generated with Claude Code

The examples/ subdirectory inside evolution/ confused the agent for
new users — the evolution dir should be purely runtime data (created
by the agent on first run). Moved to evolution_examples/ as a sibling.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@CalebisGross CalebisGross merged commit 50c427d into main Mar 1, 2026
3 checks passed
@CalebisGross CalebisGross deleted the fix/sdk-refactor-13-issues branch March 1, 2026 12:47
CalebisGross added a commit that referenced this pull request Apr 14, 2026
…391)

Adds the automated spoke training pipeline (Phase C) to the continuous
learning system. When enough experience data accumulates in the buffer,
the daemon can assemble training batches, run spoke fine-tuning via
Python subprocess, evaluate against quality gates, and deploy new spokes.

New infrastructure:
- TrainingRun type and 5 new ContinuousLearningStore methods
  (WriteTrainingRun, UpdateTrainingRun, GetLastTrainingRunTime,
  CountUntrainedExperience, MarkExperienceUsedInTraining)
- Migration 018: training_runs table for audit trail
- Training orchestrator in dreaming agent (Phase 4.85 in dream cycle)
  with subprocess execution, quality gate (EPR >= 0.90, FR <= 0.05,
  SC >= 0.95), and atomic deployment with rollback
- train_model MCP tool (#25) for manual training trigger
- 26 new tests across curriculum, training data, and trigger logic

The pipeline: check untrained count >= threshold → assemble JSONL batch
→ run train_spokes.py subprocess → evaluate via eval_encoding.py →
deploy via deploy_model.sh if quality passes → record result. Gated by
config flags, training window, and minimum data threshold.
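The trigger condition and quality gate described above can be sketched in a few lines of Go. The thresholds (EPR >= 0.90, FR <= 0.05, SC >= 0.95) and the count-vs-threshold check come from the commit text; the type, function names, and what the abbreviations expand to are assumptions.

```go
package main

import "fmt"

// EvalResult holds the three metrics the quality gate checks.
// The commit gives only the abbreviations and thresholds; field
// meanings are not spelled out there.
type EvalResult struct {
	EPR float64 // gate: >= 0.90
	FR  float64 // gate: <= 0.05
	SC  float64 // gate: >= 0.95
}

// passesQualityGate applies the gate from the commit; all three
// conditions must hold before a new spoke is deployed.
func passesQualityGate(r EvalResult) bool {
	return r.EPR >= 0.90 && r.FR <= 0.05 && r.SC >= 0.95
}

// shouldTrigger mirrors the "untrained count >= threshold" check
// that starts the pipeline.
func shouldTrigger(untrained, threshold int) bool {
	return untrained >= threshold
}

func main() {
	fmt.Println(shouldTrigger(120, 100))                                      // enough data: true
	fmt.Println(passesQualityGate(EvalResult{EPR: 0.93, FR: 0.02, SC: 0.97})) // true
	fmt.Println(passesQualityGate(EvalResult{EPR: 0.93, FR: 0.08, SC: 0.97})) // FR too high: false
}
```

On failure the real pipeline would skip deployment (and roll back on a failed deploy); here the gate is just a pure predicate.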

Also includes minor fixes from other agents: episoding debug logging,
embedded LLM grammar improvements.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
