Agent-based simulation environment for Open & Distance Learning (ODL) research. SynthEd generates behaviorally grounded and temporally coherent learning trajectories by combining persona-driven agent modeling with 10 established theoretical frameworks. Built for researchers in learning analytics, educational data mining, and dropout prediction.
```bash
pip install -e ".[dev]"
python run_pipeline.py --n 200   # or: pip install synthedu
```

From statistical similarity to behavioral fidelity. Traditional synthetic data methods optimize for distributional match. SynthEd optimizes for behavioral coherence -- each data point emerges from a simulated student's evolving motivations, decisions, and life context.
| Challenge | Traditional Approach | SynthEd Approach |
|---|---|---|
| Privacy regulations (GDPR/KVKK) | Anonymization (re-identification risk) | Agents are fictional -- no real individuals |
| Class imbalance in dropout data | Oversampling (SMOTE) -- loses context | Parameter-level control of dropout rates |
| Temporal incoherence | GAN/VAE post-hoc smoothing | Persona + memory produces coherent trajectories |
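The "parameter-level control" and "continuous spectrum" ideas behind this comparison can be made concrete with a minimal sketch. All function and trait names below are illustrative, not SynthEd's actual API; the point is that traits drawn from Beta distributions stay in [0, 1] and scale theory effects continuously instead of through binary gates:

```python
import random

def sample_persona(rng: random.Random) -> dict[str, float]:
    """Draw continuous [0, 1] persona traits from Beta distributions.

    Shape parameters are hypothetical: e.g. Beta(2, 5) is right-skewed,
    so heavy employment is rarer than light employment.
    """
    return {
        "employment_intensity": rng.betavariate(2, 5),
        "family_responsibility": rng.betavariate(2, 4),
        "internet_reliability": rng.betavariate(5, 2),
    }

rng = random.Random(42)
persona = sample_persona(rng)
# Every trait is a float in [0, 1], never a yes/no flag.
assert all(0.0 <= v <= 1.0 for v in persona.values())
```

Because every trait is a float, a theory module can multiply its effect by the trait directly, with no special-casing for "employed vs. unemployed" students.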
- 10 Theory Modules -- Tinto, Bean & Metzner, Kember, SDT, Garrison CoI, Moore, Rovai, Baulke, Epstein & Axtell, Gonzalez (+ unavoidable withdrawal mechanism)
- TheoryModule Protocol -- 4-phase dispatch (individual, network, post-peer, engagement) with auto-discovery and `_ENGAGEMENT_ORDER` composition. New theories added with zero engine changes
- Continuous Persona Spectrum -- Employment intensity, family responsibility, and internet reliability as [0,1] floats with Beta distributions. No binary gates -- all theory effects scale continuously
- Multi-Semester Simulation -- Carry-over mechanics for engagement, GPA, coping, dropout phases
- GPA Feedback Loop -- Cumulative GPA anchors cost-benefit, non-fit perception, and competence beliefs
- Sobol Sensitivity -- 68-parameter sensitivity analysis identifying dominant dropout/engagement drivers
- NSGA-II Calibration -- Multi-objective optimization with Pareto front, parallel `--workers N` support, and adaptive parameter bounds
- 5-Level Validation Suite -- 22 statistical tests (default; up to 24 with backstory validation) across distributions, correlations, temporal coherence, privacy, and backstory
- InstitutionalConfig -- 5 institution-level quality parameters that modulate theory constants; `support_services_quality` scales 13 Baulke dropout-phase thresholds
- GradingConfig -- Beta/Normal/Uniform grade distributions, dual-hurdle pass requirements, exam-only and continuous assessment modes, relative grading with t-score cohort normalization
- EngineConfig -- 70 frozen engine constants with validation, overridable via `dataclasses.replace()`
- PipelineConfig -- Frozen dataclass grouping 16 pipeline params with JSON serialization for reproducibility
- OULAD-Compatible Export -- 7-table CSV matching the Open University Learning Analytics Dataset schema
- Optional LLM Enrichment -- Persona-grounded narrative backstories via OpenAI, Ollama, or any compatible provider
- Benchmark Reports -- Customizable default profile with CLI report generation (`--benchmark`)
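To make the 4-phase dispatch concrete, here is a minimal sketch of how a theory module can plug into an engine with zero engine changes. The phase method names other than `contribute_engagement_delta` are hypothetical, and the toy Tinto module below is a stand-in, not SynthEd's implementation:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class TheoryModule(Protocol):
    """Hypothetical 4-phase protocol: individual, network, post-peer, engagement."""

    def apply_individual(self, state: dict) -> None: ...
    def apply_network(self, state: dict) -> None: ...
    def apply_post_peer(self, state: dict) -> None: ...
    def contribute_engagement_delta(self, state: dict) -> float: ...

class TintoIntegration:
    """Toy module: academic integration (scaled GPA) nudges engagement."""

    def apply_individual(self, state: dict) -> None:
        state["integration"] = 0.5 * state.get("gpa", 2.0) / 4.0

    def apply_network(self, state: dict) -> None:
        pass  # no peer effects in this toy module

    def apply_post_peer(self, state: dict) -> None:
        pass

    def contribute_engagement_delta(self, state: dict) -> float:
        return 0.1 * state.get("integration", 0.0)

def run_week(modules: list[TheoryModule], state: dict) -> dict:
    # Phase dispatch: each phase runs across all modules before the next starts,
    # so the engine never needs to know which theories are registered.
    for m in modules:
        m.apply_individual(state)
    for m in modules:
        m.apply_network(state)
    for m in modules:
        m.apply_post_peer(state)
    state["engagement"] = state.get("engagement", 0.5) + sum(
        m.contribute_engagement_delta(state) for m in modules
    )
    return state

state = run_week([TintoIntegration()], {"gpa": 3.2})
```

Because the engine only iterates over the protocol's phases, adding a new theory is a matter of dropping in another class that satisfies `TheoryModule`.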
```bash
git clone https://github.com/theaiagent/SynthEd.git
cd SynthEd
pip install -e ".[dev]"       # Dev install (no LLM)
pip install -e ".[dev,llm]"   # Dev install with LLM support
```

```bash
python run_pipeline.py               # 200 students, 14 weeks
python run_pipeline.py --n 500       # Custom population
python run_pipeline.py --oulad       # OULAD-compatible export
python run_pipeline.py --benchmark   # Run default benchmark profile
```
```bash
python run_calibration.py --workers 4   # Parallel NSGA-II calibration
```

```python
from synthed.pipeline import SynthEdPipeline
from synthed.pipeline_config import PipelineConfig

config = PipelineConfig(output_dir="./output", seed=42)
pipeline = SynthEdPipeline(config=config)
report = pipeline.run(n_students=300)
print(f"Dropout: {report['simulation_summary']['dropout_rate']:.1%}")
```

- Dropout Prediction -- Generate labeled training data with known ground-truth trajectories
- Intervention Simulation -- Model "what-if" scenarios by adjusting population parameters
- Privacy-Safe Benchmarking -- Share synthetic datasets publicly for reproducible research
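The frozen-config pattern that underpins these use cases (immutable dataclasses, overrides via `dataclasses.replace()`, JSON serialization for reproducibility) can be sketched in plain Python. Field names here are illustrative, not SynthEd's actual `PipelineConfig` schema:

```python
import dataclasses
import json

@dataclasses.dataclass(frozen=True)
class SimConfig:
    """Illustrative frozen config: immutable after creation."""
    seed: int = 42
    n_weeks: int = 14
    dropout_base_rate: float = 0.25  # hypothetical parameter

base = SimConfig()
# Frozen dataclasses reject mutation; "what-if" variants come from replace().
variant = dataclasses.replace(base, dropout_base_rate=0.10)

# A JSON round-trip lets every run be reproduced from a logged config.
blob = json.dumps(dataclasses.asdict(variant))
restored = SimConfig(**json.loads(blob))
assert restored == variant
```

Intervention simulation then reduces to running the pipeline twice, once per config variant, and comparing outcomes.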
| Document | Content |
|---|---|
| User Guide | Installation, configuration, calibration pipeline, OULAD export, LLM enrichment, troubleshooting |
| Theory & Architecture | 10 theoretical anchors, factor clusters, architecture diagram, project structure, validation suite, test inventory |
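As a flavor of the validation suite's distribution-level checks, here is a generic two-sample Kolmogorov-Smirnov statistic written with the standard library only. This is a sketch of the kind of test such a suite runs, not SynthEd's implementation:

```python
import bisect

def ks_statistic(a: list[float], b: list[float]) -> float:
    """Two-sample KS statistic: max gap between the empirical CDFs."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        fa = bisect.bisect_right(a, x) / len(a)  # empirical CDF of a at x
        fb = bisect.bisect_right(b, x) / len(b)  # empirical CDF of b at x
        d = max(d, abs(fa - fb))
    return d

# Identical samples give 0; fully separated samples give 1.
assert ks_statistic([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
assert ks_statistic([0.0, 0.0], [1.0, 1.0]) == 1.0
```

A small statistic means the synthetic marginal is close to the reference (e.g. OULAD) marginal; the suite's other levels then check correlations, temporal coherence, and privacy.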
- Multi-semester simulation with carry-over
- 10 theory modules (Tinto, Bean & Metzner, Kember, SDT, Garrison, Moore, Rovai, Baulke, Epstein & Axtell, Gonzalez)
- Trait-based calibration (Sobol + Optuna + OULAD validation)
- Benchmark reports with CLI (`--benchmark`)
- OULAD-compatible 7-table export
- LLM enrichment with cost control and streaming
- Disability severity (Beta distribution)
- InstitutionalConfig (5 quality parameters modulating theory constants)
- NSGA-II multi-objective calibration with Pareto front
- GradingConfig (configurable grading policy: Beta/Normal/Uniform, dual-hurdle, exam-only)
- EngineConfig (70 frozen engine constants with validation)
- Relative grading (t-score cohort normalization)
- PipelineConfig (frozen pipeline configuration with JSON serialization)
- TheoryModule Protocol (phase-based dispatch with auto-discovery)
- Engine modularization (state.py, grading.py, statistics.py -- engine.py 834→590 lines)
- Engagement protocol unification (4th phase: `contribute_engagement_delta`)
- Spectrum refactoring (binary → continuous for employment/family/internet)
- GraphRAG integration (curriculum modeling)
- LLM-augmented mode (forum posts, assignment text)
- Parquet/Arrow export
- PyPI package publication (
pip install synthedu) - Interactive dashboard (Shiny + Plotly, dark/light themes, presets, validation)
SynthEd generates entirely fictional synthetic data. No real individuals are represented or identifiable. Outputs are intended for research, development, and educational purposes. SynthEd is under active development -- APIs and output formats may change between versions.
See full Legal Disclaimer and Responsible Use guidelines.
Contributions welcome! See the User Guide for development setup.
```bash
ruff check synthed/ tests/
python -m pytest tests/ -v --tb=short
```

MIT License. See LICENSE.
If you use SynthEd in your research, please cite using the CITATION.cff file or the Zenodo DOI above.
| Contributor | Role |
|---|---|
| Halis Aykut Cosgun | Lead Developer, Data Scientist & AI Engineer, Researcher -- Yozgat Bozok University |
| Evrim Genc Kumtepe | Research Advisor -- Anadolu University |
| Claude (Anthropic) | AI pair programmer -- implementation, testing, code review |
Conceptually inspired by TinyTroupe (Microsoft), MiroFish, and Agent Lightning. OULAD reference data: Kuzilek et al. (2017).