lcn_chain_graph is a chain-graph-aware tooling repository for Logical Credal Networks. It focuses on benchmark generation, structure recovery, decomposition, export, and solver-facing benchmark packaging.
Use this repository to:
- generate benchmark families and benchmark suites,
- parse an `.lcn` file and recover its derived mixed graph,
- check whether the derived structure is a chain graph,
- build decomposition and inference skeleton summaries,
- export benchmark bundles that can be replayed against `codes/LCN`.
This repository is centered on structure, benchmark preparation, and execution plumbing. It is not a standalone credal inference solver.
The project targets Python 3.12 and expects a local `.venv` managed by `uv`:

```bash
uv venv --python 3.12 .venv
uv sync
uv run pytest
```
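As a quick sanity check, a one-liner run through `uv run python` can confirm that the interpreter resolved inside `.venv` matches the 3.12 target (illustrative only, not part of the project):

```python
# Probe the interpreter uv resolves; it should report 3.12 from .venv.
import sys
print(sys.executable)
assert sys.version_info[:2] == (3, 12), f"unexpected interpreter: {sys.version}"
```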
- See HOWTO.md for the user-oriented guide.
- See benchmarks/README.md for the checked-in benchmark catalog.
- See artifacts/README.md for generated-output policy.
Generate one family instance:
```bash
uv run lcn-chain-graph generate-family alarm_response --output-dir artifacts/generated/example_demo --seed 44
```

List the registered benchmark families:
```bash
uv run lcn-chain-graph list-families
```

Inspect and verify an existing `.lcn` file:
```bash
uv run lcn-chain-graph inspect-lcn examples/demo_chain.lcn
uv run lcn-chain-graph structural-report examples/demo_chain.lcn
uv run lcn-chain-graph verify-problem examples/demo_chain.lcn
```

Export a suite into a `codes/LCN`-friendly bundle:
```bash
uv run lcn-chain-graph export-lcn-benchmark --suite-id benchmark_scaling_directions --output-dir artifacts/generated/example_export
```

Run a checked-in shared benchmark bundle directly:
```bash
python benchmarks/main_release/benchmark_scaling_directions/run_benchmark.py --lcn-source-root /path/to/LCN
```

Run the downstream wrapper on the inner experiment mirror:
```bash
bash scripts/run_lcn_experiment.sh benchmarks/main_release/benchmark_scaling_directions/benchmarks/benchmark_scaling_directions --benchmark benchmark_scaling_directions_repo
```

Run a managed replay over an export root:
```bash
uv run python scripts/run_benchmark_campaign.py artifacts/generated/my_export_root --lcn-source-root ../codes/LCN --time-limit 180
```
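Conceptually, the campaign runner replays every bundle it finds under the export root. A rough, hypothetical equivalent of that loop is sketched below; the real `scripts/run_benchmark_campaign.py` also handles options such as `--time-limit`, and the assumption that each bundle is a direct child containing `run_benchmark.py` is ours, not documented behavior:

```python
import subprocess
from pathlib import Path

export_root = Path("artifacts/generated/my_export_root")

# Treat each direct child that ships a run_benchmark.py as one bundle
# (an assumption for illustration) and replay it against codes/LCN.
for bundle in sorted(p for p in export_root.iterdir() if (p / "run_benchmark.py").exists()):
    subprocess.run(
        ["python", str(bundle / "run_benchmark.py"), "--lcn-source-root", "../codes/LCN"],
        check=True,
    )
```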
Checked-in benchmark bundles live under three tiers:

| Tier | Purpose |
|---|---|
| `benchmarks/main_release/` | Default paper-facing benchmark library |
| `benchmarks/supplemental_reference/` | Compact and reference-oriented bundles |
| `benchmarks/smoke_debug/` | Fast smoke tests and downstream debugging |
The curated sets currently include:

- `benchmark_scaling_directions`
- `paper_scaling_directions`
- `maintenance_scaling`
- `paper_ready_small`
- `supreme_court_judicial_power_variants`
- `supreme_court_judicial_power_smoke`
- `supreme_court_judicial_power_xmrf_smoke`
- `supreme_court_judicial_power_source_variants`
- `supreme_court_judicial_power_release_comparison`
- `supreme_court_judicial_power_credal_source_comparison`
- `supreme_court_judicial_power_dual_source_credal`
The Supreme Court comparison track is split into three explicit categories:
- `source_variants`: aligned single-source comparison,
- `release_comparison`: release-policy comparison,
- `credal_source_comparison`: source comparison over credal releases.
Each checked-in benchmark bundle contains:
- `benchmark_manifest.json`
- `run_benchmark.py`
- `README.md`
- `lcn/`
- `benchmarks/<set-name>/`
- `metadata/`
- `reports/`
- `scenarios/`
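To sanity-check a bundle root by hand, the layout above is enough; the sketch below reads the manifest without assuming anything about its internal schema (the bundle path is illustrative):

```python
import json
from pathlib import Path

bundle = Path("artifacts/generated/example_export/benchmark_scaling_directions")

# Print the manifest's top-level keys without assuming a schema.
manifest = json.loads((bundle / "benchmark_manifest.json").read_text())
print("manifest keys:", sorted(manifest))

# Confirm the rest of the documented layout is present.
for entry in ("run_benchmark.py", "README.md", "lcn", "metadata", "reports", "scenarios"):
    print(f"{entry:20s}", "ok" if (bundle / entry).exists() else "MISSING")
```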
For downstream codes/LCN experiments, the main entry points are:
- `run_benchmark.py` inside each exported or checked-in bundle,
- `scripts/run_exported_benchmark.py` for explicit replay of one bundle root,
- `scripts/run_benchmark_campaign.py` for batch replay over many exported bundles,
- `scripts/run_lcn_experiment.sh` for the sibling `codes/LCN` experiment runner.
Current defaults:
- checked-in bundles recommend `ariel` by default,
- explicit overrides such as `--algorithms ibp ccte` are still supported,
- local validation should rely on `validate-benchmark-export`, not only on downstream runs.
Useful checks:
```bash
uv run lcn-chain-graph validate-benchmark-export artifacts/generated/example_export/benchmark_scaling_directions
uv run lcn-chain-graph check-lcn-compatibility artifacts/generated/example_export/benchmark_scaling_directions --lcn-source-root ../codes/LCN
```

The real downstream replay path expects a sibling `codes/LCN` checkout and `ipopt` available on `PATH`.
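Those two environment requirements can be checked up front; a small preflight sketch (not shipped with the repository):

```python
import shutil
from pathlib import Path

# Preflight for downstream replay: a sibling codes/LCN checkout and
# an ipopt binary discoverable on PATH.
lcn_root = Path("../codes/LCN")  # adjust to your checkout location
if not lcn_root.is_dir():
    raise SystemExit(f"missing sibling checkout: {lcn_root.resolve()}")
if shutil.which("ipopt") is None:
    raise SystemExit("ipopt not found on PATH")
print("replay prerequisites look OK")
```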
Generate a random chain graph and scaffold an .lcn file:
```bash
uv run lcn-chain-graph generate --num-nodes 8 --seed 7 --output-lcn examples/generated_demo.lcn
```

Build a decomposition skeleton from an existing `.lcn` file:
```bash
uv run lcn-chain-graph build-skeleton examples/demo_chain.lcn
```

Run the experimental execution scaffold:
```bash
uv run lcn-chain-graph experimental-execution examples/demo_chain.lcn --hook truth-table
```

Use the optional Python-callable execution hook:
```bash
uv run lcn-chain-graph experimental-execution examples/demo_chain.lcn --hook python-callable --callable-path tests.fixtures.execution_adapters:counting_hook --hook-parameter note=demo
```
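The hook protocol itself is defined by the repository; purely as an illustration, a callable addressed via the `module:function` path format might look like the following, where treating the `--hook-parameter` key=value pairs as keyword arguments is our assumption rather than a documented contract:

```python
# Hypothetical shape of a python-callable execution hook; the real
# counting_hook in tests.fixtures.execution_adapters may differ.
# Parameters passed via --hook-parameter (e.g. note=demo) are assumed
# to arrive as keyword arguments.
def counting_hook(**parameters):
    counting_hook.calls = getattr(counting_hook, "calls", 0) + 1
    print(f"hook call #{counting_hook.calls}, parameters={parameters}")
```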
Structural invariants and design notes (the first three are checked mechanically in the sketch after this list):

- Chain components are the connected components of the undirected subgraph.
- Directed edges may only connect different undirected components.
- The component-level directed graph must be acyclic.
- Generated LCNs separate atom priors, intra-component relations, and inter-component conditional links.
- The scaling library supports both bounded-width and width-growing benchmark families.
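The first three rules can be checked mechanically on any mixed graph. A minimal, self-contained sketch using `networkx` (an independent illustration, not the repository's implementation):

```python
import networkx as nx

def is_chain_graph(nodes, undirected_edges, directed_edges):
    """Check the three chain-graph rules on a mixed graph."""
    ug = nx.Graph()
    ug.add_nodes_from(nodes)
    ug.add_edges_from(undirected_edges)

    # Rule 1: chain components = connected components of the undirected subgraph.
    component_of = {}
    for idx, component in enumerate(nx.connected_components(ug)):
        for node in component:
            component_of[node] = idx

    # Rule 2: directed edges may only connect different components.
    component_dag = nx.DiGraph()
    component_dag.add_nodes_from(component_of.values())
    for u, v in directed_edges:
        if component_of[u] == component_of[v]:
            return False
        component_dag.add_edge(component_of[u], component_of[v])

    # Rule 3: the component-level directed graph must be acyclic.
    return nx.is_directed_acyclic_graph(component_dag)

# a -- b form one chain component, c another; both directed edges cross components.
print(is_chain_graph("abc", [("a", "b")], [("a", "c"), ("b", "c")]))  # True
print(is_chain_graph("ab", [("a", "b")], [("a", "b")]))               # False: same component
```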