lcn_chain_graph

lcn_chain_graph is a chain-graph-aware tooling repository for Logical Credal Networks. It focuses on benchmark generation, structure recovery, decomposition, export, and solver-facing benchmark packaging.

What This Repository Does

Use this repository to:

  • generate benchmark families and benchmark suites,
  • parse an .lcn file and recover its derived mixed graph,
  • check whether the derived structure is a chain graph,
  • build decomposition and inference skeleton summaries,
  • export benchmark bundles that can be replayed against codes/LCN.

This repository is centered on structure, benchmark preparation, and execution plumbing. It is not a standalone credal inference solver.

Install

uv venv --python 3.12 .venv
uv sync
uv run pytest

The project targets Python 3.12 and expects a local .venv managed by uv.

Start Here

Common Commands

Generate one family instance:

uv run lcn-chain-graph generate-family alarm_response --output-dir artifacts/generated/example_demo --seed 44

List the registered benchmark families:

uv run lcn-chain-graph list-families

Inspect and verify an existing .lcn file:

uv run lcn-chain-graph inspect-lcn examples/demo_chain.lcn
uv run lcn-chain-graph structural-report examples/demo_chain.lcn
uv run lcn-chain-graph verify-problem examples/demo_chain.lcn

Export a suite into a codes/LCN-friendly bundle:

uv run lcn-chain-graph export-lcn-benchmark --suite-id benchmark_scaling_directions --output-dir artifacts/generated/example_export

Run a checked-in shared benchmark bundle directly:

python benchmarks/main_release/benchmark_scaling_directions/run_benchmark.py --lcn-source-root /path/to/LCN

Run the downstream wrapper on the inner experiment mirror:

bash scripts/run_lcn_experiment.sh benchmarks/main_release/benchmark_scaling_directions/benchmarks/benchmark_scaling_directions --benchmark benchmark_scaling_directions_repo

Run a managed replay over an export root:

uv run python scripts/run_benchmark_campaign.py artifacts/generated/my_export_root --lcn-source-root ../codes/LCN --time-limit 180

Benchmark Library

Checked-in benchmark bundles live under three tiers:

Tier                                 Purpose
benchmarks/main_release/             Default paper-facing benchmark library
benchmarks/supplemental_reference/   Compact and reference-oriented bundles
benchmarks/smoke_debug/              Fast smoke tests and downstream debugging

The curated sets currently include:

  • benchmark_scaling_directions
  • paper_scaling_directions
  • maintenance_scaling
  • paper_ready_small
  • supreme_court_judicial_power_variants
  • supreme_court_judicial_power_smoke
  • supreme_court_judicial_power_xmrf_smoke
  • supreme_court_judicial_power_source_variants
  • supreme_court_judicial_power_release_comparison
  • supreme_court_judicial_power_credal_source_comparison
  • supreme_court_judicial_power_dual_source_credal

The Supreme Court comparison track is split into three explicit categories:

  • source_variants: aligned single-source comparison,
  • release_comparison: release-policy comparison,
  • credal_source_comparison: source comparison over credal releases.

Each checked-in benchmark bundle contains:

  • benchmark_manifest.json
  • run_benchmark.py
  • README.md
  • lcn/
  • benchmarks/<set-name>/
  • metadata/
  • reports/
  • scenarios/
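
For orientation, here is a minimal Python sketch (not a repository script) for peeking at one checked-in bundle. The bundle path and the assumption that lcn/ holds flat *.lcn files follow from the listing above; the manifest's exact schema is not documented here.

import json
from pathlib import Path

# Point at one checked-in bundle; any tier/set combination listed above works.
bundle = Path("benchmarks/main_release/benchmark_scaling_directions")

# The manifest schema is not documented here, so just list its top-level keys
# (assuming the manifest is a JSON object).
manifest = json.loads((bundle / "benchmark_manifest.json").read_text())
print(sorted(manifest))

# List the bundled .lcn problem files (assumes lcn/ holds flat *.lcn files).
print(sorted(p.name for p in (bundle / "lcn").glob("*.lcn")))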

Downstream Runs

For downstream codes/LCN experiments, the main entry points are:

  • run_benchmark.py inside each exported or checked-in bundle,
  • scripts/run_exported_benchmark.py for explicit replay of one bundle root,
  • scripts/run_benchmark_campaign.py for batch replay over many exported bundles,
  • scripts/run_lcn_experiment.sh for the sibling codes/LCN experiment runner.

Current defaults:

  • checked-in bundles recommend ariel by default,
  • explicit overrides such as --algorithms ibp ccte are still supported,
  • local validation should rely on validate-benchmark-export, not only downstream runs.

Useful checks:

uv run lcn-chain-graph validate-benchmark-export artifacts/generated/example_export/benchmark_scaling_directions
uv run lcn-chain-graph check-lcn-compatibility artifacts/generated/example_export/benchmark_scaling_directions --lcn-source-root ../codes/LCN

The real downstream replay path expects a sibling codes/LCN checkout and ipopt available on PATH.
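
A quick optional pre-flight check for those two prerequisites, written as a hypothetical Python snippet rather than anything shipped in the repository; the ../codes/LCN location is an assumption and can be replaced by whatever you pass via --lcn-source-root.

import shutil
from pathlib import Path

# ipopt must be discoverable on PATH for the downstream solver to run.
if shutil.which("ipopt") is None:
    raise SystemExit("ipopt not found on PATH")

# The sibling checkout location is an assumption; adjust to your layout.
if not Path("../codes/LCN").is_dir():
    raise SystemExit("expected a codes/LCN checkout next to this repository")

print("downstream prerequisites look satisfied")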

Quick Start

Generate a random chain graph and scaffold an .lcn file:

uv run lcn-chain-graph generate --num-nodes 8 --seed 7 --output-lcn examples/generated_demo.lcn

Build a decomposition skeleton from an existing .lcn file:

uv run lcn-chain-graph build-skeleton examples/demo_chain.lcn

Run the experimental execution scaffold:

uv run lcn-chain-graph experimental-execution examples/demo_chain.lcn --hook truth-table

Use the optional Python-callable execution hook:

uv run lcn-chain-graph experimental-execution examples/demo_chain.lcn --hook python-callable --callable-path tests.fixtures.execution_adapters:counting_hook --hook-parameter note=demo

Modeling Notes

  • Chain components are the connected components of the undirected subgraph.
  • Directed edges may only connect different undirected components.
  • The component-level directed graph must be acyclic.
  • Generated LCNs separate atom priors, intra-component relations, and inter-component conditional links.
  • The scaling library supports both bounded-width and width-growing benchmark families.
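
The first three notes are the standard chain-graph conditions. A minimal sketch of that check, using networkx and not the repository's own verifier, could look like this:

import networkx as nx

def is_chain_graph(nodes, undirected_edges, directed_edges):
    # Chain components: connected components of the undirected subgraph.
    undirected = nx.Graph()
    undirected.add_nodes_from(nodes)
    undirected.add_edges_from(undirected_edges)
    component_of = {}
    for index, component in enumerate(nx.connected_components(undirected)):
        for node in component:
            component_of[node] = index

    # Directed edges may only connect different undirected components.
    component_dag = nx.DiGraph()
    component_dag.add_nodes_from(component_of.values())
    for source, target in directed_edges:
        if component_of[source] == component_of[target]:
            return False
        component_dag.add_edge(component_of[source], component_of[target])

    # The component-level directed graph must be acyclic.
    return nx.is_directed_acyclic_graph(component_dag)

# Example: A - B undirected, plus a directed edge from that component to C.
print(is_chain_graph(["A", "B", "C"], [("A", "B")], [("B", "C")]))  # True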
