```julia-repl
julia> ] instantiate
```
If you want to make previously executed runs available in the local database, run `git restore --source evaluation_results_branch codes/results.sqlite`. If you want to start from scratch, simply skip this command.
Add the codes and decoders of interest to `_0.helpers_and_metadata/code_metadata.jl`, then run:
```julia-repl
julia> include("wiki_database_passes.jl")
julia> run_evaluations(CodeMetadata.code_metadata)
```
If you want to run only some codes, e.g. the code family `CodeType`, you can use `run_evaluations(code_metadata; include=[CodeType])`.
Optionally, if you want to specify a location (directory) for the generated database, you can use `run_evaluations(code_metadata; database_path="path/to/database")`.
If you have multiple evaluation runs that have generated separate databases, you can merge them into a single database using the `_0.helpers_and_metadata/db_join_helper.jl` script:
```julia
include("_0.helpers_and_metadata/db_join_helper.jl")
using .DBJoinHelper: join_results
join_results("path/to/results"; output_path="path/to/merged_results.sqlite")
```

Then, to prepare the static website sources, run:

```julia-repl
julia> include("wiki_database_passes.jl")
julia> prep_everything(CodeMetadata.code_metadata)
```
Run the Franklin static website generator:
```julia-repl
julia> using Franklin
julia> Franklin.serve()
```
This is a Franklin.jl static website, together with the following extra passes for generating the source of the static pages:
- `_0.helpers_and_metadata` - the base metadata about codes and decoders, as well as some low-level helper functions for working with that metadata and the sqlite database of results
- `_1.code_benchmark_pass` - for running benchmarks and storing the performance data to the database
- `_2.markdown_generation_pass` - for reading the database and creating figures and raw markdown pages
- `wiki_database_passes.jl` - all of the functionality necessary for running the aforementioned capabilities
- `codes` - where the generated static website sources are kept
- `database` - where the database of benchmarks is stored (in sqlite as the master format, and a few other formats for convenient downloading)
- if `ENV["ECCBENCHWIKI_QUICKCHECK"] != ""`, we will run very few samples per code, useful to check for overall correctness
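The quickcheck switch described above can be exercised from the shell before starting a run. The following is an illustrative sketch (the `mode` variable is purely for demonstration; any non-empty value of the environment variable is what triggers quickcheck mode):

```shell
# Any non-empty value enables quickcheck mode (very few samples per code);
# leaving the variable unset or empty results in a full run.
export ECCBENCHWIKI_QUICKCHECK=1

if [ -n "${ECCBENCHWIKI_QUICKCHECK}" ]; then
  mode="quickcheck"
else
  mode="full"
fi
echo "selected mode: $mode"
```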
Running the benchmarks on a Slurm cluster can be more efficient if you want to parallelize the execution across multiple nodes. Here are some tips for setting up and running the benchmarks on a Slurm cluster:
- Set up the Julia environment once to avoid repeatedly setting up dependencies. You can do this by setting the `JULIA_DEPOT_PATH` environment variable to a directory on your HPC cluster where you want to store Julia packages. Optionally, set `JULIA_NUM_PRECOMPILE_TASKS` and `JULIA_PKG_PRECOMPILE_AUTO` to avoid precompilation overhead. Optionally, set `JULIA_CPU_TARGET` to a value that covers your target architecture. For example, you can add the following lines to your startup script:

  ```shell
  export JULIA_DEPOT_PATH="/path/to/your/julia/depot"
  export JULIA_NUM_PRECOMPILE_TASKS=1
  export JULIA_PKG_PRECOMPILE_AUTO=0
  export JULIA_CPU_TARGET="generic;skylake-avx512,clone_all;znver2,clone_all"
  ```
- You might use `SlurmClusterManager.jl` to manage the execution of your benchmarks across the cluster. It is already included in the project dependencies.
- You can set up the project environment, instantiate, and precompile before running the benchmarks at a large scale. This avoids the overhead of setting up the environment and precompiling for each job submission. For example, you can submit the following script as a Slurm job:

  ```shell
  #!/bin/bash
  #SBATCH -J julia_warmup
  #SBATCH -N 1
  #SBATCH -n 1
  #SBATCH -t 01:30:00
  source /path/to/your/startup_script.sh
  cd /path/to/qECCBenchWiki
  julia --project=. -e 'using Pkg; Pkg.instantiate(); Pkg.precompile()'
  ```
- You can create a Julia script that runs the benchmarks for a specific set of codes and decoders and submit it as a Slurm job. In your script, call `run_evaluations` with the appropriate parameters to specify the output directory, and set `worker_db` to `true` so the results are written to the database from each worker process. For example: `run_evaluations(CodeMetadata.code_metadata; output_path="path/to/results", worker_db=true)`
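A submission script for such a benchmark run might look like the following sketch. The script name `run_benchmarks.jl`, the paths, and the resource limits are placeholders, not part of the project; adjust them to your cluster:

```shell
#!/bin/bash
#SBATCH -J run_benchmarks
#SBATCH -N 4
#SBATCH -n 16
#SBATCH -t 24:00:00
source /path/to/your/startup_script.sh
cd /path/to/qECCBenchWiki
# run_benchmarks.jl is assumed to contain the run_evaluations call
# shown above, writing results to its output_path with worker_db=true.
julia --project=. run_benchmarks.jl
```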
- To merge your results from multiple runs, you can use the `join_results` function from the `DBJoinHelper` module. This function takes a directory containing multiple SQLite databases and merges them into a single database. For example:

  ```julia
  include("_0.helpers_and_metadata/db_join_helper.jl")
  using .DBJoinHelper: join_results
  join_results("path/to/results"; output_path="path/to/merged_results.sqlite")
  ```