Observation
A full workspace clean build (`cargo clean && cargo build --workspace`) takes ~67 s wall clock, and the dominant cost is the `highs-sys` build script compiling the bundled HiGHS C++ source tree from scratch. Subjectively, the laptop fans spin up and the machine gets noticeably hot for ~30 s during every cold build.
Investigation
What `highs-sys` actually does
Inspecting `~/.cargo/registry/src/.../highs-sys-1.14.1/`:
- Ships the full HiGHS source under `HiGHS/` (18 MB total); `HiGHS/highs/` alone holds 214 `.cpp` files, 278 headers, 7.7 MB of C++
- `build.rs` calls the `cmake` crate, which configures and builds all of it with `clang++` (`BUILD_SHARED_LIBS=OFF`, `FAST_BUILD=ON`)
- Then runs `bindgen` against the generated headers for the FFI layer
How it ends up enabled
Feature chain: `problemreductions` `default = ["ilp-highs"]` → `good_lp/highs` → `highs` crate → `highs-sys` with its own `default = ["build", "highs_release"]`. Nothing along the chain opts out, so `build.rs` compiles HiGHS from source on every cold build.
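In Cargo.toml terms, the chain looks roughly like this (only the feature names come from the chain above; the dependency declarations and version are assumptions for illustration):

```toml
# problemreductions/Cargo.toml (sketch -- feature names from the chain
# above; exact dependency declarations are assumed)
[features]
default = ["ilp-highs"]                       # nothing in the workspace opts out
ilp-highs = ["dep:good_lp", "good_lp/highs"]

[dependencies]
good_lp = { version = "*", optional = true }
# good_lp/highs -> highs -> highs-sys, whose own
# default = ["build", "highs_release"] compiles the bundled HiGHS from source
```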
Cost breakdown (from `cargo build --timings`)
Clean workspace build, 67 s wall clock, ~217 CPU-seconds total:
| Unit | Time | Notes |
| --- | --- | --- |
| `highs-sys` run-custom-build | 38.4 s | cmake + clang++ on 214 C++ files |
| `bindgen` run-custom-build + codegen | ~7 s | FFI bindings for HiGHS headers |
| `clang-sys` run-custom-build | 3.0 s | libclang discovery for bindgen |
| `problemreductions` codegen | 15.0 s | Main crate's own Rust compile |
| All other deps combined | ~4 s | serde / clap / thiserror / ... all < 4 s each |
≈ 48 s of the 67 s critical path is the HiGHS toolchain, and ≈ 38 s of that is raw C++ compilation.
Comparison without ILP
Same clean build with `--no-default-features --features example-db` (fails on 5 ungated ILP references, but the dependency graph completes):
- Total CPU-seconds across 84 units: 65 s (vs. ~217 s with ILP on — 3.3× less work)
- Largest single unit: 4.3 s — no outlier at all, workload is evenly distributed
- Estimated wall clock once the code is cfg-gated: ~20–25 s
Why it heats the laptop
C++ compilation is a heavily CPU-bound workload (clang frontend + LLVM template instantiation + codegen). Cargo parallelizes the build script across all cores, so during the HiGHS build there are ~8–10 parallel `clang++` processes chewing through 214 translation units for ~30 seconds straight, pegging every core at 100% and maxing out thermal dissipation. Rust's own compilation is less thermally punishing because its parallelism is coarser (per crate) and it has no comparable template-instantiation explosion.
Why incremental compilation doesn't rescue this
`build.rs` is treated by cargo as an opaque external program. Cargo only tracks whether it should rerun (via `rerun-if-changed` / `rerun-if-env-changed` plus a fingerprint) and cannot see what it produced beyond its `$OUT_DIR`. The `cmake` crate's internal build directory does live in `$OUT_DIR` and is internally incremental, so subsequent `cargo build` runs that leave `$OUT_DIR` intact are near-instant. But the full 38 s is paid whenever the cache is lost:
- `cargo clean` wipes `target/` → full rebuild
- Profile switch (debug ↔ release) uses a different `$OUT_DIR` → full rebuild per profile
- Feature-flag changes can invalidate the fingerprint and potentially trigger a cmake reconfigure → full rebuild
- New git worktree / fresh clone / CI without cache → full rebuild
- `rustup update` / toolchain change → fingerprint mismatch → rerun
In practice these events happen often enough that the 38 s cost is felt "every time".
The fingerprint vs. the actual cache
- Rust incremental (the rustc query cache in `target/incremental/`) is semantics-aware: it can say "this function didn't change, skip codegen".
- `build.rs` incremental is convention-driven: the script author must manually print `rerun-if-*` directives and stash state in `$OUT_DIR`. There is no middle ground: once cargo decides the script must rerun with a cold `$OUT_DIR`, you pay the full cost.
Scope
This issue only documents the finding; it does not prescribe a fix. Remediation options (a system-library discovery path, feature-gating ILP out of the default build, a vendored build cache, etc.) should be discussed separately once the team agrees on the constraints.
Files / paths referenced
- `Cargo.toml` → `default = ["ilp-highs"]`
- `~/.cargo/registry/src/index.crates.io-*/highs-sys-1.14.1/build.rs`
- `~/.cargo/registry/src/index.crates.io-*/highs-sys-1.14.1/HiGHS/highs/` (214 `.cpp`, 278 `.h`)
- `target/cargo-timings/cargo-timing.html` (timing report used for the numbers above)
Related