firelzrd/nap
Nap (Neural Adaptive Predictor) CPUIdle Governor

A Linux kernel CPUIdle governor that uses an MLP-based neural network to learn the optimal idle state for each online CPU.


Overview

Traditional CPUIdle governors (ladder, menu, teo) rely on fixed heuristics to predict how long a CPU will sleep and select an idle state accordingly. These heuristics are effective for common patterns but struggle with irregular or shifting workloads.

Nap (Neural Adaptive Predictor) replaces the heuristic with a two-expert MoE regression model that runs entirely in-kernel. Each CPU maintains two 16-16-1 MLPs — one specializing in short sleeps, the other in deep sleeps — and selects the appropriate expert based on the predicted sleep length. The networks learn online via deferred backpropagation with an asymmetric overshoot loss, converging the overshoot probability to a configurable target (default 8%). SIMD-accelerated forward and backward passes (SSE2 / AVX2+FMA / AVX-512) keep inference overhead negligible. A POLL short-circuit fast path bypasses NN inference entirely when the predicted sleep is too short for any C-state.

How It Works

Neural Network Architecture

Each expert is a 16-16-1 multi-layer perceptron:

Layer    Size         Activation
Input    16 features  -
Hidden   16 neurons   ReLU
Output   1 neuron     Linear

Parameters per expert: 289 (16x16 input-to-hidden weights + 16 hidden biases + 16 output weights + 1 output bias). Total parameters: 578 across the two experts; active parameters per inference: 289.

The output is a scalar in log2 space representing the predicted sleep duration in nanoseconds. Idle state selection is performed by comparing this value against precomputed log2 cost thresholds (target_residency_ns only; exit latency is a wakeup cost, not a factor in residency profitability) for each state, choosing the deepest state whose cost does not exceed the prediction.
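The threshold comparison described above can be sketched as follows. This is an illustrative model, not the kernel code: the state table values and function names are hypothetical.

```python
import math

# Hypothetical C-state table: target_residency_ns per state (state 0 = POLL).
TARGET_RESIDENCY_NS = [0, 2_000, 100_000, 1_000_000]

# Precomputed log2 cost thresholds (target residency only, per the text).
LOG2_THRESHOLDS = [math.log2(r) if r > 0 else float("-inf")
                   for r in TARGET_RESIDENCY_NS]

def select_state(pred_log2_ns: float) -> int:
    """Pick the deepest state whose log2 cost does not exceed the prediction."""
    best = 0
    for state, cost in enumerate(LOG2_THRESHOLDS):
        if cost <= pred_log2_ns:
            best = state
    return best
```

For example, a predicted sleep of ~50 µs (log2(50 000) ≈ 15.6) clears the 2 µs state but not the 100 µs state, so state 1 is chosen.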

POLL Short-Circuit Fast Path

Before invoking the NN, nap_select() checks whether the predicted sleep length is shorter than the shallowest valid C-state's target residency. If so, POLL is returned immediately — no feature extraction, no inference, no history update. This eliminates NN overhead for very short idles.

  • The shallowest valid C-state is cached per-CPU and invalidated when the PM QoS latency request changes or after NAP_MIN_STATE_REFRESH_JIFFIES (1 second).
  • poll_limit_ns is set to sleep_length + 1 µs margin, clamped between 1 µs and the shallowest C-state's target residency.
  • nap_reflect() skips history, learning, and all NN-related bookkeeping for short-circuited events, updating only the aggregate residency statistic. This prevents noisy POLL-duration samples from contaminating the NN's training distribution.
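The fast-path decision and the poll_limit_ns clamp can be sketched like this; the function and parameter names are illustrative, not the kernel's.

```python
US = 1_000  # ns per microsecond

def poll_short_circuit(sleep_length_ns: int, min_state_residency_ns: int):
    """Sketch of the POLL fast path: if the predicted sleep is shorter than
    the shallowest valid C-state's target residency, take POLL immediately."""
    if sleep_length_ns >= min_state_residency_ns:
        return False, 0
    # poll_limit_ns = sleep_length + 1 us margin, clamped between 1 us
    # and the shallowest C-state's target residency.
    limit = sleep_length_ns + 1 * US
    limit = max(1 * US, min(limit, min_state_residency_ns))
    return True, limit
```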

Mixture of Experts

Two experts specialize on different workload regimes:

  • Expert 0 — short sleep predictions (log2(sleep_length) < threshold)
  • Expert 1 — deep sleep predictions (log2(sleep_length) >= threshold)

The expert boundary is tied to the tick period (TICK_NSEC): the first C-state whose target residency exceeds one jiffy marks the start of the "long" regime. This separates tick-bound idles (where measured residency is dominated by the next tick, producing noisy gradients) from nohz idles (where residency reflects the workload's true idle duration). If all states exceed one jiffy, the boundary is placed just below C1 so the short expert remains routable but unused. On each idle entry, feature[0] (log2 of the next timer event) is compared against this threshold to select the active expert. Only the selected expert runs the forward pass and receives weight updates.
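The boundary computation and routing rule above can be sketched as follows, assuming a hypothetical 250 Hz tick and state table; the fallback when no state exceeds a jiffy is an illustrative choice, not specified by the text.

```python
import math

TICK_NSEC = 4_000_000  # illustrative: 250 Hz tick = 4 ms jiffy

def expert_threshold(target_residencies_ns):
    """log2 boundary between the short and long experts."""
    if target_residencies_ns[0] > TICK_NSEC:
        # all states exceed one jiffy: place the boundary just below C1,
        # so the short expert stays routable but unused
        return math.log2(target_residencies_ns[0]) - 1e-6
    for r in target_residencies_ns:
        if r > TICK_NSEC:          # first state beyond one jiffy
            return math.log2(r)    # marks the start of the "long" regime
    return float("inf")            # illustrative fallback: no state qualifies

def route(feature0_log2_sleep: float, threshold: float) -> int:
    """Expert 0 handles short sleeps, expert 1 deep sleeps."""
    return 0 if feature0_log2_sleep < threshold else 1
```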

Feature Groups

The 16 input features are organized into four groups:

  • Time prediction (4): log2 sleep length, log2 last residency, ring buffer average & standard deviation
  • Pattern analysis (4): log history min/max, trend, short idle ratio
  • State feedback (4): above-target ratio, intercept rate, prediction error, log2 busy time
  • External signals (4): tick stopped flag, I/O wait count, latency requirement ratio, log2 IRQ rate

Online Learning

After each idle exit, the governor compares the selected state against the post-hoc ideal state derived from actual residency. Learning is governed by a dual gate: it fires only when both the reflect counter reaches learn_interval (default: 4) and at least learn_jiffies_min jiffies (default: 1) have elapsed since the last learning step. The time gate prevents sustained weight churn on workloads with very rapid idle bursts; setting it to 0 restores the original counter-only behavior.
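The dual gate can be modeled as a small state machine; this is a sketch with illustrative names, not the kernel implementation.

```python
class LearnGate:
    """Learning fires only when BOTH the reflect counter reaches
    learn_interval AND learn_jiffies_min jiffies have elapsed."""
    def __init__(self, learn_interval=4, learn_jiffies_min=1):
        self.learn_interval = learn_interval
        self.learn_jiffies_min = learn_jiffies_min
        self.reflects = 0
        self.last_learn_jiffies = 0

    def should_learn(self, now_jiffies: int) -> bool:
        self.reflects += 1
        if self.reflects < self.learn_interval:
            return False
        # time gate: 0 disables it, restoring counter-only behavior
        if now_jiffies - self.last_learn_jiffies < self.learn_jiffies_min:
            return False
        self.reflects = 0
        self.last_learn_jiffies = now_jiffies
        return True
```

With the defaults, four reflects in the same jiffy do not trigger learning; the time gate holds it back until the next jiffy.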

The loss function is a direct overshoot loss with asymmetric learning rates:

  • Overshoot (selected state too deep for actual residency): gradient pushes the output down with learning rate base_lr * (1 - alpha)
  • No overshoot: gradient pushes the output up with learning rate base_lr * alpha

where alpha is the target overshoot percentile (default: 0.08). At equilibrium, P(overshoot) converges to alpha. Gradients are element-wise clipped to [-max_grad_norm, +max_grad_norm].
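The asymmetric update can be written out directly. At equilibrium the expected step is zero: P(overshoot) * (1 - alpha) = (1 - P(overshoot)) * alpha, which solves to P(overshoot) = alpha. A sketch with the documented defaults:

```python
def output_gradient(overshoot: bool, base_lr=0.001, alpha=0.08,
                    max_grad_norm=1.0) -> float:
    """Signed output-layer step of the asymmetric overshoot loss (sketch).
    Overshoot pushes the prediction down at base_lr*(1-alpha);
    otherwise it is pushed up at base_lr*alpha."""
    if overshoot:
        g = -base_lr * (1.0 - alpha)
    else:
        g = base_lr * alpha
    # element-wise clip to [-max_grad_norm, +max_grad_norm]
    return max(-max_grad_norm, min(g, max_grad_norm))
```

Checking the fixed point: with P(overshoot) = 0.08, the expected step 0.08 * (-0.00092) + 0.92 * (0.00008) is exactly zero.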

When the network output is clamped at the upper bound (prediction equals sleep length), non-overshoot gradients are suppressed to prevent unbounded weight growth in always-idle systems.

Weight Initialization

  • Hidden layer: Xavier uniform (deterministic PRNG, seed = 42)
  • Output layer: uniform [-0.01, 0.01]
  • All biases: zero
  • Neuron 0 pass-through: w_h1[0][0] = 1.0, w_out[0] = 1.0, all other inputs to neuron 0 zeroed

The pass-through initialization ensures the initial output approximates log2(sleep_length), providing sensible state selection before any learning occurs.
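The pass-through effect can be demonstrated with a toy forward pass. This sketch uses Python's PRNG as a stand-in for the kernel's deterministic generator, so the exact random weights differ; only the structure (Xavier hidden layer, small output layer, zero biases, neuron-0 identity route) follows the text.

```python
import math
import random

N_IN, N_HID = 16, 16

def init_passthrough(seed=42):
    rng = random.Random(seed)                 # stand-in deterministic PRNG
    bound = math.sqrt(6.0 / (N_IN + N_HID))   # Xavier uniform bound
    w_h = [[rng.uniform(-bound, bound) for _ in range(N_IN)]
           for _ in range(N_HID)]
    w_out = [rng.uniform(-0.01, 0.01) for _ in range(N_HID)]
    b_h, b_out = [0.0] * N_HID, 0.0
    # neuron-0 pass-through: identity route from feature 0 to the output
    w_h[0] = [0.0] * N_IN
    w_h[0][0] = 1.0
    w_out[0] = 1.0
    return w_h, b_h, w_out, b_out

def forward(x, w_h, b_h, w_out, b_out):
    hid = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
           for row, b in zip(w_h, b_h)]
    return sum(w * h for w, h in zip(w_out, hid)) + b_out
```

With feature 0 set to log2(sleep_length) and all other inputs zero, the output is that value plus only small noise from the near-zero output weights, so untrained state selection is already sensible.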

SIMD Dispatch

At governor enable time, Nap probes the CPU feature set and selects the fastest available implementation:

  1. AVX-512F (non-Intel only; Intel CPUs pay a heavy license-based frequency penalty under AVX-512)
  2. AVX2 + FMA
  3. SSE2 (baseline)

All FPU/SIMD code is compiled into separate translation units and wrapped in kernel_fpu_begin()/kernel_fpu_end() to prevent corruption of userspace FPU state.
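The probe order reduces to a simple fallback chain; the flag and vendor strings below mirror x86 conventions, but the function is an illustrative sketch, not the kernel's dispatch code.

```python
def pick_simd(flags: set, vendor: str) -> str:
    """Select the fastest available SIMD implementation (sketch)."""
    # AVX-512 is skipped on Intel because of the license-based downclocking
    if "avx512f" in flags and vendor != "GenuineIntel":
        return "avx512"
    if "avx2" in flags and "fma" in flags:
        return "avx2"
    return "sse2"  # x86-64 baseline, always available
```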

Tunables

Exposed under /sys/devices/system/cpu/nap/:

Tunable            Default       Description
version            (read-only)   Governor version
simd               (read-only)   Detected SIMD capability (sse2 / avx2 / avx512)
stats              (read-only)   Total selects, residency, overshoot count/rate, learn count
learning_rate      1             Learning rate in thousandths (1 = 0.001)
learn_interval     4             Backpropagation frequency (every N reflects)
learn_jiffies_min  1             Minimum jiffies between learning steps (0 = disabled)
overshoot_pctl     80            Target overshoot percentile in thousandths (80 = 8%)
reset_weights      (write-only)  Trigger weight reinitialization (all, or a cpulist, e.g. 0-3,5,7)
reset_stats        (write-only)  Reset statistics counters

Benchmark: Overshoot Rate

Overshoot rate measures how often a governor selects a C-state deeper than the actual residency justifies. Lower is better.

Governor  Overshoot Rate
nap       4.2%
teo       19.70%
menu      47.27%

Measured with this patch applied, on a moderately idle desktop (10-second sample per governor, Linux 6.18, AMD Zen):

for gov in menu teo nap; do
  echo $gov | sudo tee /sys/devices/system/cpu/cpuidle/current_governor
  sleep 10
  grep -R . /sys/devices/system/cpu/$gov/stats
done

Installation

Nap is delivered as a kernel patch. Apply it to the Linux 6.18.3 source tree and enable CONFIG_CPU_IDLE_GOV_NAP=y:

cd /path/to/linux
patch -p1 < /path/to/nap/patches/testing/0001-6.18.3-nap-v0.3.1.patch

Activate

Add the boot parameter:

cpuidle.governor=nap

Or switch at runtime:

echo nap | sudo tee /sys/devices/system/cpu/cpuidle/current_governor
