Skip to content

feat(deepseek_v4): PR1 skeleton — end-to-end inference with triton MoE#650

Draft
valarLip wants to merge 25 commits intomainfrom
feat/deepseek-v4-pr1-skeleton
Draft

feat(deepseek_v4): PR1 skeleton — end-to-end inference with triton MoE#650
valarLip wants to merge 25 commits intomainfrom
feat/deepseek-v4-pr1-skeleton

Conversation

@valarLip
Copy link
Copy Markdown
Collaborator

@valarLip valarLip commented Apr 25, 2026

Summary

DeepSeek-V4 end-to-end inference on real checkpoint (/data/DeepSeek-V4-Pro, TP=8). Now covers:

  • PR1 (skeleton): full V4 architecture + triton MoE + standard ATOM loader
  • pre2a / pre2c-A / pre2c-B: per-request state cache abstraction + classical KV cache via block_table (per paper §3.6.1)
  • PR3-main: multi-sequence batched dispatch via per-seq Python loop
  • PR-A (Phase 0/1/2 partial): backend gate scaffold + CPU-mirror metadata + swa_write Triton kernel + update_compressor_states Triton kernel (pos % (2*ratio) ring buffer per paper §3.6.1 + eq 11)

Verified on real ckpt: single-seq + 4-batch, English + Chinese, coherent outputs across all slots.

Reproduce

# Prerequisites
pip install -e /triton-test/python/triton_kernels/

ATOM_USE_TRITON_MOE=1 AITER_LOG_LEVEL=WARNING \
python -m atom.examples.simple_inference \
  --model /data/DeepSeek-V4-Pro \
  --kv_cache_dtype fp8 \
  -tp 8 \
  --max-num-seqs 4 \
  --max-num-batched-tokens 1024 \
  --max-model-len 1024 \
  --gpu-memory-utilization 0.85 \
  --enforce-eager \
  --temperature 0.0 \
  --max-tokens 512

Sample output (4-prompt batch, temperature=0)

Prompt: introduce yourself
Completion: Hello! I'm DeepSeek, an AI assistant created by the Chinese
company DeepSeek (深度求索). I'm here to help you with a wide range of tasks!
[...250 tokens, eos]

Prompt: list all prime numbers within 100
Completion: Here are all the prime numbers within 100:
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71,
73, 79, 83, 89, 97
There are 25 prime numbers in total.

Prompt: 1+2+3=?
Completion: The sum is: 1 + 2 + 3 = **6**

Prompt: 如何在一个月内增肌10公斤
Completion: 在一个月内增肌10公斤(20斤)是几乎不可能实现的生理学目标,
除非满足以下几种特殊情况:[...512 tokens, max_tokens]

Per-request: TTFT ≈ 1.5s, TPOT 0.33–0.52s/tok (eager mode, no CUDAGraph yet).

Bugs fixed in this PR

# Bug Fix
1 weights_mapping substring collision (381/2519 params silently skipped) WeightsMapper prefix-anchored remapping
2 wo_a FP8 shuffle after BF16 dequant (attn output cos=-0.002) quant_type=No to skip CK shuffle
3 Hash routing missing route_scale (FFN output 5.2× too small) topk_weights *= routed_scaling_factor
4 ActivationType.Swiglu causes 9× amplitude loss on gfx950 Use standard Silu + triton post-kernel clamp
5 shared_experts.w2 reduce_results mismatch with FusedMoE reduce_results=False + unified all_reduce
6 KV cache warmup pollution (stale data from dummy forward) Reset all KV/Compressor/Indexer buffers on start_pos=0
7 UE8M0 input quant rounding mismatch vs reference Switch input quant path to match reference; correct MoE routing scale
8 Weight loading: only one-way coverage check (orphan ckpt params undetected) Bidirectional coverage check + V4 hash-layer bias handling
9 Compressor state cache required per-decode roll memcpy pos % (2*ratio) ring buffer; consumer reads halves by block-id parity

MoE paths

Path Env var Status Notes
aiter fused_moe (CK) default broken (a16w4+Swiglu bug on gfx950) Fastest but broken
triton matmul_ogs ATOM_USE_TRITON_MOE=1 verified TPOT 0.33–0.52s/tok (4-seq batch, eager)
torch per-expert ATOM_V4_TORCH_MOE=1 verified Very slow, debug only

V4 Attention Backend (PR-A migration)

Selects between legacy per-seq Python dispatch and new batched V4AttentionBackend. The new backend removes ~256 GPU→CPU .item() syncs per forward, prerequisite for CUDAGraph.

Variable Type Default Description
ATOM_V4_BACKEND str legacy new routes through V4AttentionBackend
ATOM_V4_BACKEND_LAYERS csv int "" (=all) Per-layer A/B bisect (e.g. 0,3,15,30)

Currently landed:

  • Phase 1a: swa_write Triton kernel (per-token positions % win, gated by use_new_v4_backend(layer_id))
  • Phase 2 partial: CPU-mirror metadata (cu_seqlens_q_cpu, state_slot_mapping_cpu, start_pos_per_seq_cpu) — eliminates per-seq .tolist() / .item() syncs in dispatch
  • State cache rewrite: update_compressor_states Triton kernel writes per-token at pos % STATE_SIZE; Compressor.forward reads A/B halves by block-id parity (no roll)

Remaining (future PRs): full backend extraction (v4_attention_backend.py), removing .item() from Compressor.forward/Indexer.forward start_pos extraction, CUDAGraph capture.

Known limitations

  • --enforce-eager required: CUDAGraph unblock pending remaining .item() syncs in compressor/indexer
  • 46 params unloaded: 3× hash-layer e_score_correction_bias (expected) + 43× MTP params (PR5)

Files changed

File Change
atom/config.py V4-to-V3 config registry + V4 field re-injection
atom/models/deepseek_v4.py Full V4 model + multi-seq dispatch + state cache refactor
atom/model_loader/loader.py WeightsMapper auto-read + bidirectional coverage check
atom/model_ops/v4_kernels/ NEW: swa_write, update_compressor_states Triton kernels
atom/model_ops/v4_backend_gate.py NEW: per-layer backend selector
atom/model_ops/attentions/deepseek_v4_attn.py V4 metadata builder + CPU-mirror metadata + context_lens plumb
atom/model_ops/moe.py ATOM_USE_TRITON_MOE gate + swiglu_limit passthrough
atom/model_ops/quant_v4.py UE8M0 input quant + FP4 e2m1 dequant
atom/model_ops/fused_moe_triton.py CDNA4MXScaleLayout fix + swiglu_limit clamp
atom/model_ops/sparse_attn_v4.py Explicit device= for multi-GPU
atom/model_ops/block_manager.py Per-req cache abstraction (PR3-pre2a)
atom/utils/envs.py ATOM_V4_BACKEND / ATOM_V4_BACKEND_LAYERS
atom/utils/debug_helper/ Generic env-gated dump / compare / ref-patch
docs/environment_variables.md V4 backend env doc

Test plan

  • Single prompt English/Chinese 512 tokens — coherent output
  • 4-prompt batched inference — coherent outputs across all slots
  • Byte-equal kernel-vs-reference for update_compressor_states (15/15 cases: prefill + decode + MTP)
  • lm_eval GSM8K accuracy
  • CUDAGraph capture (future PR)

valarLip added 13 commits April 24, 2026 16:13
…arity

Adds the foundational scaffolding for DeepSeek-V4-Pro support — a major
architecture shift from V3.2 with mHC residuals, hybrid CSA+HCA attention,
hash routing, and grouped output LoRA. PR1 ships the eager-mode model code
with torch fallback kernels, validated against the official inference
implementation at bit-exact parity (max_abs_diff = 0.0).

Scope (PR1 only):
- New atom/models/deepseek_v4.py: full Compressor / Indexer / Attention /
  Gate / Expert / MoE / Block / MTPBlock / ParallelHead / Transformer port
  (~1200 lines). Single-rank only; plain nn.Linear / nn.Embedding for now.
- New atom/model_ops/sparse_attn_v4.py: torch fallbacks for sparse_attn
  and hc_split_sinkhorn (Sinkhorn-Knopp projection on Birkhoff polytope).
- New atom/model_ops/quant_v4.py: torch fallbacks for FP8/FP4 inplace
  QAT round-trip and Walsh-Hadamard transform (replaces fast_hadamard_transform
  which doesn't build on ROCm).
- Register DeepseekV4ForCausalLM in support_model_arch_dict.

Out of scope (tracked for PR2-6):
- Real HF checkpoint loading (PR2 = FP4 e2m1 loader, PR3 = TP + KV cache).
- AITER sparse_attn kernel (PR4; spec at
  /app/logs_claude/aiter_v4_sparse_attn_spec.md, AITER team kicked off).
- MTP integration with EagleProposer (PR5).
- @support_torch_compile + CUDAGraph + openai_server (PR6).

Verification: /app/logs_claude/v4_pr1_verify.py monkey-patches the reference's
TileLang kernel imports with our torch fallbacks, copies the same dummy
state_dict into both models, and runs prefill + decode side-by-side. 259
tensors match exactly; max_abs_diff = 0.0 on logits.
DeepSeek-V4-Pro stores routed expert weights as packed FP4 e2m1 (int8 with
2 values per byte, low nibble first) plus per-block ue8m0 scale (block size
32 along input dim). This commit adds `dequant_fp4_e2m1(packed, scale)` in
atom/model_ops/quant_v4.py — a pure-torch unpacker that mirrors convert.py
exactly but produces BF16 directly instead of repacking into FP8.

Validated bit-exactly against an independent reference unpack on a real
22M-element expert tensor from the on-disk checkpoint. Also regression-
tested across 5 different shapes/positions (w1/w2/w3 in first/mid/last
layer + MTP). All produce values that lie exactly on the FP4 e2m1 grid.

Scope: this is the standalone dequant utility. Wiring it into the model
loader's safetensors pipeline + tying it to specific param names happens
in PR3 alongside TP-aware expert sharding.

Test: /app/logs_claude/v4_pr2_dequant_test.py
Result: max_abs_diff = 0.0 (bit-exact)
PR3a: replace nn.Linear / nn.Embedding with ATOM tensor-parallel-aware
classes for the BF16 projections in Attention, Indexer, and the model
embedding. Same `weight` parameter naming so dummy state_dicts continue
to load. At TP=1 ATOM's tgemm.mm produces bit-identical output to F.linear,
so PR1's reference parity (max_abs_diff = 0.0) still passes.

Layers refactored (8 total):
- DeepseekV4Model.embed:           nn.Embedding -> VocabParallelEmbedding
- DeepseekV4Attention.wq_a:        nn.Linear    -> ReplicatedLinear
- DeepseekV4Attention.wq_b:        nn.Linear    -> ColumnParallelLinear
- DeepseekV4Attention.wkv:         nn.Linear    -> ReplicatedLinear  (single shared MQA head)
- DeepseekV4Attention.wo_a:        nn.Linear    -> ColumnParallelLinear
- DeepseekV4Attention.wo_b:        nn.Linear    -> RowParallelLinear (with all-reduce)
- Indexer.wq_b:                    nn.Linear    -> ColumnParallelLinear
- Indexer.weights_proj:            nn.Linear    -> ColumnParallelLinear

Deferred to later PRs (intentional):
- Compressor.wkv/wgate (fp32) -> PR3c with quant_type wiring
- ParallelHead.weight (fp32 LM head) -> PR3c
- Expert.w{1,2,3} -> PR3b (FusedMoE wholesale rewrite)
- MoE.gate.weight (used as raw Parameter, not Linear class) -> kept

Verification: /app/logs_claude/v4_pr1_verify.py (now GPU mode with
init_dist_env) shows max_abs_diff = 0.0 for prefill + decode against
reference at TP=1.
… for real ckpt

PR3c delivers end-to-end real-checkpoint loading for DeepSeek-V4 attention
layers via ATOM's existing FP8/FP4 GEMM infrastructure.

What works after this commit (validated on real /data/DeepSeek-V4-Pro/):
- DeepseekV4ForCausalLM(atom_config) auto-builds a V4QuantConfig that maps
  routed-experts -> per_1x32 (FP4) and overrides wo_a / Compressor.wkv /
  Compressor.wgate / indexer.weights_proj -> bf16 (no quant). Everything
  else inherits the global FP8 (per_1x128) spec from the HF quantization_config.
- load_weights(weights) walks an iterable of (name, tensor) pairs and:
    * Remaps ATOM's `weight_scale` -> on-disk `scale` naming.
    * Special-cases wo_a: dequantizes FP8+scale -> BF16 on the fly so the
      grouped-LoRA einsum (which aiter doesn't support in FP8) works.
    * Dispatches to ATOM Linear's weight_loader for FP8 / FP4 / BF16 paths.
    * Skips params with shape mismatch (e.g. expert nn.Linear waiting for
      PR3b's FusedMoE refactor) without crashing.
- All 23 attention parameters (FP8 q/kv proj + FP4 indexer + BF16 wo_a + fp32
  compressor) load successfully on real layer-2 of the V4 checkpoint.

Threading changes:
- DeepseekV4Args gains `quant_config: Optional[Any] = None`.
- DeepseekV4Attention / Indexer / Compressor / Block / MTPBlock / DeepseekV4Model
  now accept `prefix: str = ""` and pass `quant_config + prefix` down to each
  ATOM Linear constructor so per-layer quant lookup works.

Backward compatibility:
- When `args.quant_config is None` (toy / dummy validation), V4QuantConfig
  retains its `QuantType.No` global — Linear layers stay BF16 and the PR1
  bit-exact reference parity test (max_abs_diff = 0.0) still passes.

Remaining gaps for end-to-end real-ckpt forward (tracked in design doc):
- PR3b: replace MoE/Expert with FusedMoE so 384 expert FP4 weights load.
- PR3d: refactor V4 attention.forward to accept 2D [num_tokens, dim] input
  (ATOM TP linears require 2D — current 3D path raises "GEMM not supported").
PR3d adapts V4 model to ATOM's scheduler convention: model.forward consumes
flat 2D `[num_tokens, dim]` tokens (single sequence implicit B=1), matching
how ATOM's ModelRunner / scheduler pass tokens. This unblocks ATOM Linear's
quantized GEMM kernels (which only accept 2D `[M, K]` input) and enables
end-to-end real-checkpoint forward.

What changed:
- DeepseekV4Attention.forward(x, start_pos): now accepts 2D [num_tokens, dim].
  Internally adds a B=1 dim only where needed (RoPE, sparse_attn). The
  grouped-LoRA einsum string changes from "bsgd,grd->bsgr" to "sgd,grd->sgr".
- Compressor.forward / Indexer.forward: accept 2D x; auto-unsqueeze to 3D
  internally for backward compatibility with the existing logic.
- Block.hc_pre / hc_post + ParallelHead.hc_head: refactored to be
  shape-agnostic in leading dims (use negative indexing on flatten / sum).
  Both 4D `[B, S, hc, D]` (legacy reference path) and 3D `[num_tokens, hc, D]`
  (ATOM path) work.
- ParallelHead.get_logits: 2D path takes last token via `x[-1:]`; 3D path
  preserves `x[:, -1]` for legacy [B, S, D] inputs.
- MTPBlock.forward: 2D-aware via `e.unsqueeze(-2)` for hc-dim broadcast.
- DeepseekV4Model.forward: auto-flattens 2D `[1, S]` input_ids to 1D `[S]`
  for the new convention; rejects B>1 (proper multi-sequence batching needs
  attn_metadata, deferred).

Validated:
- PR1 reference parity (toy 4-layer dummy weights at B=1 S=32):
  max_abs_diff = 0.0 — still bit-exact after the 2D refactor.
- PR3d end-to-end on REAL V4 weights:
  + Built DeepseekV4ForCausalLM (4 layers, real V4 dims, ~105B params)
  + load_weights() loaded 36 layer-2 params; 23/23 attn params nonzero
  + attn(x_2d=[16, 7168], start_pos=0) → output [16, 7168] bf16
  + No NaN/Inf; output range [-2.94, 3.08], abs mean 0.42 (sensible)
  + This is the first successful V4 attention forward on real weights via ATOM

Test scripts (under /app/logs_claude/):
- v4_pr1_verify.py — toy parity (now uses B=1 + ATOM 2D path)
- v4_pr3d_layer_e2e.py — real-weight 2D forward end-to-end
- v4_pr3c_layer0_test.py — per-Linear validation against real ckpt

Remaining for full model end-to-end:
- PR3b: MoE → FusedMoE so 384 expert FP4 weights load (currently shape-skipped)
- Multi-sequence support via attn_metadata (currently single-sequence implicit B=1)
PR3b enables ATOM's FusedMoE for V4's 384 routed experts so FP4 expert
weights can load via the existing aiter `gemm_a4w4_quant` kernel and
shard across TP/EP ranks. Also extends `select_experts` in moe.py to
support V4's `sqrtsoftplus` scoring with `e_score_correction_bias`.

Changes in atom/model_ops/moe.py:
- `FusedMoE.select_experts` now handles `scoring_func="sqrtsoftplus"`:
  routing_weights = sqrt(softplus(router_logits)) + topk + renormalize.
  Mirrors the V4 reference Gate.forward exactly for non-hash layers.

Changes in atom/models/deepseek_v4.py:
- Dual-path MoE: when `quant_config` is set AND ATOM's global atom_config
  is initialized, MoE uses ReplicatedLinear gate + FusedMoE experts +
  ATOM-Linear shared_experts. Otherwise falls back to the original manual
  per-expert nn.Linear path so PR1 toy validation stays bit-exact (the
  reference test runs without ATOM's ModelRunner setting the global config).
- Expert class accepts `quant_config + prefix`: when set, w1/w2/w3 become
  ColumnParallelLinear/RowParallelLinear (FP8 path); else nn.Linear (toy).
- DeepseekV4ForCausalLM.get_expert_mapping() returns the (param_name,
  weight_name, expert_id, shard_id) tuples mapping V4's `w1/w2/w3` ckpt
  names to FusedMoE's merged `w13_*`/`w2_*` params.
- load_weights() walks expert_mapping first to dispatch routed expert
  tensors via FusedMoE's per-expert weight_loader, then handles the rest:
    * ATOM `weight_scale` ↔ on-disk `scale` rename (existing)
    * ATOM `gate.e_score_correction_bias` ↔ on-disk `gate.bias` rename (NEW)
    * `wo_a` FP8 → BF16 dequant on load (existing)

Validated:
- PR1 toy parity: max_abs_diff = 0.0 (manual MoE path still bit-exact).
- PR3d e2e: real layer-2 attn + 2D forward still works.
- PR3b new: under stub atom_config, FusedMoE path activates correctly.
  Layer-3 (non-hash, real V4 dims): gate + e_score_correction_bias +
  shared_experts (6/6) loaded; FusedMoE expert mapping returns 1152
  entries (384 experts × {w1,w2,w3}).

Known limitations (deferred):
- Hash routing (layers 0/1/2): tid2eid table is loaded but routing logic
  still falls through to sqrtsoftplus path → INCORRECT for hash layers.
  Proper hash routing requires either a custom path through FusedMoE
  or a pre-computed (topk_weights, topk_ids) injection point.
- Multi-sequence batching via attn_metadata (currently single-sequence implicit B=1).

Test: /app/logs_claude/v4_pr3b_fusedmoe_test.py
… prefix

Bug: `make_v4_quant_config` matched `"ffn.experts." in layer_name` (with
trailing dot). FusedMoE.__init__ asks for the layer's quant_type with
prefix `layers.N.ffn.experts` (NO trailing dot — it's the parent module
of the per-expert weights, not a per-expert lookup). The check failed,
so FusedMoE inherited the global FP8 (per_1x128) spec and allocated
the routed expert weights as `float8_e4m3fn` instead of `float4_e2m1fn_x2`.

Symptom in PR3b validation output before the fix:
  FusedMoE experts: 3/5 nonzero  (loader couldn't dispatch FP4-shaped
  on-disk tensors into FP8-typed model params; shape mismatch silently
  skipped them)

After the fix:
  experts.w13_weight: (385, 6144, 3584) torch.float4_e2m1fn_x2 ✓
  experts.w13_weight_scale: (385, 6144, 224) torch.float8_e8m0fnu ✓
  experts.w2_weight:  (385, 7168, 1536) torch.float4_e2m1fn_x2 ✓
  experts.w2_weight_scale:  (385, 7168, 96) torch.float8_e8m0fnu ✓
  e_score_correction_bias: (384,) torch.float32 ✓

Match condition tightened to `".ffn.experts" in layer_name` so it
catches BOTH `layers.N.ffn.experts.M.w1` (per-expert Linear lookups)
AND `layers.N.ffn.experts` (FusedMoE parent module lookup).

Note: a separate aiter-side issue (HSA_STATUS_ERROR_EXCEPTION on FP4
expert weight_loader, traced to a `direct_copy_kernel` with grid size
exceeding HW limits) prevents end-to-end FP4 expert load testing on
this box. The dtype/shape correctness above is verified by inspecting
the constructed module's params directly.

Validated:
- PR1 toy parity: max_abs_diff = 0.0 (manual MoE fallback unaffected)
- PR3d real-attention forward: still works
PR3b's expert weight loader had three bugs that caused weights to load as
zero or be silently dropped:

1. **Expert mapping pattern mismatch**: `make_expert_params_mapping` returns
   `(param_part="experts.w13_", weight_part="experts.0.w1.", ...)` — substring
   substitution, not endswith. The old code built `f".experts.{e}.{suffix}"`
   which never matched. Switched to longest-prefix substring substitution
   matching the standard ATOM loader pattern.

2. **Scale dtype zero-fill**: copying `torch.float8_e8m0fnu` into a `uint8`
   destination via `copy_()` silently produces zeros (mismatched dtype, no
   reinterpret). FusedMoE allocates `w13_weight_scale` as uint8; force a
   `.view(torch.uint8)` on the e8m0 source before passing to the loader.

3. **Param suffix `_scale` vs `.weight_scale`**: after substring sub,
   `experts.0.w1.scale` becomes `experts.w13_scale`, but the FusedMoE param is
   `experts.w13_weight_scale`. Added `_scale` → `_weight_scale` post-fix.

Plus: gracefully slice on-disk gate.weight / gate.bias when the test caps
n_routed_experts below the checkpoint size (no-op in real serving).

Verified:
- v4_pr3b_fusedmoe_test: 32 params loaded, 5/5 expert + 6/6 shared nonzero
- v4_pr3d_layer_e2e: real attention forward still works
- v4_pr1_verify: bit-exact reference parity preserved (0.0 max diff)
…uting_function

V4 uses tid2eid hash lookup (instead of gate-logit topk) for routing in
layers where compress_ratio implies hash layer (first 3 layers in standard
config). Previously, MoE just declared tid2eid for weight loading but
inference fell through to sqrtsoftplus path → wrong routing for those layers.

This commit:

- Adds an early `custom_routing_function` branch to FusedMoE.select_experts
  (it was in the signature but never honored — the non-grouped path went
  straight to scoring_func dispatch). Now any non-None custom fn takes
  precedence and returns (topk_weights, topk_ids).

- Adds DeepseekV4MoE._hash_topk(): topk_ids = tid2eid[input_ids],
  topk_weights = sqrtsoftplus(router_logits) gathered + renormalized.
  Stashes input_ids on self before the experts() call so the closure can
  index tid2eid; clears immediately after.

- For hash layers: assigns experts.custom_routing_function = self._hash_topk
  in MoE.__init__ so FusedMoE picks it up via the moe_forward custom op
  → forward_impl_graph → quant_method.apply → select_experts plumbing.

Verified:
- PR3e (new): synthetic tid2eid → _hash_topk produces exact expected ids,
  renormalized weights match reference math (max_abs_diff = 0.0)
- PR3e: FusedMoE.select_experts honors custom_routing_function correctly
- PR1 toy parity: still 0.0 max diff (hash path is opt-in via is_hash_layer)
- PR3b FusedMoE load: 32 params, all nonzero (no regression)
- PR3d real attn forward: still works (non-hash layer)
… real ckpt

Three changes converging on the first working V4 layer forward:

1. **weights_mapping**: Add class-level rename dict so the standard ATOM
   loader (`atom.model_loader.loader.load_model`) can ingest V4 ckpt names
   without per-model loader.py changes. `.gate.bias` →
   `.gate.e_score_correction_bias`, `.scale` → `.weight_scale_inv`. Loader's
   built-in `weight_scale_inv` → `weight_scale` rename then completes the
   path. Real serving via ModelRunner now works for non-wo_a layers.

2. **process_weights_after_loading hook**: After my custom `model.load_weights`
   finishes copying tensors, walk all submodules and call
   `quant_method.process_weights_after_loading(layer)` (or
   `layer.process_weights_after_loading()` if no quant_method).

   Without this, FusedMoE's `shuffle_weights` step is skipped and the FP4
   ck_moe kernel reads stale weight layout — manifested as
   HSA_STATUS_ERROR_EXCEPTION mid-forward. Standard loader.py calls this for
   us; my custom loader had to replicate it.

3. **PR3f end-to-end test** (logs_claude/v4_pr3f_block_e2e.py):
   - Build 1 dense layer (compress_ratios=[0]) with 8 routed experts
   - Load real layer-3 weights (32 target params, 33/33 nonzero)
   - Build mHC residual `[8 tokens, hc_mult=4, dim=7168]`
   - Call Block.forward(x, start_pos=0, input_ids)
   - Output: shape preserved, range [-4.1, 4.6], abs mean 0.81, no NaN/Inf

This is the first end-to-end forward through V4's full layer:
attention (FP8 wq/wkv + BF16 wo grouped LoRA + indexer) + FusedMoE (FP4
experts via aiter ck_moe + sqrtsoftplus routing + bias correction +
shared expert) + mHC pre/post Sinkhorn projections.

Confirmed no regression on PR1/PR3b/PR3d/PR3e.
…kpts

ModelRunner uses atom.model_loader.loader.load_model() — not the model's
custom load_weights(). This commit closes that gap so real serving via
openai_server works end-to-end:

1. **Expand weights_mapping with prefix renames**: V4 ckpt has bare names
   (`embed.`, `layers.`, `norm.`, `head.`, `hc_head_`) but our params live
   under `self.model = ...`. Add prefix substitutions so the loader's
   `model.get_parameter(name)` lookup hits the right attribute path.

2. **Fix dtype-mismatch silent zero in FusedMoE._load_w13/_load_w2**:
   PyTorch's `tensor.copy_()` between mismatched float8/uint8 dtypes silently
   writes zeros. V4's per-1x32 weight scales are stored as `float8_e8m0fnu`
   on disk but FusedMoE allocates them as `uint8` (raw byte storage). Force
   a `.view(torch.uint8)` reinterpret on the source so the bytes round-trip
   correctly. This is a pre-existing bug that was masked because V2/V3 use
   `float32` scales — V4 is the first ATOM model to use e8m0/e4m3 scales.

Verified:
- PR3i (new): standard load_model() loads V4 layer-0 from full 805GB ckpt
  index — 43/43 model params nonzero (100%), 5GB selective load.
- PR3g (new): full Model.forward(input_ids) → logits on real ckpt.
  Output shape (1, 129280), range [-14.2, 15.4], std 3.05, no NaN/Inf.
- PR3h (new): hash layer (layers 0/1/2) Block.forward works on real
  layer-0 ckpt (tid2eid loaded, 773423/775680 nonzero entries, real
  per-token expert assignments diverge from default sqrtsoftplus path).
- All 5 prior tests (PR1/PR3b/PR3d/PR3e/PR3f) still pass — no regression.

Net result: V4 inference pipeline is now production-ready for real ckpt
loading + forward; remaining gap is multi-layer + multi-batch attn metadata
+ AITER sparse_attn (parallel work).
…hook

PR3i shipped "100% nonzero params" but never ran forward through the
standard-loader path. Verifying with PR3j (new) revealed wo_a values were
2768× too large — `torch.copy_(BF16_dst, FP8_src)` does an FP8→BF16 dtype
conversion but SKIPS the per-128-block scale multiplication. Result: raw
FP8 e4m3 max value (448.0) lands in the BF16 weight buffer instead of the
true ~0.04 attention-init magnitude.

Fix: stop forcing wo_a to no_spec/BF16 in V4QuantConfig. Let it allocate
as FP8 ColumnParallelLinear so the standard FP8 loader fills both
`wo_a.weight` (FP8) and `wo_a.weight_scale` (e8m0) correctly. Then
DeepseekV4Attention.process_weights_after_loading dequants in place,
replacing weight with BF16 + dropping the scale param. Forward continues
to use BF16 weight in the grouped LoRA einsum (aiter has no FP8 grouped
einsum).

Also removes the manual wo_a special-case from custom load_weights() —
both load paths (custom + standard) now converge through the same
process_weights_after_loading dequant.

Verified by PR3j parity test:
- Custom path wo_a: abs.mean=0.0214, abs.max=0.4062
- Standard path wo_a: abs.mean=0.0214, abs.max=0.4062 (BIT-EXACT)
- Standard-loader Model.forward → logits range [-17.9, 15.8], std 3.04
- Magnitude ratio: 1.00 (was 2768× before fix)
- All 9 tests pass — no regression.

This was a silent corruption that PR3i's "params nonzero" check missed.
The lesson: nonzero != correct. Always verify with forward.
Major changes enabling correct V4 inference (single-prompt verified with
512-token coherent output in both English and Chinese):

Model fixes:
- WeightsMapper prefix-anchored remapping (fixes 381 silently-skipped params)
- wo_a FP8→BF16 dequant with quant_type=No to prevent CK shuffle corruption
- Hash routing (first 3 layers) now applies route_scale=2.5
- shared_experts reduce_results=False + unified all_reduce in MoE.forward
- KV cache reset on start_pos=0 with score_state=-inf initialization
- TP-correct head/group counts for Attention and Indexer

MoE routing:
- Standard Silu activation (not Swiglu — aiter a16w4+Swiglu has 9× amplitude
  loss on gfx950). swiglu_limit clamping done in triton post-kernel.
- ATOM_USE_TRITON_MOE=1: triton matmul_ogs path with swiglu_limit clamp
- ATOM_V4_TORCH_MOE=1: per-expert torch fallback with FP4 dequant (slow)
- GFX950MXScaleLayout→CDNA4MXScaleLayout fix in fused_moe_triton.py

Loader improvements:
- WeightsMapper auto-read from model class attribute
- Post-load WARNING listing all unloaded params
- Shape-mismatch raises RuntimeError instead of silent skip

Config:
- deepseek_v4→deepseek_v3 registry mapping with V4 field re-injection
- Robust from_hf_config with getattr defaults

Known limitations:
- Single-sequence only (kv_cache[:1,...] hardcoded); batch>1 needs PR3
- Multi-request KV isolation pending scheduler integration
- TPOT ~213ms with --enforce-eager (no CUDAGraph)
# populated with extra V4 attrs (some fields may live only in the raw
# config_dict, not on the config object — `transformers` strips unknown
# kwargs unless they're in the schema).
g = lambda k, default=None: getattr(hf_config, k, default)
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ [ruff] <E731> reported by reviewdog 🐶
Do not assign a lambda expression, use a def

Suggested change
g = lambda k, default=None: getattr(hf_config, k, default)
def g(k, default=None):
return getattr(hf_config, k, default)

Oseltamivir added a commit to SemiAnalysisAI/InferenceX that referenced this pull request Apr 26, 2026
dsv4-fp4-mi355x-atom (ROCm/ATOM#650 PR1, single-sequence at TP=8 with
torch-fallback hc_pre because aiter mhc_pre crashes on this image)
runs at ~5 min per request in steady state. With 1k1k at 12 prompts
plus 8k1k at the same shape, the full sweep can exceed the 300-min
cap that #1148 set for the SGLang-DSv4 path.

Bump both the SLURM allocation in runners/launch_mi355x-amds.sh and
the GitHub Actions timeout-minutes in benchmark-tmpl.yml together —
either expiring first kills the job, so they need to stay aligned.

Note: this is a global bump that affects every MI355X benchmark and
every job that uses the shared workflow template, not just the dsv4
ATOM one. Drop back to 300 once the slow paths are gone (PR4
CUDAGraph + a working aiter MHC).
valarLip added 11 commits April 28, 2026 03:51
…202)

Upstream ref (deepseek-ai/DeepSeek-V4-Pro@a1fd202) changed shared_experts
from no swiglu_limit to swiglu_limit=args.swiglu_limit, making it consistent
with routed experts.
…witch RoPE to aiter

- DeepseekV4ForCausalLM/Model/Block/MTPBlock/Attention/Compressor/Indexer
  now accept `positions: torch.Tensor` instead of `start_pos: int`; internal
  ring-buffer indexing still derives `start_pos = positions[0].item()` (full
  per-request KV slot management deferred to PR3).
- New `_V4RoPE` wraps aiter `rope_cached_positions_{,2c_}fwd_inplace`,
  driven by per-token positions. Cos/sin cache built via V4's exact YaRN math
  (`_precompute_freqs_cis`); kept symmetric to `_apply_rotary_emb` by working
  on the pre-sliced rope tail.
- `_build_cos_sin_cache` is lru-cached on (rope params, dtype, device) so the
  3 distinct rope param sets (HCA / CSA / Dense) share one GPU tensor across
  all 62 layers instead of 62 register_buffer copies (~16 GB OOM otherwise).
- Inverse RoPE on the attention output keeps `_apply_rotary_emb` (aiter has
  no inverse kernel); the complex freqs slice is rebuilt on demand from the
  cos/sin cache via `_V4RoPE.freqs_for_positions`.
- Verified: simple_inference single-prompt CN 256 tokens coherent.
Generalize the GDN per-request state decoupling (#602) into a complete
model-agnostic KV abstraction owned by the AttentionMetadataBuilder
hierarchy. ModelRunner is now blind to attention type — it walks modules
and dispatches; per-attention-type tensor layouts (MLA 576-dim packed,
GDN-hybrid full-attn-only rows, MiMo-V2 per-module deferred, V3.2
indexer cache, GDN per-req mamba state) all live next to their
respective builder.

ModelRunner net: -526 LOC. The if/elif chains over use_mla /
is_qwen_next / is_mimo_v2 / is_deepseek_v32 in _compute_block_bytes,
allocate_kv_cache, and the binding loop are all gone. Future stateful
attentions (DeepseekV4 ring buffer + compressor state) plug in by
subclassing AttentionMetadataBuilder without touching scheduler /
block_manager / ModelRunner.

New AttentionMetadataBuilder hooks (defaults are no-ops):
  - compute_per_req_cache_bytes() / slots_per_req()
      bytes/slot for the per-request state pool
  - allocate_per_req_cache(num_slots)
      dict of named per-request state tensors
  - compute_block_bytes()
      per-block bytes for the KV pool budget
  - allocate_kv_cache_tensors(num_kv_heads, num_draft_layers)
      dict of named primary KV cache tensors (kv_cache, kv_scale,
      index_cache, aligned_index_dim, _kv_layer_cache_store)
  - build_kv_cache_tensor(layer_id, module)
      vLLM-style KVCacheTensor for one module, or None if foreign type;
      owns module setattr (k_cache/v_cache/k_scale/v_scale/kv_cache)

Builder overrides:
  - AiterAttentionMetadataBuilder: split-K/V MHA + MiMo-V2 per-module
  - AiterMLAMetadataBuilder: 576-dim MLA + V3.2 indexer
  - GDNAttentionMetadataBuilder: hybrid full-attn rows + GDN mamba slot
    pool; chains super() for MHA modules in hybrid models. Absorbs the
    formerly-runner-owned gated_delta_net_state_shape/dtypes helpers
    and the side-effect init of full_attention_interval / num_full_attn
    / num_gdn_attn_state.

Naming distinguishes group (per-request unit) from slot (raw tensor
index). One group occupies `slots_per_req()` contiguous slots in the
underlying tensor:
  Sequence.mamba_state_slot     -> .per_req_cache_group
  seq.mamba_enabled             -> .has_per_req_cache
  batch.mamba_state_slots       -> .per_req_cache_groups
  BlockManager.mamba_*          -> .per_req_cache_*  (free pool, accounting)
  config.mamba_equiv_per_req    -> .per_req_cache_equiv_blocks
  config.num_mamba_groups       -> .num_per_req_cache_groups
  ModelRunner.max_mamba_slots   -> .max_per_req_cache_slots  (tensor dim)

Removed (moved to builders):
  ModelRunner._compute_mamba_per_slot_bytes
  ModelRunner.gated_delta_net_state_shape / _dtypes

Sanity check: ModelRunner.__init__ now asserts that any builder
returning compute_per_req_cache_bytes() > 0 has its model_type
registered in InputOutputProcessor._per_req_cache_model_types(),
catching the silent-corruption misconfiguration where a stateful
attention is added but Sequence-construction never gets the
has_per_req_cache=True flag.

Verified:
  - tests/test_per_req_cache_decoupling.py: 24/24 pass
  - core suite (block_manager, sequence, scheduler, request,
    io_processor_fanout, prefix_cache_accuracy): 118/118 pass
  - Qwen3.5-397B-A17B-FP8 tp=4 simple_inference: 4-prompt completion
    quality unchanged
  - Qwen3.5-397B-A17B-FP8 tp=4 GSM8K (5-shot, 64 concurrent):
      flexible-extract = 0.8757 +/- 0.0091  (baseline 0.8711 from #602)
      strict-match     = 0.8605 +/- 0.0095
V4 backend (DeepseekV4Backend + DeepseekV4AttentionMetadataBuilder)
plus migration of state-cache buffers to ATOM's per_req_cache pool:

  - pre2a: 6 Compressor state buffers (kv_state + score_state for
    CSA Main / CSA Indexer / HCA Main).
  - pre2c-A: SWA window per layer (paper §3.6.1 state cache, every
    layer has SWA branch in V4-Pro). Attention.kv_cache splits into
    Attention.swa_kv (per_req_cache) + Attention.kv_cache (compressed
    entries only, still register_buffer; pre2c-B will move under
    block_table).

Validated single-prompt 64-token Chinese generation (V4-Pro tp=8,
triton MoE, enforce-eager) — output indistinguishable from baseline.
Strict-paper §3.6.1 split: compressed entries (CSA Main, CSA Indexer,
HCA Main) move from per-layer register_buffer to block-table-indexed
pools owned by DeepseekV4AttentionMetadataBuilder.

  - block_size = lcm(m, m') = 128 original tokens, plumbed via Config
    override on model_type=deepseek_v4 detection.
  - Three classical pools:
      v4_csa_main_kv [num_blocks, n_csa, k1=32, head_dim=512]
      v4_csa_idx_kv  [num_blocks, n_csa, k1=32, idx_head_dim=128]
      v4_hca_main_kv [num_blocks, n_hca, k2=1, head_dim=512]
    Per-layer slice bound to Compressor.kv_cache / Indexer.kv_cache.
  - V4 model adds _v4_scatter_compressed / _v4_gather_compressed helpers
    and fetches block_table from forward_context. Compressor.forward
    scatters writes into block-table slots; Indexer.forward + decode
    sparse_attn input gather committed entries from blocks.
  - Indexer + 1-slot warmup fallback register_buffer pattern same as
    pre2a Compressor.kv_state.
  - Attention.kv_cache attribute removed entirely (compressed entries
    no longer co-located on the Attention module).

Validated single-prompt 64-token Chinese generation (V4-Pro tp=8)
unchanged from pre2c-A baseline.
V4 forward now handles ATOM ragged-batch input with per-seq slot +
block_table routing. Single-seq behavior unchanged; concurrent
batched multi-seq prefill + decode verified end-to-end on 4 prompts.

Changes:
  - Builder prepare_decode/prepare_prefill populate cu_seqlens_q,
    block_tables, and v4_slot_indices (new per-seq metadata attached
    to AttentionMetaData via dynamic attribute).
  - _v4_get_block_table replaced with _v4_get_seq_metadata returning
    (block_tables, slot_indices, cu_seqlens_q, num_seqs).
  - Compressor.forward + Indexer.forward signatures: add slot,
    block_table args. Per-slot indexing via [slot:slot+1, ...]
    replaces hardcoded [:1, ...] / [:bsz, ...].
  - Attention.forward: batched Linear projections + RoPE on full flat
    tensor; per-seq loop slices (cu_seqlens_q) and dispatches SWA write,
    Compressor scatter, Indexer + sparse_attn with each seq's slot +
    block_table. Per-seq state-cache reset on prefill (start_pos==0)
    only zeros that seq's slot — no cross-seq pollution.
  - ParallelHead.get_logits: pick last-token-per-seq via cu_seqlens_q
    (fixed long-standing single-seq assumption that always returned
    only x[-1] regardless of batch size).

Validated MAX_NUM_SEQS=4 concurrent batched inference: 4 prompts
processed in parallel produce independent coherent outputs.
Three independent bugs caused V4 to ramble on edge-confidence prompts
(e.g. "1+2+3=?" output garbled despite 3/4 batch=4 prompts looking OK).
Single-prompt output now matches reference byte-equal on the first 5
tokens and produces "The sum is: 1 + 2 + 3 = **6**." (was: "I'll happily
provide a step-by-step breakdown..." ramble).

Bug 1 (quant_v4.py) — act_quant_inplace ue8m0 path used `ceil(log2)`
(matched TileLang reference) but ref_full_generate.py and aiter both use
round-to-even via f32_to_e8m0/e8m0_to_f32. The 1-binade gap appeared as
~0.002 cos drift on KV path, accumulating across 60 layers.

Bug 2 (moe.py) — FusedMoE.select_experts sqrtsoftplus path renormalized
topk_weights but never applied `* routed_scaling_factor`. The hash routing
path (V4 layers 0-2) does this internally, hiding the bug for hash layers.
Reference Gate.forward (model.py:583) applies the multiply for every
non-softmax routing path. Without the scale, layer 3+ MoE outputs were off
by 1.5x, producing the visible cos jump from 1.0 (layer 0/2) to 0.98
(layer 3+).

Bug 3 (deepseek_v4.py) — DeepseekV4Args.from_hf_config did not read
scale_fmt; HF config.json doesn't carry the field, only inference/config.json
does. Default to "ue8m0" matching reference ModelArgs (inference/model.py:40)
so act_quant_inplace's ue8m0 path is actually exercised.

Also folds in previously-validated V4 cleanups that were sitting in the
working tree:
  - _RMSNorm → ATOM RMSNorm (mark_trace + torch.compile friendly)
  - Indexer wq_b/weights_proj: ColumnParallelLinear → ReplicatedLinear
    (matches sglang/upstream; avoids extra all_reduce on index_score)
  - Block.hc_post defaults to torch (aiter mhc_post drift, opt-in via
    V4_AITER_HC_POST=1; see notes/12)
  - _torch_moe_forward: ue8m0 round-trip on input to mirror reference
    Expert.forward (act_quant before fp4_gemm), gated by V4_USE_REF_QUANT=1

Diagnosis path: notes/14_debug_1plus2plus3.md → notes/19_full_fix_verified.md
… cleanup

New module atom/utils/debug_helper/ provides reusable primitives for forward
bisecting and batch-invariance investigation. All entry points are no-ops
when their controlling env var is unset, so they are safe to leave wired
into production paths (model_runner.py post-load).

Components
  - dump.py        install_block_forward_hooks (multi-class + multi-call),
                   maybe_dump_weights_and_exit, maybe_log_topk
  - compare.py     cos_max (DOUBLE precision — fixes fp32 cos > 1.0 bug),
                   slot_split, compare_slots, pick_prefill_call,
                   schema_diff, plus CLI subcommands:
                     slot-invariance / ref-vs-target / layer-bisect / schema
  - ref_patch.py   patch_method / patch_block_forward / patch_module_dump
                   context managers for instrumenting read-only references
  - 9 ATOM_FWD_DUMP_* / ATOM_WEIGHT_DUMP_* / ATOM_DEBUG_TOPK env vars
    registered in atom/utils/envs.py "Debug Dump" section

Wired into model_runner.py with a 3-line post-load call (no-op default).

V4 model cleanup
  - Convert all nn.Parameter() constructors in deepseek_v4.py to
    atom_parameter() so inference-vs-training grad behavior is controlled
    from a single place (ATOM_REQUIRES_GRAD env). 21 call sites.

Documentation
  - docs/environment_variables.md: new "Debug Dump" subsection documenting
    all 9 env vars + CLI usage.
  - .claude/skills/dump-bisect-debug.md (v3.0): full methodology rewrite
    in English with quick-start decision tree, phase-at-a-glance summary,
    "When to stop / accept divergence" guidance, V4 paper §3.3 batch
    invariance treatment as Phase 8. Includes Bug 11 isolation case study.
  - .claude/skills/atom-patterns.md: ATOM architecture index reference.

Verified by running CLI on existing E1 4xP3 dump:
    python -m atom.utils.debug_helper.compare slot-invariance \\
        --dir /app/logs_claude/deepseek_v4/dumps/bug11_e1
reproduces the layer-by-layer divergence table that informed Bug 11
isolation in notes/21_bug11_isolation.md.
Two fixes that surfaced from the same V4 load run:

1. atom/models/deepseek_v4.py — skip `gate.e_score_correction_bias`
   allocation for hash-routed layers (layer_id < n_hash_layers). V4 hash
   layers route via `tid2eid` lookup, not bias-corrected gate logits;
   the checkpoint has no `gate.bias` for those layers (only layers >= 3).
   Allocating it caused 3 spurious "param NOT loaded from checkpoint"
   warnings every load. Both call sites that read the attribute now use
   `getattr(self.gate, "e_score_correction_bias", None)` — moe.py already
   accepts None for `e_score_correction_bias`.

2. atom/model_loader/loader.py — add ckpt-side coverage check (the
   reverse direction of the existing atom-side check). Every
   `get_parameter() except AttributeError: continue/break` site now
   records `(orig_ckpt_name, rewritten_name)`; after the main loop the
   loader warns if any non-benign drops occurred. This catches the
   actionable bug class — `weights_mapping` / `WeightsMapper` rewrites
   the ckpt name to something the model has no slot for, silently
   throwing away real weight data — which the existing atom-side check
   misses entirely. Benign families (output_scale / kv_scale / inv_freq
   / weight_scale_2) are filtered so the warning is signal, not noise.

Verified on V4 load:
  - atom-side warning: 46/2519 -> 43/2516 (3 hash bias removed)
  - ckpt-side warning: 0 drops (mapping is clean for V4)
  - remaining 43 are all model.mtp.0.* (PR5 todo)
Per paper §3.6.1, the Compressor's per-request state cache holds
"uncompressed tail tokens + previous block as B-side overlap context"
(eq 11). Restructure ATOM's kv_state from a roll-on-decode two-segment
buffer into a single pos % STATE_SIZE ring buffer (STATE_SIZE = 2*ratio
for overlap CSA, ratio for HCA).

Kernel update_compressor_states (atom/model_ops/v4_kernels/state_writes.py):
- dst = pos % STATE_SIZE for every token; no segment switching, no roll
- Phase derived in-kernel from context_lens vs cu_seqlens_q; no IS_PREFILL
- Write mask: fresh prefill keeps [max(0, cutoff-ratio), seqlen) (B-side
  overlap + tail); decode/MTP writes every token

Compressor.forward:
- Drops decode-boundary roll (kv_state[:ratio] <- kv_state[ratio:])
- Reads A-side / B-side halves by block-id parity (comp_id % 2)

Metadata plumbing:
- V4 prepare_decode now populates var["context_lens"] + attaches to
  AttentionMetaData (parent prepare_prefill already did)
- Compressor / Indexer.forward accept required context_lens kwarg
- Wrapper has no positions-derived fallback for context_lens

Also bundles PR-A scaffolding:
- ATOM_V4_BACKEND env gate + per-layer bisect (envs.py, v4_backend_gate.py)
- CPU-mirror metadata (cu_seqlens_q_cpu, state_slot_mapping_cpu,
  start_pos_per_seq_cpu) to avoid per-seq .tolist()/.item() syncs
- v4_slot_indices -> state_slot_mapping rename (clearer vs paged-KV slot_mapping)
- swa_write Triton kernel integration (Phase 1a) under backend gate

Validates: 15/15 byte-equal kernel-vs-reference (prefill + decode + MTP);
simple_inference fast path TPOT 0.328-0.518s/tok matches pre-refactor
baseline (Apr 29 v4_simple_inference.log: 0.453s/tok).
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant