
hexagon: general DMA and Binary Op fixes for large strides #20918

Merged
max-krasnyansky merged 12 commits into ggml-org:master from qualcomm:hexagon-large-dma-fixes
Mar 23, 2026

Conversation

@max-krasnyansky
Member
Overview

This PR fixes functional issues with large strides found in gemma-3n-E4B and all qwen3.5 models.
Those models now work with the Hexagon backend.
Previously they either produced garbage output or crashed with HVX/DMA hardware exceptions.
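For context, here is a minimal sketch of why "large strides" can break a narrow DMA descriptor format: if a 2D descriptor stores row strides in 16-bit fields, any tensor whose row pitch exceeds 65535 bytes silently truncates on assignment. The struct layout and field names below are hypothetical, for illustration only; this is not the actual Hexagon DMA descriptor.

```c
#include <stdint.h>
#include <stdio.h>

// Hypothetical 2D DMA descriptor with 16-bit stride fields.
// Field names and widths are assumptions for this sketch only.
struct dma2d_desc {
    uint16_t src_stride;  // bytes between consecutive rows, max 65535
    uint16_t dst_stride;
    uint16_t width;       // bytes per row
    uint16_t height;      // number of rows
};

int main(void) {
    // A plausible "large stride" case: a F32 tensor with 32768
    // elements per row has a 131072-byte row pitch.
    size_t row_stride = 32768 * sizeof(float);

    if (row_stride > UINT16_MAX) {
        // The stride no longer fits the descriptor field; naive
        // truncation wraps silently and the DMA engine reads/writes
        // the wrong addresses -> garbage output or a HW exception.
        printf("stride %zu overflows 16-bit field (truncates to %u)\n",
               row_stride, (unsigned)(uint16_t)row_stride);
    }
    return 0;
}
```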


@max-krasnyansky max-krasnyansky requested a review from a team as a code owner March 23, 2026 21:18
@github-actions github-actions bot added the script (Script related), ggml (changes relating to the ggml tensor library for machine learning), and Hexagon labels Mar 23, 2026
@max-krasnyansky max-krasnyansky merged commit 7cadbfc into ggml-org:master Mar 23, 2026
48 checks passed
@max-krasnyansky max-krasnyansky deleted the hexagon-large-dma-fixes branch April 15, 2026 01:21
Seunghhon pushed a commit to Seunghhon/llama.cpp that referenced this pull request Apr 26, 2026
…20918)

* hex-dma: make chained dma the default to handle newer models

This also includes some new instrumentation that we can remove later.

* hexagon: add uint32 dump helper

* hexagon: use single-page VTCM allocation to avoid issues with large gather ops in ssm-conv

ssm-conv uses the HVX gather instruction, which cannot handle cases where the base+offset
window spans a page boundary (see the first sketch after this commit list).

* hexagon: update ssm-conv to make base-addr compute a bit easier to read

* hex-dma: use 1d mode for reshaping; it supports sizes up to 24 bits (>16MB)

* hex-bin: fix incorrect stride logic

* hexagon: make sure repack buffs are dumped for verbose > 2

* hex-bin: consistently use dma_queue_push even for dummy dst transactions

* hex-dma: start using 2d-wide mode on v75 and up

This removes the need to deal with the 16-bit limitation on strides (see the second sketch after this commit list).

* hex-bin: cleanup kernel selection logic

* hex-bin: cleanup binary op core and fix transposed tensor handling

* snapdragon: update run-bench to use larger ubatch and fa-on
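
As a rough illustration of the ssm-conv constraint noted above: HVX gather addresses must stay within one page, so a single-page VTCM allocation guarantees the base+offset window is always safe. The helper below is a hypothetical sketch; the 4 KiB page size and the function name are assumptions, not the backend's actual values.

```c
#include <stdbool.h>
#include <stdint.h>

// Assumed page size for this sketch only; VTCM pages may differ.
#define PAGE_SIZE 4096u

// Hypothetical check: does the gather window [base, base + max_offset]
// stay within a single page? HVX gather cannot cross a page boundary.
static bool fits_in_one_page(uintptr_t base, uint32_t max_offset) {
    uintptr_t first = base & ~(uintptr_t)(PAGE_SIZE - 1);
    uintptr_t last  = (base + max_offset) & ~(uintptr_t)(PAGE_SIZE - 1);
    return first == last;  // same page -> gather is safe
}
```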
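
And a sketch of the mode-selection idea the 1d-mode and 2d-wide commits describe: plain 2D mode is limited by 16-bit strides, 1D mode allows transfer sizes up to 24 bits, and 2d-wide mode (v75 and up) lifts the stride limit. The enum, helper, and limits below are assumptions for illustration, not the hexagon backend's real API.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Hypothetical DMA descriptor modes, per the commit notes: 2D strides
// are 16-bit, 1D sizes are 24-bit, and "2d-wide" (v75+) allows wide strides.
enum dma_mode { DMA_1D, DMA_2D, DMA_2D_WIDE };

static enum dma_mode pick_dma_mode(size_t row_bytes, size_t stride,
                                   size_t nrows, bool have_v75) {
    if (nrows == 1 || stride == row_bytes) {
        // Contiguous data: one 1D transfer covers it (up to 2^24 bytes).
        return DMA_1D;
    }
    if (stride <= UINT16_MAX) {
        return DMA_2D;       // stride fits the narrow 2D descriptor
    }
    if (have_v75) {
        return DMA_2D_WIDE;  // wide descriptors handle large strides
    }
    return DMA_1D;           // fall back: issue one 1D transfer per row
}
```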
rsenthilkumar6 pushed a commit to rsenthilkumar6/llama.cpp that referenced this pull request May 1, 2026

Labels

ggml (changes relating to the ggml tensor library for machine learning), Hexagon, script (Script related)
