Conversation
Keeps the MoE weights of the first N layers in the CPU
Adding a destructor to common_params would cause issues when the object is copied.
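For context, here is a minimal C++ sketch of the hazard this comment points at (illustrative only, not code from this PR): a copyable struct that gains a destructor without matching copy/move semantics ends up double-freeing whatever it owns, the classic rule-of-three problem.

```cpp
#include <cstdlib>
#include <cstring>

// Illustrative stand-in for a copyable params struct; not llama.cpp code.
// It frees a raw pointer in its destructor but keeps the compiler-generated
// copy constructor, so two copies end up owning the same allocation.
struct params_like {
    char * pattern = nullptr;

    void set(const char * s) {
        free(pattern);
        pattern = strdup(s);
    }

    ~params_like() { free(pattern); }  // destructor added without a copy ctor
};

int main() {
    params_like a;
    a.set("blk\\.0\\.ffn_gate_exps");
    params_like b = a;  // default copy: b.pattern aliases a.pattern
    (void) b;
    return 0;           // both destructors free the same pointer: double free
}
```

Since common_params is copied, any owned resource needs copy/move handling (or an owning type such as std::string) rather than a bare destructor.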
Thank you :)
Should these options be added to this page, too:
thad0ctor added a commit to thad0ctor/llama-server-launcher that referenced this pull request on Aug 6, 2025:
--cpu-moe to keep all MoE weights in the CPU; --n-cpu-moe N to keep the MoE weights of the first N layers in the CPU (ggml-org/llama.cpp#15077)
thank you! just got 108 t/s with gpt-oss:120b on my dual 5090s with: llama-server -hf ggml-org/gpt-oss-120b-GGUF --ctx-size 0 --jinja --flash-attn --n-gpu-layers 99 --reasoning-format none --n-cpu-moe 3
blime4 referenced this pull request in blime4/llama.cpp on Feb 5, 2026:
* llama : add --n-cpu-moe option (keeps the MoE weights of the first N layers in the CPU)
Seunghhon pushed a commit to Seunghhon/llama.cpp that referenced this pull request on Apr 26, 2026:
* llama : add --n-cpu-moe option (keeps the MoE weights of the first N layers in the CPU)
Following @jacekpoplawski's suggestion in #14992, this adds options to keep the MoE weights of the first N layers in the CPU. You can use:
- --cpu-moe to keep all MoE weights in the CPU
- --n-cpu-moe N to keep the MoE weights of the first N layers in the CPU

The goal is to avoid having to write complex regular expressions when trying to optimize the number of MoE layers to keep in the CPU.
These options work by adding the necessary tensor overrides. If you use --override-tensor before these options, your overrides will take priority.
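As a rough illustration of that mechanism, here is a sketch (not the PR's actual implementation) of how --n-cpu-moe N can be lowered into one override pattern per layer. The tensor_override struct and the "CPU" string below are stand-ins I'm assuming for the real override entry and buffer type; the blk.N.ffn_(up|down|gate)_exps patterns follow llama.cpp's usual tensor naming for MoE expert weights.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Stand-in for a tensor buffer-type override entry (assumed shape, not the
// real llama.cpp struct): a regex over tensor names and a target backend.
struct tensor_override {
    std::string pattern;  // regex matched against tensor names
    const char * buft;    // target buffer type ("CPU" as a placeholder)
};

// One regex per layer, matching the MoE expert FFN tensors of blocks 0..n-1.
static std::vector<tensor_override> n_cpu_moe_overrides(int n) {
    std::vector<tensor_override> out;
    for (int i = 0; i < n; ++i) {
        char buf[64];
        // e.g. blk\.0\.ffn_(up|down|gate)_exps matches layer 0's expert weights
        snprintf(buf, sizeof(buf), "blk\\.%d\\.ffn_(up|down|gate)_exps", i);
        out.push_back({buf, "CPU"});
    }
    return out;
}

int main() {
    // --n-cpu-moe 3: keep the expert weights of the first 3 layers in the CPU
    for (const auto & ov : n_cpu_moe_overrides(3)) {
        printf("%s -> %s\n", ov.pattern.c_str(), ov.buft);
    }
    return 0;
}
```

Under these assumptions, --n-cpu-moe 3 behaves roughly like passing --override-tensor with the three printed patterns each mapped to the CPU buffer type, and --cpu-moe is the same idea with a single pattern that matches the expert tensors of every layer.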