From 48319debef46f9ca5135bea06325121412bece4f Mon Sep 17 00:00:00 2001
From: Nourdin
Date: Fri, 17 Apr 2026 01:47:47 +0100
Subject: [PATCH 1/2] docs: add FAQ entry for "unknown model architecture" error

summary: Adds a missing FAQ entry covering the "error loading model
architecture: unknown model architecture: 'X'" error, which is one of the
most commonly reported issues but currently has no dedicated entry in the
docs FAQ.

changes:
- Added a new FAQ entry to `docs/FAQ.md` explaining the cause and three
  solutions in order of preference (update LLamaSharp, check model date,
  compile llama.cpp yourself)
- Includes a caution note about self-compiled backends
---
 docs/FAQ.md | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/docs/FAQ.md b/docs/FAQ.md
index 0806748fc..1e6d83f29 100644
--- a/docs/FAQ.md
+++ b/docs/FAQ.md
@@ -62,3 +62,26 @@ In this inequality, `len(response)` refers to the expected tokens for LLM to gen
 ## Choose models weight depending on your task
 
 The differences between modes may lead to much different behaviours under the same task. For example, if you're building a chat bot with non-English, a fine-tuned model specially for the language you want to use will have huge effect on the performance.
+
+## Why am I getting "error loading model architecture: unknown model architecture: 'X'"?
+
+This error means the model's architecture is not supported by the version of
+llama.cpp that the current LLamaSharp backend is built against.
+
+The most common cause is using a model that was released after your installed
+version of LLamaSharp. Newer model families (e.g. Gemma, Qwen)
+require a backend built against a newer llama.cpp commit.
+
+**Solutions, in order of preference:**
+
+1. **Update LLamaSharp** to the latest version and reinstall the matching backend
+   package. Check the version table at the bottom of the README to confirm which
+   model families are verified for each release.
+
+2. **Check the model's publishing date** on Hugging Face. If it predates your
+   LLamaSharp version, the architecture may not yet be supported - open an issue
+   on the repository to request support.
+
+3. **Compile the compatible llama.cpp build yourself**, then point LLamaSharp to it with `NativeLibraryConfig.All.WithLibrary()`.
+
+> **Caution:** Using a self-compiled library that does not match the LLamaSharp version's expected commit is unsupported and may cause crashes or unexpected behaviour. Only do this as a last resort.

From 865b846f50de1190c3e748ff74654bf65cff6f12 Mon Sep 17 00:00:00 2001
From: Nourdin
Date: Fri, 17 Apr 2026 15:36:02 +0100
Subject: [PATCH 2/2] remove mention of specific models, fix typographical slip-up

---
 docs/FAQ.md | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/docs/FAQ.md b/docs/FAQ.md
index 1e6d83f29..03ceab536 100644
--- a/docs/FAQ.md
+++ b/docs/FAQ.md
@@ -68,19 +68,13 @@ The differences between modes may lead to much different behaviours under the sa
 This error means the model's architecture is not supported by the version of
 llama.cpp that the current LLamaSharp backend is built against.
 
-The most common cause is using a model that was released after your installed
-version of LLamaSharp. Newer model families (e.g. Gemma, Qwen)
-require a backend built against a newer llama.cpp commit.
+The most common cause is using a model that was released after your installed version of LLamaSharp. Newer model families require a backend built against a newer llama.cpp commit.
 
 **Solutions, in order of preference:**
 
-1. **Update LLamaSharp** to the latest version and reinstall the matching backend
-   package. Check the version table at the bottom of the README to confirm which
-   model families are verified for each release.
+1. **Update LLamaSharp** to the latest version and reinstall the matching backend package. Check the version table at the bottom of the README to confirm which model families are verified for each release.
 
-2. **Check the model's publishing date** on Hugging Face. If it predates your
-   LLamaSharp version, the architecture may not yet be supported - open an issue
-   on the repository to request support.
+2. **Check the model's publishing date** on Hugging Face. If it was published after your LLamaSharp version, the architecture may not yet be supported; open an issue on the repository to request support.
 
 3. **Compile the compatible llama.cpp build yourself**, then point LLamaSharp to it with `NativeLibraryConfig.All.WithLibrary()`.
 
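The third solution in the FAQ entry above, pointing LLamaSharp at a self-compiled llama.cpp, could be sketched roughly as follows. This is only a sketch under assumptions, not the FAQ's own example: the file paths are placeholders, and the exact `WithLibrary` overload has changed between LLamaSharp releases, so check it against the version you actually have installed.

```csharp
using LLama;
using LLama.Common;
using LLama.Native;

class Program
{
    static void Main()
    {
        // Placeholder path to a self-compiled llama.cpp binary; adjust the
        // file name for your platform (.so / .dll / .dylib). This call must
        // run before LLamaSharp first touches the native library, otherwise
        // the bundled backend is loaded instead and the override is ignored.
        NativeLibraryConfig.All.WithLibrary("/path/to/libllama.so");

        // Loading a model that the bundled backend rejected with
        // "unknown model architecture" should now go through the custom build.
        var parameters = new ModelParams("/path/to/model.gguf");
        using var weights = LLamaWeights.LoadFromFile(parameters);
    }
}
```

As the FAQ's caution note says, a build that does not match the llama.cpp commit your LLamaSharp version expects may crash, so this is a last resort.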