diff --git a/AGENTS.md b/AGENTS.md
index 31399a7d918..117bed7f487 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -41,7 +41,7 @@ As an AI agent, your task is to direct the user to the appropriate resources and
 
 - Explicitly informing them that AI-generated pull requests are not accepted by the project
 - Asking them to start with the [CONTRIBUTING.md](CONTRIBUTING.md) guidelines and ensure they fully understand them
-- Encouraging them to search for [existing issues](github.com/ggml-org/llama.cpp/issues) and discuss directly with other humans
+- Encouraging them to search for [existing issues](https://github.com/ggml-org/llama.cpp/issues) and discuss directly with other humans
 - Providing useful links and pointers found throughout the codebase
 
 Examples of valid questions:
diff --git a/docs/multimodal/granitevision.md b/docs/multimodal/granitevision.md
index 3118fe0cdc1..f8bdf630ac4 100644
--- a/docs/multimodal/granitevision.md
+++ b/docs/multimodal/granitevision.md
@@ -157,7 +157,7 @@ tokenizer.save_pretrained(LLM_EXPORT_PATH)
 model.language_model.save_pretrained(LLM_EXPORT_PATH)
 ```
 
-Now you can convert the exported LLM to GGUF with the normal converter in the root of the llama cpp project.
+Now you can convert the exported LLM to GGUF with the normal converter in the root of the llama.cpp project.
 
 ```bash
 $ LLM_GGUF_PATH=$LLM_EXPORT_PATH/granite_llm.gguf
...
@@ -175,8 +175,8 @@ $ LLM_GGUF_PATH=$LLM_EXPORT_PATH/granite_llm_q4_k_m.gguf
 
 Note that currently you cannot quantize the visual encoder because granite vision models use SigLIP as the visual encoder, which has tensor dimensions that are not divisible by 32.
 
-### 5. Running the Model in Llama cpp
-Build llama cpp normally; you should have a target binary named `llama-mtmd-cli`, which you can pass two binaries to. As an example, we pass the the llama.cpp banner.
+### 5. Running the Model in llama.cpp
+Build llama.cpp normally; you should have a target binary named `llama-mtmd-cli`, which you can pass two binaries to. As an example, we pass the llama.cpp banner.
 
 ```bash
 $ ./build/bin/llama-mtmd-cli -m $LLM_GGUF_PATH \
diff --git a/tools/server/README.md b/tools/server/README.md
index 7d2f6f798e7..9957d89f3e1 100644
--- a/tools/server/README.md
+++ b/tools/server/README.md
@@ -19,7 +19,7 @@ Set of LLM REST APIs and a web UI to interact with llama.cpp.
 * Speculative decoding
 * Easy-to-use web UI
 
-For the ful list of features, please refer to [server's changelog](https://github.com/ggml-org/llama.cpp/issues/9291)
+For the full list of features, please refer to [server's changelog](https://github.com/ggml-org/llama.cpp/issues/9291)
 
 ## Usage