
ggml-cuda : fix f16 mul mat #3961

Merged
slaren merged 2 commits into master from cuda-multi-gpu-stuff-fix
Nov 5, 2023
Conversation

@slaren
Member

slaren commented Nov 5, 2023

Fix issue introduced in #3951

@slaren slaren force-pushed the cuda-multi-gpu-stuff-fix branch from b16ab95 to d4d45c7 Compare November 5, 2023 17:26
@slaren slaren merged commit 2833a6f into master Nov 5, 2023
@slaren slaren deleted the cuda-multi-gpu-stuff-fix branch November 5, 2023 17:45
olexiyb pushed a commit to Sanctum-AI/llama.cpp that referenced this pull request Nov 23, 2023
* ggml-cuda : fix f16 mul mat

ggml-ci

* silence common.cpp warning (bonus)
Seunghhon pushed a commit to Seunghhon/llama.cpp that referenced this pull request Apr 26, 2026
phuongncn pushed a commit to phuongncn/llama.cpp-gx10-dgx-sparks-deepseekv4 that referenced this pull request Apr 28, 2026

2 participants