ggml : fix n_threads_cur initialization with one thread#9538

Merged
max-krasnyansky merged 2 commits into master from sl/fix-omp-one-thread on Sep 18, 2024
Conversation

@slaren
Member

@slaren commented Sep 18, 2024

Fixes #9535

@github-actions github-actions Bot added the ggml changes relating to the ggml tensor library for machine learning label Sep 18, 2024
Member

@ggerganov left a comment

It fixes the issue on my end

Review comment on ggml/src/ggml.c (outdated)
Member

@max-krasnyansky left a comment


Yep. This fixes Metal with OMP.
Sorry for missing that case.

@max-krasnyansky merged commit 64c6af3 into master on Sep 18, 2024
dsx1986 pushed a commit to dsx1986/llama.cpp that referenced this pull request Oct 29, 2024
* ggml : fix n_threads_cur initialization with one thread

* Update ggml/src/ggml.c

---------

Co-authored-by: Max Krasnyansky <quic_maxk@quicinc.com>
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 15, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 18, 2024
Seunghhon pushed a commit to Seunghhon/llama.cpp that referenced this pull request Apr 26, 2026

Labels

ggml changes relating to the ggml tensor library for machine learning

Development

Successfully merging this pull request may close these issues.

Bug: llama-cli generates incoherent output with full gpu offload

3 participants