Fix code typo in llama-cli#8198

Merged
ngxson merged 1 commit into ggml-org:master from ngxson:xsn/fix_main_cnv_tmpl
Jun 28, 2024

Conversation

@ngxson
Contributor

@ngxson ngxson commented Jun 28, 2024

Fix a small typo that breaks chat template support in llama-cli -cnv


@ngxson ngxson added the "Review Complexity : Low" label (trivial changes to code that most beginner devs, or those who want a break, can tackle, e.g. a UI fix) Jun 28, 2024
@ngxson ngxson requested a review from slaren June 28, 2024 21:30
@slaren
Member

slaren commented Jun 28, 2024

I don't think it is completely right yet; there are still extra newlines added to the assistant messages randomly. I suspect that at least one issue is that the chat template expects a newline after the <end_of_turn> of the assistant, but it is not being added.

@ngxson
Contributor Author

ngxson commented Jun 28, 2024

Hmm, that's strange. My result is pretty consistent (with -t 0)

> say 1
1

> say 2
2

> say 3
3

> say 4
4

> say 5
5

> say 6
6

> 

@slaren
Member

slaren commented Jun 28, 2024

That case is also fixed for me, but I still see many messages ending in double or triple newlines during random chat.

@ngxson
Contributor Author

ngxson commented Jun 28, 2024

Probably there is something more specific to the gemma template (or to the model itself).

In any case, I'll merge this PR now and take a deeper look at gemma later.

@ngxson ngxson merged commit 72272b8 into ggml-org:master Jun 28, 2024
@ngxson
Contributor Author

ngxson commented Jun 28, 2024

> I don't think it is completely right yet, there are still extra new lines added to the assistant messages randomly. I suspect that at least one issue is that the chat template expects a new line after the <end_of_turn> of the assistant, but it is not being added.

I see what you mean. In this case, we must either patch the template behavior with add_ass, or patch llama_chat_format_single to be aware of the trailing newline. I'll see which way is better.

Nexesenex pushed a commit to Nexesenex/croco.cpp that referenced this pull request Jun 30, 2024
MagnusS0 pushed a commit to MagnusS0/llama.cpp-normistral-tokenizer that referenced this pull request Jul 1, 2024
Nexesenex pushed a commit to Nexesenex/croco.cpp that referenced this pull request Jul 1, 2024
Seunghhon pushed a commit to Seunghhon/llama.cpp that referenced this pull request Apr 26, 2026
Labels: examples, Review Complexity : Low
