Name and Version
$ llama-server --version
ggml_cuda_init: found 1 ROCm devices:
Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32
version: 7761 (a89002f)
built with GNU 15.2.1 for Linux x86_64
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
llama-server
Command line
llama-server --api-key xyz -ngl 999 -fa 1 -m models/ggml-org/gpt-oss-120b-GGUF/gpt-oss-120b-mxfp4-00001-of-00003.gguf --no-mmap --context-shift --host 0.0.0.0 -c 128000
Problem description & steps to reproduce
The actual use case is running Claude Code via @musistudio/claude-code-router, which worked fine until recently.
I have distilled the problem down to this request:
curl -X POST \
  -H 'authorization: Bearer xyz' \
  -H 'content-type: application/json' \
  --data '{
    "messages": [{"role": "user", "content": "hi"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "test",
        "parameters": {
          "type": "object",
          "properties": {
            "flag": {"type": "boolean", "default": false}
          }
        }
      }
    }]
  }' \
  http://127.0.0.1:8080/v1/chat/completions
This causes the following error message from llama.cpp:
{"error":{"code":500,"message":"\n------------\nWhile executing FilterExpression at line 133, column 66 in source:\n...{- ", // default: " + param_spec.default|tojson }}↵ {%- endif...\n ^\nError: Unknown (built-in) filter 'tojson' for type Boolean","type":"server_error"}}
The problem disappears if the boolean default is changed to a string instead of a boolean literal. As a workaround, I added a booltostring transformer to claude-code-router (sketched below), which made the connection from Claude Code to llama.cpp work again.
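A minimal sketch of the transformer idea, in TypeScript. The exact hook names claude-code-router expects may differ; boolDefaultsToStrings is just an illustrative helper that rewrites boolean "default" values in a tool's JSON Schema before the request is forwarded to llama.cpp:

// Hypothetical "booltostring" transform: walk a tool's JSON Schema and
// replace boolean literals in "default" with their string form, so that
// llama.cpp's chat template never applies `tojson` to a Boolean.
function boolDefaultsToStrings(schema: unknown): unknown {
  if (Array.isArray(schema)) {
    return schema.map(boolDefaultsToStrings);
  }
  if (schema !== null && typeof schema === "object") {
    const out: Record<string, unknown> = {};
    for (const [key, value] of Object.entries(schema as Record<string, unknown>)) {
      if (key === "default" && typeof value === "boolean") {
        out[key] = value ? "true" : "false"; // e.g. false -> "false"
      } else {
        out[key] = boolDefaultsToStrings(value);
      }
    }
    return out;
  }
  return schema;
}

// Applied to each tool definition before forwarding the request
// (the request/tool shapes here follow the OpenAI-style body above):
// request.tools = request.tools?.map(t => ({
//   ...t,
//   function: { ...t.function, parameters: boolDefaultsToStrings(t.function.parameters) },
// }));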
First Bad Commit
No response
Relevant log output
Logs
srv log_server_r: request: POST /v1/chat/completions 127.0.0.1 500
srv operator(): got exception: {"error":{"code":500,"message":"\n------------\nWhile executing FilterExpression at line 133, column 66 in source:\n...{- \", // default: \" + param_spec.default|tojson }}↵ {%- endif...\n ^\nError: Unknown (built-in) filter 'tojson' for type Boolean","type":"server_error"}}