Prerequisites
Feature Description
Running on self-compiled build: 7972 (e06088d)
In the WebUI, when sending a message in a chat that has reasoning content, said reasoning content is NOT sent along to the LLM (where Jinja prompts may or may not strip previous thinking blocks). A setting of some sort to make the reasoning content be sent along would be useful, if model templates give the option to choose (e.g. GLM4.7 Flash).
Motivation
Whenever I send a few messages to GLM4.7 Flash in the WebUI, it does its usual reasoning and then outputs the content. However, every user turn purges that reasoning content from the context (by simply not including it in the payload), making the model "forget" its previous reasoning. Even if not directly helpful in all cases, an option to A/B test this at the user level wouldn't hurt.
Possible Implementation
From what little I dug into, the conversion from DatabaseMessage objects to APIMessage payloads only considers the content field, not the reasoning content.
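A minimal sketch of how an opt-in setting could thread through that conversion. Note this is illustrative only: the type names (DatabaseMessage, ApiChatMessage), the function (toApiMessages), and the includeReasoning flag are assumptions, not the actual WebUI code.

```typescript
// Hypothetical shapes; the real WebUI types will differ.
interface DatabaseMessage {
  role: 'user' | 'assistant';
  content: string;
  reasoningContent?: string; // thinking block captured from the model
}

interface ApiChatMessage {
  role: string;
  content: string;
  reasoning_content?: string; // forwarded so the chat template can decide to keep or strip it
}

// Convert stored messages to API payload messages, optionally carrying
// reasoning content along when the user enables the setting.
function toApiMessages(
  messages: DatabaseMessage[],
  includeReasoning: boolean,
): ApiChatMessage[] {
  return messages.map((m) => {
    const out: ApiChatMessage = { role: m.role, content: m.content };
    if (includeReasoning && m.reasoningContent) {
      out.reasoning_content = m.reasoningContent;
    }
    return out;
  });
}
```

With the flag off, the payload stays exactly as it is today, so the default behavior is unchanged and the option is purely additive for A/B testing.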