Description
Please explain the motivation behind the feature request.
While there is an environment variable for the total context window (I run goose in an Anaconda env, so when using qwen3-code:latest on a local Ollama host I set it with: conda env config vars set GOOSE_CONTEXT_LIMIT=32600), I need a way to increase the maximum input size of a prompt. The default is 4K tokens, which is incredibly restrictive.
This is the warning I need corrected:
time=2026-02-16T21:37:35.854-06:00 level=WARN source=runner.go:187 msg="truncating input prompt" limit=4096 prompt=7176 keep=4 new=4096
Complex prompts get truncated into nonsense, so the model ends up returning weird responses.
Describe the solution you'd like
Please document or create a variable that can be set, such as GOOSE_INPUT_LIMIT, and let us choose the maximum number of tokens allowed for a single input prompt.
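A minimal sketch of how the requested variable could be set, mirroring how GOOSE_CONTEXT_LIMIT is set above. Note that GOOSE_INPUT_LIMIT is this request's proposal and does not exist in goose today; the name and value here are hypothetical:

```shell
# Hypothetical variable proposed by this request -- not yet supported by goose.
# Persist it in a conda env, the same way the report sets GOOSE_CONTEXT_LIMIT:
#   conda env config vars set GOOSE_INPUT_LIMIT=32600
# Or export it for the current shell session:
export GOOSE_INPUT_LIMIT=32600
echo "GOOSE_INPUT_LIMIT=$GOOSE_INPUT_LIMIT"
```

Either form would let the prompt limit match the configured context window instead of the 4096-token default seen in the warning above.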
Describe alternatives you've considered
Remove the limit entirely, or default it to the size of the full context window. Agent tasks will require huge token budgets for specific workloads.
Additional context
N/A
- I have verified this does not duplicate an existing feature request