Fix eval prompt collision with model flag #25

Open
KevinMeisel wants to merge 1 commit into huggingface:main from KevinMeisel:fix/eval-prompt-model-collision
Conversation

@KevinMeisel

Summary

  • Prevent FastAgent from consuming upskill CLI args by setting parse_cli_args=False.
  • Normalize provider-prefixed model IDs when --provider is used.

Root Cause

FastAgent parses CLI args itself and treats -m/--message as a user prompt. Upskill's -m model option collides with this, so the model ID string is silently injected into the chat request as the message instead of selecting a model.

Changes

  • src/upskill/cli.py: set parse_cli_args=False on FastAgent
  • src/upskill/cli.py: provider/model normalization helper

Files

  • src/upskill/cli.py

@KevinMeisel
Author

Fixes #24

@sysradium

Oh yeah, I had that problem as well. Took me some time to figure out why -m "qwen3" just silently fails.

@evalstate
Collaborator

Fixed in the latest version.
