Problem
`CloudflareProvider.formatRequest` (`src/providers/cloudflare.ts`) does not forward a `lora` field to `env.AI.run()`. This blocks any consumer of `@stackbilt/llm-providers` from using a Cloudflare Workers AI fine-tune (LoRA adapter) through the package.
Concrete example: codebeast (Stackbilt-dev/codebeast) ships a fine-tuned Qwen via LoRA UUID `6d028a43-759e-417f-83fb-fa9b681d81f4`, applied at inference to `@cf/qwen/qwen2.5-coder-32b-instruct`. To use it from another worker, callers today must bypass `LLMProviders` entirely and call `env.AI.run(model, { ..., lora })` directly — which violates Stackbilt's "no bolted-in LLM logic, route everything through `@stackbilt/llm-providers`" policy.
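For reference, a minimal sketch of the bypass a consumer worker has to write today; the prompt content is illustrative, and `lora` is the standard Workers AI input option for LoRA-capable models:

```ts
// Today: calling the Workers AI binding directly, skipping @stackbilt/llm-providers
// entirely. This is exactly the bypass the routing policy forbids.
const response = await env.AI.run("@cf/qwen/qwen2.5-coder-32b-instruct", {
  messages: [{ role: "user", content: "Refactor this handler to use streams." }],
  lora: "6d028a43-759e-417f-83fb-fa9b681d81f4", // codebeast LoRA adapter UUID
});
```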
Discovered while
Building `aegis-daemon`'s new `/api/internal/cheap-llm` delegation route (aegis v2.2.0). Wanted the `code` task type to use the codebeast Qwen tune; ended up shipping with the base model and filing this issue rather than carving out a bypass.
Suggested change
- Extend `LLMRequest` (`src/types.ts`) with an optional `lora?: string` field (LoRA name or UUID — Workers AI accepts either).
- In `CloudflareProvider.formatRequest`, pass `lora` through to the `ai.run()` options object when present (see the sketch after this list).
- Other providers ignore the field (no-op).
- Add a README example showing how to call a fine-tune through the package.
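A rough sketch of the two changes, assuming the current shape of `LLMRequest` and `CloudflareProvider.formatRequest`; every field other than `lora` is a placeholder, not the package's actual API:

```ts
// src/types.ts (sketch; existing fields shown here are placeholders)
export interface LLMRequest {
  model: string;
  messages: Array<{ role: string; content: string }>;
  maxTokens?: number;
  /** LoRA adapter name or UUID; forwarded verbatim to Workers AI. */
  lora?: string;
}

// src/providers/cloudflare.ts (sketch)
export class CloudflareProvider {
  formatRequest(req: LLMRequest): Record<string, unknown> {
    const options: Record<string, unknown> = {
      messages: req.messages,
      max_tokens: req.maxTokens,
    };
    // Attach the adapter only when the caller supplied one; other providers
    // never read req.lora, so the field is a no-op for them.
    if (req.lora !== undefined) {
      options.lora = req.lora;
    }
    return options;
  }
}
```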
Why optional, not required
LoRAs are CF-account-scoped resources. The package can't validate the UUID — it just needs to forward what the caller passes. Caller is responsible for ensuring the binding's account hosts the adapter.
Downstream
Once shipped, `aegis-daemon`'s `cheap-llm` route can switch the `code` task default from base `@cf/qwen/qwen2.5-coder-32b-instruct` to the codebeast-tuned variant without violating `feedback_no_bolted_llm_logic.md`.
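For illustration only, the downstream call might then look something like this; the entry-point and option names are assumptions, and only the `lora` field is what this issue proposes:

```ts
// Hypothetical: aegis-daemon's cheap-llm "code" task routed through the package
// once lora lands (the providers.run entry point is an assumed call shape).
const taskPrompt = "Write a unit test for the parser.";
const result = await providers.run({
  model: "@cf/qwen/qwen2.5-coder-32b-instruct",
  messages: [{ role: "user", content: taskPrompt }],
  lora: "6d028a43-759e-417f-83fb-fa9b681d81f4", // codebeast-tuned Qwen adapter
});
```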