LocalAI version:
v0.18.0
Environment, CPU architecture, OS, and Version:
Describe the bug
To Reproduce
Enable prompt_cache_all and set prompt_cache_path in the configuration of a llama.cpp compatible model
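For reference, the repro settings above might look like this in a model YAML config. This is a hypothetical sketch: the file name, model name, and cache path are placeholders, and field placement is assumed from typical LocalAI model config conventions.

```yaml
# hypothetical model config (e.g. models/my-model.yaml) -- names are placeholders
name: my-llama-model
backend: llama
parameters:
  model: ggml-model.bin   # any llama.cpp compatible model file
prompt_cache_all: true                 # cache the prompt for every request
prompt_cache_path: "cache/prompt-cache" # where the cache is written, relative to the models dir
```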
Expected behavior
Logs
N/A (will collect soon)
Additional context