A CLI tool that queries a locally deployed ollama model to help developers be more productive at work.

Requirements:
- Docker
- Python 3.10+
- make
Run the following command to prepare the environment for the CLI:

```shell
make prepare
```

Note: make sure port 11434 is free, because the ollama container will use it to communicate with the CLI.
The command will create an ollama container, pull the codellama model, and create a Python virtual environment for the CLI to use.
And now you are ready to go!
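The port requirement from the note above can be checked programmatically before running `make prepare`. A minimal sketch, assuming a plain TCP probe is enough (the `port_in_use` helper is illustrative, not part of the CLI):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        # connect_ex returns 0 when the connection succeeds,
        # i.e. when another process already occupies the port.
        return sock.connect_ex((host, port)) == 0

if port_in_use(11434):
    print("Port 11434 is taken; stop the conflicting service before running make prepare.")
```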
You can always ask the CLI to show the help message:
```shell
./main.py --help
```

```
usage: main.py [-h] {generate,review,ask} ...

positional arguments:
  {generate,review,ask}
                        possible methods to use:
    generate            generating answer or file with a code.
    review              reviewing code with codellama model.
    ask                 ask something codellama model.

options:
  -h, --help            show this help message and exit
```

You can ask the codellama model a question with the ask utility:
```shell
./main.py ask -p "What is the best programming language to learn right now?"
```

Parameters:
- `-p`: prompt to send to the codellama model. Required.
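Under the hood, a command like this can talk to the ollama container over its HTTP API on port 11434. A hedged sketch of what such a request might look like; the helper names are hypothetical, while the `/api/generate` endpoint and its `model`/`prompt`/`stream` fields come from ollama's HTTP API:

```python
import json
import urllib.request

# Default ollama port, matching the setup note above.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "codellama") -> dict:
    """Assemble the JSON body that ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str) -> str:
    """Send the prompt to the local ollama container and return its answer."""
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```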
You can ask the codellama model to generate code with the generate utility:
```shell
./main.py generate -p "Generate code for simple API in FastAPI." -o api.py
```

Parameters:
- `-p`: prompt to send to the codellama model. Required.
- `-o`: output file to write the generated code to. Defaults to "". If empty, the answer is printed to the console.
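The `-o` behaviour described above can be sketched as a small helper, assuming the CLI writes the model's answer either to the given file or to stdout (the `emit_answer` name is hypothetical):

```python
from pathlib import Path

def emit_answer(answer: str, output: str = "") -> None:
    """Write the generated code to `output`, or print it when no file is given."""
    if output:
        # Mirrors the documented -o flag: a non-empty path receives the code.
        Path(output).write_text(answer, encoding="utf-8")
    else:
        # Empty default: fall back to the console, as the README describes.
        print(answer)
```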
You can ask the codellama model to review code with the review utility:
```shell
./main.py review -p "How can I improve following code?" -f api.py
```

Parameters:
- `-p`: prompt to send to the codellama model. Required.
- `-f`: file with the code to review. Required.
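The review flow presumably combines the `-p` prompt with the contents of the `-f` file before sending them to the model. A hypothetical sketch of that step (the `compose_review_prompt` helper is an assumption, not the CLI's actual code):

```python
from pathlib import Path

def compose_review_prompt(prompt: str, code_file: str) -> str:
    """Prepend the user's question to the code read from `code_file`."""
    code = Path(code_file).read_text(encoding="utf-8")
    # The combined text is what would be sent to codellama as a single prompt.
    return f"{prompt}\n\nCode to review:\n{code}"
```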