# Install
cargo install devai
# Init (optional; will be executed on each run as well)
devai init
# Will fix all code comments in all matching files
devai run proof-rust-comments -f "./src/m*.rs"
# How: It will run the installed Command Agent file ".devai/defaults/proof-rust-comments.md" on all source files matching "./src/m*.rs"
# IMPORTANT: Make sure everything is committed before usage.

ONE Command Agent Markdown File defines the full agent flow:
- `items` get expanded from the `-f` file matches (more ways to generate items later).
- `-> Data` scripting for full control over what data to put in the context.
- `-> Instruction` templating (Handlebars) for full control over the prompt layout.
- `-> Output` scripting for full control over how to manage the AI output.
Data, Instruction, and Output (and more later) are all defined in a single file (see below), called the Command Agent File.
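For orientation, here is a minimal sketch of what such a Command Agent File could look like. This is an illustration only: the exact property names on `item` (e.g. `path`, `content`) and the `save_file` helper are assumptions for the sake of the example, not the actual devai API.

````markdown
# Data

```rhai
// Hypothetical: expose the matched file's content to the template.
// The real item fields may be named differently.
#{ path: item.path, content: item.content }
```

# Instruction

Proofread the code comments in the following Rust file.
Fix typos and grammar only; do not change the code.

{{data.content}}

# Output

```rhai
// Hypothetical: write the AI's answer back to the same file.
// `save_file` is a placeholder name, not a confirmed devai function.
save_file(data.path, ai_output);
```
````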
Supports all models/providers supported by the genai crate (see below for more information).
You can customize the model and concurrency in .devai/config.toml.
v0.1.1 Note: New `.devai/` file structure with the new `.devai` file extension. See the `.devai/` folder structure below.
IMPORTANT: In VSCode or your editor, map the `*.devai` extension to markdown to benefit from markdown highlighting. Devai agent files are markdown files.
IMPORTANT: Make sure to run this command line when everything is committed so that overwritten files can be reverted easily.
P.S. If possible, try to refrain from publishing devai-custom type crates on crates.io, as this might be more confusing than helpful.
devai uses the genai crate, and therefore the simplest way to provide the API keys for each provider is via environment variables in the terminal when running devai.
Here are the environment variable names used:
OPENAI_API_KEY
ANTHROPIC_API_KEY
GEMINI_API_KEY
GROQ_API_KEY
COHERE_API_KEY
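For example, a typical session exports the key for the provider in use before invoking devai. This is a sketch; the key value below is a placeholder, not a real key:

```shell
# Placeholder value: substitute the real key for the provider your model uses
export OPENAI_API_KEY="sk-your-key-here"

# devai will pick the key up from the environment on the next run,
# e.g. `devai run proof-rust-comments -f "./src/main.rs"`
```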
Usage: `devai run proof-rust-comments -f "./src/main.rs"`
(or use any glob, like `-f "./src/**/*.rs"`)
- This will initialize the `.devai/defaults` folder with the "Command Agent Markdown" `proof-rust-comments.md` (see `.devai/defaults/proof-rust-comments.md`) and run it with genai as follows:
  - `-f "./src/**/*.rs"`: The `-f` command line argument takes a glob and will create an "item" for each file, which can then be accessed in the `# Data` scripting section.
  - `# Data` contains a `rhai` block that will get executed with the `item` value (the file reference in our example above).
    - With `rhai`, there are some utility functions to list files, load file content, and such, which can then be used in the instruction section.
  - `# Instruction`, which is a Handlebars template section, has access to `item` as well as the output of the `# Data` section, accessible as the `data` variable.
    - This will be sent to the AI.
  - `# Output` executes another `rhai` block, with access to `item`, `data`, and `ai_output`, which is the string returned by the AI.
    - It can save files in place or create new files.
    - Later, it will even be able to queue new devai work.
  - By default, this will run with `gpt-4o-mini` and look for the `OPENAI_API_KEY` environment variable.
    - It supports all AI providers supported by the genai crate.
    - Here are the environment variable names per provider: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `COHERE_API_KEY`, `GEMINI_API_KEY`, `GROQ_API_KEY`.
    - On Mac, if the environment variable is not present, it will attempt to prompt for the key and get/save it from the keychain, under the devai group.
# Will create/update the .devai/ settings folder (not required, automatically runs on "run")
devai init
# Will execute the proof-rust-comments.md from `.devai/customs/` or `.devai/defaults/` on
# any file matching `./**/mod.rs` (those will become 'items' in the data section)
devai run proof-rust-comments -f mod.rs
# Verbose mode: prints to the console what is sent to the AI, the AI response, and the output return value if it is string-like
devai run proof-rust-comments -f mod.rs --verbose

# Verbose and watch mode: every time proof-rust-comments is updated, it will run again
devai run proof-rust-comments -f main.rs -v -w

# Verbose, watch, and dry mode "req": prints only the rendered instruction
devai run proof-rust-comments -f main.rs -v -w --dry req

# Verbose, watch, and dry mode "res": prints the rendered instruction and the AI response,
# and will NOT execute the # Output section
devai run proof-rust-comments -f main.rs -v -w --dry res
# Happy coding!

- `init` sub-command: initializes or updates the `.devai/` folder (non-destructive; it only adds missing files).
- `run` sub-command:
  - The first argument is the command name.
  - `-f`: the file name or glob for the input files used as items. Can be given multiple times.
  - `--verbose` (`-v`): prints the rendered output to the command line.
  - `--dry req`: performs a dry run of the request by running only the data and instruction sections. Use `--verbose` to print out the sections.
  - `--dry res`: performs a dry run of the request, sends it to the AI, and returns the AI output (it does not run the output section). Use `--verbose` to see what has been sent and returned.
(Updated in version 0.1.1 - migration from 0.1.0 implemented on devai run and devai init)
- `.devai/` - The root folder of devai
  - `custom/` - The user's custom agents and templates. These take precedence over the matching files in `.devai/default/...`.
    - `command-agent/` - The custom agents.
    - `new-template/` - Template(s) used to create new agents, e.g., `devai new my-new-cool-agent`.
      - `command-agent/` - The folder containing the custom templates for command agents.
      - `solo-agent/` - The folder containing custom templates for solo agents (coming later).
  - `default/` - The default command agents and templates provided by devai (these files will only be created if missing).
    - `command-agent/` - The default command agents.
    - `new-template/` - The default template(s) used to create new agents, e.g., `devai new my-new-cool-agent`.
      - `command-agent/` - The folder containing the default templates for command agents.
      - `solo-agent/` - The folder containing the default templates for solo agents (coming later).
See `.devai/defaults/proof-rust-comments.md`.
On `devai run` or `devai init`, a `.devai/config.toml` will be created with the following content:

[genai]
# Required (any model the Rust genai crate supports)
model = "gpt-4o-mini"

[runtime]
# Defaults to 1 if absent. A great way to increase speed when using remote AI services.
items_concurrency = 1

Planned next:
- Support for the `# Items` section with `yaml` or `Rhai`.
- More `Rhai` modules/functions.
- Support for `# Before All`, `# Before`, `# After`, and `# After All` sections (all `Rhai`).
- `--capture`, which will perform the normal run but capture the request and response in the request/response files.