vals is a tool for managing configuration values and secrets.
It supports various backends including:
- Vault
- AWS SSM Parameter Store
- AWS Secrets Manager
- AWS S3
- GCP Secrets Manager
- SOPS-encrypted files
- Terraform State
- CredHub (coming soon)
- Use `vals eval -f refs.yaml` to replace all the refs in the file with actual values and secrets.
- Use `vals exec -f env.yaml -- <COMMAND>` to populate envvars and execute the command (sketched below).
- Use `vals env -f env.yaml` to render envvars that are consumable by `eval` or a tool like `direnv`.
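As a rough sketch of the latter two forms (the `env.yaml` contents and the SSM path below are placeholders, not from the project docs):

```console
# env.yaml maps environment variable names to refs:
$ cat > env.yaml <<EOF
DATABASE_PASSWORD: ref+awsssm://myapp/database/password
EOF

# Populate the envvars and run a command with them:
$ vals exec -f env.yaml -- printenv DATABASE_PASSWORD

# Render the envvars for consumption by eval or direnv:
$ vals env -f env.yaml
```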
```console
vals is a Helm-like configuration "Values" loader with support for various sources and merge strategies

Usage:
  vals [command]

Available Commands:
  eval      Evaluate a JSON/YAML document and replace any template expressions in it and prints the result
  exec      Populates the environment variables and executes the command
  env       Renders environment variables to be consumed by eval or a tool like direnv
  ksdecode  Decode YAML document(s) by converting Secret resources' "data" to "stringData" for use with "vals eval"

Use "vals [command] --help" for more information about a command
```
vals has a collection of providers, each of which can be referred to with a URI scheme that looks like `ref+<TYPE>`.
For this example, use the Vault provider.
Let's start by writing some secret value to Vault:
```console
$ vault kv put secret/foo mykey=myvalue
```

Now input the template of your YAML and refer to vals' Vault provider by using `ref+vault` in the URI scheme:
```console
$ echo "foo: ref+vault://secret/data/foo?proto=http#/mykey" | \
    VAULT_TOKEN=yourtoken VAULT_ADDR=http://127.0.0.1:8200/ vals eval -f -
```

Voila! vals, replacing every reference to your secret value in Vault, produces output that looks like:

```yaml
foo: myvalue
```

which is equivalent to that of the following shell script:
```bash
export VAULT_TOKEN=yourtoken VAULT_ADDR=http://127.0.0.1:8200/
cat <<EOF
foo: $(vault kv get -format json secret/foo | jq -r .data.data.mykey)
EOF
```

Save the YAML content to `x.vals.yaml`; running `vals eval -f x.vals.yaml` produces output equivalent to the previous one:
```yaml
foo: myvalue
```

Use value references as Helm Chart values, so that you can feed the `helm template` output to `vals eval -f -` for transforming the refs to actual secrets.
```console
$ helm template mysql-1.3.2.tgz --set mysqlPassword='ref+vault://secret/data/foo#/mykey' | vals ksdecode -o yaml -f - | tee manifests.yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: release-name-mysql
    chart: mysql-1.3.2
    heritage: Tiller
    release: release-name
  name: release-name-mysql
  namespace: default
stringData:
  mysql-password: ref+vault://secret/data/foo#/mykey
  mysql-root-password: vZQmqdGw3z
type: Opaque
```

This manifest is safe to commit into your version-control system (GitOps!) as it doesn't contain actual secrets.
When you finally deploy the manifests, run `vals eval` to replace all the refs with actual secrets:

```console
$ cat manifests.yaml | vals eval -f - | tee all.yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: release-name-mysql
    chart: mysql-1.3.2
    heritage: Tiller
    release: release-name
  name: release-name-mysql
  namespace: default
stringData:
  mysql-password: myvalue
  mysql-root-password: 0A8V1SER9t
type: Opaque
```

Finally, run `kubectl apply` to apply the manifests:
```console
$ kubectl apply -f all.yaml
```

This gives you a solid foundation for building a secure CD system, as you only need to allow access to a secrets store like Vault from the servers or containers that pull safe manifests and run deployments.
In other words, you can safely avoid giving your CI system access to the secrets store.
import "github.com/variantdev/vals"
secretsToCache := 256 // how many secrets to keep in LRU cache
runtime, err := vals.New(secretsToCache)
if err != nil {
return nil, err
}
valsRendered, err := runtime.Eval(map[string]interface{}{
"inline": map[string]interface{}{
"foo": "ref+vault://127.0.0.1:8200/mykv/foo?proto=http#/mykey",
"bar": map[string]interface{}{
"baz": "ref+vault://127.0.0.1:8200/mykv/foo?proto=http#/mykey",
},
},
})Now, vals contains a map[string]interface{} representation of the below:
```bash
cat <<EOF
foo: $(vault read -format=json mykv/foo | jq -r .data.mykey)
bar:
  baz: $(vault read -format=json mykv/foo | jq -r .data.mykey)
EOF
```

Supported backends:

- Vault
- AWS SSM Parameter Store
- AWS Secrets Manager
- AWS S3
- GCP Secrets Manager
- SOPS (powered by sops)
- Terraform (tfstate) powered by tfstate-lookup
- Echo
- File
Please see pkg/providers for the implementations of all the providers. The package names correspond to the URI schemes.
- `ref+vault://PATH/TO/KVBACKEND[?address=VAULT_ADDR:PORT&token_file=PATH/TO/FILE&token_env=VAULT_TOKEN]#/fieldkey`
- `ref+vault://PATH/TO/KVBACKEND[?address=VAULT_ADDR:PORT&auth_method=approle&role_id=ce5e571a-f7d4-4c73-93dd-fd6922119839&secret_id=5c9194b9-585e-4539-a865-f45604bd6f56]#/fieldkey`
- `address` defaults to the value of the `VAULT_ADDR` envvar.
- `auth_method` defaults to `token` and can also be set via the `VAULT_AUTH_METHOD` envvar.
- `role_id` defaults to the value of the `VAULT_ROLE_ID` envvar.
- `secret_id` defaults to the value of the `VAULT_SECRET_ID` envvar.
- `version` is the specific version of the secret to be obtained. Use it when you want to get a previous version of the secret's content.
Examples:
- `ref+vault://mykv/foo#/bar?address=https://vault1.example.com:8200` reads the value for the field `bar` in the kv `foo` on the Vault listening on `https://vault1.example.com`, with the Vault token read from the envvar `VAULT_TOKEN`, or the file `~/.vault_token` when the envvar is not set
- `ref+vault://mykv/foo#/bar?token_env=VAULT_TOKEN_VAULT1&address=https://vault1.example.com:8200` reads the value for the field `bar` in the kv `foo` on the Vault listening on `https://vault1.example.com`, with the Vault token read from the envvar `VAULT_TOKEN_VAULT1`
- `ref+vault://mykv/foo#/bar?token_file=~/.vault_token_vault1&address=https://vault1.example.com:8200` reads the value for the field `bar` in the kv `foo` on the Vault listening on `https://vault1.example.com`, with the Vault token read from the file `~/.vault_token_vault1`
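For example, a minimal sketch that reuses the secret written in the quick start above (`secret/foo` with the field `mykey`); the address and token are placeholders:

```console
$ export VAULT_ADDR=https://vault1.example.com:8200 VAULT_TOKEN=yourtoken
$ echo 'password: ref+vault://secret/data/foo#/mykey' | vals eval -f -
password: myvalue
```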
There are two providers for AWS:
- SSM Parameter Store
- Secrets Manager
Both providers support specifying the AWS region and profile via envvars or options:

- The AWS profile can be specified via the option `profile=AWS_PROFILE_NAME` or the envvar `AWS_PROFILE`
- The AWS region can be specified via the option `region=AWS_REGION_NAME` or the envvar `AWS_DEFAULT_REGION`
- `ref+awsssm://PATH/TO/PARAM[?region=REGION]`
- `ref+awsssm://PREFIX/TO/PARAMS[?region=REGION&mode=MODE&version=VERSION]#/PATH/TO/PARAM`
The first form results in a `GetParameter` call and the reference being replaced with the value of the parameter.
The second form is handy but fairly complex.
- If `mode` is not set, vals uses `GetParametersByPath(/PREFIX/TO/PARAMS)` and caches the result per prefix, rather than per single path, to reduce the number of API calls.
- If `mode` is `singleparam`, vals uses `GetParameter` to obtain the value for the key `/PREFIX/TO/PARAMS`, parses the value as a YAML hash, and extracts the value at the YAML path `PATH.TO.PARAM`.
- When `version` is set, vals uses `GetParameterHistoryPages` instead of `GetParameter`.
For the second form, you can optionally specify `recursive=true` to enable the recursive option of the `GetParametersByPath` API.
Let's say you had a number of parameters like:
```
NAME          VALUE
/foo/bar      {"BAR":"VALUE"}
/foo/bar/a    A
/foo/bar/b    B
```
- `ref+awsssm://foo/bar` and `ref+awsssm://foo#/bar` result in `{"BAR":"VALUE"}`
- `ref+awsssm://foo/bar/a`, `ref+awsssm://foo/bar?#/a`, and `ref+awsssm://foo?recursive=true#/bar/a` result in `A`
- `ref+awsssm://foo/bar?mode=singleparam#/BAR` results in `VALUE`

On the other hand:

- `ref+awsssm://foo/bar#/BAR` fails because `/foo/bar` evaluates to `{"a":"A","b":"B"}`.
- `ref+awsssm://foo?recursive=true#/bar` fails because `/foo?recursive=true` internally evaluates to `{"foo":{"a":"A","b":"B"}}`.
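For instance, given the parameters in the table above, a minimal sketch (assuming AWS credentials and a default region are already configured in your environment):

```console
$ echo 'bar_a: ref+awsssm://foo/bar/a' | vals eval -f -
bar_a: A
```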
- `ref+awssecrets://PATH/TO/SECRET[?region=REGION&version_stage=STAGE&version_id=ID]`
- `ref+awssecrets://PATH/TO/SECRET[?region=REGION&version_stage=STAGE&version_id=ID]#/yaml_or_json_key/in/secret`
- `ref+awssecrets://ACCOUNT:ARN:secret:/PATH/TO/PARAM[?region=REGION]`
The third form allows you to reference a secret in another AWS account (if your cross-account secret permissions are configured).
Examples:
- `ref+awssecrets://myteam/mykey`
- `ref+awssecrets://myteam/mydoc#/foo/bar`
- `ref+awssecrets://myteam/mykey?region=us-west-2`
- `ref+awssecrets:///arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:/myteam/mydoc/?region=ap-southeast-2#/secret/key`
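For instance, a hypothetical end-to-end sketch that stores a JSON document as a secret and extracts a single key from it (the secret name and contents are placeholders):

```console
$ aws secretsmanager create-secret --name myteam/mydoc --secret-string '{"foo":{"bar":"mypassword"}}'
$ echo 'password: ref+awssecrets://myteam/mydoc#/foo/bar' | vals eval -f -
password: mypassword
```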
- `ref+s3://BUCKET/KEY/OF/OBJECT[?region=REGION&profile=AWS_PROFILE&version_id=ID]`
- `ref+s3://BUCKET/KEY/OF/OBJECT[?region=REGION&profile=AWS_PROFILE&version_id=ID]#/yaml_or_json_key/in/secret`
Examples:
- `ref+s3://mybucket/mykey`
- `ref+s3://mybucket/myjsonobj#/foo/bar`
- `ref+s3://mybucket/myyamlobj#/foo/bar`
- `ref+s3://mybucket/mykey?region=us-west-2`
- `ref+s3://mybucket/mykey?profile=prod`
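Similarly, a hypothetical sketch with a JSON object uploaded to S3 (the bucket, key, and file are placeholders):

```console
$ aws s3 cp myjsonobj.json s3://mybucket/myjsonobj
$ echo 'password: ref+s3://mybucket/myjsonobj#/foo/bar' | vals eval -f -
```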
- `ref+gcpsecrets://PROJECT/SECRET[?version=VERSION]`
- `ref+gcpsecrets://PROJECT/SECRET[?version=VERSION]#/yaml_or_json_key/in/secret`
Examples:
- `ref+gcpsecrets://myproject/mysecret`
- `ref+gcpsecrets://myproject/mysecret?version=3`
- `ref+gcpsecrets://myproject/mysecret?version=3#/yaml_or_json_key/in/secret`
NOTE: Got an error like `expand gcpsecrets://project/secret-name?version=1: failed to get secret: rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.`? In some cases, such as when you need to use alternative credentials or a different project, you'll likely need to set the `GOOGLE_APPLICATION_CREDENTIALS` and/or `GCP_PROJECT` envvars.
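A minimal sketch of reading a GCP secret with explicit credentials; the project, secret name, and key file path are placeholders:

```console
$ export GOOGLE_APPLICATION_CREDENTIALS=$HOME/keys/sa.json GCP_PROJECT=myproject
$ echo 'token: ref+gcpsecrets://myproject/mysecret' | vals eval -f -
```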
`ref+tfstate://path/to/some.tfstate/RESOURCE_NAME`
Examples:
- `ref+tfstate://path/to/some.tfstate/aws_vpc.main.id`
- `ref+tfstate://path/to/some.tfstate/module.mymodule.aws_vpc.main.id`
- `ref+tfstate://path/to/some.tfstate/output.OUTPUT_NAME.value`
- `ref+tfstate://path/to/some.tfstate/data.thetype.name.foo.bar`
Let's say you're using terraform-aws-vpc to define a `module "vpc"` resource and you want to grab the ARN of the first VPC created by the module:
```console
$ tfstate-lookup -s ./terraform.tfstate module.vpc.aws_vpc.this[0].arn
arn:aws:ec2:us-east-2:ACCOUNT_ID:vpc/vpc-0cb48a12e4df7ad4c

$ echo 'foo: ref+tfstate://terraform.tfstate/module.vpc.aws_vpc.this[0].arn' | vals eval -f -
foo: arn:aws:ec2:us-east-2:ACCOUNT_ID:vpc/vpc-0cb48a12e4df7ad4c
```
You can also grab a Terraform output by using `output.OUTPUT_NAME.value` like:

```console
$ tfstate-lookup -s ./terraform.tfstate output.mystack_apply.value
```

which is equivalent to the following input for vals:

```console
$ echo 'foo: ref+tfstate://terraform.tfstate/output.mystack_apply.value' | vals eval -f -
```
Remote backends like S3 are also supported. When a remote backend is used in your Terraform workspace, there should be a local file at `.terraform/terraform.tfstate` that contains a reference to the backend:
```json
{
  "version": 3,
  "serial": 1,
  "lineage": "f1ad69de-68b8-9fe5-7e87-0cb70d8572c8",
  "backend": {
    "type": "s3",
    "config": {
      "access_key": null,
      "acl": null,
      "assume_role_policy": null,
      "bucket": "yourbucketname",
```
Just specify the path to that file, so that vals is able to transparently make the remote state contents available for you.
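For instance, a hypothetical sketch pointing vals at that backend-reference file (the workspace path and resource address are placeholders, and the backend credentials must be available to vals):

```console
$ echo 'vpc_id: ref+tfstate://path/to/workspace/.terraform/terraform.tfstate/aws_vpc.main.id' | vals eval -f -
```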
- The whole content of a SOPS-encrypted file: `ref+sops://base64_data_or_path_to_file?key_type=[filepath|base64]&format=[binary|dotenv|yaml]`
- The value at a specific path in an encrypted YAML/JSON document: `ref+sops://base64_data_or_path_to_file#/json_or_yaml_key/in/the_encrypted_doc`
Examples:
- `ref+sops://path/to/file` reads `path/to/file` as `binary` input
- `ref+sops://<base64>?key_type=base64` reads `<base64>` as the base64-encoded data to be decrypted by sops as `binary`
- `ref+sops://path/to/file#/foo/bar` reads `path/to/file` as a `yaml` file and returns the value at `foo.bar`
- `ref+sops://path/to/file?format=json#/foo/bar` reads `path/to/file` as a `json` file and returns the value at `foo.bar`
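A minimal sketch, assuming a SOPS key and creation rules are already configured (e.g. via `.sops.yaml`) and that `secret.yaml` contains a `foo.bar` key; the file names are placeholders:

```console
$ sops --encrypt secret.yaml > secret.enc.yaml
$ echo 'password: ref+sops://secret.enc.yaml#/foo/bar' | vals eval -f -
```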
The Echo provider echoes the string for testing purposes. Please read the original proposal to understand why we might need this.
`ref+echo://KEY1/KEY2/VALUE[#/path/to/the/value]`
Examples:
- `ref+echo://foo/bar` generates `foo/bar`
- `ref+echo://foo/bar/baz#/foo/bar` generates `baz`. This works because the host and the path part `foo/bar/baz` generate an object `{"foo":{"bar":"baz"}}`, and the fragment part `#/foo/bar` digs into the object to obtain the value at `$.foo.bar`.
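For instance, a quick way to see the expansion locally (the key name `greeting` is arbitrary):

```console
$ echo 'greeting: ref+echo://foo/bar' | vals eval -f -
greeting: foo/bar
```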
The File provider reads a local text file, or the value at a specific path in a YAML/JSON file.
`ref+file://path/to/file[#/path/to/the/value]`
Examples:
- `ref+file://foo/bar` loads the file at `foo/bar`
- `ref+file://some.yaml#/foo/bar` loads the YAML file at `some.yaml` and reads the value for the path `$.foo.bar`. Let's say `some.yaml` contains `{"foo":{"bar":"BAR"}}`; `key1: ref+file://some.yaml#/foo/bar` results in `key1: BAR`.
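For instance, a minimal sketch reproducing the second example above:

```console
$ cat > some.yaml <<EOF
foo:
  bar: BAR
EOF
$ echo 'key1: ref+file://some.yaml#/foo/bar' | vals eval -f -
key1: BAR
```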
vals has an advanced feature that helps you to do GitOps.
GitOps is a good practice that helps you to review how your change would affect the production environment.
To best leverage GitOps, it is important to remove dynamic aspects of your config before reviewing.
On the other hand, vals's primary purpose is to defer retrieval of values until the time of deployment, so that we won't accidentally git-commit secrets. The flip-side of this is, obviously, that you can't review the values themselves.
Using `ref+<value uri>` and `secretref+<value uri>` in combination with `vals eval --exclude-secretref` helps here.
By using the `secretref+<uri>` notation, you tell vals that it is a secret, while regular `ref+<uri>` instances are for config values.
```yaml
myconfigvalue: ref+awsssm://myconfig/value
mysecretvalue: secretref+awssecrets://mysecret/value
```

To leverage GitOps the most, by allowing you to review the content of `ref+awsssm://myconfig/value` only, run `vals eval --exclude-secretref` to generate the following:
```yaml
myconfigvalue: MYCONFIG_VALUE
mysecretvalue: secretref+awssecrets://mysecret/value
```

This is safe to commit into git because, as you've told vals, `awsssm://myconfig/value` is a config value that can be shared publicly.
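A minimal sketch of the review-time step, assuming the two-line document above is saved as `values.yaml` (the file names are placeholders):

```console
$ vals eval --exclude-secretref -f values.yaml > reviewable.yaml
$ git add reviewable.yaml   # safe to commit: secretref values stay unresolved
```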
In the early days of this project, the original author investigated whether it was a good idea to introduce a string-interpolation-like feature to vals:
```yaml
foo: xx${{ref "vals+vault://127.0.0.1:8200/mykv/foo?proto=http#/mykey" }}
bar:
  baz: yy${{ref "vals+vault://127.0.0.1:8200/mykv/foo?proto=http#/mykey" }}
```
But the idea was abandoned because it seemed to push vals toward becoming a full-fledged YAML templating engine. What if some users started wanting to use vals for transforming values with functions?
That's not the business of vals.
Instead, use vals solely for composing sets of values that are then input to another templating engine or data manipulation language like Jsonnet and CUE.
Merging YAMLs is out of the scope of vals. There are better alternatives like Jsonnet, Sprig, and CUE for the job.