diff --git a/.github/linters/.markdownlint.json b/.github/linters/.markdownlint.json index a0bc47d..e88c120 100644 --- a/.github/linters/.markdownlint.json +++ b/.github/linters/.markdownlint.json @@ -1,6 +1,13 @@ { - "default": true, - "MD003": false, - "MD013": false, - "MD033": false -} \ No newline at end of file + "default": true, + "MD003": false, + "MD013": { + "line_length": 400, + "code_blocks": false, + "tables": false + }, + "MD033": false, + "MD060": { + "style": "compact" + } +} diff --git a/Makefile b/Makefile index f62ebfb..a87dc55 100644 --- a/Makefile +++ b/Makefile @@ -5,9 +5,14 @@ help: ## This help message @echo "Pattern: $(NAME)" @awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m\033[0m\n"} /^(\s|[a-zA-Z_0-9-])+:.*?##/ { printf " \033[36m%-35s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST) +.PHONY: helm-docs +helm-docs: ## Not used here (no Helm chart); kept so workflows expecting the target succeed + @echo "helm-docs: skipped — rhvp.cluster_utils has no Chart.yaml or README.md.gotmpl." + .PHONY: super-linter super-linter: ## Runs super linter locally rm -rf .mypy_cache + rm -rf .ansible podman run -e RUN_LOCAL=true -e USE_FIND_ALGORITHM=true \ -e VALIDATE_ANSIBLE=false \ -e VALIDATE_BASH=false \ diff --git a/README.md b/README.md index 68af1ea..0aa6a30 100644 --- a/README.md +++ b/README.md @@ -8,3 +8,83 @@ The main purpose of this collections are to: loading local secrets files into VP secrets stores. 2. Help manage imperative and other utility functions of the cluster + +## SS CSI workload auth notes + +`vault_utils` can read `ssCsiWorkloadAuth` entries from clustergroup values and +create Vault Kubernetes auth roles for hub and spoke workloads. 
+ +### Parsing (load YAML) + +With **`vault_ss_csi_aggregate_clustergroup_sources`** true (default), SS CSI +uses the **`clustergroup_discovery`** role to determine stems: **main** from +`values-global.yaml`, then **managed** names from `clusterGroup.managedClusterGroups` +in the main `values-<main>.yaml|yml`. For **each** stem it loads a document from +the in-cluster **`ConfigMap` `values-<stem>`** (namespace +`openshift-gitops` by default), then falls back to **`pattern_dir/values-<stem>.yaml|yml`** +when enabled. ConfigMap data keys follow **`vault_ss_csi_clustergroup_configmap_key`** +and **`vault_ss_csi_clustergroup_configmap_key_candidates`**. Each document must +include **`clusterGroup`**. Stems are merged in **`clustergroup_load_order`** +(main first, then managed stems sorted) so later sources override duplicate +`clusterGroup.applications` keys. Set **`vault_ss_csi_aggregate_clustergroup_sources`** +to false to load only the **main** document (legacy: single ConfigMap or +`values-<main>.yaml`). + +### Extraction (find `ssCsiWorkloadAuth`) + +The role builds **`_vault_ss_csi_apps_by_stem`** (per-stem `clusterGroup.applications`) +and a merged **`clusterGroup.managedClusterGroups`**. It collects: + +- **`clusterGroup.applications.*.ssCsiWorkloadAuth`** — per stem; omit **`cluster`** + in values: the **main** stem resolves to **hub**; **managed** stems resolve to + that **stem name** so entries under `values-<stem>.yaml` stay spoke-scoped. +- **`clusterGroup.managedClusterGroups.*.applications.*.ssCsiWorkloadAuth`** — + from the merged map; omit **`cluster`** and the row targets that managed group + (**`name`**, else the group map key). + +### Projection (Vault roles) + +Rows are appended to **`_ss_csi_all_entries`** and split into hub vs spoke using +the computed **`cluster`** field (from stem or managed group when omitted in YAML); then **hub** identities get Vault Kubernetes +auth roles via **`vault_ss_csi_apply_one_hub_sscsi_role.yaml`**. Spoke rows are +normalized to **`vault_path`** later in the play (**`vault_ss_csi_normalize_spoke_entries_to_vault_path.yaml`** +during **`vault_spokes_init`**) and roles are written on each spoke mount +(**`vault_ss_csi_apply_one_spoke_sscsi_role.yaml`**). Role names use +**`<mount>-sscsi-<slug>`**; slugs come from **`vault_ss_csi_compute_role_slug.yaml`**. + +To **inspect** stems and files locally, run **`playbooks/list_clustergroups.yml`** +or **`playbooks/parse_clustergroup_values.yml`** (see **`roles/clustergroup_discovery/README.md`**). + +At the application level (`clusterGroup.applications.<name>`), the relevant +inputs are: + +- `ssCsiWorkloadAuth` (list) +- `ssCsiWorkloadAuth[].serviceAccount` (required) +- `ssCsiWorkloadAuth[].namespace` (optional) +- Omit **`cluster`** in pattern YAML; hub vs spoke comes from **which file or + `managedClusterGroups` branch** defines the list (see extraction above). Spoke + handling still normalizes to **`vault_path`** (full DNS), same as External Secrets.
+- `ssCsiWorkloadAuth[].roleSlug` / `role_slug` (optional): suffix only; the Vault + role is **`<mount>-sscsi-<roleSlug>`** where **`<mount>`** is the hub mount **`hub`** (or + configured hub path) or the spoke **`vault_path`**. When using the + **vp-sscsi-spc** chart, `spec.parameters.roleName` uses the same **mount** + as `vaultKubernetesMountPath` (typically **`global.clusterDomain`** on + spokes), not a short clustergroup label. +- application `namespace` (optional default for entry namespace) + +CA material management for SS CSI is no longer handled in this collection. +Provide CA distribution using a separate chart or platform mechanism. + +For the complete flow and task ordering, see +`secrets-initialization-and-vault-unseal.md`. + +## Pattern repository directory (`pattern_dir`) + +Playbooks need the path to your pattern Git checkout (where `values-global.yaml` +and related files live). Resolution order: the extra var `pattern_dir`, the environment +variable `PATTERN_DIR`, then the current working directory (`PWD`, else `pwd`). + +When running from the imperative container or another fixed working directory, +pass the repository root explicitly, for example `-e pattern_dir=/git/repo` (or add +equivalent extra vars via `clusterGroup.imperative.extraPlaybookArgs` in the +clustergroup chart). diff --git a/playbooks/list_clustergroups.yml b/playbooks/list_clustergroups.yml new file mode 100644 index 0000000..6759583 --- /dev/null +++ b/playbooks/list_clustergroups.yml @@ -0,0 +1,21 @@ +--- +# Discover values-<stem>.yaml|yml under pattern_dir. +# Resolves pattern_dir like pattern_settings (extra var pattern_dir, env PATTERN_DIR, cwd).
+- name: List pattern clustergroup value stems + hosts: localhost + connection: local + gather_facts: false + become: false + roles: + - pattern_settings + - role: clustergroup_discovery + tasks: + - name: Report clustergroup discovery + ansible.builtin.debug: + msg: + pattern_dir: "{{ pattern_dir }}" + main_clustergroup: "{{ main_clustergroup }}" + managed_clustergroup_names: "{{ managed_clustergroup_names }}" + clustergroup_names: "{{ clustergroup_names }}" + clustergroup_load_order: "{{ clustergroup_load_order }}" + clustergroup_file_entries: "{{ clustergroup_file_entries }}" diff --git a/playbooks/parse_clustergroup_values.yml b/playbooks/parse_clustergroup_values.yml new file mode 100644 index 0000000..59c2007 --- /dev/null +++ b/playbooks/parse_clustergroup_values.yml @@ -0,0 +1,22 @@ +--- +# Parse every top-level values-<stem>.yaml|yml into clustergroup_documents (stem -> root). +# Use for migration tooling or inspection; SS CSI merge uses the same discovery role internally. +- name: Parse pattern clustergroup values files + hosts: localhost + connection: local + gather_facts: false + become: false + roles: + - pattern_settings + - role: clustergroup_discovery + vars: + clustergroup_discovery_parse_documents: true + tasks: + - name: Summarize parsed clustergroup documents + ansible.builtin.debug: + msg: + pattern_dir: "{{ pattern_dir }}" + main_clustergroup: "{{ main_clustergroup }}" + managed_clustergroup_names: "{{ managed_clustergroup_names }}" + stems_parsed: "{{ clustergroup_documents | default({}) | dict2items | map(attribute='key') | sort | list }}" + document_count: "{{ clustergroup_documents | default({}) | length }}" diff --git a/playbooks/vault.yml b/playbooks/vault.yml index b0da940..85e72ba 100644 --- a/playbooks/vault.yml +++ b/playbooks/vault.yml @@ -4,6 +4,9 @@ connection: local gather_facts: false roles: + # Resolves pattern_dir (extra var / PATTERN_DIR / PWD) and loads main.clusterGroupName as main_clustergroup.
+ # vault_ss_csi_workload_auth prefers merged clustergroup YAML from an in-cluster ConfigMap, then file fallback. + - pattern_settings + - find_vp_secrets + - cluster_pre_check + - vault_utils diff --git a/roles/clustergroup_discovery/README.md b/roles/clustergroup_discovery/README.md new file mode 100644 index 0000000..a7718b7 --- /dev/null +++ b/roles/clustergroup_discovery/README.md @@ -0,0 +1,27 @@ +# clustergroup_discovery + +Ansible role that lists **which clustergroup value stems are in use** for a Validated Patterns checkout, without scanning every `values-*.yaml` on disk. + +## Behavior + +1. Resolve **`pattern_dir`** the same way as `pattern_settings` (extra var, `PATTERN_DIR`, then `PWD` / `pwd`). +2. Read **`main.clusterGroupName`** from `values-global.yaml` under `pattern_dir` (or use `main_clustergroup` / `main_clustergroupname` if the play already set them). +3. Load **`values-<main>.yaml`** or **`values-<main>.yml`** and read **`clusterGroup.managedClusterGroups`**. For each entry, the managed name is **`value.name`** if set, otherwise the **YAML key** (same rule as SS CSI managed-cluster-group defaults). +4. Expose facts: + - **`managed_clustergroup_names`** — sorted unique managed names + - **`clustergroup_load_order`** — `[main, …managed]` (main first; used when merging so later stems override duplicate `applications` keys) + - **`clustergroup_names`** — sorted list of all stems (main + managed) + - **`clustergroup_file_entries`** — `{name, path}` only for stems where a local `values-<stem>.yaml|yml` exists + +Optional: set **`clustergroup_discovery_parse_documents: true`** to fill **`clustergroup_documents`** (`<stem>` → parsed YAML root) for each file in `clustergroup_file_entries`. + +## Playbooks + +- `playbooks/list_clustergroups.yml` — runs `pattern_settings` + this role and prints the facts above. +- `playbooks/parse_clustergroup_values.yml` — same with parsing enabled. + +Requires `ANSIBLE_ROLES_PATH` (or collection layout) so `pattern_settings` and this role resolve. + +## Relation to SS CSI + +`vault_utils` includes this role when **`vault_ss_csi_aggregate_clustergroup_sources`** is true (default): SS CSI then loads and merges **one document per stem** in `clustergroup_load_order`. See `roles/vault_utils/README.md` (SS CSI section) for parsing, extraction, and projection.
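The ordering in step 4 can be sketched in Python (an illustrative model of the fact computation, not code from the role; the function name is hypothetical):

```python
def load_order(main, managed):
    """clustergroup_load_order: main stem first, then managed stems
    (deduplicated, sorted, main excluded) so later stems can override
    duplicate clusterGroup.applications keys during the SS CSI merge."""
    return [main] + sorted({m for m in managed if m != main})

order = load_order("hub", ["group-two", "group-one", "group-one", "hub"])
print(order)          # first element is always the main stem
print(sorted(order))  # clustergroup_names is the same set, fully sorted
```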
diff --git a/roles/clustergroup_discovery/defaults/main.yml b/roles/clustergroup_discovery/defaults/main.yml new file mode 100644 index 0000000..8e87810 --- /dev/null +++ b/roles/clustergroup_discovery/defaults/main.yml @@ -0,0 +1,3 @@ +--- +# When true, slurp and parse each resolved clustergroup file into clustergroup_documents (stem -> root mapping) +clustergroup_discovery_parse_documents: false diff --git a/roles/clustergroup_discovery/meta/main.yml b/roles/clustergroup_discovery/meta/main.yml new file mode 100644 index 0000000..8b20d5e --- /dev/null +++ b/roles/clustergroup_discovery/meta/main.yml @@ -0,0 +1,12 @@ +--- +galaxy_info: + author: rhvp + description: >- + Resolve main clustergroup from values-global, read managedClusterGroups from the main + values file, then optionally parse existing values-<stem> files for those stems. + license: Apache-2.0 + min_ansible_version: "2.14" + galaxy_tags: + - openshift + - gitops +dependencies: [] diff --git a/roles/clustergroup_discovery/tasks/main.yml b/roles/clustergroup_discovery/tasks/main.yml new file mode 100644 index 0000000..2e4207e --- /dev/null +++ b/roles/clustergroup_discovery/tasks/main.yml @@ -0,0 +1,118 @@ +--- +# Discover clustergroups in use: main from values-global, managed from main file's clusterGroup.managedClusterGroups. +# Sets: clustergroup_names (sorted stems), managed_clustergroup_names (sorted, excludes main), +# clustergroup_load_order (main first, then managed sorted — SS CSI merge precedence), +# clustergroup_file_entries ({name, path} only when values-<stem>.yaml|yml exists), +# clustergroup_documents (optional, stem -> parsed YAML root).
+ +- name: Resolve pattern_dir for clustergroup discovery + ansible.builtin.include_tasks: ../pattern_settings/tasks/resolve_overrides.yml + when: (pattern_dir | default('', true) | string | trim | length) == 0 + +- name: Fail when pattern_dir is empty after resolve + ansible.builtin.fail: + msg: >- + pattern_dir is required (extra var pattern_dir, env PATTERN_DIR, or cwd with values-global.yaml). + when: (pattern_dir | default('', true) | string | trim | length) == 0 + +- name: Resolve main clustergroup stem from facts or values-global.yaml + ansible.builtin.set_fact: + _clustergroup_discovery_main_stem: >- + {{ + ( + (main_clustergroupname | default(main_clustergroup | default('', true), true) | string | trim | length) > 0 + ) + | ternary( + main_clustergroupname | default(main_clustergroup, true) | string | trim, + ( + lookup('file', (pattern_dir | string | trim) ~ '/values-global.yaml') + | from_yaml + ).main.clusterGroupName | string | trim + ) + }} + +- name: Fail when main clusterGroupName cannot be resolved + ansible.builtin.fail: + msg: >- + Could not resolve main clustergroup (values-global.yaml missing .main.clusterGroupName or empty). 
+ when: (_clustergroup_discovery_main_stem | string | trim | length) == 0 + +- name: Stat main clustergroup values file (yaml) + ansible.builtin.stat: + path: "{{ pattern_dir | string | trim }}/values-{{ _clustergroup_discovery_main_stem }}.yaml" + register: _clustergroup_discovery_main_stat_yaml + +- name: Stat main clustergroup values file (yml) + ansible.builtin.stat: + path: "{{ pattern_dir | string | trim }}/values-{{ _clustergroup_discovery_main_stem }}.yml" + register: _clustergroup_discovery_main_stat_yml + when: not (_clustergroup_discovery_main_stat_yaml.stat.exists | default(false)) + +- name: Set path to main clustergroup values file when present + ansible.builtin.set_fact: + _clustergroup_main_values_path: "{{ pattern_dir | string | trim }}/values-{{ _clustergroup_discovery_main_stem }}.yaml" + when: _clustergroup_discovery_main_stat_yaml.stat.exists | default(false) + +- name: Set path to main clustergroup values file when only yml exists + ansible.builtin.set_fact: + _clustergroup_main_values_path: "{{ pattern_dir | string | trim }}/values-{{ _clustergroup_discovery_main_stem }}.yml" + when: + - _clustergroup_main_values_path is not defined + - _clustergroup_discovery_main_stat_yml is defined + - _clustergroup_discovery_main_stat_yml.stat.exists | default(false) + +- name: Load parsed root from main clustergroup values file + ansible.builtin.set_fact: + _clustergroup_main_root: "{{ lookup('file', _clustergroup_main_values_path) | from_yaml }}" + when: _clustergroup_main_values_path is defined + +- name: Default empty main clustergroup root when file is absent + ansible.builtin.set_fact: + _clustergroup_main_root: {} + when: _clustergroup_main_values_path is not defined + +- name: Collect managed clustergroup names from main file managedClusterGroups + ansible.builtin.set_fact: + managed_clustergroup_names: "{{ managed_clustergroup_names | default([]) + [_cgd_mcg_name] }}" + vars: + _cgd_mcg_name: "{{ (item.value.name | default(item.key, true)) | 
string | trim }}" + loop: "{{ (_clustergroup_main_root.clusterGroup | default({})).managedClusterGroups | default({}) | dict2items }}" + loop_control: + label: "{{ _cgd_mcg_name }}" + when: + - _clustergroup_main_root is mapping + - (_clustergroup_main_root.clusterGroup | default({})).managedClusterGroups is defined + - ((_clustergroup_main_root.clusterGroup | default({})).managedClusterGroups | default({})) is mapping + +- name: Finalize managed clustergroup names list + ansible.builtin.set_fact: + managed_clustergroup_names: "{{ managed_clustergroup_names | default([]) | unique | sort }}" + +- name: Set clustergroup load order (main first so managed values files override for SS CSI merge) + ansible.builtin.set_fact: + clustergroup_load_order: >- + {{ + ( + [_clustergroup_discovery_main_stem] + + (managed_clustergroup_names | reject('equalto', _clustergroup_discovery_main_stem) | list) + ) | unique | list + }} + +- name: Set sorted clustergroup names (all stems in use) + ansible.builtin.set_fact: + clustergroup_names: "{{ clustergroup_load_order | sort }}" + +- name: Build clustergroup_file_entries for stems that have a local values file + ansible.builtin.include_tasks: resolve_clustergroup_file_path.yml + loop: "{{ clustergroup_load_order }}" + loop_control: + loop_var: clustergroup_discovery_stem + +- name: Default empty clustergroup file entries + ansible.builtin.set_fact: + clustergroup_file_entries: [] + when: clustergroup_file_entries is not defined + +- name: Parse each resolved clustergroup values file when requested + ansible.builtin.include_tasks: parse_documents.yml + when: clustergroup_discovery_parse_documents | default(false) | bool diff --git a/roles/clustergroup_discovery/tasks/parse_documents.yml b/roles/clustergroup_discovery/tasks/parse_documents.yml new file mode 100644 index 0000000..e0d29ec --- /dev/null +++ b/roles/clustergroup_discovery/tasks/parse_documents.yml @@ -0,0 +1,7 @@ +--- +- name: Parse clustergroup values YAML into 
clustergroup_documents + ansible.builtin.set_fact: + clustergroup_documents: "{{ clustergroup_documents | default({}) | combine({item.name: (lookup('file', item.path) | from_yaml)}) }}" + loop: "{{ clustergroup_file_entries }}" + loop_control: + label: "{{ item.name }}" diff --git a/roles/clustergroup_discovery/tasks/resolve_clustergroup_file_path.yml b/roles/clustergroup_discovery/tasks/resolve_clustergroup_file_path.yml new file mode 100644 index 0000000..1e0778b --- /dev/null +++ b/roles/clustergroup_discovery/tasks/resolve_clustergroup_file_path.yml @@ -0,0 +1,32 @@ +--- +# loop_var: clustergroup_discovery_stem — append {name, path} to clustergroup_file_entries when file exists. + +- name: Stat values file for stem {{ clustergroup_discovery_stem }} (yaml) + ansible.builtin.stat: + path: "{{ pattern_dir | string | trim }}/values-{{ clustergroup_discovery_stem | string | trim }}.yaml" + register: _clustergroup_discovery_stem_stat_yaml + +- name: Stat values file for stem {{ clustergroup_discovery_stem }} (yml) + ansible.builtin.stat: + path: "{{ pattern_dir | string | trim }}/values-{{ clustergroup_discovery_stem | string | trim }}.yml" + register: _clustergroup_discovery_stem_stat_yml + +- name: Record clustergroup file entry for {{ clustergroup_discovery_stem }} (prefer yaml) + ansible.builtin.set_fact: + clustergroup_file_entries: "{{ clustergroup_file_entries | default([]) + [_entry] }}" + vars: + _entry: + name: "{{ clustergroup_discovery_stem | string | trim }}" + path: "{{ pattern_dir | string | trim }}/values-{{ clustergroup_discovery_stem | string | trim }}.yaml" + when: _clustergroup_discovery_stem_stat_yaml.stat.exists | default(false) + +- name: Record clustergroup file entry for {{ clustergroup_discovery_stem }} (yml fallback) + ansible.builtin.set_fact: + clustergroup_file_entries: "{{ clustergroup_file_entries | default([]) + [_entry] }}" + vars: + _entry: + name: "{{ clustergroup_discovery_stem | string | trim }}" + path: "{{ pattern_dir | string 
| trim }}/values-{{ clustergroup_discovery_stem | string | trim }}.yml" + when: + - not (_clustergroup_discovery_stem_stat_yaml.stat.exists | default(false)) + - _clustergroup_discovery_stem_stat_yml.stat.exists | default(false) diff --git a/roles/vault_utils/README.md b/roles/vault_utils/README.md index 50dbec1..15c3420 100644 --- a/roles/vault_utils/README.md +++ b/roles/vault_utils/README.md @@ -54,6 +54,151 @@ This role configures four secret paths in vault: be used with ESO's `PushSecrets` so you can push an existing secret from one namespace, to the vault under this path and then it can be retrieved by an `ExternalSecret` either in a different namespace *or* from an entirely different cluster. +## SS CSI workload auth + +This role can create Vault Kubernetes auth roles from +`clusterGroup.applications.*.ssCsiWorkloadAuth` and +`clusterGroup.managedClusterGroups.*.applications.*.ssCsiWorkloadAuth`. + +Implementation is split into **parsing** (load YAML), **extraction** (collect +`ssCsiWorkloadAuth` rows), and **projection** (normalize and write Vault +Kubernetes auth roles). 
Task entry points: + +| Stage | Primary task files | +| ----- | -------------------- | +| Parsing | `vault_ss_csi_load_clustergroup_values.yaml` (router), `vault_ss_csi_load_merged_clustergroup_values.yaml`, `vault_ss_csi_load_one_clustergroup_values_fragment.yaml`, `vault_ss_csi_load_clustergroup_values_legacy.yaml` | +| Extraction | `vault_ss_csi_workload_auth.yaml`, `vault_ss_csi_collect_applications_for_stem.yaml`, `vault_ss_csi_collect_one_application.yaml`, `vault_ss_csi_collect_one_entry.yaml`, `vault_ss_csi_collect_managed_group_application.yaml` | +| Projection | `vault_ss_csi_apply_one_hub_sscsi_role.yaml`, `vault_ss_csi_normalize_spoke_entries_to_vault_path.yaml` (in `vault_spokes_init`), `vault_ss_csi_apply_one_spoke_sscsi_role.yaml`, `vault_ss_csi_compute_role_slug.yaml` | + +### Parsing + +When **`vault_ss_csi_aggregate_clustergroup_sources`** is true (default), SS CSI +includes the **`clustergroup_discovery`** role (`../clustergroup_discovery/`) to +build **`clustergroup_load_order`**: main stem from `values-global.yaml`, then +managed names from **`clusterGroup.managedClusterGroups`** in the main values +file. For **each** stem in that order it loads one YAML root (prefer +`ConfigMap` **`values-<stem>`** in **`vault_ss_csi_clustergroup_configmap_namespace`**, +then local **`pattern_dir/values-<stem>.yaml`** or **`.yml`**), and merges: + +- **`clusterGroup.applications`** (shallow combine; later stems override keys) +- **`clusterGroup.managedClusterGroups`** (`combine(..., recursive=true)`) + +It also records **`_vault_ss_csi_apps_by_stem`** so extraction knows which +`applications` map came from which stem. The merged document is stored as +**`_vault_ss_csi_values_root`** for debugging and for the flattened +**`_vault_ss_csi_cluster_apps`** / **`_vault_ss_csi_managed_cluster_groups`** +facts.
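The two merge behaviors can be modeled in Python (a sketch of shallow vs. recursive combine semantics, loosely mirroring Ansible's `combine` filter; the helper names are hypothetical):

```python
def deep_merge(dst, src):
    """Recursive combine: nested mappings are merged key by key."""
    for key, val in src.items():
        if isinstance(val, dict) and isinstance(dst.get(key), dict):
            deep_merge(dst[key], val)
        else:
            dst[key] = val

def merge_stem(acc, doc):
    """Fold one stem's YAML root into the accumulator: applications merge
    shallowly (a later stem replaces a whole application entry), while
    managedClusterGroups merge recursively (nested keys are combined)."""
    cg = doc.get("clusterGroup", {})
    acc["applications"].update(cg.get("applications", {}))
    deep_merge(acc["managedClusterGroups"], cg.get("managedClusterGroups", {}))
    return acc
```

Folding stems in `clustergroup_load_order` therefore lets a later stem replace a whole application entry while only adding keys under an existing managed group.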
+ +When **`vault_ss_csi_aggregate_clustergroup_sources`** is false, only the +**legacy** path runs: one `ConfigMap` (default name `values-<main>` +unless **`vault_ss_csi_clustergroup_configmap_name`** is set), then optional +local **`vault_ss_csi_cluster_values_file`** or **`pattern_dir/values-<main>.yaml`**. + +Override defaults with `vault_ss_csi_clustergroup_configmap_namespace`, +`vault_ss_csi_clustergroup_configmap_name`, `vault_ss_csi_clustergroup_configmap_key`, +and `vault_ss_csi_clustergroup_configmap_key_candidates` as needed for your pattern. + +### Extraction + +**`vault_ss_csi_workload_auth.yaml`** (included from `vault_secrets_init.yaml`): + +1. Parses **`_vault_ss_csi_values_root`** into **`_vault_ss_csi_cluster_apps`** + and **`_vault_ss_csi_managed_cluster_groups`** (merged views). +2. Ensures **`_vault_ss_csi_apps_by_stem`** exists: after a multi-stem merge it is + filled by fragments; for legacy single-document load it is set to + `{<main>: <root>}`. +3. Walks **`clustergroup_load_order`** (or `[main]` if unset) via + **`vault_ss_csi_collect_applications_for_stem.yaml`**: for each stem, every + application that defines **`ssCsiWorkloadAuth`** is passed to + **`vault_ss_csi_collect_one_entry.yaml`**. Omit **`cluster`** in values: Ansible + sets **`cluster`** to **`hub`** when the stem is the main clustergroup, else to + the **stem string** (entries under `values-<stem>.yaml` default to that managed context). +4. Walks merged **`managedClusterGroups`** via **`vault_ss_csi_collect_managed_group_application.yaml`** + (omit **`cluster`**: nested apps default to the group **`name`**, else the group YAML key). + +### Projection + +Collected rows become **`_ss_csi_all_entries`**, then: + +- **Hub mount** (`auth/<mount>/role/...`): entries whose computed **`cluster`** is + `hub`, `local-cluster`, or empty — **`vault_ss_csi_apply_one_hub_sscsi_role.yaml`** + runs on the hub (**`vault_ss_csi_compute_role_slug.yaml`** for slug). +- **Spoke mounts**: other entries stay in **`_ss_csi_spoke_entries_raw`** until + **`vault_spokes_init`** runs **`vault_ss_csi_normalize_spoke_entries_to_vault_path.yaml`** + (match ACM / ESO, set internal **`cluster`** to **`vault_path`**), then + **`vault_ss_csi_apply_one_spoke_sscsi_role.yaml`** per spoke. + +Vault Kubernetes auth **role names** use the form **auth mount + `-sscsi-` + slug**. They must satisfy +Vault path rules (non-empty slug, no trailing `-`, bounded length on some versions). +This role derives `slug` from optional `roleSlug`, or from `vault_ss_csi_role_slug_mode` +(`hash` or `stable_slug`), and shortens to a SHA-1 prefix when +`vault_ss_csi_kubernetes_auth_role_name_max_length` would be exceeded (set to `0` +for no limit). If an older Vault returns **400 invalid role name**, use `hash` mode, +set a short explicit `roleSlug`, or lower `vault_ss_csi_kubernetes_auth_role_name_max_length`.
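The naming rules can be sketched as follows (a hypothetical Python model of `<mount>-sscsi-<slug>` derivation in `hash` mode with the length cap; the role's actual shortening step may differ):

```python
import hashlib

def role_name(mount, hash_input, role_slug=None, max_length=256):
    """An explicit roleSlug wins; otherwise hash the identity tuple
    (namespace|serviceAccount|app on the hub, vault_path-prefixed on spokes).
    Over-long names fall back to a short SHA-1 prefix slug."""
    slug = role_slug or hashlib.sha1(hash_input.encode()).hexdigest()
    name = f"{mount}-sscsi-{slug}"
    if max_length and len(name) > max_length:  # max_length == 0 disables the cap
        slug = hashlib.sha1(slug.encode()).hexdigest()[:12]
        name = f"{mount}-sscsi-{slug}"
    return name
```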
+ +For each `ssCsiWorkloadAuth` entry in pattern YAML: + +- required: `serviceAccount` +- optional: `namespace`, `roleSlug` (or `role_slug`) +- omit **`cluster`**; hub vs spoke is determined by **which stem file or + `managedClusterGroups` branch** defines the list (see extraction above). + +During `vault_spokes_init`, spoke rows are **normalized** so Vault uses **`vault_path`** +(FQDN) as the cluster ID, matching ESO and the Kubernetes auth mount on the spoke. + +**Charts (vp-sscsi-spc):** `SecretProviderClass` workload auth should use the same +idea: with `roleSlug` set, the chart emits **`roleName: <vaultKubernetesMountPath>-sscsi-<roleSlug>`** +where **`vaultKubernetesMountPath`** is the hub mount or **`global.clusterDomain`** +on the spoke (FQDN), not a short clustergroup label. + +Application-level `namespace` is used as the default when an entry does not set +`namespace`. + +Example (hub — `values-<main>.yaml`): + +```yaml +clusterGroup: + applications: + my-app: + namespace: my-app-namespace + ssCsiWorkloadAuth: + - serviceAccount: my-app-sa + roleSlug: my-app-my-app-sa-my-app +``` + +Example (spoke via `managedClusterGroups` — omit `cluster`; defaults to `name` / group key): + +```yaml +clusterGroup: + managedClusterGroups: + exampleRegion: + name: group-one + applications: + my-app: + namespace: my-app-namespace + ssCsiWorkloadAuth: + - serviceAccount: my-app-sa + roleSlug: my-app-my-app-sa-my-app +``` + +Example (spoke via managed stem file `values-group-one.yaml` — same list shape; stem sets targeting): + +```yaml +clusterGroup: + applications: + my-app: + namespace: my-app-namespace + ssCsiWorkloadAuth: + - serviceAccount: my-app-sa + roleSlug: my-app-my-app-sa-my-app +``` + +SS CSI CA material management is external to this role. Use a separate chart or +platform CA distribution workflow for Vault route trust. + +For a detailed end-to-end description of `vault.yml` task order and SS CSI +behavior, see `secrets-initialization-and-vault-unseal.md` in this repository.
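The hub-vs-spoke targeting behind these three examples can be condensed into a small Python model (illustrative only; the function names are not from the role):

```python
def resolve_cluster(stem, main_stem, group_name=None, group_key=None):
    """Entries omit `cluster`; placement decides it. Rows nested under
    managedClusterGroups take the group name (else the YAML key); rows in
    the main stem file are hub; rows in a managed stem file keep the stem."""
    if group_name or group_key:
        return group_name or group_key
    return "hub" if stem == main_stem else stem

def targets_hub(cluster):
    # The hub mount also covers local-cluster and unset values; the rest
    # become spoke rows, later normalized to vault_path.
    return cluster in ("hub", "local-cluster", "")
```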
+ ## Values secret file format Currently this role supports two formats: version 1.0 (which is the assumed diff --git a/roles/vault_utils/defaults/main.yml b/roles/vault_utils/defaults/main.yml index 23f67ee..d86527e 100644 --- a/roles/vault_utils/defaults/main.yml +++ b/roles/vault_utils/defaults/main.yml @@ -60,3 +60,49 @@ app_capabilities: '[\"read\"]' app_update_hub_role: true # Whether to create JWT roles per app (only for entries with jwt_role defined) app_create_jwt_roles: false + +# Vault Secrets Store CSI: extra Kubernetes auth role on the hub mount (same policy set as hub-role) +vault_csi_kubernetes_auth: false +vault_csi_kubernetes_role_name: "{{ vault_hub }}-csi-role" +vault_csi_service_account_namespace: "secrets-store-csi-driver" +vault_csi_service_account_name: "secrets-store-csi-driver" +vault_csi_role_ttl: "15m" + +# Pattern values (clustergroup): optional list ssCsiWorkloadAuth under each +# clusterGroup.applications.<name> +# or under clusterGroup.managedClusterGroups.<group>.applications.<name> +# (see vault_ss_csi_* tasks). Example element (omit cluster; placement sets hub vs spoke): +# { serviceAccount: my-sa, namespace: my-ns, optional roleSlug: suffix } +# Ansible fills cluster from stem (main -> hub, other stems -> stem name) or from managed group name/key. +# Spoke rows are normalized to vault_path before Vault role writes (same id as ESO). Vault role name is always +# <mount>-sscsi-<slug> where mount is hub or vault_path (vp-sscsi-spc uses the same mount for roleName). +# namespace defaults to the application namespace. +vault_ss_csi_from_applications: true +# When true, SS CSI loads ConfigMap/file per clustergroup stem and merges applications + +# managedClusterGroups (main stem first, then others alphabetically; later files override). +# When false, only the main clustergroup document is loaded (legacy behavior). +vault_ss_csi_aggregate_clustergroup_sources: true +# Prefer merged clustergroup values from an in-cluster ConfigMap (reflects GitOps overrides).
+vault_ss_csi_clustergroup_values_from_configmap: true +# Namespace containing the clustergroup values ConfigMap (OpenShift GitOps default). +vault_ss_csi_clustergroup_configmap_namespace: openshift-gitops +# If empty, the ConfigMap name defaults to values-<main> (same stem as values-<main>.yaml). +vault_ss_csi_clustergroup_configmap_name: "" +# If empty, try keys in vault_ss_csi_clustergroup_configmap_key_candidates in order. +vault_ss_csi_clustergroup_configmap_key: "" +vault_ss_csi_clustergroup_configmap_key_candidates: + - values.yaml + - helm-values.yaml + - values.yml +# When the ConfigMap is missing or does not contain a parseable clusterGroup document, slurp local file. +vault_ss_csi_fallback_local_clustergroup_file: true +# Override path to values-<main>.yaml; empty uses pattern_dir/values-{{ main_clustergroupname }}.yaml (fallback only) +vault_ss_csi_cluster_values_file: "" +vault_ss_csi_role_ttl: "15m" +# How Vault names Kubernetes auth roles: auth/<mount>/role/<mount>-sscsi-<slug> +# - hash: legacy SHA1 of namespace|serviceAccount|app (hub) or vault_path|... (spoke) +# - stable_slug: hub-sscsi-<namespace>-<serviceAccount>-<app> (sanitized); spokes prefix sanitized vault_path +# Per-entry override wins: ssCsiWorkloadAuth[].roleSlug (suffix only; still prefixed with <mount>-sscsi-) +vault_ss_csi_role_slug_mode: hash +# Full role name is <mount>-sscsi-<slug>. Cap length for older Vault (HTTP 400 invalid role name); 0 = no limit.
+vault_ss_csi_kubernetes_auth_role_name_max_length: 256 diff --git a/roles/vault_utils/tasks/vault_secrets_init.yaml b/roles/vault_utils/tasks/vault_secrets_init.yaml index f8ba36c..97a892d 100644 --- a/roles/vault_utils/tasks/vault_secrets_init.yaml +++ b/roles/vault_utils/tasks/vault_secrets_init.yaml @@ -178,3 +178,7 @@ policies="{{ _merged_hub_policies | join(',') }}" ttl="{{ vault_hub_ttl }}" when: _hub_role_needs_update | bool + +# SS CSI: clusterGroup.applications.*.ssCsiWorkloadAuth (+ optional legacy vault_csi_kubernetes_auth SA) +- name: Configure Vault Kubernetes auth for SS CSI workload identities + ansible.builtin.include_tasks: vault_ss_csi_workload_auth.yaml diff --git a/roles/vault_utils/tasks/vault_spokes_init.yaml b/roles/vault_utils/tasks/vault_spokes_init.yaml index ae0215c..4b2ca22 100644 --- a/roles/vault_utils/tasks/vault_spokes_init.yaml +++ b/roles/vault_utils/tasks/vault_spokes_init.yaml @@ -328,3 +328,6 @@ - item.key != "local-cluster" loop_control: label: "{{ item.key }}" + +- name: Configure Vault Kubernetes auth for SS CSI workload identities on spokes + ansible.builtin.import_tasks: vault_ss_csi_spoke_roles.yaml diff --git a/roles/vault_utils/tasks/vault_ss_csi_apply_one_hub_sscsi_role.yaml b/roles/vault_utils/tasks/vault_ss_csi_apply_one_hub_sscsi_role.yaml new file mode 100644 index 0000000..c037c0b --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_apply_one_hub_sscsi_role.yaml @@ -0,0 +1,17 @@ +--- +- name: Compute Vault role slug for hub SS CSI identity + ansible.builtin.include_tasks: vault_ss_csi_compute_role_slug.yaml + vars: + ss_csi_mount_prefix: "{{ vault_hub }}" + ss_csi_hash_input: "{{ (item.namespace | default('', true)) ~ '|' ~ (item.serviceAccount | default('', true)) ~ '|' ~ (item.app | default('', true)) }}" + +- name: Configure hub Vault Kubernetes auth role for SS CSI workload identity + kubernetes.core.k8s_exec: + namespace: "{{ vault_ns }}" + pod: "{{ vault_pod }}" + command: > + vault write auth/"{{ 
vault_hub }}"/role/"{{ vault_hub }}-sscsi-{{ _role_slug }}" + bound_service_account_names="{{ item.serviceAccount }}" + bound_service_account_namespaces="{{ item.namespace }}" + policies="{{ _merged_hub_policies | join(',') }}" + ttl="{{ vault_ss_csi_role_ttl }}" diff --git a/roles/vault_utils/tasks/vault_ss_csi_apply_one_spoke_sscsi_role.yaml b/roles/vault_utils/tasks/vault_ss_csi_apply_one_spoke_sscsi_role.yaml new file mode 100644 index 0000000..20f01fa --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_apply_one_spoke_sscsi_role.yaml @@ -0,0 +1,23 @@ +--- +- name: Compute Vault role slug for spoke SS CSI identity + ansible.builtin.include_tasks: vault_ss_csi_compute_role_slug.yaml + vars: + ss_csi_mount_prefix: "{{ vault_spoke_cluster_loop.value.vault_path }}" + ss_csi_hash_input: >- + {{ + (vault_spoke_cluster_loop.value.vault_path | string) + ~ '|' ~ (item.namespace | default('', true)) + ~ '|' ~ (item.serviceAccount | default('', true)) + ~ '|' ~ (item.app | default('', true)) + }} + +- name: Configure Vault SS CSI role on spoke {{ vault_spoke_cluster_loop.key }} + kubernetes.core.k8s_exec: + namespace: "{{ vault_ns }}" + pod: "{{ vault_pod }}" + command: > + vault write auth/{{ vault_spoke_cluster_loop.value.vault_path }}/role/{{ vault_spoke_cluster_loop.value.vault_path }}-sscsi-{{ _role_slug }} + bound_service_account_names="{{ item.serviceAccount }}" + bound_service_account_namespaces="{{ item.namespace }}" + policies="default,{{ vault_global_policy }}-secret,{{ vault_pushsecrets_policy }}-secret,{{ vault_spoke_cluster_loop.value.vault_path }}-secret" + ttl="{{ vault_spoke_ttl }}" diff --git a/roles/vault_utils/tasks/vault_ss_csi_collect_applications_for_stem.yaml b/roles/vault_utils/tasks/vault_ss_csi_collect_applications_for_stem.yaml new file mode 100644 index 0000000..eb0caa5 --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_collect_applications_for_stem.yaml @@ -0,0 +1,25 @@ +--- +# loop_var: cg_collect_stem — collect ssCsiWorkloadAuth 
from clusterGroup.applications for that stem only. +# Omit cluster in values: main stem -> hub, other stems -> stem name (see ss_csi_cluster_default_for_app below). + +- name: Collect SS CSI rows from clusterGroup.applications for stem {{ cg_collect_stem }} + ansible.builtin.include_tasks: vault_ss_csi_collect_one_application.yaml + loop: >- + {{ + ((_vault_ss_csi_apps_by_stem | default({}))[cg_collect_stem] | default({})) + | dict2items + | selectattr('value.ssCsiWorkloadAuth', 'defined') + | list + }} + loop_control: + loop_var: outer_item + vars: + ss_csi_cluster_default_for_app: >- + {{ + 'hub' + if ( + (cg_collect_stem | string | trim) + == (main_clustergroupname | default(main_clustergroup | default('', true), true) | string | trim) + ) + else (cg_collect_stem | string | trim) + }} diff --git a/roles/vault_utils/tasks/vault_ss_csi_collect_managed_group_application.yaml b/roles/vault_utils/tasks/vault_ss_csi_collect_managed_group_application.yaml new file mode 100644 index 0000000..0fd4904 --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_collect_managed_group_application.yaml @@ -0,0 +1,14 @@ +--- +# mcg_outer_item: { key: , value: } +# Reuses vault_ss_csi_collect_one_application; ss_csi_cluster_default_for_app fills cluster when omitted in each entry. 
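For readers tracing where an entry's target cluster comes from, here is a rough Python rendering of the default cascade (function names are mine, not the role's): the main stem defaults to the hub, other stems and managed cluster groups default to their own name, and a per-entry `cluster` always wins.

```python
def default_cluster_for_app(stem: str, main_stem: str) -> str:
    """Mirror of ss_csi_cluster_default_for_app: apps under the main
    clustergroup stem default to 'hub'; any other stem (or a managed
    cluster group) defaults to its own trimmed name."""
    stem = str(stem).strip()
    return "hub" if stem == str(main_stem).strip() else stem


def resolve_entry_cluster(entry: dict, app_default: str) -> str:
    # Per-entry cluster overrides the per-application default.
    return str(entry.get("cluster") or app_default).strip()
```

This is only a sketch of the Jinja expressions above; the real logic stays in the task vars.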
+- name: Process managed cluster group {{ mcg_outer_item.key }} applications for SS CSI + ansible.builtin.include_tasks: vault_ss_csi_collect_one_application.yaml + loop: "{{ (mcg_outer_item.value.applications | default({})) | dict2items | selectattr('value.ssCsiWorkloadAuth', 'defined') | list }}" + loop_control: + loop_var: outer_item + vars: + ss_csi_cluster_default_for_app: "{{ mcg_outer_item.value.name | default(mcg_outer_item.key) | string | trim }}" + when: + - mcg_outer_item.value.applications is defined + - mcg_outer_item.value.applications is mapping + - (mcg_outer_item.value.applications | length) > 0 diff --git a/roles/vault_utils/tasks/vault_ss_csi_collect_one_application.yaml b/roles/vault_utils/tasks/vault_ss_csi_collect_one_application.yaml new file mode 100644 index 0000000..09f36bd --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_collect_one_application.yaml @@ -0,0 +1,10 @@ +--- +# outer_item: { key: , value: } +# ss_csi_cluster_default_for_app: optional; when cluster omitted in each entry: hub for main stem apps, or MCG name/key for managedClusterGroups.*.applications +- name: Process ssCsiWorkloadAuth entries for application {{ outer_item.key }} + ansible.builtin.include_tasks: vault_ss_csi_collect_one_entry.yaml + loop: "{{ outer_item.value.ssCsiWorkloadAuth | default([]) }}" + loop_control: + loop_var: inner_item + vars: + ss_csi_cluster_default_for_entry: "{{ ss_csi_cluster_default_for_app | default('hub') }}" diff --git a/roles/vault_utils/tasks/vault_ss_csi_collect_one_entry.yaml b/roles/vault_utils/tasks/vault_ss_csi_collect_one_entry.yaml new file mode 100644 index 0000000..06125c3 --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_collect_one_entry.yaml @@ -0,0 +1,20 @@ +--- +# inner_item.cluster in values is optional and discouraged; omit it and use stem or managedClusterGroup placement. 
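The validate-and-append pair below normalizes one `ssCsiWorkloadAuth` entry into a flat row. A hedged Python sketch of that normalization (the helper and the example `config-demo` names are illustrative):

```python
def build_row(app_key: str, app_value: dict, entry: dict,
              cluster_default: str = "hub") -> dict:
    """Mirror of the assert + set_fact tasks: require a non-empty
    serviceAccount, fall back to the application's namespace when the
    entry has none, and accept either roleSlug or role_slug spelling."""
    sa = str(entry.get("serviceAccount") or "").strip()
    if not sa:
        raise ValueError(
            f"clusterGroup.applications.{app_key}.ssCsiWorkloadAuth "
            "entry missing non-empty serviceAccount")
    return {
        "app": app_key,
        "serviceAccount": sa,
        "namespace": entry.get("namespace") or app_value.get("namespace") or "",
        "cluster": str(entry.get("cluster") or cluster_default).strip(),
        "roleSlug": entry.get("roleSlug") or entry.get("role_slug") or "",
    }
```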
+- name: Validate ssCsiWorkloadAuth entry for application {{ outer_item.key }} + ansible.builtin.assert: + that: + - inner_item.serviceAccount is defined + - inner_item.serviceAccount | string | length > 0 + fail_msg: >- + clusterGroup.applications.{{ outer_item.key }}.ssCsiWorkloadAuth entry missing non-empty serviceAccount + +- name: Append SS CSI workload row for application {{ outer_item.key }} + ansible.builtin.set_fact: + _ss_csi_all_entries: "{{ _ss_csi_all_entries | default([]) + [_row] }}" + vars: + _row: + app: "{{ outer_item.key }}" + serviceAccount: "{{ inner_item.serviceAccount }}" + namespace: "{{ inner_item.namespace | default(outer_item.value.namespace | default('', true), true) }}" + cluster: "{{ inner_item.cluster | default(ss_csi_cluster_default_for_entry | default('hub')) | string | trim }}" + roleSlug: "{{ inner_item.roleSlug | default(inner_item.role_slug | default('', true), true) }}" diff --git a/roles/vault_utils/tasks/vault_ss_csi_compute_role_slug.yaml b/roles/vault_utils/tasks/vault_ss_csi_compute_role_slug.yaml new file mode 100644 index 0000000..f2f6357 --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_compute_role_slug.yaml @@ -0,0 +1,50 @@ +--- +# Sets _role_slug for Vault Kubernetes auth role name: -sscsi- +# Requires: item (ssCsiWorkloadAuth row), ss_csi_mount_prefix, ss_csi_hash_input (string for SHA1). 
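The slug derivation below is dense Jinja; a Python rendering may be easier to follow. This is a sketch of the same rules, assuming the defaults from `vault_ss_csi_role_slug_mode` and `vault_ss_csi_kubernetes_auth_role_name_max_length` (7 is `len('-sscsi-')`):

```python
import hashlib
import re


def _sanitize(value: str) -> str:
    # lower | regex_replace('[^a-z0-9-]+', '-') | regex_replace('-+', '-') | trim('-')
    return re.sub("-+", "-", re.sub(r"[^a-z0-9-]+", "-", str(value).lower())).strip("-")


def role_slug(item: dict, mount_prefix: str, hash_input: str,
              mode: str = "hash", max_len: int = 256) -> str:
    """Sketch of vault_ss_csi_compute_role_slug.yaml: per-entry roleSlug
    wins, then stable_slug (ns-sa-app) when enabled, else SHA1; overlong
    names collapse to a truncated hash so Vault accepts the role name."""
    custom = _sanitize(item.get("roleSlug") or item.get("role_slug") or "")
    stable = "-".join(p for p in (_sanitize(item.get("namespace", "")),
                                  _sanitize(item.get("serviceAccount", "")),
                                  _sanitize(item.get("app", ""))) if p)
    hashed = hashlib.sha1(hash_input.encode()).hexdigest()
    if custom:
        candidate = custom
    elif mode == "stable_slug" and stable:
        candidate = stable
    else:
        candidate = hashed
    if max_len > 0 and len(mount_prefix) + 7 + len(candidate) > max_len:
        budget = max(max_len - len(mount_prefix) - 7, 8)
        return hashed[: min(budget, 40)]
    return candidate or hashed
```

The truncation path always returns a hash prefix rather than a clipped slug, so shortened names stay collision-resistant.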
+ +- name: Derive SS CSI Kubernetes auth role slug + ansible.builtin.set_fact: + _role_slug: "{{ _ss_csi_final_slug }}" + vars: + _raw_slug: "{{ item.roleSlug | default(item.role_slug | default('', true), true) | string | trim }}" + _sanitized_custom: >- + {{ _raw_slug | lower | regex_replace('[^a-z0-9-]+', '-') | regex_replace('-+', '-') | trim('-') }} + _stable_ns: "{{ item.namespace | default('', true) | lower | regex_replace('[^a-z0-9-]+', '-') | regex_replace('-+', '-') | trim('-') }}" + _stable_sa: "{{ item.serviceAccount | default('', true) | lower | regex_replace('[^a-z0-9-]+', '-') | regex_replace('-+', '-') | trim('-') }}" + _stable_app: "{{ item.app | default('', true) | lower | regex_replace('[^a-z0-9-]+', '-') | regex_replace('-+', '-') | trim('-') }}" + _stable_joined: "{{ [_stable_ns, _stable_sa, _stable_app] | reject('equalto', '') | list | join('-') | regex_replace('-+', '-') | trim('-') }}" + _hash_slug: "{{ ss_csi_hash_input | hash('sha1') }}" + _max_len: "{{ vault_ss_csi_kubernetes_auth_role_name_max_length | default(256) | int }}" + _prefix: "{{ ss_csi_mount_prefix | string }}" + _mode: "{{ vault_ss_csi_role_slug_mode | default('hash') | lower }}" + _candidate: >- + {{ + (_sanitized_custom | length > 0) + | ternary( + _sanitized_custom, + (((_mode == 'stable_slug') and (_stable_joined | length > 0)) + | ternary(_stable_joined, _hash_slug)) + ) + }} + _prefix_len: "{{ _prefix | length }}" + _candidate_len: "{{ _candidate | length }}" + _needs_shorten: >- + {{ + (_max_len | int > 0) + and ((_prefix_len | int) + 7 + (_candidate_len | int) > (_max_len | int)) + }} + _budget: >- + {{ + ([(_max_len | int) - (_prefix_len | int) - 7, 8] | max) + if (_max_len | int > 0) else 40 + }} + _hash_take: "{{ [_budget | int, 40] | min }}" + _short_hash: "{{ _hash_slug[0 : (_hash_take | int)] }}" + _ss_csi_final_slug: >- + {{ + (_needs_shorten | bool) + | ternary( + _short_hash, + (((_candidate | length) > 0) | ternary(_candidate, _hash_slug)) + ) + }} diff --git 
a/roles/vault_utils/tasks/vault_ss_csi_load_clustergroup_values.yaml b/roles/vault_utils/tasks/vault_ss_csi_load_clustergroup_values.yaml new file mode 100644 index 0000000..963080a --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_load_clustergroup_values.yaml @@ -0,0 +1,13 @@ +--- +# Load clustergroup values for SS CSI: merged across all stems (default) or legacy single main document. +# Sets _vault_ss_csi_values_root (mapping with .clusterGroup) and _vault_ss_csi_values_source when successful. + +- name: Load merged clustergroup values across all stems for SS CSI + ansible.builtin.include_tasks: vault_ss_csi_load_merged_clustergroup_values.yaml + when: vault_ss_csi_aggregate_clustergroup_sources | default(true) | bool + +- name: Load single main clustergroup values for SS CSI (legacy) + ansible.builtin.include_tasks: vault_ss_csi_load_clustergroup_values_legacy.yaml + when: >- + not (vault_ss_csi_aggregate_clustergroup_sources | default(true) | bool) + or not (_vault_ss_csi_merge_any_loaded | default(false)) diff --git a/roles/vault_utils/tasks/vault_ss_csi_load_clustergroup_values_legacy.yaml b/roles/vault_utils/tasks/vault_ss_csi_load_clustergroup_values_legacy.yaml new file mode 100644 index 0000000..121a0ba --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_load_clustergroup_values_legacy.yaml @@ -0,0 +1,132 @@ +--- +# Single clustergroup document for SS CSI (legacy): prefer in-cluster ConfigMap, then local file. +# Sets _vault_ss_csi_values_root (mapping with .clusterGroup) and _vault_ss_csi_values_source when successful. +# Requires: main_clustergroupname; pattern_dir for file fallback. 
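The legacy loader tries ConfigMap data keys in a fixed order before falling back to a local file. A minimal Python sketch of that candidate ordering (helper names are illustrative):

```python
def key_candidates(explicit_key, candidates):
    """Explicitly configured key first, then the default candidates,
    de-duplicated while preserving order (the Jinja expression's
    reject('equalto', '') + unique)."""
    seen, out = set(), []
    for key in [str(explicit_key or "").strip(), *candidates]:
        if key and key not in seen:
            seen.add(key)
            out.append(key)
    return out


def pick_key(cm_data, explicit_key="",
             candidates=("values.yaml", "helm-values.yaml", "values.yml")):
    # First candidate actually present in the ConfigMap .data wins;
    # "" means no usable key, so the tasks fall back to the local file.
    for key in key_candidates(explicit_key, candidates):
        if key in cm_data:
            return key
    return ""
```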
+ +- name: Initialize SS CSI clustergroup values load facts (legacy) + ansible.builtin.set_fact: + _vault_ss_csi_values_source: "" + _ss_csi_cm_data: {} + _ss_csi_cm_yaml_key: "" + +- name: Compute SS CSI clustergroup ConfigMap object name (legacy) + ansible.builtin.set_fact: + _ss_csi_cg_cm_name: >- + {{ + (vault_ss_csi_clustergroup_configmap_name | default('', true) | string | trim | length > 0) + | ternary( + vault_ss_csi_clustergroup_configmap_name | string | trim, + 'values-' ~ (main_clustergroupname | string | trim) + ) + }} + when: + - main_clustergroupname is defined + - main_clustergroupname | string | trim | length > 0 + +- name: Read clustergroup values ConfigMap from cluster (SS CSI legacy) + kubernetes.core.k8s_info: + api_version: v1 + kind: ConfigMap + name: "{{ _ss_csi_cg_cm_name }}" + namespace: "{{ vault_ss_csi_clustergroup_configmap_namespace }}" + register: _vault_ss_csi_cg_cm + failed_when: false + when: + - vault_ss_csi_clustergroup_values_from_configmap | default(true) | bool + - _ss_csi_cg_cm_name is defined + +- name: Set ConfigMap .data for SS CSI clustergroup parse (legacy) + ansible.builtin.set_fact: + _ss_csi_cm_data: "{{ _vault_ss_csi_cg_cm.resources[0].data | default({}) }}" + when: + - vault_ss_csi_clustergroup_values_from_configmap | default(true) | bool + - _vault_ss_csi_cg_cm is defined + - not (_vault_ss_csi_cg_cm.failed | default(false)) + - (_vault_ss_csi_cg_cm.resources | default([]) | length) > 0 + +- name: Build ordered ConfigMap data key candidates for clustergroup YAML (legacy) + ansible.builtin.set_fact: + _ss_csi_cm_key_candidates: >- + {{ + ( + ([vault_ss_csi_clustergroup_configmap_key | default('', true) | string | trim] + | reject('equalto', '') | list) + + (vault_ss_csi_clustergroup_configmap_key_candidates | default([])) + ) | unique | list + }} + when: + - _ss_csi_cm_data is defined + - (_ss_csi_cm_data | default({}) | length) > 0 + +- name: Pick first ConfigMap data key present in candidates (SS CSI legacy) + 
ansible.builtin.set_fact: + _ss_csi_cm_yaml_key: "{{ item }}" + loop: "{{ _ss_csi_cm_key_candidates | default([]) }}" + when: + - (_ss_csi_cm_yaml_key | default('') | string | length) == 0 + - item in (_ss_csi_cm_data | default({})) + +- name: Parse YAML from ConfigMap data (SS CSI legacy) + block: + - name: Decode YAML string from ConfigMap key (legacy) + ansible.builtin.set_fact: + _vault_ss_csi_cm_values_candidate: "{{ _ss_csi_cm_data[_ss_csi_cm_yaml_key] | trim | from_yaml }}" + rescue: + - name: Note ConfigMap YAML parse failure (SS CSI legacy) + ansible.builtin.set_fact: + _vault_ss_csi_cm_values_candidate: {} + +- name: Accept clustergroup values from ConfigMap when clusterGroup is present (SS CSI legacy) + ansible.builtin.set_fact: + _vault_ss_csi_values_root: "{{ _vault_ss_csi_cm_values_candidate }}" + _vault_ss_csi_values_source: >- + configmap {{ vault_ss_csi_clustergroup_configmap_namespace }}/{{ _ss_csi_cg_cm_name }} key={{ _ss_csi_cm_yaml_key }} + when: + - _vault_ss_csi_cm_values_candidate is defined + - _vault_ss_csi_cm_values_candidate is mapping + - _vault_ss_csi_cm_values_candidate.clusterGroup is defined + - _ss_csi_cm_yaml_key is defined + - _ss_csi_cm_yaml_key | string | length > 0 + +- name: Resolve path to clustergroup values file for SS CSI (legacy fallback) + ansible.builtin.set_fact: + _vault_ss_csi_values_path: "{{ vault_ss_csi_cluster_values_file | default('', true) | trim }}" + +- name: Default clustergroup values path from pattern_dir (SS CSI legacy fallback) + ansible.builtin.set_fact: + _vault_ss_csi_values_path: "{{ pattern_dir }}/values-{{ main_clustergroupname }}.yaml" + when: + - (_vault_ss_csi_values_path | default('', true) | length) == 0 + - pattern_dir is defined + - pattern_dir | length > 0 + - main_clustergroupname is defined + - main_clustergroupname | string | trim | length > 0 + +- name: Stat clustergroup values file for SS CSI (legacy fallback) + ansible.builtin.stat: + path: "{{ _vault_ss_csi_values_path }}" + 
+  register: _vault_ss_csi_values_stat
+  when:
+    - vault_ss_csi_fallback_local_clustergroup_file | default(true) | bool
+    - _vault_ss_csi_values_root is not defined
+    - _vault_ss_csi_values_path is defined
+    - _vault_ss_csi_values_path | length > 0
+
+- name: Load clustergroup values YAML from local file (SS CSI legacy fallback)
+  ansible.builtin.slurp:
+    src: "{{ _vault_ss_csi_values_path }}"
+  register: _vault_ss_csi_values_slurp
+  when:
+    - vault_ss_csi_fallback_local_clustergroup_file | default(true) | bool
+    - _vault_ss_csi_values_root is not defined
+    - _vault_ss_csi_values_stat is defined
+    - _vault_ss_csi_values_stat.stat.exists | default(false)
+
+- name: Decode clustergroup values root from local file (SS CSI legacy)
+  ansible.builtin.set_fact:
+    _vault_ss_csi_values_root: "{{ (_vault_ss_csi_values_slurp.content | b64decode | from_yaml) }}"
+    _vault_ss_csi_values_source: "file {{ _vault_ss_csi_values_path }}"
+  when:
+    - vault_ss_csi_fallback_local_clustergroup_file | default(true) | bool
+    - _vault_ss_csi_values_slurp is defined
+    - _vault_ss_csi_values_slurp.content is defined
diff --git a/roles/vault_utils/tasks/vault_ss_csi_load_merged_clustergroup_values.yaml b/roles/vault_utils/tasks/vault_ss_csi_load_merged_clustergroup_values.yaml
new file mode 100644
index 0000000..33995fa
--- /dev/null
+++ b/roles/vault_utils/tasks/vault_ss_csi_load_merged_clustergroup_values.yaml
@@ -0,0 +1,36 @@
+---
+# Merge clusterGroup.applications and clusterGroup.managedClusterGroups from every
+# clustergroup stem (ConfigMap values-<stem> preferred, then local values-<stem>.yaml|yml).
+# Sets _vault_ss_csi_values_root, _vault_ss_csi_values_source, _vault_ss_csi_merge_any_loaded.
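The merge semantics differ per section: `applications` uses a shallow `combine` (later stems replace whole application entries), while `managedClusterGroups` merges recursively. A hedged Python sketch of both, with illustrative helper names:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursive merge, like Ansible's combine(..., recursive=true)."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out


def merge_fragments(fragments: list) -> tuple:
    """Fold each stem's clusterGroup document into the accumulators:
    shallow merge for applications, recursive for managedClusterGroups."""
    merged_apps, merged_mcg = {}, {}
    for frag in fragments:
        cg = frag.get("clusterGroup") or {}
        merged_apps.update(cg.get("applications") or {})
        merged_mcg = deep_merge(merged_mcg, cg.get("managedClusterGroups") or {})
    return merged_apps, merged_mcg
```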
+ +- name: Initialize SS CSI multi-clustergroup merge accumulators + ansible.builtin.set_fact: + _vault_merged_apps: {} + _vault_merged_mcg: {} + _vault_ss_csi_merge_source_notes: [] + _vault_ss_csi_merge_any_loaded: false + +- name: Discover clustergroup stems for SS CSI merge (main + managed from main file) + ansible.builtin.include_role: + name: clustergroup_discovery + +- name: Load and merge each clustergroup fragment for SS CSI + ansible.builtin.include_tasks: vault_ss_csi_load_one_clustergroup_values_fragment.yaml + loop: "{{ clustergroup_load_order | default([]) }}" + loop_control: + loop_var: cg_stem + when: (clustergroup_load_order | default([]) | length) > 0 + +- name: Set merge load outcome flag for SS CSI + ansible.builtin.set_fact: + _vault_ss_csi_merge_any_loaded: "{{ (_vault_ss_csi_merge_source_notes | default([]) | length) > 0 }}" + +- name: Assemble merged SS CSI values root from all fragments + ansible.builtin.set_fact: + _vault_ss_csi_values_root: + clusterGroup: + name: "{{ main_clustergroupname | default(main_clustergroup | default('', true), true) | string | trim }}" + applications: "{{ _vault_merged_apps | default({}) }}" + managedClusterGroups: "{{ _vault_merged_mcg | default({}) }}" + _vault_ss_csi_values_source: "{{ 'merged ' ~ (_vault_ss_csi_merge_source_notes | join('; ')) }}" + when: _vault_ss_csi_merge_any_loaded diff --git a/roles/vault_utils/tasks/vault_ss_csi_load_one_clustergroup_values_fragment.yaml b/roles/vault_utils/tasks/vault_ss_csi_load_one_clustergroup_values_fragment.yaml new file mode 100644 index 0000000..b499ce1 --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_load_one_clustergroup_values_fragment.yaml @@ -0,0 +1,189 @@ +--- +# Load one clustergroup document (loop_var cg_stem) and merge applications / managedClusterGroups +# into _vault_merged_apps, _vault_merged_mcg, append to _vault_ss_csi_merge_source_notes. 
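The per-stem ConfigMap name resolution is subtle: an explicitly configured name applies only to the main stem, and everything else falls back to the `values-<stem>` convention. A minimal Python sketch of the `_ss_csi_frag_cm_name` expression (function name is mine):

```python
def fragment_cm_name(stem: str, main_stem: str, explicit_name: str = "") -> str:
    """Explicit vault_ss_csi_clustergroup_configmap_name only overrides
    the main stem; all other stems use the values-<stem> convention."""
    stem = str(stem).strip()
    explicit = str(explicit_name or "").strip()
    if explicit and stem == str(main_stem).strip():
        return explicit
    return f"values-{stem}"
```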
+ +- name: Compute ConfigMap name for clustergroup stem {{ cg_stem }} + ansible.builtin.set_fact: + _ss_csi_frag_cm_name: >- + {{ + ( + (vault_ss_csi_clustergroup_configmap_name | default('', true) | string | trim | length > 0) + and + ( + (cg_stem | string | trim) + == + ( + main_clustergroupname + | default(main_clustergroup | default('', true), true) + | string | trim + ) + ) + ) + | ternary( + vault_ss_csi_clustergroup_configmap_name | string | trim, + 'values-' ~ (cg_stem | string | trim) + ) + }} + +- name: Reset fragment parse facts for {{ cg_stem }} + ansible.builtin.set_fact: + _ss_csi_frag_cm_data: {} + _ss_csi_frag_cm_yaml_key: "" + _ss_csi_frag_cm_values_candidate: {} + _ss_csi_frag_values_root: {} + _vault_ss_csi_frag_using_explicit_file: false + +- name: Read clustergroup values ConfigMap for stem {{ cg_stem }} + kubernetes.core.k8s_info: + api_version: v1 + kind: ConfigMap + name: "{{ _ss_csi_frag_cm_name }}" + namespace: "{{ vault_ss_csi_clustergroup_configmap_namespace }}" + register: _vault_ss_csi_frag_cg_cm + failed_when: false + when: vault_ss_csi_clustergroup_values_from_configmap | default(true) | bool + +- name: Set ConfigMap .data for fragment {{ cg_stem }} + ansible.builtin.set_fact: + _ss_csi_frag_cm_data: "{{ _vault_ss_csi_frag_cg_cm.resources[0].data | default({}) }}" + when: + - vault_ss_csi_clustergroup_values_from_configmap | default(true) | bool + - _vault_ss_csi_frag_cg_cm is defined + - not (_vault_ss_csi_frag_cg_cm.failed | default(false)) + - (_vault_ss_csi_frag_cg_cm.resources | default([]) | length) > 0 + +- name: Build ConfigMap key candidates for fragment {{ cg_stem }} + ansible.builtin.set_fact: + _ss_csi_frag_cm_key_candidates: >- + {{ + ( + ([vault_ss_csi_clustergroup_configmap_key | default('', true) | string | trim] + | reject('equalto', '') | list) + + (vault_ss_csi_clustergroup_configmap_key_candidates | default([])) + ) | unique | list + }} + when: + - _ss_csi_frag_cm_data is defined + - (_ss_csi_frag_cm_data | 
default({}) | length) > 0 + +- name: Pick first ConfigMap data key for fragment {{ cg_stem }} + ansible.builtin.set_fact: + _ss_csi_frag_cm_yaml_key: "{{ item }}" + loop: "{{ _ss_csi_frag_cm_key_candidates | default([]) }}" + when: + - (_ss_csi_frag_cm_yaml_key | default('') | string | length) == 0 + - item in (_ss_csi_frag_cm_data | default({})) + +- name: Parse YAML from ConfigMap for fragment {{ cg_stem }} + when: + - (_ss_csi_frag_cm_data | default({}) | length) > 0 + - (_ss_csi_frag_cm_yaml_key | default('') | string | length) > 0 + block: + - name: Decode YAML from ConfigMap key for {{ cg_stem }} + ansible.builtin.set_fact: + _ss_csi_frag_cm_values_candidate: "{{ _ss_csi_frag_cm_data[_ss_csi_frag_cm_yaml_key] | trim | from_yaml }}" + rescue: + - name: Note ConfigMap YAML parse failure for {{ cg_stem }} + ansible.builtin.set_fact: + _ss_csi_frag_cm_values_candidate: {} + +- name: Accept fragment from ConfigMap for {{ cg_stem }} + ansible.builtin.set_fact: + _ss_csi_frag_values_root: "{{ _ss_csi_frag_cm_values_candidate }}" + when: + - _ss_csi_frag_cm_values_candidate is defined + - _ss_csi_frag_cm_values_candidate is mapping + - _ss_csi_frag_cm_values_candidate.clusterGroup is defined + - _ss_csi_frag_cm_yaml_key is defined + - _ss_csi_frag_cm_yaml_key | string | length > 0 + +- name: Resolve explicit local override path for main stem {{ cg_stem }} + ansible.builtin.set_fact: + _vault_ss_csi_frag_values_path: "{{ vault_ss_csi_cluster_values_file | default('', true) | string | trim }}" + _vault_ss_csi_frag_using_explicit_file: true + when: + - vault_ss_csi_fallback_local_clustergroup_file | default(true) | bool + - (_ss_csi_frag_values_root | default({}) | length) == 0 + - (vault_ss_csi_cluster_values_file | default('', true) | string | trim | length) > 0 + - (cg_stem | string | trim) + == (main_clustergroupname | default(main_clustergroup | default('', true), true) | string | trim) + +- name: Default local values path for stem {{ cg_stem }} (yaml) + 
ansible.builtin.set_fact: + _vault_ss_csi_frag_values_path: "{{ pattern_dir | string | trim }}/values-{{ cg_stem | string | trim }}.yaml" + when: + - vault_ss_csi_fallback_local_clustergroup_file | default(true) | bool + - (_ss_csi_frag_values_root | default({}) | length) == 0 + - (_vault_ss_csi_frag_values_path | default('', true) | string | trim | length) == 0 + - pattern_dir is defined + - (pattern_dir | string | trim | length) > 0 + +- name: Stat local clustergroup file for {{ cg_stem }} + ansible.builtin.stat: + path: "{{ _vault_ss_csi_frag_values_path }}" + register: _vault_ss_csi_frag_values_stat + when: + - vault_ss_csi_fallback_local_clustergroup_file | default(true) | bool + - (_ss_csi_frag_values_root | default({}) | length) == 0 + - _vault_ss_csi_frag_values_path is defined + - _vault_ss_csi_frag_values_path | string | length > 0 + +- name: Fall back to .yml extension for stem {{ cg_stem }} + ansible.builtin.set_fact: + _vault_ss_csi_frag_values_path: "{{ pattern_dir | string | trim }}/values-{{ cg_stem | string | trim }}.yml" + when: + - vault_ss_csi_fallback_local_clustergroup_file | default(true) | bool + - (_ss_csi_frag_values_root | default({}) | length) == 0 + - _vault_ss_csi_frag_values_stat is defined + - not (_vault_ss_csi_frag_values_stat.stat.exists | default(false)) + - pattern_dir is defined + - (pattern_dir | string | trim | length) > 0 + - not (_vault_ss_csi_frag_using_explicit_file | default(false) | bool) + +- name: Stat alternate .yml path for {{ cg_stem }} + ansible.builtin.stat: + path: "{{ _vault_ss_csi_frag_values_path }}" + register: _vault_ss_csi_frag_values_stat + when: + - vault_ss_csi_fallback_local_clustergroup_file | default(true) | bool + - (_ss_csi_frag_values_root | default({}) | length) == 0 + - _vault_ss_csi_frag_values_path is defined + - _vault_ss_csi_frag_values_path | string | length > 0 + +- name: Slurp local clustergroup file for {{ cg_stem }} + ansible.builtin.slurp: + src: "{{ _vault_ss_csi_frag_values_path }}" 
+ register: _vault_ss_csi_frag_values_slurp + when: + - vault_ss_csi_fallback_local_clustergroup_file | default(true) | bool + - (_ss_csi_frag_values_root | default({}) | length) == 0 + - _vault_ss_csi_frag_values_stat is defined + - _vault_ss_csi_frag_values_stat.stat.exists | default(false) + +- name: Accept fragment from local file for {{ cg_stem }} + ansible.builtin.set_fact: + _ss_csi_frag_values_root: "{{ (_vault_ss_csi_frag_values_slurp.content | b64decode | from_yaml) }}" + when: + - vault_ss_csi_fallback_local_clustergroup_file | default(true) | bool + - (_ss_csi_frag_values_root | default({}) | length) == 0 + - _vault_ss_csi_frag_values_slurp is defined + - _vault_ss_csi_frag_values_slurp.content is defined + +- name: Merge clusterGroup applications and managedClusterGroups for {{ cg_stem }} + ansible.builtin.set_fact: + _vault_merged_apps: "{{ _vault_merged_apps | default({}) | combine((_ss_csi_frag_values_root.clusterGroup | default({})).applications | default({})) }}" + _vault_merged_mcg: "{{ _vault_merged_mcg | default({}) | combine((_ss_csi_frag_values_root.clusterGroup | default({})).managedClusterGroups | default({}), recursive=true) }}" + _vault_ss_csi_apps_by_stem: "{{ _vault_ss_csi_apps_by_stem | default({}) | combine({(cg_stem | string | trim): ((_ss_csi_frag_values_root.clusterGroup | default({})).applications | default({}))}) }}" + _vault_ss_csi_merge_source_notes: "{{ _vault_ss_csi_merge_source_notes | default([]) + [_ss_csi_frag_src] }}" + vars: + _ss_csi_frag_src: >- + {{ + ('configmap ' ~ vault_ss_csi_clustergroup_configmap_namespace ~ '/' ~ _ss_csi_frag_cm_name + ~ ' key=' ~ _ss_csi_frag_cm_yaml_key) + if (_ss_csi_frag_cm_yaml_key | default('') | string | length) > 0 + else ('file ' ~ _vault_ss_csi_frag_values_path | string) + }} + when: + - _ss_csi_frag_values_root is defined + - _ss_csi_frag_values_root is mapping + - _ss_csi_frag_values_root.clusterGroup is defined diff --git 
a/roles/vault_utils/tasks/vault_ss_csi_normalize_spoke_entries_one_cluster.yaml b/roles/vault_utils/tasks/vault_ss_csi_normalize_spoke_entries_one_cluster.yaml new file mode 100644 index 0000000..38d50dc --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_normalize_spoke_entries_one_cluster.yaml @@ -0,0 +1,23 @@ +--- +# ci_loop: one clusters_info dict2items entry. Appends matching _ss_csi_spoke_entries_raw rows with cluster=vault_path. + +- name: Append SS CSI rows targeting spoke {{ ci_loop.key }} (cluster id = vault_path) + ansible.builtin.set_fact: + _ss_csi_spoke_entries_for_spokes: "{{ _ss_csi_spoke_entries_for_spokes | default([]) + [_row_out] }}" + loop: "{{ _ss_csi_spoke_entries_raw }}" + loop_control: + loop_var: ss_row + vars: + _row_out: "{{ ss_row | combine({'cluster': ci_loop.value.vault_path | default('', true) | string}) }}" + _ss_norm_match: >- + {{ + (ss_row.cluster | default('') | string == ci_loop.key | string) + or (ss_row.cluster | default('') | string == (ci_loop.value.name | default('', true) | string)) + or (ss_row.cluster | default('') | string == (ci_loop.value.vault_path | default('', true) | string)) + or ( + (ci_loop.value.clusterGroup | default('', true) | string | trim | length) > 0 + and (ss_row.cluster | default('') | string == (ci_loop.value.clusterGroup | string)) + ) + }} + when: + - _ss_norm_match | bool diff --git a/roles/vault_utils/tasks/vault_ss_csi_normalize_spoke_entries_to_vault_path.yaml b/roles/vault_utils/tasks/vault_ss_csi_normalize_spoke_entries_to_vault_path.yaml new file mode 100644 index 0000000..cb4192b --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_normalize_spoke_entries_to_vault_path.yaml @@ -0,0 +1,15 @@ +--- +# Rewrite ssCsiWorkloadAuth.cluster to vault_path (FQDN) for each matching ESO-enabled spoke — same cluster id as Vault/ESO spokes. 
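Spoke rows may name their target by ManagedCluster key, cluster name, vault_path, or clusterGroup; normalization rewrites all matches to the vault_path so later lookups use one id. A hedged Python sketch of the match-and-rewrite (names are illustrative, including the example spoke FQDN):

```python
def normalize_rows_for_spoke(rows: list, cluster_key: str, cluster_facts: dict) -> list:
    """Mirror of _ss_norm_match + _row_out: a row targets this spoke if
    its cluster matches the ManagedCluster key, the cluster name, its
    vault_path (FQDN), or a non-empty clusterGroup; matching rows are
    rewritten so cluster == vault_path, the id Vault/ESO use for spokes."""
    vault_path = str(cluster_facts.get("vault_path") or "")
    aliases = {str(cluster_key),
               str(cluster_facts.get("name") or ""),
               vault_path,
               str(cluster_facts.get("clusterGroup") or "").strip()}
    aliases.discard("")
    return [dict(row, cluster=vault_path)
            for row in rows
            if str(row.get("cluster") or "") in aliases]
```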
+ +- name: Reset SS CSI spoke entries normalized to vault_path + ansible.builtin.set_fact: + _ss_csi_spoke_entries_for_spokes: [] + +- name: Map SS CSI workload rows onto each spoke vault_path + ansible.builtin.include_tasks: vault_ss_csi_normalize_spoke_entries_one_cluster.yaml + loop: "{{ clusters_info | dict2items }}" + loop_control: + loop_var: ci_loop + when: + - ci_loop.key != 'local-cluster' + - ci_loop.value.esoToken is defined diff --git a/roles/vault_utils/tasks/vault_ss_csi_spoke_cluster.yaml b/roles/vault_utils/tasks/vault_ss_csi_spoke_cluster.yaml new file mode 100644 index 0000000..24788e4 --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_spoke_cluster.yaml @@ -0,0 +1,16 @@ +--- +# cluster_loop: one entry from clusters_info (ManagedCluster name -> facts) +# Rows are pre-normalized to cluster == vault_path (FQDN) in vault_ss_csi_normalize_spoke_entries_to_vault_path.yaml +# so targeting matches ESO and Vault Kubernetes auth mounts on the spoke. +- name: Build SS CSI rows matching this spoke cluster + ansible.builtin.set_fact: + _ss_rows_this_cluster: "{{ _ss_csi_spoke_entries_for_spokes | default([]) | selectattr('cluster', 'equalto', cluster_loop.value.vault_path | default('', true) | string) | list }}" + +- name: Configure Vault SS CSI role on spoke {{ cluster_loop.key }} + ansible.builtin.include_tasks: vault_ss_csi_apply_one_spoke_sscsi_role.yaml + loop: "{{ _ss_rows_this_cluster | default([]) }}" + loop_control: + label: "{{ item.app }}/{{ item.namespace }}/{{ item.serviceAccount }}" + vars: + vault_spoke_cluster_loop: "{{ cluster_loop }}" + when: (_ss_rows_this_cluster | default([])) | length > 0 diff --git a/roles/vault_utils/tasks/vault_ss_csi_spoke_roles.yaml b/roles/vault_utils/tasks/vault_ss_csi_spoke_roles.yaml new file mode 100644 index 0000000..d02e5c5 --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_spoke_roles.yaml @@ -0,0 +1,20 @@ +--- +# Align SS-CSI spoke targeting with ESO/Vault: cluster id is vault_path (spoke FQDN), 
not only ManagedCluster short name. +- name: Normalize SS CSI spoke entries to vault_path (ESO / Vault spoke cluster id) + ansible.builtin.include_tasks: vault_ss_csi_normalize_spoke_entries_to_vault_path.yaml + when: + - vault_ss_csi_from_applications | default(true) | bool + - _ss_csi_spoke_entries_raw is defined + - (_ss_csi_spoke_entries_raw | length) > 0 + +- name: Configure SS CSI Vault Kubernetes auth roles on each spoke + ansible.builtin.include_tasks: vault_ss_csi_spoke_cluster.yaml + loop: "{{ clusters_info | dict2items }}" + loop_control: + loop_var: cluster_loop + when: + - vault_ss_csi_from_applications | default(true) | bool + - _ss_csi_spoke_entries_raw is defined + - (_ss_csi_spoke_entries_raw | length) > 0 + - cluster_loop.value.esoToken is defined + - cluster_loop.key != "local-cluster" diff --git a/roles/vault_utils/tasks/vault_ss_csi_workload_auth.yaml b/roles/vault_utils/tasks/vault_ss_csi_workload_auth.yaml new file mode 100644 index 0000000..6e620d8 --- /dev/null +++ b/roles/vault_utils/tasks/vault_ss_csi_workload_auth.yaml @@ -0,0 +1,180 @@ +--- +# Build Vault Kubernetes auth roles from clusterGroup.applications.*.ssCsiWorkloadAuth +# Requires _merged_hub_policies (from vault_secrets_init). Sets _ss_csi_all_entries for spoke init. +- name: Initialize SS CSI workload facts when feature disabled + ansible.builtin.set_fact: + _ss_csi_all_entries: [] + _ss_csi_hub_entries: [] + _ss_csi_spoke_entries_raw: [] + when: not (vault_ss_csi_from_applications | default(true) | bool) + +- name: Initialize SS CSI workload entry list + ansible.builtin.set_fact: + _ss_csi_all_entries: [] + when: vault_ss_csi_from_applications | default(true) | bool + +# Many playbooks run vault_utils without pattern_settings. Align pattern_dir with pattern_settings +# (extra var, PATTERN_DIR, PWD, pwd), then derive main clustergroup name from values-global when unset. 
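The fallback chain for the main clustergroup name can be summarized in a short Python sketch (the function name is mine; `values_global` stands for the already-parsed `values-global.yaml` document): keep an existing non-empty value, else read `main.clusterGroupName`.

```python
def resolve_main_clustergroup(main_clustergroup, values_global: dict) -> str:
    """Keep an already-set, non-empty main_clustergroup; otherwise fall
    back to main.clusterGroupName from the parsed values-global document."""
    current = str(main_clustergroup or "").strip()
    if current:
        return current
    return str(((values_global or {}).get("main") or {})
               .get("clusterGroupName") or "").strip()
```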
+- name: Resolve pattern_dir for SS CSI (align with pattern_settings) + ansible.builtin.include_tasks: ../pattern_settings/tasks/resolve_overrides.yml + when: vault_ss_csi_from_applications | default(true) | bool + +- name: Load main_clustergroup from values-global for SS CSI when unset + ansible.builtin.set_fact: + main_clustergroup: "{{ (lookup('file', (pattern_dir | string | trim) ~ '/values-global.yaml') | from_yaml).main.clusterGroupName | string | trim }}" + when: + - vault_ss_csi_from_applications | default(true) | bool + - pattern_dir is defined + - (pattern_dir | string | trim | length) > 0 + - (main_clustergroup is not defined) or ((main_clustergroup | default('', true) | string | trim) | length == 0) + +- name: Alias main_clustergroupname from main_clustergroup for SS CSI + ansible.builtin.set_fact: + main_clustergroupname: "{{ main_clustergroup | string | trim }}" + when: + - vault_ss_csi_from_applications | default(true) | bool + - (main_clustergroupname is not defined) or ((main_clustergroupname | default('', true) | string | trim) | length == 0) + - main_clustergroup is defined + - (main_clustergroup | string | trim | length) > 0 + +- name: Resolve main_clustergroupname from values-global when not set + ansible.builtin.slurp: + src: "{{ pattern_dir }}/values-global.yaml" + register: _vault_ss_csi_values_global_slurp + when: + - vault_ss_csi_from_applications | default(true) | bool + - pattern_dir is defined + - (pattern_dir | string | trim | length) > 0 + - main_clustergroupname is not defined or (main_clustergroupname | string | trim | length) == 0 + +- name: Set main_clustergroupname from slurped values-global + ansible.builtin.set_fact: + main_clustergroupname: "{{ (_vault_ss_csi_values_global_slurp.content | b64decode | from_yaml).main.clusterGroupName | string | trim }}" + when: + - vault_ss_csi_from_applications | default(true) | bool + - _vault_ss_csi_values_global_slurp is defined + - _vault_ss_csi_values_global_slurp.content is defined + +- 
name: Load clustergroup values for SS CSI (cluster ConfigMap, then local file) + ansible.builtin.include_tasks: vault_ss_csi_load_clustergroup_values.yaml + when: vault_ss_csi_from_applications | default(true) | bool + +- name: Parse clusterGroup applications and managedClusterGroups from clustergroup values + ansible.builtin.set_fact: + _vault_ss_csi_cluster_apps: "{{ (_vault_ss_csi_values_root.clusterGroup | default({})).applications | default({}) }}" + _vault_ss_csi_managed_cluster_groups: "{{ (_vault_ss_csi_values_root.clusterGroup | default({})).managedClusterGroups | default({}) }}" + when: + - vault_ss_csi_from_applications | default(true) | bool + - _vault_ss_csi_values_root is defined + +- name: Default empty applications and managedClusterGroups when clustergroup values not loaded + ansible.builtin.set_fact: + _vault_ss_csi_cluster_apps: {} + _vault_ss_csi_managed_cluster_groups: {} + when: + - vault_ss_csi_from_applications | default(true) | bool + - _vault_ss_csi_cluster_apps is not defined + +- name: Build per-stem applications map for SS CSI (legacy single document) + ansible.builtin.set_fact: + _vault_ss_csi_apps_by_stem: "{{ {(_cg_ssci_main | string): (_vault_ss_csi_cluster_apps | default({}))} }}" + vars: + _cg_ssci_main: "{{ main_clustergroupname | default(main_clustergroup | default('', true), true) | string | trim }}" + when: + - vault_ss_csi_from_applications | default(true) | bool + - _vault_ss_csi_apps_by_stem is not defined + - _vault_ss_csi_cluster_apps is defined + +- name: Build SS CSI application collect stem order + ansible.builtin.set_fact: + _vault_ss_csi_cg_collect_stems: >- + {{ + clustergroup_load_order + if (clustergroup_load_order is defined and (clustergroup_load_order | length) > 0) + else [_cg_ssci_main] + }} + vars: + _cg_ssci_main: "{{ main_clustergroupname | default(main_clustergroup | default('', true), true) | string | trim }}" + when: + - vault_ss_csi_from_applications | default(true) | bool + - 
_vault_ss_csi_apps_by_stem is defined + +- name: Collect SS CSI rows from clusterGroup.applications per clustergroup stem + ansible.builtin.include_tasks: vault_ss_csi_collect_applications_for_stem.yaml + loop: "{{ _vault_ss_csi_cg_collect_stems | default([]) }}" + loop_control: + loop_var: cg_collect_stem + when: + - vault_ss_csi_from_applications | default(true) | bool + - _vault_ss_csi_apps_by_stem is defined + - (_vault_ss_csi_cg_collect_stems | default([]) | length) > 0 + +- name: Collect SS CSI rows from clusterGroup.managedClusterGroups.*.applications + ansible.builtin.include_tasks: vault_ss_csi_collect_managed_group_application.yaml + loop: "{{ _vault_ss_csi_managed_cluster_groups | default({}) | dict2items | list }}" + loop_control: + loop_var: mcg_outer_item + when: + - vault_ss_csi_from_applications | default(true) | bool + - _vault_ss_csi_managed_cluster_groups is defined + - (_vault_ss_csi_managed_cluster_groups | default({}) | length) > 0 + +- name: Append legacy Vault CSI hub binding when enabled + ansible.builtin.set_fact: + _ss_csi_all_entries: "{{ _ss_csi_all_entries | default([]) + [_legacy_row] }}" + vars: + _legacy_row: + app: _legacy_driver + serviceAccount: "{{ vault_csi_service_account_name }}" + namespace: "{{ vault_csi_service_account_namespace }}" + cluster: hub + when: + - vault_ss_csi_from_applications | default(true) | bool + - vault_csi_kubernetes_auth | default(false) | bool + +- name: Reset hub/spoke SS CSI classification lists + ansible.builtin.set_fact: + _ss_csi_hub_entries: [] + _ss_csi_spoke_entries_raw: [] + when: vault_ss_csi_from_applications | default(true) | bool + +- name: Classify SS CSI entries for hub Kubernetes auth mount + ansible.builtin.set_fact: + _ss_csi_hub_entries: "{{ _ss_csi_hub_entries | default([]) + [item] }}" + loop: "{{ _ss_csi_all_entries | default([]) }}" + when: + - vault_ss_csi_from_applications | default(true) | bool + - (item.cluster | default('hub') | lower) in ['hub', 'local-cluster', ''] + +- 
name: Classify SS CSI entries for spoke mounts (non-hub cluster field)
+  ansible.builtin.set_fact:
+    _ss_csi_spoke_entries_raw: "{{ _ss_csi_spoke_entries_raw | default([]) + [item] }}"
+  loop: "{{ _ss_csi_all_entries | default([]) }}"
+  when:
+    - vault_ss_csi_from_applications | default(true) | bool
+    - (item.cluster | default('hub') | lower) not in ['hub', 'local-cluster', '']
+
+- name: SS CSI workload Vault auth — summary (values source, counts, next step)
+  ansible.builtin.debug:
+    msg: >-
+      SS CSI Vault Kubernetes auth: clustergroup values source={{ _vault_ss_csi_values_source | default('(none)') }},
+      pattern_dir={{ pattern_dir | default('(unset)') }}, main_clustergroupname={{ main_clustergroupname | default('(unset)') }};
+      applications stems scanned={{ _vault_ss_csi_cg_collect_stems | default([]) | length }},
+      hub applications in merged values={{ _vault_ss_csi_cluster_apps | default({}) | dict2items | length }},
+      managedClusterGroups={{ _vault_ss_csi_managed_cluster_groups | default({}) | dict2items | length }},
+      ssCsiWorkloadAuth identities={{ _ss_csi_all_entries | default([]) | length }},
+      hub roles to configure={{ _ss_csi_hub_entries | default([]) | length }},
+      spoke-bound raw rows={{ _ss_csi_spoke_entries_raw | default([]) | length }} (Vault spoke <cluster>-sscsi-<app> roles run only when this is > 0).
+      If identities is 0, define ssCsiWorkloadAuth under clusterGroup.applications, leaving cluster unset in values (main stem targets hub, other stems target that stem), or under clusterGroup.managedClusterGroups.*.applications, leaving cluster unset (targets that managed group).
+      If spoke-bound raw rows is 0 but you expected a spoke role, every row was classified as hub (wrong file/placement, merged hub-only app, or legacy explicit cluster key in values). Prefer leaving cluster unset; defaults come from stem or managed group.
+ If nothing loads, check vault_ss_csi_clustergroup_configmap_* settings, pass pattern_dir (and optionally main_clustergroup / main_clustergroupname) via extra vars, set vault_ss_csi_cluster_values_file, or set vault_ss_csi_fallback_local_clustergroup_file; ensure main.clusterGroupName in values-global when resolving from pattern_dir. + when: vault_ss_csi_from_applications | default(true) | bool + +- name: Configure hub Vault Kubernetes auth role per SS CSI workload identity + ansible.builtin.include_tasks: vault_ss_csi_apply_one_hub_sscsi_role.yaml + loop: "{{ _ss_csi_hub_entries | default([]) }}" + loop_control: + label: "{{ item.app }}/{{ item.namespace }}/{{ item.serviceAccount }}" + when: + - vault_ss_csi_from_applications | default(true) | bool + - (_ss_csi_hub_entries | default([])) | length > 0 diff --git a/secrets-initialization-and-vault-unseal.md b/secrets-initialization-and-vault-unseal.md new file mode 100644 index 0000000..c6f2397 --- /dev/null +++ b/secrets-initialization-and-vault-unseal.md @@ -0,0 +1,225 @@ +# Secrets initialization process (cluster_utils) + +This document describes how Vault and application secrets are bootstrapped when you run the **vault** playbook and the **`vault_utils`** role, with emphasis on **`vault_unseal`** (`roles/vault_utils/tasks/vault_unseal.yaml`). + +## Entry point + +- **Playbook:** `playbooks/vault.yml` +- **Hosts:** `localhost`, `connection: local`, `gather_facts: false` +- **Roles (order):** + 1. **`pattern_settings`** — Resolves `pattern_dir` (extra var, `PATTERN_DIR`, + then `PWD` / `pwd`) and loads `values-global.yaml` (including + `main.clusterGroupName` as `main_clustergroup`). When `pattern_settings` is + not in the play, **`vault_ss_csi_workload_auth`** repeats the same + `pattern_dir` resolution and, if needed, reads `values-global.yaml` under + that directory to set `main_clustergroup` / `main_clustergroupname` before + loading merged clustergroup values. + 2. 
**`find_vp_secrets`** — Locates pattern secrets inputs as used elsewhere in the repository. + 3. **`cluster_pre_check`** — Verifies Python `kubernetes` import, kubeconfig (`KUBECONFIG` or `~/.kube/config`), or in-cluster operation via `KUBERNETES_SERVICE_HOST`. + 4. **`vault_utils`** — Performs Vault init, unseal, backends/policies, spokes, and pushing secrets from `values-secret` files. + +## `vault_utils` role task order (`roles/vault_utils/tasks/main.yml`) + +Tasks run in this fixed order (each block has an Ansible **tag** of the same name for selective runs): + +| Order | Import | Tag | +| ----- | ------ | --- | +| 1 | `vault_init.yaml` | `vault_init` | +| 2 | `vault_unseal.yaml` | `vault_unseal` | +| 3 | `vault_secrets_init.yaml` | `vault_secrets_init` | +| 4 | `vault_spokes_init.yaml` | `vault_spokes_init` | +| 5 | `push_secrets.yaml` | `push_secrets` | +| 6 | `vault_jwt.yaml` | `vault_jwt` (only if `vault_jwt_config` is true) | + +--- + +## Step 1: `vault_init` (`vault_init.yaml`) + +Purpose: **first-time Vault operator initialization** if the cluster's Vault is not already initialized. + +1. **Include `vault_status.yaml`** (see below) so `vault_status` is populated. +2. **Set `vault_initialized`** from `vault_status['initialized']`. +3. **If not initialized:** run `vault operator init -format=json` inside pod `{{ vault_pod }}` in namespace `{{ vault_ns }}` (retries: 10, delay 15s) to tolerate startup 500s. +4. **If not initialized:** parse stdout as JSON into `vault_init_json`. +5. **If not initialized:** create/update Kubernetes **Secret** `{{ unseal_secret }}` in `{{ unseal_namespace }}` with key `vault_data_json` (base64-encoded JSON of the init output, including **root token** and **unseal keys**). + +**Defaults (from `roles/vault_utils/defaults/main.yml`):** `unseal_secret: vaultkeys`, `unseal_namespace: imperative`. 
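Under these defaults, the stored init material can be sketched as the following Secret (an illustration only; the `vault_data_json` payload mirrors `vault operator init -format=json` output, whose fields such as `unseal_keys_hex` and `root_token` are what `vault_unseal` later reads back):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vaultkeys        # {{ unseal_secret }}
  namespace: imperative  # {{ unseal_namespace }}
type: Opaque
data:
  # Base64 of the JSON printed by `vault operator init -format=json`,
  # e.g. {"unseal_keys_b64": [...], "unseal_keys_hex": [...], "root_token": "hvs...."}
  vault_data_json: <base64-encoded init JSON>
```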
+ +**Note:** A comment in the task file mentions `unseal_from_cluster`; the **actual** `when` clause only requires `not vault_initialized` — the secret is saved whenever init runs successfully. + +If Vault is **already** initialized, all mutating steps are skipped. + +--- + +## Step 2: `vault_unseal` (`vault_unseal.yaml`) — detailed + +Purpose: **unseal** the leader (and followers in HA), **join Raft** followers to the leader, and **log in** with the root token so subsequent tasks in the same play can use Vault. Most steps run **only when `vault_sealed` is true** (Vault reported sealed in status). + +### 2.1 Shared prerequisite: `vault_status.yaml` (included first) + +This file is **not** tagged separately; it runs as part of both `vault_init` and `vault_unseal` (and again inside `push_secrets`). + +1. **Wait for namespace** `{{ vault_ns }}` to exist (`k8s_info` Namespace, retries 20 × 45s). +2. **Wait for pod** `{{ vault_pod }}` in that namespace (retries 20 × 45s). +3. **Exec** `vault status -format=json` on the leader pod until the result includes `'rc'` (handles transient 500 / handshake issues; retries 20 × 45s). +4. **Set fact `vault_status`** from parsed JSON stdout. +5. **List pods** in `{{ vault_ns }}` with label `component=server`, build **`vault_pods`** (names). +6. **Set `followers`** = all server pods **except** `{{ vault_pod }}` (the leader name from defaults is `vault-0`). + +### 2.2 `vault_unseal` proper + +1. **Include `vault_status.yaml`** again (refreshes `vault_status`, `followers`, etc.). +2. **Set `vault_sealed`** = `vault_status['sealed']` (boolean). +3. **If sealed:** read Secret **`{{ unseal_namespace }}/{{ unseal_secret }}`** (`k8s_info`); register `vault_init_data`. +4. **If sealed:** set **`vaultkeys_exists`** from whether the secret has any resources. +5. **If sealed and the secret is missing:** **`meta: end_play`** — the play stops. Unseal cannot proceed without the init material stored in the cluster. +6. 
**If sealed:** decode `vault_data_json` from the secret, parse JSON → **`vault_init_json`**.
+7. **If sealed:** set **`root_token`** and **`unseal_keys`** from `vault_init_json` (`root_token`, `unseal_keys_hex`).
+8. **If sealed — Unseal leader:** for **each** key in `unseal_keys`, exec on the leader pod: `vault operator unseal "<key>"`.
+9. **If sealed and `followers` is non-empty — Join Raft:** for each follower pod, exec:
+   `vault operator raft join http://{{ vault_pod }}.{{ vault_ns }}-internal:8200`
+   (retries 10, delay 15s per follower).
+10. **If sealed and followers exist — Unseal followers:** nested loop over `followers x unseal_keys` (each follower gets every unseal key applied via `vault operator unseal` on that follower's pod).
+11. **If sealed — Login:** on the leader pod: `vault login "{{ root_token }}"`.
+
+**If Vault is already unsealed** (`vault_sealed` false): steps 3–11 are skipped (no secret read, no unseal, no join, no login from this file). The play continues to `vault_secrets_init`.
+
+#### Operational implications
+
+- **HA:** Followers are discovered by label `component=server`; leader is fixed by name `vault_pod` (default `vault-0`).
+- **Security:** Root token and unseal keys live in the **`vaultkeys`** secret in **`imperative`** (by default); anyone with cluster access to that secret can unseal and administer Vault.
+- **Cold start:** Run **`vault_init`** before **`vault_unseal`** in the same play (as `main.yml` does), or ensure the `vaultkeys` secret already exists if Vault was initialized out-of-band.
+
+---
+
+## Step 3: `vault_secrets_init` (`vault_secrets_init.yaml`)
+
+Runs **after** unseal. Configures Vault **engines**, **Kubernetes auth** for External Secrets Operator, **policies**, and the **hub Kubernetes role**; then includes SS CSI workload auth tasks.
+
+Summary:
+
+1. Enable **KV v2** secrets engine at `{{ vault_base_path }}` (default `secret`) if not already present.
+2. 
Enable **`kubernetes`** auth at path `{{ vault_hub }}` (default `hub`) if missing.
+3. Resolve **External Secrets** SA token: prefer Secret `{{ external_secrets_ns }}/{{ external_secrets_secret }}` (defaults: `external-secrets` / `ocp-external-secrets`); else legacy `golang-external-secrets` / `golang-external-secrets`. Fail if neither exists.
+4. **`vault write auth/{{ vault_hub }}/config`** with `token_reviewer_jwt`, `kubernetes_host`, CA from the Vault pod's service account, issuer `https://kubernetes.default.svc`.
+5. Write **HCL policy files** in the pod under `/tmp` and **`vault policy write`** for: global, pushsecrets (data + metadata paths), hub path.
+6. Read existing **`auth/{{ vault_hub }}/role/{{ vault_hub }}-role`**, merge policies with `vault_hub_role_default_policies`, and **`vault write`** the role when an update is needed (bound SA/namespace from active external-secrets config, TTL from `vault_hub_ttl`).
+7. **`include_tasks: vault_ss_csi_workload_auth.yaml`** for optional SS CSI Kubernetes auth roles from pattern values.
+
+### SS CSI: parsing, extraction, and projection
+
+SS CSI workload auth runs from **`include_tasks: vault_ss_csi_workload_auth.yaml`**
+(inside **`vault_secrets_init.yaml`**). The pipeline is:
+
+1. **Parsing** — **`vault_ss_csi_load_clustergroup_values.yaml`** chooses merged
+   multi-stem loading (**`vault_ss_csi_aggregate_clustergroup_sources`**, default
+   true) or **legacy** single-document loading. Merged mode runs
+   **`clustergroup_discovery`** then, for each stem in **`clustergroup_load_order`**,
+   loads **`ConfigMap` `values-<stem>`** (then optional **`values-<stem>.yaml|yml`**
+   under **`pattern_dir`**) and merges **`clusterGroup.applications`** and
+   **`clusterGroup.managedClusterGroups`**. See **`roles/vault_utils/README.md`**
+   (SS CSI) for variables and task filenames.
+2. 
**Extraction** — Builds per-stem **`_vault_ss_csi_apps_by_stem`** and collects
+   **`ssCsiWorkloadAuth`** from **`clusterGroup.applications`** per stem (omit
+   **`cluster`** in values: main stem resolves to **hub**; other stems to the
+   **stem name**) and from merged **`clusterGroup.managedClusterGroups.*.applications`**
+   (omit **`cluster`**; defaults to managed group **`name`** or YAML key).
+3. **Projection** — Hub-classified rows get **`vault_ss_csi_apply_one_hub_sscsi_role`**;
+   spoke rows are normalized to **`vault_path`** during **`vault_spokes_init`**
+   (**`vault_ss_csi_normalize_spoke_entries_to_vault_path`**) and written with
+   **`vault_ss_csi_apply_one_spoke_sscsi_role`**.
+
+**Defaults:** ConfigMaps live in **`openshift-gitops`** unless
+**`vault_ss_csi_clustergroup_configmap_namespace`** is changed; YAML is read from
+data keys in **`vault_ss_csi_clustergroup_configmap_key_candidates`** unless
+**`vault_ss_csi_clustergroup_configmap_key`** is set. Each document must define
+**`clusterGroup`**. Set **`vault_ss_csi_clustergroup_values_from_configmap`** to
+false to force file-only reads. When **`vault_ss_csi_fallback_local_clustergroup_file`**
+is true, missing or unusable cluster data falls back to local files as implemented
+in **`vault_ss_csi_load_one_clustergroup_values_fragment.yaml`** / legacy tasks.
+
+**Spoke cluster ID and charts:** Omit **`cluster`** in pattern `ssCsiWorkloadAuth` lists; Ansible derives it from stem or managed group. Before applying SS CSI roles on spokes,
+**`vault_ss_csi_normalize_spoke_entries_to_vault_path.yaml`** rewrites each spoke row so **`cluster`** equals **`vault_path`**
+(spoke FQDN) for every cluster that has External Secrets token data (`esoToken`).
+That matches Vault Kubernetes auth mounts and ESO.
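As an illustration of the extraction input (the application name, namespace, and service account below are hypothetical; only the `ssCsiWorkloadAuth` key shape comes from this document), an entry in a clustergroup values file could look like:

```yaml
clusterGroup:
  applications:
    demo-app:                      # hypothetical application key
      ssCsiWorkloadAuth:
        - app: demo-app            # workload identity name
          serviceAccount: demo-sa  # bound Kubernetes service account
          namespace: demo-ns       # bound namespace
          # cluster: deliberately unset; the main stem resolves to hub,
          # other stems to the stem name, managed groups to the group name.
```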
+Pattern charts that render **`SecretProviderClass`** via **vp-sscsi-spc** should keep **`global.clusterDomain`** set to that same FQDN on the spoke; the library builds **`spec.parameters.roleName`** as **`<cluster>-sscsi-<app>`**, using the FQDN mount path (not a short clustergroup label).
+
+**Local inspection:** **`playbooks/list_clustergroups.yml`** and
+**`playbooks/parse_clustergroup_values.yml`** exercise the **`clustergroup_discovery`**
+role; see **`roles/clustergroup_discovery/README.md`**.
+
+### Vault route CA for SS CSI TLS
+
+The **SS CSI** path in this collection no longer gathers hub ingress CA material or applies CA `ConfigMap` objects.
+CA distribution for the Vault route is now expected to be handled by a separate chart.
+
+When using **Secrets Store CSI** against Vault over HTTPS (`vaultSkipTLSVerify: "false"`), ensure your platform/chart layer provides the CA bundle and mount path expected by your SS CSI deployment.
+The `vault_utils` role now only configures Vault auth backends, policies, and SS CSI Kubernetes auth roles.
+
+---
+
+## Step 4: `vault_spokes_init` (`vault_spokes_init.yaml`)
+
+Configures Vault for **ACM managed clusters** (Kubernetes auth mounts and roles per spoke, paths under `secret/`, etc.).
+
+**Important:** If there are **no** `ManagedCluster` resources, the ACM API call **failed**, or **`api_found`** is false, the role runs **`meta: end_play`**, which **stops the entire play** immediately.
+In that situation **`push_secrets`** and **`vault_jwt`** do **not** run in the same invocation.
+For hub-only workflows, use **`--skip-tags vault_spokes_init`** (or run `push_secrets` in a separate tagged run) so secret loading still executes.
+
+---
+
+## Step 5: `push_secrets` (`push_secrets.yaml`)
+
+Purpose: Load **pattern** secrets from disk into Vault using the **`vault_load_secrets`** module.
+
+1. **Include `vault_status.yaml`**.
+2. 
**Retry loop** on leader: `vault status -format=json` until **`sealed` is false** (handles race with async unseal or external unseal). +3. **Retry** until `vault list auth/{{ vault_hub }}/role` shows **`{{ vault_hub }}-role`** (hub role from secrets init). +4. Resolve **`found_file`**: `VALUES_SECRET` env if set and file exists; else `first_found` among pattern-specific paths under `~/.config/...`, `~/values-secret-*.yaml`, `~/values-secret.yaml`, or `{{ pattern_dir }}/values-secret.yaml.template`. +5. Detect **ansible-vault** encryption (first line `$ANSIBLE_VAULT`); if encrypted, **pause** for password and **`ansible-vault view`** to plaintext. +6. **`vault_load_secrets`** with either file path or plaintext, `check_missing_secrets: false`, and `values_secret_template` pointing at `{{ pattern_dir }}/values-secret.yaml.template`. + +--- + +## Step 6: `vault_jwt` (`vault_jwt.yaml`) + +Included from `main.yml` only when **`vault_jwt_config | default(false) | bool`** is true. Configures JWT auth and roles as defined in role defaults/vars. + +--- + +## Key variables (defaults) + +| Variable | Default | Meaning | +| -------- | ------- | ------- | +| `vault_ns` | `vault` | Vault namespace | +| `vault_pod` | `vault-0` | Leader pod name | +| `vault_hub` | `hub` | Kubernetes auth mount path segment | +| `vault_base_path` | `secret` | KV v2 mount path | +| `unseal_secret` | `vaultkeys` | Secret name holding init JSON | +| `unseal_namespace` | `imperative` | Namespace for unseal secret | + +Override via inventory, extra vars, or role vars as needed. + +--- + +## Selective execution (tags) + +You can run subsets, for example: + +```bash +ansible-playbook playbooks/vault.yml --tags vault_init,vault_unseal +``` + +Useful for reproducing only init+unseal without spokes or secret push. 
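The encryption probe in step 5 of `push_secrets` can be sketched in shell (a minimal illustration, not the role's actual implementation; the demo path is a throwaway temp file, not a real pattern location):

```shell
# A values-secret file counts as ansible-vault-encrypted when its first
# line starts with "$ANSIBLE_VAULT" (e.g. "$ANSIBLE_VAULT;1.1;AES256").
is_vault_encrypted() {
  head -n 1 "$1" 2>/dev/null | grep -q '^\$ANSIBLE_VAULT'
}

# Demo with a throwaway file:
tmp="$(mktemp)"
printf '$ANSIBLE_VAULT;1.1;AES256\n61626364\n' > "$tmp"
if is_vault_encrypted "$tmp"; then echo "encrypted"; else echo "plaintext"; fi
rm -f "$tmp"
```

An encrypted file takes the `ansible-vault view` branch described above; anything else is passed to `vault_load_secrets` as-is.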
+ +--- + +## Related documentation in repository + +- **`roles/vault_utils/README.md`** — Role variables, values-secret v1/v2 formats, Vault path layout (`secret/global`, `secret/hub`, spokes, `secret/pushsecrets`), and the SS CSI **parsing / extraction / projection** section. +- **`roles/clustergroup_discovery/README.md`** — How main + managed clustergroup stems are derived and how **`playbooks/list_clustergroups.yml`** / **`playbooks/parse_clustergroup_values.yml`** use them. +- **`playbooks/process_secrets.yml`** / **`roles/load_secrets`** — Broader "load secrets" flow for patterns (not identical to `vault.yml`, but shares concepts like `find_vp_secrets` and backing store). + +--- + +*Generated from repository `rhvp.cluster_utils` (Ansible tasks as of documentation date). Task files are authoritative if they diverge from this text.*