diff --git a/ansible/README.md b/ansible/README.md
new file mode 100644
index 00000000..a73de2ca
--- /dev/null
+++ b/ansible/README.md
@@ -0,0 +1,136 @@
+## MicroShift cluster automation with Ansible
+
+This directory contains a minimal workflow to build an Ansible runner container, prepare an inventory, install upstream MicroShift on target nodes, and join them into a single cluster.
+
+### Workflow
+
+```mermaid
+sequenceDiagram
+    participant User
+    participant Runner as Ansible Runner (container)
+    participant EC2
+    participant Primary as Primary Node
+    participant Secondary as Secondary Node(s)
+
+    User->>EC2: Provision VM instances via CloudFormation (CentOS Stream)
+    EC2->>Primary: VMs created with SSH keys
+    EC2->>Secondary: 
+    User->>User: Build Ansible container image
+    User->>Runner: ansible provision
+    rect rgb(235,245,255)
+        Note over Primary: Install MicroShift
+        Runner->>Primary: Prepare OS
+        Runner->>Secondary: 
+        Runner->>Primary: Download latest MicroShift RPMs from GitHub releases
+        Runner->>Secondary: 
+        Runner->>Primary: Install MicroShift packages
+        Runner->>Secondary: 
+        Runner->>Primary: Start MicroShift
+        Runner->>Secondary: 
+        Runner->>Primary: Healthcheck
+        Runner->>Secondary: 
+    end
+
+    rect rgb(245,235,255)
+        Note over Primary,Secondary: Create cluster
+        Runner->>Primary: Upload & run configure-node.sh
+        Runner->>Secondary: Distribute kubeconfig-bootstrap to nodes
+        Runner->>Secondary: Run configure-node.sh --bootstrap-kubeconfig
+        Secondary->>Primary: Joins cluster
+    end
+
+    Runner->>User: Report cluster ready / errors
+```
+
+#### 1. Infrastructure Setup
+- User provisions EC2 VM instances using CloudFormation with CentOS Stream
+- VMs are created with SSH keys for access
+- User builds the Ansible runner container image locally
+
+#### 2. MicroShift Installation
+- Ansible runner prepares the operating system on both primary and secondary nodes
+- Downloads the latest MicroShift RPM packages from GitHub releases
+- Installs MicroShift packages on all nodes
+- Starts the MicroShift service on all nodes
+- Performs health checks to verify installation success
+
+#### 3. Cluster Formation
+- Ansible runner uploads and executes the configure-node.sh script on the primary node
+- Distributes kubeconfig-bootstrap files to secondary nodes
+- Executes configure-node.sh with --bootstrap-kubeconfig on secondary nodes
+- Secondary nodes join the cluster by connecting to the primary node
+- Ansible runner reports the final cluster status (ready/errors) back to the user
+
+The entire process is orchestrated through the containerized Ansible runner, which manages parallel operations across multiple nodes to create a functional MicroShift cluster.
+
+### Prerequisites
+- **Podman** installed on your workstation.
+- **SSH access** from your workstation to all target VMs.
+  - Public keys should be present in `~/.ssh/` and trusted by the target hosts.
+  - Passwordless sudo on the target hosts is recommended for automation.
+- Target hosts are reachable by DNS or IP and allow inbound SSH.
+- **Note:** This workflow assumes that your target nodes are running **CentOS Stream 9** as the operating system.
+
+### Build the Ansible runner image (Podman)
+Build the container that bundles Ansible and the playbooks in this repo:
+
+```bash
+podman build -f ansible-runner.Containerfile -t ansible-runner:latest .
+```
+
+### Create the Ansible inventory
+Create an `inventory` file in the `cluster/` directory with your managed hosts. Example:
+
+```ini
+[remote]
+192.168.1.22 ansible_connection=ssh ansible_user=ec2-user role=primary
+192.168.1.23 ansible_connection=ssh ansible_user=ec2-user role=secondary
+```
+
+Notes:
+- `ansible_user` must exist on the host and have sudo privileges.
+- `role` is used by the playbooks to distinguish the primary node from secondaries.
+- This setup assumes SSH keys are available at `~/.ssh/` on your workstation.
+
+### Install the latest upstream MicroShift on all nodes
+This installs MicroShift on all configured nodes in parallel.
+
+```bash
+podman run -ti \
+  -v ~/.ssh:/home/runner/.ssh:Z \
+  -v "$PWD/cluster":/runner:Z \
+  -v "$PWD/roles":/runner/roles:Z \
+  ansible-runner:latest \
+  ansible-playbook -i /runner/inventory /runner/install.yaml
+```
+
+### Join nodes into a single cluster
+After installation, run the join procedure to form a cluster with the designated primary.
+
+```bash
+podman run -ti \
+  -v ~/.ssh:/home/runner/.ssh:Z \
+  -v "$PWD/cluster":/runner:Z \
+  -v "$PWD/roles":/runner/roles:Z \
+  ansible-runner:latest \
+  ansible-playbook -i /runner/inventory /runner/join.yaml
+```
+
+### Verify the cluster
+Once the join completes, log into the primary node and run the following commands to verify the cluster is up and running:
+
+```bash
+mkdir -p ~/.kube
+sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config
+kubectl get nodes -o wide
+kubectl get pods -A
+```
+
+### Troubleshooting
+- If SSH fails inside the container, confirm `~/.ssh` is mounted and permissions are preserved.
+- Ensure security contexts on volumes are correct (note the `:Z` label on `-v` mounts when using SELinux).
+- Validate that inventory hostnames/IPs resolve from your workstation.
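As a quick pre-flight check, the host addresses can be extracted from the inventory and probed before any playbook runs. This is a minimal sketch assuming the INI inventory format shown above; the example file path is a stand-in for `cluster/inventory`:

```shell
# Write a sample inventory matching the format documented above
# (in practice you would point the awk command at cluster/inventory).
cat > /tmp/inventory.example <<'EOF'
[remote]
192.168.1.22 ansible_connection=ssh ansible_user=ec2-user role=primary
192.168.1.23 ansible_connection=ssh ansible_user=ec2-user role=secondary
EOF

# Skip section headers ([remote]) and blank lines; the first field is the address.
awk '/^\[/ { next } NF { print $1 }' /tmp/inventory.example

# Each printed address can then be probed before running Ansible, for example:
#   ssh -o BatchMode=yes -o ConnectTimeout=5 ec2-user@<addr> true
```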
+
+### Cleanup (optional)
+To remove the built image locally:
+
+```bash
+podman rmi ansible-runner:latest
+```
diff --git a/ansible/ansible-runner.Containerfile b/ansible/ansible-runner.Containerfile
new file mode 100644
index 00000000..8b84a7ac
--- /dev/null
+++ b/ansible/ansible-runner.Containerfile
@@ -0,0 +1,49 @@
+FROM quay.io/centos/centos:stream9
+
+ENV LANG=en_US.UTF-8
+
+# Enable CRB to get python3-wheel, then install deps
+RUN dnf -y update && \
+    dnf -y install dnf-plugins-core && \
+    dnf config-manager --set-enabled crb && \
+    dnf -y install \
+        python3 \
+        python3-pip \
+        python3-psutil \
+        python3-setuptools \
+        python3-wheel \
+        git \
+        sshpass \
+        openssh-clients \
+        rsync \
+        tar \
+        gzip \
+        unzip \
+        which \
+        jq \
+        sudo \
+        findutils \
+        procps-ng \
+        hostname \
+        ca-certificates && \
+    dnf clean all && rm -rf /var/cache/dnf/*
+
+# Ansible + ansible-runner
+RUN pip3 install --no-cache-dir --upgrade pip && \
+    pip3 install --no-cache-dir ansible-core ansible-runner && \
+    pip3 install --no-cache-dir "github3.py>=4.0.0"
+
+# Runner FS layout
+RUN mkdir -p /runner/project /runner/inventory /runner/env /runner/artifacts && \
+    echo "[all]" > /runner/inventory/hosts
+
+# Non-root user
+RUN useradd -m -s /bin/bash runner && \
+    chown -R runner:runner /runner
+
+USER runner
+RUN ansible-galaxy collection install community.general containers.podman
+
+WORKDIR /runner/project
+CMD ["bash", "-lc", "echo 'Ansible:' && ansible --version && echo 'Runner:' && ansible-runner --version"]
diff --git a/ansible/cluster/install.yaml b/ansible/cluster/install.yaml
new file mode 100644
index 00000000..f3c930d1
--- /dev/null
+++ b/ansible/cluster/install.yaml
@@ -0,0 +1,113 @@
+---
+- name: Install MicroShift on remote hosts
+  hosts: remote
+  gather_facts: true
+  become: false
+
+  vars:
+    RPM_DEP_PATH: "4.21-el9-beta"
+
+  tasks:
+    - name: Install required packages
+      become: true
+      ansible.builtin.dnf:
+        name:
+          - tmux
+          - git
+          - python3-pip
+        state: present
+
+    - name: Install k9s for Kubernetes debugging
+      ansible.builtin.include_role:
+        name: k9s
+      vars:
+        ansible_become: true
+
+    - name: Install github3.py for python3
+      ansible.builtin.pip:
+        name: "github3.py"
+        executable: pip3
+
+    - name: Download latest MicroShift RPMs from GitHub
+      ansible.builtin.include_role:
+        name: microshift-okd-download
+      vars:
+        microshift_download_dir: "/var/tmp/microshift_rpms"
+
+    - name: Extract MicroShift RPMs tarball
+      ansible.builtin.unarchive:
+        src: /var/tmp/microshift_rpms/microshift-rpms-x86_64.tgz
+        dest: /var/tmp/microshift_rpms
+        remote_src: true
+
+    - name: Create openshift-mirror-beta repo file (CentOS only)
+      ansible.builtin.copy:
+        dest: /etc/yum.repos.d/openshift-mirror-beta.repo
+        mode: '0644'
+        content: |
+          [openshift-mirror-beta]
+          name=OpenShift Mirror Beta Repository
+          baseurl=https://mirror.openshift.com/pub/openshift-v4/{{ ansible_facts.architecture }}/dependencies/rpms/{{ RPM_DEP_PATH }}/
+          enabled=1
+          gpgcheck=0
+          skip_if_unavailable=0
+      become: true
+      when: ansible_facts.distribution == "CentOS"
+
+    - name: Create local.repo file
+      ansible.builtin.copy:
+        dest: /etc/yum.repos.d/local.repo
+        mode: '0644'
+        content: |
+          [microshift-local]
+          # No spaces allowed in that [repo-name] or you get a "bad id for repo" error
+          name=My RPMs $releasever - $basearch
+          baseurl=/var/tmp/microshift_rpms
+          enabled=1
+          metadata_expire=1d
+          gpgcheck=0
+      become: true
+
+    - name: Perform dnf update
+      ansible.builtin.dnf:
+        name: '*'
+        state: latest
+        update_cache: true
+      become: true
+
+    - name: Install MicroShift packages
+      ansible.builtin.dnf:
+        name:
+          - microshift
+          - microshift-networking
+          - openvswitch3.5
+        state: present
+      become: true
+
+    - name: Create MicroShift config
+      ansible.builtin.copy:
+        dest: /etc/microshift/config.yaml
+        content: |
+          apiServer:
+            subjectAltNames:
+              - "{{ ansible_ssh_host }}"
+      become: true
+
+    - name: Restart microshift service
+      ansible.builtin.systemd:
+        name: microshift
+        state: restarted
+        daemon_reload: true
+      become: true
+
+    - name: Run microshift healthcheck
+      ansible.builtin.command: microshift healthcheck
+      become: true
+
+    - name: Add kubeconfig to bashrc
+      ansible.builtin.lineinfile:
+        path: /root/.bashrc
+        line: 'export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig'
+        state: present
+        create: true
+      become: true
\ No newline at end of file
diff --git a/ansible/cluster/join.yaml b/ansible/cluster/join.yaml
new file mode 100644
index 00000000..e0b0452b
--- /dev/null
+++ b/ansible/cluster/join.yaml
@@ -0,0 +1,75 @@
+---
+- name: Join MicroShift nodes into a cluster
+  hosts: remote
+  gather_facts: true
+  become: false
+
+  tasks:
+    - name: Install required packages
+      become: true
+      ansible.builtin.dnf:
+        name:
+          - tmux
+          - git
+          - python3-pip
+          - firewalld
+        state: present
+
+    - name: Ensure kubeconfig is accessible as root on the primary host
+      become: true
+      ansible.builtin.copy:
+        src: /var/lib/microshift/resources/kubeadmin/kubeconfig
+        dest: /root/kubeconfig
+        remote_src: true
+        mode: '0600'
+      when: hostvars[inventory_hostname]['role'] == "primary"
+
+    - name: Find primary host name
+      ansible.builtin.set_fact:
+        primary_hostname: "{{ groups['remote'] | map('extract', hostvars) | selectattr('role', 'equalto', 'primary') | map(attribute='inventory_hostname') | list | first | default('') }}"
+      run_once: true
+
+    - name: Validate primary hostname is defined
+      ansible.builtin.fail:
+        msg: "No host with role=primary found in inventory"
+      when: primary_hostname is not defined or primary_hostname == ''
+      run_once: true
+
+    - name: Download configure-node.sh to /root/configure-node.sh
+      become: true
+      ansible.builtin.get_url:
+        url: https://raw.githubusercontent.com/openshift/microshift/5a0a896896bc8ecdaf0e72ca1c12a909988a3790/scripts/multinode/configure-node.sh
+        dest: /root/configure-node.sh
+        mode: '0755'
+        checksum: sha256:eee96af46d8068c1b154190cab208151f915755d7ac58836b15d7d5b9bc75225
+
+    - name: Run configure-node.sh on the primary host as root
+      become: true
+      ansible.builtin.shell: /root/configure-node.sh
+      async: 600
+      poll: 5
+      when: hostvars[inventory_hostname]['role'] == "primary"
+
+    - name: Fetch kubeconfig from the primary host to the Ansible executor
+      become: true
+      ansible.builtin.fetch:
+        src: /root/kubeconfig-bootstrap
+        dest: /tmp/kubeconfig_from_primary
+        flat: true
+      delegate_to: "{{ primary_hostname }}"
+      run_once: true
+
+    - name: Copy kubeconfig from the executor to secondary hosts
+      become: true
+      ansible.builtin.copy:
+        src: /tmp/kubeconfig_from_primary
+        dest: /root/kubeconfig-bootstrap
+        mode: '0600'
+      when: hostvars[inventory_hostname]['role'] == "secondary"
+
+    - name: Run configure-node.sh with --bootstrap-kubeconfig on secondary hosts
+      become: true
+      ansible.builtin.shell: /root/configure-node.sh --bootstrap-kubeconfig /root/kubeconfig-bootstrap
+      async: 600
+      poll: 5
+      when: hostvars[inventory_hostname]['role'] == "secondary"
+
diff --git a/ansible-roles/README.md b/ansible/roles/README.md
similarity index 100%
rename from ansible-roles/README.md
rename to ansible/roles/README.md
diff --git a/ansible/roles/k9s/tasks/main.yaml b/ansible/roles/k9s/tasks/main.yaml
new file mode 100644
index 00000000..9e45932d
--- /dev/null
+++ b/ansible/roles/k9s/tasks/main.yaml
@@ -0,0 +1,40 @@
+---
+- name: Create temporary directory for k9s download
+  ansible.builtin.tempfile:
+    state: directory
+    prefix: k9s_download_
+  register: temp_k9s_dir
+
+- name: Download k9s tarball
+  ansible.builtin.get_url:
+    # The template already references k9s_version, so Ansible renders it
+    # when the variable is evaluated; no extra replace filter is needed.
+    url: "{{ k9s_download_url_template }}"
+    dest: "{{ temp_k9s_dir.path }}/k9s.tar.gz"
+    mode: '0644'
+
+- name: Unarchive k9s tarball
+  ansible.builtin.unarchive:
+    src: "{{ temp_k9s_dir.path }}/k9s.tar.gz"
+    dest: "{{ temp_k9s_dir.path }}"
+    remote_src: true  # the src is on the managed node
+
+- name: Install k9s binary
+  ansible.builtin.copy:
+    src: "{{ temp_k9s_dir.path }}/k9s"
+    dest: "{{ k9s_install_dir }}/k9s"
+    mode: '0755'
+    remote_src: true
+
+- name: Clean up k9s temporary directory
+  ansible.builtin.file:
+    path: "{{ temp_k9s_dir.path }}"
+    state: absent
+  when: temp_k9s_dir.path is defined
+
+- name: Add kubeconfig to bashrc
+  ansible.builtin.lineinfile:
+    path: /root/.bashrc
+    line: 'export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig'
+    state: present
+    create: true
+
\ No newline at end of file
diff --git a/ansible/roles/k9s/vars/main.yaml b/ansible/roles/k9s/vars/main.yaml
new file mode 100644
index 00000000..b854ba05
--- /dev/null
+++ b/ansible/roles/k9s/vars/main.yaml
@@ -0,0 +1,3 @@
+k9s_version: "v0.50.16"  # Specify the desired k9s version
+k9s_install_dir: "/usr/local/bin"
+k9s_download_url_template: "https://github.com/derailed/k9s/releases/download/{{ k9s_version }}/k9s_Linux_amd64.tar.gz"
diff --git a/ansible-roles/microshift-okd-bootc/files/create_repos.sh b/ansible/roles/microshift-okd-bootc/files/create_repos.sh
similarity index 100%
rename from ansible-roles/microshift-okd-bootc/files/create_repos.sh
rename to ansible/roles/microshift-okd-bootc/files/create_repos.sh
diff --git a/ansible-roles/microshift-okd-bootc/tasks/build.yaml b/ansible/roles/microshift-okd-bootc/tasks/build.yaml
similarity index 100%
rename from ansible-roles/microshift-okd-bootc/tasks/build.yaml
rename to ansible/roles/microshift-okd-bootc/tasks/build.yaml
diff --git a/ansible-roles/microshift-okd-bootc/tasks/fetch-kubeconfig.yaml b/ansible/roles/microshift-okd-bootc/tasks/fetch-kubeconfig.yaml
similarity index 100%
rename from ansible-roles/microshift-okd-bootc/tasks/fetch-kubeconfig.yaml
rename to ansible/roles/microshift-okd-bootc/tasks/fetch-kubeconfig.yaml
diff --git a/ansible-roles/microshift-okd-bootc/tasks/main.yaml b/ansible/roles/microshift-okd-bootc/tasks/main.yaml
similarity index 100%
rename from ansible-roles/microshift-okd-bootc/tasks/main.yaml
rename to ansible/roles/microshift-okd-bootc/tasks/main.yaml
diff --git a/ansible-roles/microshift-okd-bootc/tasks/run.yaml b/ansible/roles/microshift-okd-bootc/tasks/run.yaml
similarity index 100%
rename from ansible-roles/microshift-okd-bootc/tasks/run.yaml
rename to ansible/roles/microshift-okd-bootc/tasks/run.yaml
diff --git a/ansible-roles/microshift-okd-bootc/tasks/topolvm.yaml b/ansible/roles/microshift-okd-bootc/tasks/topolvm.yaml
similarity index 100%
rename from ansible-roles/microshift-okd-bootc/tasks/topolvm.yaml
rename to ansible/roles/microshift-okd-bootc/tasks/topolvm.yaml
diff --git a/ansible-roles/microshift-okd-bootc/templates/Containerfile.template b/ansible/roles/microshift-okd-bootc/templates/Containerfile.template
similarity index 100%
rename from ansible-roles/microshift-okd-bootc/templates/Containerfile.template
rename to ansible/roles/microshift-okd-bootc/templates/Containerfile.template
diff --git a/ansible-roles/microshift-okd-bootc/vars/main.yaml b/ansible/roles/microshift-okd-bootc/vars/main.yaml
similarity index 100%
rename from ansible-roles/microshift-okd-bootc/vars/main.yaml
rename to ansible/roles/microshift-okd-bootc/vars/main.yaml
diff --git a/ansible-roles/microshift-okd-download/tasks/main.yaml b/ansible/roles/microshift-okd-download/tasks/main.yaml
similarity index 92%
rename from ansible-roles/microshift-okd-download/tasks/main.yaml
rename to ansible/roles/microshift-okd-download/tasks/main.yaml
index 39ca9980..7a749bec 100644
--- a/ansible-roles/microshift-okd-download/tasks/main.yaml
+++ b/ansible/roles/microshift-okd-download/tasks/main.yaml
@@ -24,6 +24,9 @@
     tag: "{{ _gh_tag }}"
   register: _microshift_release_info
   delegate_to: localhost
+- name: Show release info for debugging
+  ansible.builtin.debug:
+    var: _microshift_release_info
 
 - name: Extract MicroShift asset download URL
   ansible.builtin.set_fact:
@@ -34,4 +37,5 @@
     url: "{{ _microshift_asset_url }}"
     dest: "{{ _microshift_download_dest_path }}"
     mode: '0644' # Permissions for the downloaded file (e.g., a zip file)
+    force: true  # Force download and replace an existing file
   when: _microshift_asset_url is not none
\ No newline at end of file
diff --git a/ansible-roles/microshift-okd-download/vars/main.yaml b/ansible/roles/microshift-okd-download/vars/main.yaml
similarity index 71%
rename from ansible-roles/microshift-okd-download/vars/main.yaml
rename to ansible/roles/microshift-okd-download/vars/main.yaml
index fd3a0f7b..e850c40b 100644
--- a/ansible-roles/microshift-okd-download/vars/main.yaml
+++ b/ansible/roles/microshift-okd-download/vars/main.yaml
@@ -5,7 +5,7 @@ microshift_github_repository: "microshift"
 # The filename of the asset you want to download from the release.
 # You can make this dynamic, e.g., using ansible_facts.architecture:
 # microshift_asset_filename: "microshift-{{ ansible_facts.architecture }}.zip"
-microshift_asset_filename: "microshift-x86_64.zip"
+microshift_asset_filename: "microshift-rpms-x86_64.tgz"
 # Directory where the asset will be downloaded.
 microshift_download_dir: "./cache/microshift_assets"
-release_base_url: "https://github.com/microshift-io/microshift/releases/download/"
\ No newline at end of file
+release_base_url: "https://github.com/{{ microshift_github_owner }}/{{ microshift_github_repository }}/releases/download/"
\ No newline at end of file
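For reference, the templated `release_base_url` above combines with the resolved release tag and `microshift_asset_filename` to form the final download URL. A small sketch of that expansion, where the owner value is an assumption taken from the previously hardcoded URL and the tag is purely hypothetical (the role resolves the real tag at runtime):

```shell
# Sketch of the URL assembled by release_base_url + tag + asset filename.
# owner is assumed from the old hardcoded URL; tag here is hypothetical.
owner="microshift-io"
repo="microshift"
asset="microshift-rpms-x86_64.tgz"
tag="v0.0.0-example"
url="https://github.com/${owner}/${repo}/releases/download/${tag}/${asset}"
echo "$url"
```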