Merged
136 changes: 136 additions & 0 deletions ansible/README.md
@@ -0,0 +1,136 @@
## MicroShift cluster automation with Ansible

This directory contains a minimal workflow to build an Ansible runner container, prepare an inventory, install upstream MicroShift on target nodes, and join them into a single cluster.

### Workflow

```mermaid
sequenceDiagram
participant User
participant Runner as Ansible Runner (container)
participant EC2
participant Primary as Primary Node
participant Secondary as Secondary Node(s)

User->>EC2: Provision VM instances via CloudFormation (CentOS Stream)
EC2->>Primary: VMs created with SSH keys
EC2->>Secondary:
User->>User: Build Ansible runner container image
User->>Runner: ansible provision
rect rgb(235,245,255)
Note over Primary,Secondary: Install MicroShift
Runner->>Primary: Prepare OS
Runner->>Secondary:
Runner->>Primary: Download latest MicroShift RPMs from GitHub releases
Runner->>Secondary:
Runner->>Primary: Install MicroShift packages
Runner->>Secondary:
Runner->>Primary: Start MicroShift
Runner->>Secondary:
Runner->>Primary: Healthcheck
Runner->>Secondary:
end

rect rgb(245,235,255)
Note over Primary,Secondary: Create Cluster
Runner->>Primary: Upload & run configure-node.sh
Runner->>Secondary: Distribute kubeconfig-bootstrap to Nodes
Runner->>Secondary: Run configure-node.sh --bootstrap-kubeconfig
Secondary->>Primary: Joins cluster
end

Runner->>User: Report cluster ready / errors
```

#### 1. Infrastructure Setup
- User provisions EC2 VM instances using CloudFormation with CentOS Stream
- VMs are created with SSH keys for access
- User builds the Ansible runner container image locally

#### 2. MicroShift Installation
- Ansible runner prepares the operating system on both primary and secondary nodes
- Downloads the latest MicroShift RPM packages from GitHub releases
- Installs MicroShift packages on all nodes
- Starts the MicroShift service on all nodes
- Performs health checks to verify installation success

#### 3. Cluster Formation
- Ansible runner uploads and executes the configure-node.sh script on the primary node
- Distributes kubeconfig-bootstrap files to secondary nodes
- Executes configure-node.sh with --bootstrap-kubeconfig on secondary nodes
- Secondary nodes join the cluster by connecting to the primary node
- Ansible runner reports the final cluster status (ready/errors) back to the user

The entire process is orchestrated through the containerized Ansible runner, which manages parallel operations across multiple nodes to create a functional MicroShift cluster.

### Prerequisites
- **Podman** installed on your workstation.
- **SSH access** from your workstation to all target VMs.
- Public keys should be present in `~/.ssh/` and trusted by the target hosts.
- Passwordless sudo on the target hosts is recommended for automation.
- Target hosts are reachable by DNS or IP and allow inbound SSH.
- **Note:** This workflow assumes that your target nodes are running **CentOS Stream 9** as the operating system.
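The SSH prerequisites above can be checked up front with a small helper before running any playbook. This is a sketch: the `ec2-user` login and host IPs are placeholders, substitute your own.

```shell
# Preflight: verify SSH reachability and passwordless sudo on each target host.
# BatchMode fails fast instead of prompting for a password; sudo -n fails if a
# password would be required.
preflight() {
  local user=$1; shift
  for host in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$user@$host" 'sudo -n true' 2>/dev/null; then
      echo "$host: OK"
    else
      echo "$host: FAILED (check SSH keys or sudoers)"
    fi
  done
}
# Example: preflight ec2-user 192.168.1.22 192.168.1.23
```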

### Build the Ansible runner image (Podman)
Build the container that bundles Ansible and playbooks in this repo:

```bash
podman build -f ansible-runner.Containerfile -t ansible-runner:latest .
```

### Create the Ansible inventory
Create an `inventory` file in the `cluster/` directory listing your managed hosts. Example:

```ini
[remote]
192.168.1.22 ansible_connection=ssh ansible_user=ec2-user role=primary
192.168.1.23 ansible_connection=ssh ansible_user=ec2-user role=secondary
```

Notes:
- `ansible_user` must exist on the host and have sudo privileges.
- `role` is used by the playbooks to distinguish the primary node from secondaries.
- This setup assumes SSH keys are available at `~/.ssh/` on your workstation.
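For more than a couple of hosts, the inventory can be generated rather than hand-written. A minimal sketch, assuming the example `ec2-user` login and placeholder IPs:

```shell
# Generate cluster/inventory from a primary IP and a list of secondary IPs.
# All addresses below are placeholders; substitute your own.
mkdir -p cluster
PRIMARY=192.168.1.22
SECONDARIES="192.168.1.23 192.168.1.24"
{
  echo "[remote]"
  echo "$PRIMARY ansible_connection=ssh ansible_user=ec2-user role=primary"
  for ip in $SECONDARIES; do
    echo "$ip ansible_connection=ssh ansible_user=ec2-user role=secondary"
  done
} > cluster/inventory
cat cluster/inventory
```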

### Install latest upstream MicroShift on all nodes
This installs MicroShift on all configured nodes in parallel.

```bash
podman run -ti \
-v ~/.ssh:/home/runner/.ssh:Z \
-v "$PWD/cluster":/runner:Z \
-v "$PWD/roles":/runner/roles:Z \
ansible-runner:latest \
ansible-playbook -i /runner/inventory /runner/install.yaml
```
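Since the install and join invocations differ only in the playbook path, the volume mounts can be wrapped in a small shell function to avoid repetition. A sketch, assuming the image name and directory layout used in this README:

```shell
# Wrapper around the containerized runner so the mounts aren't repeated
# for every playbook invocation.
runner() {
  podman run -ti \
    -v ~/.ssh:/home/runner/.ssh:Z \
    -v "$PWD/cluster":/runner:Z \
    -v "$PWD/roles":/runner/roles:Z \
    ansible-runner:latest \
    ansible-playbook -i /runner/inventory "$@"
}
# Usage: runner /runner/install.yaml
#        runner /runner/join.yaml
```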

### Join nodes into a single cluster
After installation, run the join procedure to form a cluster with the designated primary.

```bash
podman run -ti \
-v ~/.ssh:/home/runner/.ssh:Z \
-v "$PWD/cluster":/runner:Z \
-v "$PWD/roles":/runner/roles:Z \
ansible-runner:latest \
ansible-playbook -i /runner/inventory /runner/join.yaml
```

### Verify the cluster
Once the join completes, log into the primary node and run the following commands to verify the cluster is up and running:

```bash
mkdir -p ~/.kube
sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config
kubectl get nodes -o wide
kubectl get pods -A
```
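After a join, nodes may take a short while to reach `Ready`. A polling helper can make the verification step scriptable; this is a sketch that assumes `kubectl` is already configured as shown above:

```shell
# Poll `kubectl get nodes` until every node's STATUS column is Ready,
# or give up after a timeout (default 300s).
wait_ready() {
  local timeout=${1:-300} elapsed=0 out
  while [ "$elapsed" -lt "$timeout" ]; do
    out=$(kubectl get nodes --no-headers 2>/dev/null)
    # Non-empty output and no line whose 2nd field differs from "Ready".
    if [ -n "$out" ] && printf '%s\n' "$out" | awk '$2 != "Ready" { nr=1 } END { exit nr }'; then
      echo "all nodes Ready"
      return 0
    fi
    sleep 5
    elapsed=$((elapsed + 5))
  done
  echo "timed out waiting for Ready nodes" >&2
  return 1
}
```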

### Troubleshooting
- If SSH fails inside the container, confirm `~/.ssh` is mounted and permissions are preserved.
- Ensure security contexts on volumes are correct (note the `:Z` label in `-v` mounts when using SELinux).
- Validate inventory hostnames/IPs resolve from your workstation.

### Cleanup (optional)
To remove the built image locally:

```bash
podman rmi ansible-runner:latest
```
49 changes: 49 additions & 0 deletions ansible/ansible-runner.Containerfile
@@ -0,0 +1,49 @@
FROM quay.io/centos/centos:stream9

ENV LANG=en_US.UTF-8

# Enable CRB to get python3-wheel, then install deps
RUN dnf -y update && \
dnf -y install dnf-plugins-core && \
dnf config-manager --set-enabled crb && \
dnf -y install \
python3 \
python3-pip \
python3-psutil \
python3-setuptools \
python3-wheel \
git \
sshpass \
openssh-clients \
rsync \
tar \
gzip \
unzip \
which \
jq \
sudo \
findutils \
procps-ng \
hostname \
ca-certificates && \
dnf clean all && rm -rf /var/cache/dnf/*

# Ansible + ansible-runner
RUN pip3 install --no-cache-dir --upgrade pip && \
pip3 install --no-cache-dir ansible-core ansible-runner && \
pip3 install --no-cache-dir "github3.py>=4.0.0"

# Runner FS layout
RUN mkdir -p /runner/project /runner/inventory /runner/env /runner/artifacts && \
echo "[all]" > /runner/inventory/hosts

# Non-root user
RUN useradd -m -s /bin/bash runner && \
chown -R runner:runner /runner

USER runner
RUN ansible-galaxy collection install community.general containers.podman

WORKDIR /runner/project
CMD ["bash", "-lc", "echo 'Ansible:' && ansible --version && echo 'Runner:' && ansible-runner --version"]
113 changes: 113 additions & 0 deletions ansible/cluster/install.yaml
@@ -0,0 +1,113 @@
---
- name: Install MicroShift on remote hosts
  hosts: remote
  gather_facts: true
  become: false

  vars:
    RPM_DEP_PATH: "4.21-el9-beta"

  tasks:
    - name: Install required packages
      become: true
      dnf:
        name:
          - tmux
          - git
          - python3-pip
        state: present

    - name: Install k9s for Kubernetes debugging
      include_role:
        name: k9s
      vars:
        ansible_become: true

    - name: Install github3.py for python3
      ansible.builtin.pip:
        name: "github3.py"
        executable: pip3

    - name: Download latest MicroShift RPMs from GitHub
      include_role:
        name: microshift-okd-download
      vars:
        microshift_download_dir: "/var/tmp/microshift_rpms"

    - name: Extract MicroShift RPMs tarball
      ansible.builtin.unarchive:
        src: /var/tmp/microshift_rpms/microshift-rpms-x86_64.tgz
        dest: /var/tmp/microshift_rpms
        remote_src: yes

    - name: Create openshift-mirror-beta repo file on CentOS
      copy:
        dest: /etc/yum.repos.d/openshift-mirror-beta.repo
        mode: 0644
        content: |
          [openshift-mirror-beta]
          name=OpenShift Mirror Beta Repository
          baseurl=https://mirror.openshift.com/pub/openshift-v4/{{ ansible_facts.architecture }}/dependencies/rpms/{{ RPM_DEP_PATH }}/
          enabled=1
          gpgcheck=0
          skip_if_unavailable=0
      become: true
      when: ansible_facts.distribution == "CentOS"

    - name: Create local.repo file
      copy:
        dest: /etc/yum.repos.d/local.repo
        mode: 0644
        content: |
          [microshift-local]
          # No spaces allowed in the [repo-name] or you get a "bad id for repo" error
          name=My RPMs $releasever - $basearch
          baseurl=/var/tmp/microshift_rpms
          enabled=1
          metadata_expire=1d
          gpgcheck=0
      become: true

    - name: Perform dnf update
      ansible.builtin.dnf:
        name: '*'
        state: latest
        update_cache: yes
      become: true

    - name: Install MicroShift packages
      dnf:
        name:
          - microshift
          - microshift-networking
          - openvswitch3.5
        state: present
      become: true

    - name: Create MicroShift config
      copy:
        dest: /etc/microshift/config.yaml
        content: |
          apiServer:
            subjectAltNames:
              - "{{ ansible_ssh_host }}"
      become: true

    - name: Restart MicroShift service
      systemd:
        name: microshift
        state: restarted
        daemon_reload: yes
      become: true

    - name: Run MicroShift healthcheck
      command: microshift healthcheck
      become: true

    - name: Add KUBECONFIG export to root's .bashrc
      lineinfile:
        path: /root/.bashrc
        line: 'export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig'
        state: present
        create: yes
      become: true
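For reference, with the `subjectAltNames` template in the playbook above, the rendered `/etc/microshift/config.yaml` on a node whose `ansible_ssh_host` is `192.168.1.22` (the example address from the README inventory) would be:

```yaml
apiServer:
  subjectAltNames:
    - "192.168.1.22"
```

Adding the node's reachable address as a SAN lets clients validate the API server certificate when connecting by IP rather than by the node's internal hostname.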
75 changes: 75 additions & 0 deletions ansible/cluster/join.yaml
@@ -0,0 +1,75 @@
---
- name: Join MicroShift nodes to a cluster
  hosts: remote
  gather_facts: true
  become: false

  tasks:
    - name: Install required packages
      become: true
      dnf:
        name:
          - tmux
          - git
          - python3-pip
          - firewalld
        state: present

    - name: Ensure kubeconfig is accessible as root on primary host
      become: true
      copy:
        src: /var/lib/microshift/resources/kubeadmin/kubeconfig
        dest: /root/kubeconfig
        remote_src: yes
        mode: '0600'
      when: hostvars[inventory_hostname]['role'] == "primary"

    - name: Find primary host name
      set_fact:
        primary_hostname: "{{ groups['remote'] | map('extract', hostvars) | selectattr('role', 'equalto', 'primary') | map(attribute='inventory_hostname') | list | first | default('') }}"
      run_once: true

    - name: Validate primary hostname is defined
      ansible.builtin.fail:
        msg: "No host with role=primary found in inventory"
      when: primary_hostname is not defined or primary_hostname == ''
      run_once: true

    - name: Download configure-node.sh to /root/configure-node.sh
      become: true
      ansible.builtin.get_url:
        url: https://raw.githubusercontent.com/openshift/microshift/5a0a896896bc8ecdaf0e72ca1c12a909988a3790/scripts/multinode/configure-node.sh
        dest: /root/configure-node.sh
        mode: '0755'
        checksum: sha256:eee96af46d8068c1b154190cab208151f915755d7ac58836b15d7d5b9bc75225

    - name: Run configure-node.sh on primary host as root
      become: true
      shell: /root/configure-node.sh
      async: 600
      poll: 5
      when: hostvars[inventory_hostname]['role'] == "primary"

    - name: Fetch kubeconfig from primary host to Ansible executor
      become: true
      fetch:
        src: /root/kubeconfig-bootstrap
        dest: /tmp/kubeconfig_from_primary
        flat: yes
      delegate_to: "{{ primary_hostname }}"
      run_once: true

    - name: Copy kubeconfig from executor to secondary hosts
      become: true
      copy:
        src: /tmp/kubeconfig_from_primary
        dest: /root/kubeconfig-bootstrap
        mode: '0600'
      when: hostvars[inventory_hostname]['role'] == "secondary"

    - name: Run configure-node.sh with --bootstrap-kubeconfig on secondary hosts
      become: true
      shell: /root/configure-node.sh --bootstrap-kubeconfig /root/kubeconfig-bootstrap
      async: 600
      poll: 5
      when: hostvars[inventory_hostname]['role'] == "secondary"
File renamed without changes.