Releases: seapath/ansible

v1.2.0

23 Oct 13:12

Key features

  • Cephadm Integration: Introduces support for Ceph cluster deployment using cephadm, including logic for offline installation, node replacement, and expansion. Only SEAPATH Debian is supported for the moment. ceph-ansible support is still available and remains the default way to configure Ceph.

  • Cockpit Plugins for Debian: Adds a new role (cockpit_plugins) and updates the prepare.sh script to fetch and install the SEAPATH Cockpit plugins (cockpit-cluster-dashboard and cockpit-cluster-vm-management) on Debian.

  • Network Role Refactoring: Replaces the external systemd-networkd role dependency with a new internal role (network_systemdnetworkd) for simpler configuration and variable management.

  • SEAPATH Yocto Enhancements:

    • Adds SR-IOV configuration support.
    • Adds the seapath_update_yocto playbook, which was missing from the main branch.
  • Debian Grub BootCount Rollback: Introduces a new role (debian_grub_bootcount) to create LVM snapshots during updates and automatically roll back to a working snapshot if a boot failure is detected.

  • VM Guest Enhancements:

    • Adds support for memory ballooning in guest.xml templates to optimize host memory.
    • Updates the vm_manager submodule to improve resilience by adding RBD host lists, allowing guests to switch Ceph monitors.
  • Hardware Customization (Welotec): The hardware_customization_welotec role no longer depends on an external network role and will not fail if an optional PRP-HSR interface is not found.

  • CI Expansion: Adds CI workflows for OracleLinux and CentOS.
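As an illustration of the memory ballooning feature mentioned above, the sketch below shows what a standard libvirt memballoon device looks like inside a guest definition. The domain name and stats period are made-up examples; the actual SEAPATH guest.xml templates may differ.

```python
# Illustrative sketch: a virtio memory balloon device in libvirt domain
# XML. The memballoon device lets the host reclaim unused guest memory.
import xml.etree.ElementTree as ET

guest_xml = """
<domain type='kvm'>
  <name>demo-vm</name>
  <devices>
    <memballoon model='virtio'>
      <stats period='10'/>
    </memballoon>
  </devices>
</domain>
"""

domain = ET.fromstring(guest_xml)
balloon = domain.find("./devices/memballoon")
print(balloon.get("model"))  # virtio
```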

Bug fixes

  • PTP Vsock: Improves the stability of the ptp_vsock service by ensuring VSOCK connections are always closed (using with syntax) and enabling unbuffered logging.

  • SNMP:

    • Fixes an issue where snmpd could become unresponsive by implementing atomic data file generation and forcing the agent to reload periodically.
    • Disables SNMPv2 when SNMPv3 is enabled.
  • Ceph:

    • Fixes a variable name typo in the replace_machine_shrink_mon playbook (Mon_to_kill -> mon_to_kill).
    • Removes the creation of an unused ceph-rbd libvirt pool.
  • VM Templates:

    • Fixes an XML closing tag in the VM template.
    • Adds a default vm_features variable to guest templates to prevent failures when it's not defined.
  • Welotec: Fixes issues to ensure the lan_hsr-prp interface comes up correctly, even without an IP address.

  • General: Numerous Ansible-lint corrections and documentation improvements across various roles.
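The VSOCK fix above relies on Python's context-manager ("with") syntax, which guarantees a socket is closed even when an error occurs. A minimal sketch of the pattern follows, using a plain AF_INET socket for portability; the real service would use AF_VSOCK on a Linux hypervisor.

```python
# Sketch of the "with" pattern: the socket is closed automatically when
# the block exits, even if an exception is raised inside it.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    with sock:
        raise RuntimeError("simulated connection error")
except RuntimeError:
    pass

# fileno() returns -1 once the descriptor has been released
print(sock.fileno())  # -1
```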

API changes

Multiple variables and defaults have been renamed or refactored with this release. Existing inventories should be updated accordingly:

  • Network Variable Sanitization:

    • The no_cluster_network logic has been refactored. The new role-specific variables are: network_systemdnetworkd_no_cluster_network (for the network_systemdnetworkd role) and network_networkdwait_no_cluster_network (for the network_networkwait role).
      Playbooks continue to use no_cluster_network and define the role variables.
    • The network_systemdnetworkd role variables have been renamed for consistency (e.g., default_config_file -> network_systemdnetworkd_default_config_file).
  • Variable Removed:

    • The deprecated extra_network_config variable has been removed. Users should use br0vlan or custom_network for custom network configurations.
  • Variables Added:

    • Example inventories now include subnet variables (e.g., admin_subnet) to allow specifying subnet masks in CIDR notation.
    • The vm_features variable defined in the VM configurations can now enable UEFI secure boot.
  • SR-IOV Variables: Variable names for SR-IOV pool creation in deploy_vms roles have been aligned. Please check role documentation.
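For illustration, a subnet variable expressed in CIDR notation can be parsed with Python's standard ipaddress module. The admin_subnet value below is a made-up example, not taken from a real inventory:

```python
# Sketch: interpreting a CIDR-style inventory variable such as
# admin_subnet. The value is hypothetical.
import ipaddress

admin_subnet = "10.0.10.0/24"  # example inventory value
net = ipaddress.ip_network(admin_subnet)

print(net.prefixlen)  # 24
print(net.netmask)    # 255.255.255.0
```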

Changelog

v1.1.3

17 Apr 07:40

Full Changelog: v1.1.2...v1.1.3


Revert "snmp: include the get snmp data logic directly into the expose script"
The pass_persist external script is blocking for snmpd, which means that every time the data is gathered (which takes about 20 seconds, every 5 minutes), snmpd becomes totally unresponsive.
This is unacceptable. We will go back to the previous way of gathering data (a cron job outside of snmpd) and solve the problem this creates in a different way.
This reverts commit d8e7692.


snmp: make snmp data generation atomic
The problem with gathering data from a cron job is that it is not synchronised with the pass_persist "refresh" logic. If the refresh happens during the 20 s of data gathering, it will read an incomplete /tmp/snmpdata.txt file.
To solve this, we make the generation of this file atomic by writing to a temporary file and renaming it, atomically, only at the end of the script.
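The technique described here can be sketched as follows; the path and helper name are illustrative, not the actual SEAPATH script. On POSIX systems, os.replace() is an atomic rename, so a reader of the target file never sees a half-written version.

```python
# Sketch of atomic file generation: write to a temporary file in the
# same directory (hence the same filesystem), then rename it over the
# target in one atomic step.
import os
import tempfile

def write_atomic(path, data):
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)  # temp file on same filesystem
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)  # atomic rename over the target
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on failure
        raise

write_atomic("snmpdata.txt", "OID data...\n")
```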


snmp: add timestamp to data file

v1.1.2

07 Apr 11:35

Full Changelog: v1.1.1...v1.1.2



fix file descriptor leak in vm_manager


snmp agent: reload after 1h
The perl snmp agent seems unstable after a few days of running time. It still runs but no longer updates the snmp tree.
We fix this by forcing snmpd to reload it every hour, so that it stays fresh.


snmp: make virt-df.sh not choke on lvm snapshot volumes


expose snmp data: use a different interval than the cron job
If the cron job that gets the snmp data and the expose script use the same interval (currently 300 s = 5 min), we run the risk that the generation of the snmp data file and the reading of that file always happen at the same time.
This commit sets the interval of the expose script to 4 min, so that we are sure those scripts don't run at the same time.


snmp: include the get snmp data logic directly in the expose script
If both pieces of logic use the same interval, there is a risk they interfere with each other.
If we set different intervals, sometimes the expose script will run just after the data gathering (and the exposed data will be fresh), and sometimes just before, in which case the data will only be fresh again after 4 + 5 min (9 min).
For the data to always be fresh, it seems best to run the get_data script just before the expose refresh, hence including it directly in the script.
This makes the cron job useless; however, since the expose script is run by snmpd, we have to grant permission to the snmp user via sudo.

v1.1.1

13 Mar 11:56

Full Changelog: v1.1.0...v1.1.1


team0_x/OVS: move role and fix bug

This logic concerns all physical machines, not just hypervisors.
In addition, this commit adds Before= and After= conditions so that the logic also works during a graceful host shutdown (before this commit, it only worked for an ovs-vswitchd.service stop).


remove backup-restore on standalone
On standalone machines, backup-restore does not make sense.


Revert "handlers: use udevadm trigger instead of restarting udev"
seapath/ansible-role-systemd-networkd#8

v1.1.0

25 Feb 15:33

Key features

  • Remove consolevm script, now replaced by vm-mgr console.
  • Add nostart options for VM deployment.
  • Update submodules to latest versions.

Bug fixes

  • Playbook improvements for ansible-lint.
  • SEAPATH Debian: add missing capabilities for pacemaker service to fix live migration.
  • SEAPATH Yocto: always keep systemd-resolved.service enabled to prevent dnsmasq.service from failing.
  • Remove obsolete code.

API changes

Multiple variables are renamed with this release. Existing inventories should be updated accordingly:

  • tmpdir --> configure_ha_tmpdir (role configure_ha)
  • ptp_network_transport --> timemaster_ptp_network_transport (role timemaster)
  • ptp_delay_mechanism --> timemaster_ptp_delay_mechanism (role timemaster)
  • hugepages --> yocto_hugepages (role yocto/hugepages)
  • On existing Debian installations, you need to install 3 packages for v1.1 to work properly:
    • python3-pip
    • python3-wheel
    • patch
      You can get these packages from the Debian website, upload them to your servers and install them manually with dpkg, or use apt if you have connectivity to a Debian mirror.

Known issues

  • SEAPATH Yocto: cukinia test "Check for file with no user and group" might fail #695

Changelog

Full Changelog: v1.0.0...v1.1.0

v1.0.0

23 Jan 13:51

Initial release

Key Features

  • Configure a SEAPATH cluster with 3 machines, supporting two configurations:
    • Two hypervisors + one observer
    • Three hypervisors
  • Cluster features include:
    • VM disk redundancy via Ceph shared storage
    • Failover scenarios managed by Pacemaker
    • Network redundancy ensured by Open vSwitch
    • VM live migration across cluster machines
  • Set up SEAPATH network configurations, including:
    • Administration network
    • Cluster network
    • PTP network
    • Inter-VM communication network
    • Additional networks customized by the end user
  • Configure time synchronisation
    • PTP synchronisation
    • NTP synchronisation
    • Time forwarding to VMs
  • Implement additional cyber hardening for Debian (Yocto hardening managed in meta-seapath).
  • Provide a VM deployment interface for SEAPATH clusters or standalone machines, supporting:
    • Configurable Libvirt XML files
    • QCOW2 QEMU files
  • ABB SSC600 SW compatibility