
Support: change independent_nonpersistent disk mode after cloud-init #369

Merged
myakove merged 6 commits into RedHatQE:main from prabinovRedhat:fix/copyoffload-independent-nonpersistent
Mar 24, 2026

Conversation


@prabinovRedhat (Collaborator) commented Mar 23, 2026

Summary

  • independent_nonpersistent disks lose data on power-off, so cloud-init data was lost when the VM was powered off after setup
  • Remove disk_mode: independent_nonpersistent from config so the disk is created as regular persistent during clone
  • Add VMWareProvider.change_disk_mode() method to change disk backing mode via ReconfigVM_Task
  • Add nonpersistent_disk_ready fixture that chains after vmware_cloud_init_ready and changes the added disk mode to independent_nonpersistent after the VM is powered off
  • Add test_check_xcopy_used to TestCopyoffloadIndependentNonpersistentDiskMigration

Test plan

  • Ran TestCopyoffloadIndependentNonpersistentDiskMigration - all tests passed
  • Pre-commit checks pass

Summary by CodeRabbit

  • New Features

    • Ability to change disk modes on VMware virtual machines.
  • Tests

    • Added a class-scoped fixture to prepare VMs with independent nonpersistent disks.
    • Added a test to verify XCOPY acceleration is used during copy-offload migration.
  • Chores

    • Removed an explicit disk_mode entry from the copy-offload test configuration.
  • Documentation

    • Added a 6-step copy-offload pattern including the new XCOPY verification step.

independent_nonpersistent disks lose data on power-off, so the disk
must be created as regular persistent during clone, then changed after
cloud-init completes and the VM is powered off.

- Remove disk_mode from config (defaults to persistent during clone)
- Add VMWareProvider.change_disk_mode() for ReconfigVM_Task
- Add nonpersistent_disk_ready fixture that chains after cloud-init

Made-with: Cursor
@qodo-code-review

Review Summary by Qodo

Fix independent_nonpersistent disk mode after cloud-init completion

🐞 Bug fix ✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Add change_disk_mode() method to VMWareProvider for disk reconfiguration
• Create nonpersistent_disk_ready fixture to change disk mode after cloud-init
• Remove disk_mode from config to create disks as persistent during clone
• Add test_check_xcopy_used test to verify XCOPY acceleration usage
Diagram

```mermaid
flowchart LR
  A["VM Clone<br/>Persistent Disk"] -->|"Cloud-init<br/>completes"| B["VM Powered Off"]
  B -->|"change_disk_mode<br/>called"| C["Disk Mode Changed<br/>to independent_nonpersistent"]
  C -->|"Migration<br/>with XCOPY"| D["Verify XCOPY<br/>Acceleration"]
```


File Changes

1. libs/providers/vmware.py ✨ Enhancement +47/-0

Add disk mode reconfiguration method

• Add change_disk_mode() method to reconfigure non-OS disk backing modes via ReconfigVM_Task
• Method filters disks by unit number to preserve OS disk (unit 0 by default)
• Supports changing disk mode to any backing type (e.g., independent_nonpersistent, persistent)
• Includes comprehensive logging and error handling for missing eligible disks



2. tests/copyoffload/conftest.py ✨ Enhancement +26/-0

Add fixture for post-cloud-init disk mode change

• Add nonpersistent_disk_ready fixture that chains after vmware_cloud_init_ready
• Fixture iterates through VMs and calls change_disk_mode() to set disks to independent_nonpersistent
• Ensures disk mode change occurs after cloud-init completes and VM is powered off
• Fixture has class scope to apply to all tests in the test class



3. tests/copyoffload/test_copyoffload_migration.py 🧪 Tests +17/-1

Update fixtures and add XCOPY verification test

• Replace vmware_cloud_init_ready fixture with nonpersistent_disk_ready in test class decorators
• Add new test_check_xcopy_used() test method to verify XCOPY acceleration was used
• Test calls verify_xcopy_used() with expected_xcopy_used=True parameter
• Test positioned after test_migrate_vms() to validate migration results



4. tests/tests_config/config.py ⚙️ Configuration changes +0/-1

Remove disk_mode from test configuration

• Remove disk_mode: "independent_nonpersistent" from add_disks configuration
• Disk now defaults to persistent mode during clone, allowing cloud-init to complete successfully
• Disk mode will be changed to independent_nonpersistent after VM power-off via fixture




@qodo-code-review

qodo-code-review Bot commented Mar 23, 2026

Code Review by Qodo

🐞 Bugs (4) 📘 Rule violations (4) 📎 Requirement gaps (0) 📐 Spec deviations (0)



Action required

1. multus_network_name requested twice 📘 Rule violation ⚙ Maintainability
Description
The class applies multus_network_name via @pytest.mark.usefixtures() while also requesting it as
a test parameter, violating the single-request fixture pattern. This can cause confusing/duplicated
fixture activation and hides which tests actually need the value.
Code

tests/copyoffload/test_copyoffload_migration.py[R1922-1924]

+@pytest.mark.usefixtures(
+    "nonpersistent_disk_ready", "multus_network_name", "copyoffload_config", "cleanup_migrated_vms"
+)
Evidence
PR Compliance ID 20 forbids requesting the same fixture via both usefixtures (side-effects) and
function parameters (values). The modified decorator includes multus_network_name
(tests/copyoffload/test_copyoffload_migration.py[1922-1924]), and the class’s
test_create_networkmap requests multus_network_name as a parameter
(tests/copyoffload/test_copyoffload_migration.py[1976-1985]).

CLAUDE.md
tests/copyoffload/test_copyoffload_migration.py[1922-1924]
tests/copyoffload/test_copyoffload_migration.py[1976-1985]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`multus_network_name` is requested via both `@pytest.mark.usefixtures()` and as a test method parameter.
## Issue Context
This violates the project fixture request pattern: parameters are for values, `usefixtures` is for side-effects, never both for the same fixture.
## Fix Focus Areas
- tests/copyoffload/test_copyoffload_migration.py[1922-1924]
- tests/copyoffload/test_copyoffload_migration.py[1976-1985]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Duplicate test_check_xcopy_used logic 📘 Rule violation ⚙ Maintainability
Description
A new test_check_xcopy_used method was added with logic that already exists elsewhere in the same
module, increasing copy/paste maintenance burden. This violates the requirement to extract identical
logic used 2+ times into a shared helper.
Code

tests/copyoffload/test_copyoffload_migration.py[R2039-2051]

+    def test_check_xcopy_used(self, ocp_admin_client: DynamicClient, target_namespace: str) -> None:
+        """Verify XCOPY acceleration was used for all disks.
+
+        Args:
+            ocp_admin_client (DynamicClient): OpenShift admin client.
+            target_namespace (str): Namespace where populate pods exist.
+        """
+        verify_xcopy_used(
+            ocp_admin_client=ocp_admin_client,
+            plan=self.plan_resource,
+            target_namespace=target_namespace,
+            expected_xcopy_used=True,
+        )
Evidence
PR Compliance ID 9 requires extracting identical logic used 2+ times into helpers. The newly added
test_check_xcopy_used implementation (calling verify_xcopy_used(... expected_xcopy_used=True))
is identical to an existing test_check_xcopy_used in the same file.

CLAUDE.md
tests/copyoffload/test_copyoffload_migration.py[2039-2051]
tests/copyoffload/test_copyoffload_migration.py[1872-1884]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The newly added `test_check_xcopy_used` duplicates an existing identical `verify_xcopy_used(...)` call pattern already present in `tests/copyoffload/test_copyoffload_migration.py`.
## Issue Context
Compliance requires extracting identical logic used 2+ times into a shared helper to avoid copy/paste drift.
## Fix Focus Areas
- tests/copyoffload/test_copyoffload_migration.py[2039-2051]
- tests/copyoffload/test_copyoffload_migration.py[1872-1884]
- tests/copyoffload/test_copyoffload_migration.py[1690-1702]



3. Fixture power-off not enforced 🐞 Bug ⛯ Reliability
Description
nonpersistent_disk_ready calls change_disk_mode() which raises if the VM is not powered off, but the
fixture itself never powers off VMs. If a plan uses source_vm_power="on", cloud-init readiness can
leave the VM on and the class setup will fail with ValueError.
Code

tests/copyoffload/conftest.py[R359-382]

+@pytest.fixture(scope="class")
+def nonpersistent_disk_ready(
+    vmware_cloud_init_ready: None,
+    prepared_plan: dict[str, Any],
+    source_provider: VMWareProvider,
+) -> None:
+    """Change added disk mode to independent_nonpersistent after cloud-init completes.
+
+    independent_nonpersistent disks lose data on power-off, so the disk must be
+    created as regular persistent during clone (for cloud-init), then changed
+    to independent_nonpersistent after the VM is powered off.
+
+    Args:
+        vmware_cloud_init_ready (None): Ensures cloud-init has finished and VM is off.
+        prepared_plan (dict[str, Any]): Processed test plan with VM data.
+        source_provider (VMWareProvider): The VMware source provider instance.
+    """
+    for vm_data in prepared_plan["virtual_machines"]:
+        vm_name = vm_data["name"]
+        provider_vm_api = prepared_plan["source_vms_data"][vm_name]["provider_vm_api"]
+        source_provider.change_disk_mode(
+            vm=provider_vm_api,
+            disk_mode="independent_nonpersistent",
+        )
Evidence
wait_for_vmware_cloud_init_all_vms propagates vm_data['source_vm_power'] into
wait_for_cloud_init(target_power_state=...), and wait_for_cloud_init only powers off the VM in its
finally block when target_power_state == 'off'. Since nonpersistent_disk_ready doesn’t call stop_vm,
it doesn’t guarantee the precondition required by change_disk_mode.

tests/copyoffload/conftest.py[359-382]
utilities/copyoffload_migration.py[88-122]
utilities/copyoffload_migration.py[124-214]
libs/providers/vmware.py[810-833]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`nonpersistent_disk_ready` relies on an indirect guarantee that VMs are powered off before calling `change_disk_mode()`. If a plan leaves VMs on after cloud-init, the fixture will fail during setup.
### Issue Context
`change_disk_mode()` explicitly requires `poweredOff`.
### Fix Focus Areas
- Ensure the fixture explicitly powers off each VM (and/or waits for `poweredOff`) before calling `change_disk_mode()`.
- Alternatively, change `wait_for_vmware_cloud_init_all_vms()` usage for this fixture to always request `target_power_state='off'` regardless of plan settings.
- tests/copyoffload/conftest.py[359-382]
- utilities/copyoffload_migration.py[88-214]
- libs/providers/vmware.py[810-833]
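The first suggested fix — have the fixture enforce the power-off precondition itself — can be sketched with stand-in objects in place of pyVmomi VMs and the real VMWareProvider (`FakeVM`, `FakeProvider`, and the `power_state` attribute are illustrative, not the repo's API):

```python
class FakeVM:
    def __init__(self, name, power_state):
        self.name = name
        self.power_state = power_state  # "poweredOn" or "poweredOff"


class FakeProvider:
    def __init__(self):
        self.changed = []

    def stop_vm(self, vm):
        vm.power_state = "poweredOff"

    def change_disk_mode(self, vm, disk_mode):
        # Mirrors the real method's contract: raises unless the VM is off.
        if vm.power_state != "poweredOff":
            raise ValueError(f"VM '{vm.name}' must be powered off, got '{vm.power_state}'")
        self.changed.append((vm.name, disk_mode))


def nonpersistent_disk_ready(vms, provider):
    """Fixture body: explicitly power off each VM instead of assuming
    cloud-init readiness already left it off."""
    for vm in vms:
        if vm.power_state != "poweredOff":
            provider.stop_vm(vm)  # enforce the precondition
        provider.change_disk_mode(vm=vm, disk_mode="independent_nonpersistent")


vms = [FakeVM("vm-a", "poweredOn"), FakeVM("vm-b", "poweredOff")]
provider = FakeProvider()
nonpersistent_disk_ready(vms, provider)
print(provider.changed)
```

The explicit `stop_vm` call makes the fixture correct regardless of the plan's `source_vm_power` setting, rather than depending on `wait_for_cloud_init`'s finally block.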



4. Extra test_check_xcopy_used step 📘 Rule violation ⚙ Maintainability
Description
A new test method test_check_xcopy_used was added, introducing an additional step outside the
required 5-step class-based migration test structure. This can break the standardized incremental
workflow and violates the mandated naming/step conventions.
Code

tests/copyoffload/test_copyoffload_migration.py[R2039-2051]

+    def test_check_xcopy_used(self, ocp_admin_client: DynamicClient, target_namespace: str) -> None:
+        """Verify XCOPY acceleration was used for all disks.
+
+        Args:
+            ocp_admin_client (DynamicClient): OpenShift admin client.
+            target_namespace (str): Namespace where populate pods exist.
+        """
+        verify_xcopy_used(
+            ocp_admin_client=ocp_admin_client,
+            plan=self.plan_resource,
+            target_namespace=target_namespace,
+            expected_xcopy_used=True,
+        )
Evidence
PR Compliance ID 26 requires the standard 5 step methods (test_create_storagemap,
test_create_networkmap, test_create_plan, test_migrate_vms, test_check_vms). The PR adds a
new step method test_check_xcopy_used, which deviates from that required structure.

CLAUDE.md
tests/copyoffload/test_copyoffload_migration.py[2039-2051]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A new test step method `test_check_xcopy_used` was added to an incremental migration test class, deviating from the required standardized 5-step class structure.
## Issue Context
Compliance requires migration tests to follow the fixed 5-step naming and ordering pattern. Adding additional `test_*` steps in the class violates this convention unless an explicit documented exception exists.
## Fix Focus Areas
- tests/copyoffload/test_copyoffload_migration.py[2039-2051]



5. Skips wrong disk units 🐞 Bug ✓ Correctness
Description
change_disk_mode() excludes disks by unitNumber only (default skipping 0), but unit numbers are
per-controller and the added disk can legitimately be unit 0, so the fixture may not change the
intended disk mode.
Code

libs/providers/vmware.py[R830-834]

+        disks: list[vim.vm.device.VirtualDisk] = [
+            dev
+            for dev in vm.config.hardware.device
+            if isinstance(dev, vim.vm.device.VirtualDisk) and dev.unitNumber not in skip_unit_numbers
+        ]
Evidence
_get_add_disk_device_specs() picks the first available unit on the first SCSI controller, which can
be 0 when that controller has no disks; with skip_unit_numbers=(0,) the newly added disk is filtered
out and never edited. This mismatch directly impacts the new nonpersistent_disk_ready fixture that
relies on change_disk_mode to flip the added disk.

libs/providers/vmware.py[830-834]
libs/providers/vmware.py[879-898]
tests/copyoffload/conftest.py[359-382]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`VMWareProvider.change_disk_mode()` currently selects disks to edit using only `unitNumber` and skips unit 0 by default. Unit numbers are per SCSI controller and the newly added disk can be assigned unit 0, causing the method to skip the very disk it is intended to modify.
### Issue Context
The added disk unit is chosen as the first free unit on the selected controller. When the controller has no disks, that unit is `0`, which is skipped by default.
### Fix Focus Areas
- libs/providers/vmware.py[810-855]
- libs/providers/vmware.py[879-898]
### What to change
- Identify the OS disk more robustly (e.g., by boot order, by matching the disk backing fileName to the VM’s primary disk, by controllerKey+unitNumber tuple for the boot disk, or by excluding the disk that matches the boot device).
- Alternatively (and simpler for this test use-case), change `change_disk_mode()` to accept a predicate/selector (e.g., only disks whose backing fileName contains the cloned VM added-disk naming pattern) or accept an explicit list of device keys to edit.
- If you keep `skip_unit_numbers`, treat it as a set of `(controllerKey, unitNumber)` pairs rather than unitNumber alone.
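The pair-based variant from the last bullet can be illustrated with plain stand-in objects (`SimpleNamespace` substitutes for `vim.vm.device.VirtualDisk`; the controller keys are made-up values):

```python
from types import SimpleNamespace


def select_disks(devices, skip_controller_units):
    """Skip disks by (controllerKey, unitNumber) pair, since unit numbers
    are only unique per controller."""
    return [
        d for d in devices
        if (d.controllerKey, d.unitNumber) not in skip_controller_units
    ]


devices = [
    SimpleNamespace(controllerKey=1000, unitNumber=0),  # OS disk on the first controller
    SimpleNamespace(controllerKey=1001, unitNumber=0),  # added disk: also unit 0, second controller
]

# Skipping by unitNumber alone (skip 0) would filter out BOTH disks here.
# Pair-based skipping keeps the added disk eligible.
eligible = select_disks(devices, skip_controller_units={(1000, 0)})
print([(d.controllerKey, d.unitNumber) for d in eligible])  # [(1001, 0)]
```

This directly addresses the failure mode in the evidence: the added disk landing on unit 0 of a different controller no longer collides with the OS-disk skip.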



6. No power-off enforcement 🐞 Bug ⛯ Reliability
Description
change_disk_mode() requires the VM to be powered off (per docstring) but doesn’t validate or enforce
it, so ReconfigVM_Task can fail if the VM is on and break the class fixture chain.
Code

libs/providers/vmware.py[R818-852]

+        The VM must be powered off before calling this method.
+
+        Args:
+            vm (vim.VirtualMachine): The vSphere VM object to reconfigure.
+            disk_mode (str): Target disk mode (e.g., "independent_nonpersistent",
+                "independent_persistent", "persistent").
+            skip_unit_numbers (tuple[int, ...]): Unit numbers to skip.
+                Defaults to (0,) to preserve the OS disk.
+
+        Raises:
+            ValueError: If no eligible disks are found on the VM.
+        """
+        disks: list[vim.vm.device.VirtualDisk] = [
+            dev
+            for dev in vm.config.hardware.device
+            if isinstance(dev, vim.vm.device.VirtualDisk) and dev.unitNumber not in skip_unit_numbers
+        ]
+
+        if not disks:
+            raise ValueError(f"No eligible disks found on VM '{vm.name}' (skipping units {skip_unit_numbers})")
+
+        device_changes: list[vim.vm.device.VirtualDeviceSpec] = []
+        for disk in disks:
+            spec = vim.vm.device.VirtualDeviceSpec()
+            spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
+            spec.device = disk
+            spec.device.backing.diskMode = disk_mode
+            device_changes.append(spec)
+            LOGGER.info(f"VM '{vm.name}': changing disk unit {disk.unitNumber} mode to '{disk_mode}'")
+
+        config_spec = vim.vm.ConfigSpec()
+        config_spec.deviceChange = device_changes
+
+        task = vm.ReconfigVM_Task(spec=config_spec)
+        self.wait_task(
Evidence
The method documents the power-off requirement but immediately calls ReconfigVM_Task with no guard.
In this repo, stop_vm() only powers off when called elsewhere; change_disk_mode itself doesn’t check
vm.runtime.powerState, so any caller that violates the contract will get an opaque task failure.

libs/providers/vmware.py[816-855]
libs/providers/vmware.py[325-332]
utilities/copyoffload_migration.py[206-214]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`change_disk_mode()` assumes the VM is powered off but does not validate it. If the VM is powered on, `ReconfigVM_Task` may fail and the test fixture will error.
### Issue Context
The method is used in a class-scoped fixture (`nonpersistent_disk_ready`) and may be reused elsewhere. Defensive validation will make failures immediate and actionable.
### Fix Focus Areas
- libs/providers/vmware.py[810-855]
### What to change
- At the start of `change_disk_mode()`, check `getattr(vm.runtime, "powerState", None)` and:
- either raise a clear `ValueError`/`RuntimeError` if not `"poweredOff"`, or
- optionally call `self.stop_vm(vm)` and/or wait until it is off before proceeding.
- Log the current power state in the error message.
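The guard described above is a few lines; here is a sketch using `SimpleNamespace` in place of `vim.VirtualMachine` (the helper name `assert_powered_off` is hypothetical — in the real fix this check would sit at the top of `change_disk_mode()`):

```python
from types import SimpleNamespace


def assert_powered_off(vm):
    """Fail fast with an actionable message instead of letting
    ReconfigVM_Task fail opaquely on a powered-on VM."""
    state = getattr(vm.runtime, "powerState", None)
    if state != "poweredOff":
        raise ValueError(
            f"VM '{vm.name}' must be powered off to change disk mode, "
            f"current power state: {state!r}"
        )


vm_off = SimpleNamespace(name="vm-b", runtime=SimpleNamespace(powerState="poweredOff"))
vm_on = SimpleNamespace(name="vm-a", runtime=SimpleNamespace(powerState="poweredOn"))

assert_powered_off(vm_off)  # passes silently
try:
    assert_powered_off(vm_on)
except ValueError as exc:
    print(exc)  # names the VM and its current power state
```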




Remediation recommended

7. Missing vSphere reconnect 🐞 Bug ⛯ Reliability
Description
VMWareProvider.change_disk_mode() calls ReconfigVM_Task without first ensuring the provider is
connected, unlike other VMwareProvider methods; a stale/expired session can cause disk-mode changes
to fail intermittently in long-running test runs.
Code

libs/providers/vmware.py[R855-858]

+        task = vm.ReconfigVM_Task(spec=config_spec)
+        self.wait_task(
+            task=task, action_name=f"Changing disk mode to '{disk_mode}' on VM '{vm.name}'", wait_timeout=120
+        )
Evidence
In vmware provider, other vSphere API operations explicitly trigger
self.reconnect_if_not_connected before interacting with vCenter. The new change_disk_mode()
method does not, and calls ReconfigVM_Task directly, so it can fail if the connection dropped
between earlier calls and the reconfig.

libs/providers/vmware.py[200-213]
libs/providers/vmware.py[365-377]
libs/providers/vmware.py[728-732]
libs/providers/vmware.py[810-859]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`VMWareProvider.change_disk_mode()` performs vSphere reconfiguration without checking/refreshing the vCenter session, unlike other provider methods.
### Issue Context
The provider already has a `reconnect_if_not_connected` pattern used before API actions.
### Fix Focus Areas
- libs/providers/vmware.py[810-859]
- Add `self.reconnect_if_not_connected` near the start of `change_disk_mode()` (before inspecting VM devices and before `ReconfigVM_Task`).
- Optionally add a short, explicit log line indicating a reconnect attempt (consistent with existing behavior).
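The reconnect-before-API-call pattern can be sketched with a fake provider. The property-style shape of `reconnect_if_not_connected` is an assumption inferred from the evidence (the existing code references it without call parentheses); the real provider's session handling may differ.

```python
class FakeProvider:
    def __init__(self):
        self.connected = False
        self.reconnects = 0

    @property
    def reconnect_if_not_connected(self):
        # Assumed shape: accessing the property refreshes the session
        # only when it has dropped.
        if not self.connected:
            self.connected = True
            self.reconnects += 1
        return self.connected

    def change_disk_mode(self, vm_name, disk_mode):
        # Refresh the vCenter session before touching the API,
        # consistent with the provider's other methods.
        self.reconnect_if_not_connected
        return f"{vm_name}:{disk_mode}"


p = FakeProvider()
p.change_disk_mode("vm-a", "independent_nonpersistent")  # triggers one reconnect
p.change_disk_mode("vm-a", "persistent")                 # session still live, no reconnect
print(p.reconnects)  # 1
```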



8. Edits all data disks 🐞 Bug ✓ Correctness
Description
change_disk_mode() updates diskMode on every VirtualDisk except those with unitNumber in
skip_unit_numbers; the new fixture’s intent is to change the “added disk” mode, but this
implementation will also change any pre-existing non-OS disks on the template VM.
Code

libs/providers/vmware.py[R834-849]

+        disks: list[vim.vm.device.VirtualDisk] = [
+            dev
+            for dev in vm.config.hardware.device
+            if isinstance(dev, vim.vm.device.VirtualDisk) and dev.unitNumber not in skip_unit_numbers
+        ]
+
+        if not disks:
+            raise ValueError(f"No eligible disks found on VM '{vm.name}' (skipping units {skip_unit_numbers})")
+
+        device_changes: list[vim.vm.device.VirtualDeviceSpec] = []
+        for disk in disks:
+            spec = vim.vm.device.VirtualDeviceSpec()
+            spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
+            spec.device = disk
+            spec.device.backing.diskMode = disk_mode
+            device_changes.append(spec)
Evidence
The fixture is documented as changing the mode of the added disk after cloud-init, but the
implementation selects disks solely by isinstance(VirtualDisk) and `unitNumber not in
skip_unit_numbers, then edits each disk’s backing.diskMode`. If the source VM/template has
multiple non-OS disks, all will become independent_nonpersistent, which changes test preconditions
and may affect unrelated validations.

tests/copyoffload/conftest.py[365-382]
libs/providers/vmware.py[834-851]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The new helper changes the disk mode for *all* eligible non-unit-0 disks, but the test/fixture intent is to change the mode for the *added* disk(s).
### Issue Context
Depending on the template, there can be multiple non-OS disks; changing all of them to independent_nonpersistent can alter test assumptions.
### Fix Focus Areas
- libs/providers/vmware.py[810-851]
- Add an optional selector to narrow which disks are edited (e.g., explicit `unit_numbers`, `device_keys`, or a predicate function).
- At minimum, allow the caller to pass `skip_unit_numbers` derived from the VM’s actual layout.
- tests/copyoffload/conftest.py[359-382]
- If only the newly added disk should change, compute the unit number(s) to edit and pass them explicitly (or pass a selector) rather than editing all non-unit-0 disks.
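The predicate-selector option can be sketched as follows; `SimpleNamespace` stands in for vim disk devices, and the device keys and backing file names are invented for illustration:

```python
from types import SimpleNamespace


def change_disk_mode(devices, disk_mode, selector):
    """Edit only disks matching the caller-supplied predicate,
    instead of every non-unit-0 disk."""
    edited = []
    for disk in devices:
        if selector(disk):
            disk.backing.diskMode = disk_mode
            edited.append(disk)
    if not edited:
        raise ValueError("No disks matched the selector")
    return edited


def make_disk(key, file_name, mode="persistent"):
    return SimpleNamespace(key=key, backing=SimpleNamespace(fileName=file_name, diskMode=mode))


devices = [
    make_disk(2000, "[ds] vm/vm.vmdk"),        # OS disk
    make_disk(2001, "[ds] vm/vm_1.vmdk"),      # newly added disk
    make_disk(2002, "[ds] vm/data-old.vmdk"),  # pre-existing data disk: must stay untouched
]

# The caller targets exactly the added disk (key 2001) with a predicate.
change_disk_mode(devices, "independent_nonpersistent", selector=lambda d: d.key == 2001)
print([d.backing.diskMode for d in devices])
# ['persistent', 'independent_nonpersistent', 'persistent']
```

A predicate keeps the provider method generic while letting the fixture pin down the one disk it created, so pre-existing data disks on the template keep their mode.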



9. nonpersistent_disk_ready fixture is action-named 📘 Rule violation ⚙ Maintainability
Description
The newly added fixture name describes a setup action/readiness state rather than a noun-like
resource, reducing clarity and making fixture usage less self-documenting. This violates the fixture
naming rules intended to keep fixtures explicit and readable.
Code

tests/copyoffload/conftest.py[R359-364]

+@pytest.fixture(scope="class")
+def nonpersistent_disk_ready(
+    vmware_cloud_init_ready: None,
+    prepared_plan: dict[str, Any],
+    source_provider: VMWareProvider,
+) -> None:
Evidence
PR Compliance ID 14 requires fixture names to be nouns representing resources and not setup actions.
The added fixture nonpersistent_disk_ready performs an action (changing disk mode) and is
requested via usefixtures, but its name reads like a readiness/setup step rather than a resource.

CLAUDE.md
tests/copyoffload/conftest.py[359-364]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The fixture name `nonpersistent_disk_ready` describes a setup/readiness action rather than a noun-like resource name, which reduces readability and violates fixture naming conventions.
## Issue Context
This fixture performs a side effect (changing VM disk mode) and is used via `@pytest.mark.usefixtures(...)`. Even for side-effect fixtures, names should be noun-like to reflect the resource/state being provided.
## Fix Focus Areas
- tests/copyoffload/conftest.py[359-382]
- tests/copyoffload/test_copyoffload_migration.py[1922-1924]



10. Undocumented VmCloneError 🐞 Bug ⚙ Maintainability
Description
change_disk_mode() documents only ValueError, but it can also raise VmCloneError via wait_task()
when ReconfigVM_Task fails. This breaks the method’s declared contract for callers relying on the
docstring.
Code

libs/providers/vmware.py[R827-829]

+        Raises:
+            ValueError: If the VM is not powered off or no eligible disks are found.
+        """
Evidence
change_disk_mode calls wait_task without catching exceptions, and wait_task raises VmCloneError on
vSphere task errors; however, the change_disk_mode docstring lists only ValueError.

libs/providers/vmware.py[816-859]
libs/providers/vmware.py[289-324]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`change_disk_mode()` can raise `VmCloneError` (from `wait_task`) but the docstring lists only `ValueError`.
### Issue Context
Callers may use the docstring to implement exception handling.
### Fix Focus Areas
- Update the `Raises:` section of `change_disk_mode()` to include `VmCloneError` (and any other propagated exceptions you intend to expose).
- libs/providers/vmware.py[816-859]
- libs/providers/vmware.py[289-324]




Previous review results

Review updated until commit c73e7b6


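A defensive version of the fixture body could enforce the precondition itself rather than trusting the plan's `source_vm_power` setting. A minimal sketch with stand-in objects: the helper name is hypothetical, and it assumes `stop_vm(vm=...)` behaves as elsewhere in the provider and that `provider_vm_api` exposes `runtime.powerState`:

```python
def power_off_and_flip_disk_mode(prepared_plan, source_provider):
    """Power off each source VM if needed, then flip its disk mode.

    Hypothetical helper the fixture could delegate to; the power-state check
    is an assumption about the pyVmomi VM object shape.
    """
    for vm_data in prepared_plan["virtual_machines"]:
        vm = prepared_plan["source_vms_data"][vm_data["name"]]["provider_vm_api"]
        # change_disk_mode() requires poweredOff; enforce it explicitly instead
        # of relying on cloud-init readiness having powered the VM off.
        if vm.runtime.powerState != "poweredOff":
            source_provider.stop_vm(vm=vm)
        source_provider.change_disk_mode(vm=vm, disk_mode="independent_nonpersistent")
```

The fixture body would then reduce to a single call to this helper, making the precondition explicit at the call site.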


4. Extra test_check_xcopy_used step 📘 Rule violation ⚙ Maintainability
Description
A new test method test_check_xcopy_used was added, introducing an additional step outside the
required 5-step class-based migration test structure. This can break the standardized incremental
workflow and violates the mandated naming/step conventions.
Code

tests/copyoffload/test_copyoffload_migration.py[R2039-2051]

+    def test_check_xcopy_used(self, ocp_admin_client: DynamicClient, target_namespace: str) -> None:
+        """Verify XCOPY acceleration was used for all disks.
+
+        Args:
+            ocp_admin_client (DynamicClient): OpenShift admin client.
+            target_namespace (str): Namespace where populate pods exist.
+        """
+        verify_xcopy_used(
+            ocp_admin_client=ocp_admin_client,
+            plan=self.plan_resource,
+            target_namespace=target_namespace,
+            expected_xcopy_used=True,
+        )
Evidence
PR Compliance ID 26 requires the standard 5 step methods (test_create_storagemap,
test_create_networkmap, test_create_plan, test_migrate_vms, test_check_vms). The PR adds a
new step method test_check_xcopy_used, which deviates from that required structure.

CLAUDE.md
tests/copyoffload/test_copyoffload_migration.py[2039-2051]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A new test step method `test_check_xcopy_used` was added to an incremental migration test class, deviating from the required standardized 5-step class structure.
## Issue Context
Compliance requires migration tests to follow the fixed 5-step naming and ordering pattern. Adding additional `test_*` steps in the class violates this convention unless an explicit documented exception exists.
## Fix Focus Areas
- tests/copyoffload/test_copyoffload_migration.py[2039-2051]



5. Skips wrong disk units 🐞 Bug ✓ Correctness
Description
change_disk_mode() excludes disks by unitNumber only (default skipping 0), but unit numbers are
per-controller and the added disk can legitimately be unit 0, so the fixture may not change the
intended disk mode.
Code

libs/providers/vmware.py[R830-834]

+        disks: list[vim.vm.device.VirtualDisk] = [
+            dev
+            for dev in vm.config.hardware.device
+            if isinstance(dev, vim.vm.device.VirtualDisk) and dev.unitNumber not in skip_unit_numbers
+        ]
Evidence
_get_add_disk_device_specs() picks the first available unit on the first SCSI controller, which can
be 0 when that controller has no disks; with skip_unit_numbers=(0,) the newly added disk is filtered
out and never edited. This mismatch directly impacts the new nonpersistent_disk_ready fixture that
relies on change_disk_mode to flip the added disk.

libs/providers/vmware.py[830-834]
libs/providers/vmware.py[879-898]
tests/copyoffload/conftest.py[359-382]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`VMWareProvider.change_disk_mode()` currently selects disks to edit using only `unitNumber` and skips unit 0 by default. Unit numbers are per SCSI controller and the newly added disk can be assigned unit 0, causing the method to skip the very disk it is intended to modify.
### Issue Context
The added disk unit is chosen as the first free unit on the selected controller. When the controller has no disks, that unit is `0`, which is skipped by default.
### Fix Focus Areas
- libs/providers/vmware.py[810-855]
- libs/providers/vmware.py[879-898]
### What to change
- Identify the OS disk more robustly (e.g., by boot order, by matching the disk backing fileName to the VM’s primary disk, by controllerKey+unitNumber tuple for the boot disk, or by excluding the disk that matches the boot device).
- Alternatively (and simpler for this test use-case), change `change_disk_mode()` to accept a predicate/selector (e.g., only disks whose backing fileName contains the cloned VM added-disk naming pattern) or accept an explicit list of device keys to edit.
- If you keep `skip_unit_numbers`, treat it as a set of `(controllerKey, unitNumber)` pairs rather than unitNumber alone.

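Treating the skip list as controller/unit pairs avoids the cross-controller ambiguity. A sketch of just the selection step (the disk objects stand in for `vim.vm.device.VirtualDisk` devices, and the function name is illustrative):

```python
def select_disks_to_edit(disks, skip_disk_slots):
    """Return disks whose (controllerKey, unitNumber) slot is not skipped.

    Unit numbers are only unique per controller, so the skip set must pair
    them with the controller key to reliably identify the OS disk.
    """
    return [
        disk
        for disk in disks
        if (disk.controllerKey, disk.unitNumber) not in skip_disk_slots
    ]
```

With this shape, an added disk that lands on unit 0 of a second controller is still selected, because only the boot disk's exact (controller, unit) slot is excluded.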


6. No power-off enforcement 🐞 Bug ⛯ Reliability
Description
change_disk_mode() requires the VM to be powered off (per docstring) but doesn’t validate or enforce
it, so ReconfigVM_Task can fail if the VM is on and break the class fixture chain.
Code

libs/providers/vmware.py[R818-852]

+        The VM must be powered off before calling this method.
+
+        Args:
+            vm (vim.VirtualMachine): The vSphere VM object to reconfigure.
+            disk_mode (str): Target disk mode (e.g., "independent_nonpersistent",
+                "independent_persistent", "persistent").
+            skip_unit_numbers (tuple[int, ...]): Unit numbers to skip.
+                Defaults to (0,) to preserve the OS disk.
+
+        Raises:
+            ValueError: If no eligible disks are found on the VM.
+        """
+        disks: list[vim.vm.device.VirtualDisk] = [
+            dev
+            for dev in vm.config.hardware.device
+            if isinstance(dev, vim.vm.device.VirtualDisk) and dev.unitNumber not in skip_unit_numbers
+        ]
+
+        if not disks:
+            raise ValueError(f"No eligible disks found on VM '{vm.name}' (skipping units {skip_unit_numbers})")
+
+        device_changes: list[vim.vm.device.VirtualDeviceSpec] = []
+        for disk in disks:
+            spec = vim.vm.device.VirtualDeviceSpec()
+            spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
+            spec.device = disk
+            spec.device.backing.diskMode = disk_mode
+            device_changes.append(spec)
+            LOGGER.info(f"VM '{vm.name}': changing disk unit {disk.unitNumber} mode to '{disk_mode}'")
+
+        config_spec = vim.vm.ConfigSpec()
+        config_spec.deviceChange = device_changes
+
+        task = vm.ReconfigVM_Task(spec=config_spec)
+        self.wait_task(
Evidence
The method documents the power-off requirement but immediately calls ReconfigVM_Task with no guard.
In this repo, stop_vm() only powers off when called elsewhere; change_disk_mode itself doesn’t check
vm.runtime.powerState, so any caller that violates the contract will get an opaque task failure.

libs/providers/vmware.py[816-855]
libs/providers/vmware.py[325-332]
utilities/copyoffload_migration.py[206-214]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`change_disk_mode()` assumes the VM is powered off but does not validate it. If the VM is powered on, `ReconfigVM_Task` may fail and the test fixture will error.
### Issue Context
The method is used in a class-scoped fixture (`nonpersistent_disk_ready`) and may be reused elsewhere. Defensive validation will make failures immediate and actionable.
### Fix Focus Areas
- libs/providers/vmware.py[810-855]
### What to change
- At the start of `change_disk_mode()`, check `getattr(vm.runtime, "powerState", None)` and:
  - either raise a clear `ValueError`/`RuntimeError` if not `"poweredOff"`, or
  - optionally call `self.stop_vm(vm)` and/or wait until it is off before proceeding.
- Log the current power state in the error message.

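The suggested guard can be a short fail-fast check at the top of the method. A sketch (the helper name is hypothetical; it assumes the VM object exposes `name` and `runtime.powerState` as pyVmomi VMs do):

```python
def ensure_powered_off(vm):
    """Fail fast with an actionable message if the VM is not powered off."""
    power_state = getattr(vm.runtime, "powerState", None)
    if power_state != "poweredOff":
        raise ValueError(
            f"Cannot change disk mode on VM '{vm.name}': "
            f"expected power state 'poweredOff', got '{power_state}'"
        )
```

Calling this before building the `ConfigSpec` turns an opaque `ReconfigVM_Task` failure into an immediate, named error that includes the offending power state.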



Remediation recommended
7. Missing vSphere reconnect 🐞 Bug ⛯ Reliability
Description
VMWareProvider.change_disk_mode() calls ReconfigVM_Task without first ensuring the provider is
connected, unlike other VMwareProvider methods; a stale/expired session can cause disk-mode changes
to fail intermittently in long-running test runs.
Code

libs/providers/vmware.py[R855-858]

+        task = vm.ReconfigVM_Task(spec=config_spec)
+        self.wait_task(
+            task=task, action_name=f"Changing disk mode to '{disk_mode}' on VM '{vm.name}'", wait_timeout=120
+        )
Evidence
In vmware provider, other vSphere API operations explicitly trigger
self.reconnect_if_not_connected before interacting with vCenter. The new change_disk_mode()
method does not, and calls ReconfigVM_Task directly, so it can fail if the connection dropped
between earlier calls and the reconfig.

libs/providers/vmware.py[200-213]
libs/providers/vmware.py[365-377]
libs/providers/vmware.py[728-732]
libs/providers/vmware.py[810-859]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`VMWareProvider.change_disk_mode()` performs vSphere reconfiguration without checking/refreshing the vCenter session, unlike other provider methods.
### Issue Context
The provider already has a `reconnect_if_not_connected` pattern used before API actions.
### Fix Focus Areas
- libs/providers/vmware.py[810-859]
  - Add `self.reconnect_if_not_connected` near the start of `change_disk_mode()` (before inspecting VM devices and before `ReconfigVM_Task`).
  - Optionally add a short, explicit log line indicating a reconnect attempt (consistent with existing behavior).



8. Edits all data disks 🐞 Bug ✓ Correctness
Description
change_disk_mode() updates diskMode on every VirtualDisk except those with unitNumber in
skip_unit_numbers; the new fixture’s intent is to change the “added disk” mode, but this
implementation will also change any pre-existing non-OS disks on the template VM.
Code

libs/providers/vmware.py[R834-849]

+        disks: list[vim.vm.device.VirtualDisk] = [
+            dev
+            for dev in vm.config.hardware.device
+            if isinstance(dev, vim.vm.device.VirtualDisk) and dev.unitNumber not in skip_unit_numbers
+        ]
+
+        if not disks:
+            raise ValueError(f"No eligible disks found on VM '{vm.name}' (skipping units {skip_unit_numbers})")
+
+        device_changes: list[vim.vm.device.VirtualDeviceSpec] = []
+        for disk in disks:
+            spec = vim.vm.device.VirtualDeviceSpec()
+            spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
+            spec.device = disk
+            spec.device.backing.diskMode = disk_mode
+            device_changes.append(spec)
Evidence
The fixture is documented as changing the mode of the added disk after cloud-init, but the
implementation selects disks solely by isinstance(VirtualDisk) and `unitNumber not in
skip_unit_numbers`, then edits each disk's `backing.diskMode`. If the source VM/template has
multiple non-OS disks, all will become independent_nonpersistent, which changes test preconditions
and may affect unrelated validations.

tests/copyoffload/conftest.py[365-382]
libs/providers/vmware.py[834-851]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The new helper changes the disk mode for *all* eligible non-unit-0 disks, but the test/fixture intent is to change the mode for the *added* disk(s).
### Issue Context
Depending on the template, there can be multiple non-OS disks; changing all of them to independent_nonpersistent can alter test assumptions.
### Fix Focus Areas
- libs/providers/vmware.py[810-851]
  - Add an optional selector to narrow which disks are edited (e.g., explicit `unit_numbers`, `device_keys`, or a predicate function).
  - At minimum, allow the caller to pass `skip_unit_numbers` derived from the VM's actual layout.
- tests/copyoffload/conftest.py[359-382]
  - If only the newly added disk should change, compute the unit number(s) to edit and pass them explicitly (or pass a selector) rather than editing all non-unit-0 disks.

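One shape for the suggested selector API: explicit device keys take precedence, then a predicate, mirroring the review's options. Names are hypothetical and the disk objects stand in for pyVmomi devices:

```python
def narrow_disks(disks, device_keys=None, predicate=None):
    """Narrow which disks change_disk_mode() would edit.

    Explicit device keys win over a predicate; with neither supplied, all
    disks come back and the caller keeps today's behaviour.
    """
    if device_keys is not None:
        return [disk for disk in disks if disk.key in device_keys]
    if predicate is not None:
        return [disk for disk in disks if predicate(disk)]
    return list(disks)
```

For the independent_nonpersistent test case, the fixture could pass a predicate matching the added disk's backing fileName pattern, leaving any pre-existing non-OS disks untouched.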


9. Action-named nonpersistent_disk_ready fixture 📘 Rule violation ⚙ Maintainability
Description
The newly added fixture name describes a setup action/readiness state rather than a noun-like
resource, reducing clarity and making fixture usage less self-documenting. This violates the fixture
naming rules intended to keep fixtures explicit and readable.
Code

tests/copyoffload/conftest.py[R359-364]

+@pytest.fixture(scope="class")
+def nonpersistent_disk_ready(
+    vmware_cloud_init_ready: None,
+    prepared_plan: dict[str, Any],
+    source_provider: VMWareProvider,
+) -> None:
Evidence
PR Compliance ID 14 requires fixture names to be nouns representing resources and not setup actions.
The added fixture nonpersistent_disk_ready performs an action (changing disk mode) and is
requested via usefixtures, but its name reads like a readiness/setup step rather than a resource.

CLAUDE.md
tests/copyoffload/conftest.py[359-364]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The fixture name `nonpersistent_disk_ready` describes a setup/readiness action rather than a noun-like resource name, which reduces readability and violates fixture naming conventions.
## Issue Context
This fixture performs a side effect (changing VM disk mode) and is used via `@pytest.mark.usefixtures(...)`. Even for side-effect fixtures, names should be noun-like to reflect the resource/state being provided.
## Fix Focus Areas
- tests/copyoffload/conftest.py[359-382]
- tests/copyoffload/test_copyoffload_migration.py[1922-1924]



10. Undocumented VmCloneError 🐞 Bug ⚙ Maintainability
Description
change_disk_mode() documents only ValueError, but it can also raise VmCloneError via wait_task()
when ReconfigVM_Task fails. This breaks the method’s declared contract for callers relying on the
docstring.
Code

libs/providers/vmware.py[R827-829]

+        Raises:
+            ValueError: If the VM is not powered off or no eligible disks are found.
+        """
Evidence
change_disk_mode calls wait_task without catching exceptions, and wait_task raises VmCloneError on
vSphere task errors; however, the change_disk_mode docstring lists only ValueError.

libs/providers/vmware.py[816-859]
libs/providers/vmware.py[289-324]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`change_disk_mode()` can raise `VmCloneError` (from `wait_task`) but the docstring lists only `ValueError`.
### Issue Context
Callers may us...

@redhat-qe-bot2

Report bugs in Issues

Welcome! 🎉

This pull request will be automatically processed with the following features:

🔄 Automatic Actions

  • Reviewer Assignment: Reviewers are automatically assigned based on the OWNERS file in the repository root
  • Size Labeling: PR size labels (XS, S, M, L, XL, XXL) are automatically applied based on changes
  • Issue Creation: Disabled for this repository
  • Branch Labeling: Branch-specific labels are applied to track the target branch
  • Auto-verification: Auto-verified users have their PRs automatically marked as verified
  • Labels: All label categories are enabled (default configuration)

📋 Available Commands

PR Status Management

  • /wip - Mark PR as work in progress (adds WIP: prefix to title)
  • /wip cancel - Remove work in progress status
  • /hold - Block PR merging (approvers only)
  • /hold cancel - Unblock PR merging
  • /verified - Mark PR as verified
  • /verified cancel - Remove verification status
  • /reprocess - Trigger complete PR workflow reprocessing (useful if webhook failed or configuration changed)
  • /regenerate-welcome - Regenerate this welcome message

Review & Approval

  • /lgtm - Approve changes (looks good to me)
  • /approve - Approve PR (approvers only)
  • /automerge - Enable automatic merging when all requirements are met (maintainers and approvers only)
  • /assign-reviewers - Assign reviewers based on OWNERS file
  • /assign-reviewer @username - Assign specific reviewer
  • /check-can-merge - Check if PR meets merge requirements

Testing & Validation

  • /retest tox - Run Python test suite with tox
  • /retest build-container - Rebuild and test container image
  • /retest conventional-title - Validate commit message format
  • /retest all - Run all available tests

Container Operations

  • /build-and-push-container - Build and push container image (tagged with PR number)
    • Supports additional build arguments: /build-and-push-container --build-arg KEY=value

Cherry-pick Operations

  • /cherry-pick <branch> - Schedule cherry-pick to target branch when PR is merged
    • Multiple branches: /cherry-pick branch1 branch2 branch3

Label Management

  • /<label-name> - Add a label to the PR
  • /<label-name> cancel - Remove a label from the PR

✅ Merge Requirements

This PR will be automatically approved when the following conditions are met:

  1. Approval: /approve from at least one approver
  2. Status Checks: All required status checks must pass
  3. No Blockers: No wip, hold, has-conflicts labels and PR must be mergeable (no conflicts)
  4. Verified: PR must be marked as verified

📊 Review Process

Approvers and Reviewers

Approvers:

  • myakove
  • solenoci

Reviewers:

  • krcmarik
  • myakove
  • solenoci
Available Labels
  • hold
  • verified
  • wip
  • lgtm
  • approve
  • automerge

💡 Tips

  • WIP Status: Use /wip when your PR is not ready for review
  • Verification: The verified label is automatically removed on each new commit
  • Cherry-picking: Cherry-pick labels are processed when the PR is merged
  • Container Builds: Container images are automatically tagged with the PR number
  • Permission Levels: Some commands require approver permissions
  • Auto-verified Users: Certain users have automatic verification and merge privileges

For more information, please refer to the project documentation or contact the maintainers.

@coderabbitai
Contributor

coderabbitai Bot commented Mar 23, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 8df8c9f5-3e71-4e93-b860-3dd49274fead

📥 Commits

Reviewing files that changed from the base of the PR and between bcbd5d3 and b132744.

📒 Files selected for processing (1)
  • CLAUDE.md

Walkthrough

Adds VMWareProvider.change_disk_mode(...) to reconfigure VM disk modes; a class-scoped pytest fixture applies it during test setup; tests updated to use the fixture and include an XCOPY verification; test config no longer specifies disk_mode.

Changes

Cohort / File(s) Summary
Core VMware Provider Enhancement
libs/providers/vmware.py
Added VMWareProvider.change_disk_mode(vm, disk_mode, skip_unit_numbers=(0,)): verifies VM is powered off, selects VirtualDisk devices excluding skip_unit_numbers, builds VirtualDeviceSpec edits updating backing.diskMode, submits ReconfigVM_Task, waits (120s) and logs; raises ValueError for invalid state or no eligible disks.
Test Fixture Setup
tests/copyoffload/conftest.py
Added class-scoped fixture nonpersistent_disk_ready (runs after vmware_cloud_init_ready) that iterates prepared_plan["virtual_machines"], resolves each VM's provider_vm_api, and calls change_disk_mode(..., "independent_nonpersistent").
Test Updates
tests/copyoffload/test_copyoffload_migration.py
Switched TestCopyoffloadIndependentNonpersistentDiskMigration to use nonpersistent_disk_ready instead of vmware_cloud_init_ready; added test_check_xcopy_used() to assert XCOPY acceleration is used.
Configuration Cleanup
tests/tests_config/config.py
Removed explicit disk_mode entry from add_disks in test_copyoffload_independent_nonpersistent_disk_migration since disk mode is now set by the fixture/provider call.
Documentation
CLAUDE.md
Documented the 6-step copy-offload pattern by adding test_check_xcopy_used to copy-offload test requirements and updated test-structure guidance.

Sequence Diagram(s)

sequenceDiagram
    participant Pytest as Pytest Framework
    participant Fixture as nonpersistent_disk_ready
    participant Provider as VMWareProvider
    participant VMware as vSphere API
    participant VM as VirtualMachine

    Pytest->>Fixture: run class fixture
    Fixture->>Fixture: iterate prepared_plan virtual_machines
    Fixture->>Provider: resolve provider_vm_api for VM
    Fixture->>Provider: change_disk_mode(vm, "independent_nonpersistent")
    Provider->>VMware: build ConfigSpec with deviceChange edits
    Provider->>VMware: vm.ReconfigVM_Task(ConfigSpec)
    VMware->>VM: apply disk backing changes
    VMware-->>Provider: task completion
    Provider->>Fixture: log per-disk changes and success
    Fixture->>Pytest: fixture completes
Loading

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly summarizes the main change: adding support for modifying disk mode to independent_nonpersistent after cloud-init, which is the primary objective.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.


@prabinovRedhat prabinovRedhat changed the title fix: change independent_nonpersistent disk mode after cloud-init Support: change independent_nonpersistent disk mode after cloud-init Mar 23, 2026
@prabinovRedhat
Collaborator Author

/verified

Comment thread tests/copyoffload/test_copyoffload_migration.py
Comment thread libs/providers/vmware.py
Comment thread libs/providers/vmware.py
coderabbitai[bot]
coderabbitai Bot previously approved these changes Mar 23, 2026
@qodo-code-review

qodo-code-review Bot commented Mar 23, 2026

Persistent review updated to latest commit 8c2c2c1

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@libs/providers/vmware.py`:
- Around line 813-859: Validate the disk_mode string before calling
vm.ReconfigVM_Task in the change-disk-mode method (the method that checks
vm.runtime.powerState, builds device_changes and calls self.wait_task),
restricting it to the supported values ("independent_nonpersistent",
"independent_persistent", "persistent"); if disk_mode is not one of these, raise
a domain-specific exception (add or reuse an exception in
exceptions/exceptions.py such as InvalidDiskModeError) instead of letting the
ReconfigVM_Task fail, and adjust any callers/tests if needed to expect the new
exception.
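The suggested validation could look like the sketch below. `InvalidDiskModeError` is hypothetical (the comment proposes adding it to `exceptions/exceptions.py`), and the allowed set reflects only the three modes named in the review, not the full vSphere list:

```python
VALID_DISK_MODES = frozenset(
    {"persistent", "independent_persistent", "independent_nonpersistent"}
)


class InvalidDiskModeError(ValueError):
    """Hypothetical domain exception; would live in exceptions/exceptions.py."""


def validate_disk_mode(disk_mode):
    """Reject unsupported modes before ReconfigVM_Task can fail opaquely."""
    if disk_mode not in VALID_DISK_MODES:
        raise InvalidDiskModeError(
            f"Unsupported disk mode '{disk_mode}'; "
            f"expected one of {sorted(VALID_DISK_MODES)}"
        )
    return disk_mode
```

Running this at the top of `change_disk_mode()` means a typo such as `"independant_nonpersistent"` surfaces as a named, local error instead of a failed vSphere task.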

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 23f944cc-15ba-4401-a5d8-e53e96aacacb

📥 Commits

Reviewing files that changed from the base of the PR and between fbd0465 and 8c2c2c1.

📒 Files selected for processing (1)
  • libs/providers/vmware.py

Comment thread libs/providers/vmware.py
@prabinovRedhat
Collaborator Author

/verified

@myakove
Collaborator

myakove commented Mar 24, 2026

/lgtm
/approve
/verified

@myakove myakove merged commit be50043 into RedHatQE:main Mar 24, 2026
7 checks passed
@redhat-qe-bot1

New container for ghcr.io/redhatqe/mtv-api-tests:latest published

@qodo-code-review

qodo-code-review Bot commented Mar 24, 2026

Persistent review updated to latest commit c73e7b6

@prabinovRedhat
Collaborator Author

/cherry-pick v2.11 v2.10

rh-bot-1 pushed a commit that referenced this pull request Mar 24, 2026
…369)

* fix: change disk mode to independent_nonpersistent after cloud-init

independent_nonpersistent disks lose data on power-off, so the disk
must be created as regular persistent during clone, then changed after
cloud-init completes and the VM is powered off.

- Remove disk_mode from config (defaults to persistent during clone)
- Add VMWareProvider.change_disk_mode() for ReconfigVM_Task
- Add nonpersistent_disk_ready fixture that chains after cloud-init

Made-with: Cursor

* feat: add test_check_xcopy_used to independent nonpersistent test class

Made-with: Cursor

* fix: add power state validation in change_disk_mode()

Made-with: Cursor

* docs: document 6-step copy-offload test pattern in CLAUDE.md

Made-with: Cursor

* docs: reference verify_xcopy_used utility instead of implementation detail

Made-with: Cursor
@rh-bot-1

Cherry-picked PR Support: change independent_nonpersistent disk mode after cloud-init into v2.11: #382

rh-bot-1 pushed a commit that referenced this pull request Mar 24, 2026
…369)

@rh-bot-1

Cherry-picked PR Support: change independent_nonpersistent disk mode after cloud-init into v2.10: #383

@prabinovRedhat
Collaborator Author

/cherry-pick v2.11 v2.10

redhat-qe-bot pushed a commit that referenced this pull request Mar 24, 2026
…369)

@redhat-qe-bot

Cherry-picked PR Support: change independent_nonpersistent disk mode after cloud-init into v2.11: #385

redhat-qe-bot pushed a commit that referenced this pull request Mar 24, 2026
…369)

@redhat-qe-bot

Cherry-picked PR Support: change independent_nonpersistent disk mode after cloud-init into v2.10: #386

redhat-qe-bot pushed a commit that referenced this pull request Mar 24, 2026
…369) (#386)

rh-bot-1 pushed a commit that referenced this pull request Mar 24, 2026
…369) (#385)

Co-authored-by: Polina Rabinovich <58551443+prabinovRedhat@users.noreply.github.com>
myakove added a commit to myk-org/github-webhook-server that referenced this pull request Mar 24, 2026
## Summary

- Check if cherry-pick label already exists on the PR before processing
- Skip branches whose `cherry-pick-{branch}` label is already present
- Post comment telling user to remove the label to re-trigger
- Works for both merged and unmerged PRs
- Early return when no valid target branches (avoids unnecessary API
calls)

Triggered by duplicate cherry-pick PRs on
RedHatQE/mtv-api-tests#369

Closes #1054
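The skip logic above can be sketched as a pure function. The `cherry-pick-{branch}` label naming comes from the summary; the function name and signature are assumptions for illustration, not the webhook server's actual API:

```python
# Hedged sketch of the cherry-pick idempotency check described above:
# branches whose cherry-pick-{branch} label is already on the PR are
# skipped (and reported), so duplicate cherry-pick PRs are not opened.

def partition_cherry_pick_targets(existing_labels, target_branches):
    """Split target branches into (to_process, skipped) based on
    whether their cherry-pick label already exists on the PR."""
    to_process, skipped = [], []
    for branch in target_branches:
        label = f"cherry-pick-{branch}"
        (skipped if label in existing_labels else to_process).append(branch)
    return to_process, skipped

labels = {"cherry-pick-v2.11", "verified"}
process, skipped = partition_cherry_pick_targets(labels, ["v2.10", "v2.11"])
print(process, skipped)  # → ['v2.10'] ['v2.11']
```

An empty `to_process` list corresponds to the early-return case: no valid targets remain, so no further API calls are made.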

## Test plan

- [x] `test_process_cherry_pick_command_skips_already_cherry_picked` -
partial skip (merged PR)
- [x] `test_process_cherry_pick_command_all_already_cherry_picked` -
full skip (merged PR)
- [x]
`test_process_cherry_pick_command_skips_already_cherry_picked_unmerged`
- partial skip (unmerged PR)
- [x] All existing cherry-pick tests pass (9 total)
- [x] All 200 tests pass

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
* Cherry-pick operations now skip branches that are already labeled,
post a comment listing skipped branches with re-trigger instructions,
and return early if no valid targets remain.
  * Label naming/formatting standardized for consistency.
* Per-branch handling ensures failed label additions skip that branch
without affecting others.

* **Tests**
* Added idempotency and skip-case tests covering merged/unmerged PRs and
label-add failures.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
