test: migrate pytest-operator to jubilant #218
Conversation
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…s_not_rebuild_image Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Pull request overview
Migrates the repository’s integration test harness from pytest-operator/async Juju usage to the jubilant (synchronous) Juju wrapper to reduce CI failures from websocket disconnects.
Changes:
- Replace `pytest-operator` + asyncio-based tests/fixtures with `jubilant`-based synchronous equivalents.
- Rework integration fixtures to create/manage a temporary Juju model via `jubilant.temp_model()`, plus add `--keep-models` for debugging.
- Update tox and integration requirements to drop async-only dependencies and add `jubilant`.
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| tox.ini | Replace pytest-operator/asyncio deps with jubilant in lint/integration envs. |
| tests/integration/types.py | Update test config typing from Model to jubilant.Juju. |
| tests/integration/test_upgrade.py | Port upgrade flow to jubilant APIs and synchronous waiting/log checks. |
| tests/integration/test_charm.py | Port integration tests to jubilant APIs and synchronous helpers/context management. |
| tests/integration/requirements.txt | Remove async-specific dependency (nest_asyncio). |
| tests/integration/helpers.py | Convert wait_for/wait_for_images from async to sync utilities. |
| tests/integration/conftest.py | Replace ops_test/model fixtures with a jubilant-managed temporary model + updated deploy/secret/config flows. |
| tests/conftest.py | Add --keep-models pytest CLI option to support jubilant temp model retention. |
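The helpers.py conversion described above amounts to replacing asyncio-based waiting with a plain polling loop. A dependency-free sketch of what such a synchronous `wait_for` might look like (the name matches the table; the exact signature in the repo is assumed, not quoted):

```python
import time


def wait_for(predicate, timeout: float = 300, interval: float = 10):
    """Poll ``predicate`` until it returns a truthy value or ``timeout`` elapses.

    Sketch of a synchronous replacement for an async wait helper: no event
    loop, just repeated calls with a sleep in between.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")
```

Because it is ordinary blocking code, it composes directly with jubilant's synchronous API, which is the point of the migration.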
- Use `jubilant~=1.0` in lint env for consistent versioning with integration env
- Wrap the `_change_cronjob_to_minutes` yield in try/finally to ensure cron cleanup on test failure

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
florentianayuwono
left a comment
lgtm thankyou!
`encoding` is only valid with `filename`, not `handlers`. The `WatchedFileHandler` is already created with `encoding='utf-8'`. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
yhaliaw
left a comment
We need to review the logging of jubilant first.
Pull request overview
Copilot reviewed 9 out of 9 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (1)
tests/integration/conftest.py:410
- This function uses `await` inside a non-async `def` and references `app`/`app.model` even though no `app` variable is defined (the deploy call doesn't capture a return value). This is a syntax/runtime error; rewrite using `jubilant` operations (deploy, grant secrets, set config) without `await`, or make the whole fixture async and restore the asyncio toolchain.
```python
test_configs.juju.deploy(
    test_configs.charm_file,
    app_name,
    constraints=base_machine_constraint,
    config=app_config,
    log=False,
)
await app.model.grant_secret(openstack_password_secret.name, app.name)
await app.model.grant_secret(script_secret.name, app.name)
await app.set_config(
    {
```
- Pin integration_test.yaml to operator-workflows@main instead of mutable feature branch ref
- Convert openstack_password_secret_fixture from async pytest_asyncio to sync pytest fixture using juju.add_secret()
- Convert app_config_fixture from async pytest_asyncio to sync pytest fixture (body was already synchronous)
- Fix app_fixture return type (AsyncGenerator[Application] -> Generator[str]) and replace await calls with jubilant API: grant_secret(), config()
- Fix _prepare_charmhub_app_config: remove async, replace ops_test param with juju: jubilant.Juju, fix NameError on test_configs
- Fix app_on_charmhub_fixture: convert to sync pytest fixture, remove ops_test param, define missing app_name, replace all async/model calls with jubilant API, yield app name string instead of Application

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
`jubilant.Juju.deploy()` does not accept a `log` parameter (it only exists on the private `_cli` method). Remove it from `_deploy_test_charm` and `app_fixture` to avoid a TypeError at runtime. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…alls
- Flatten parenthesized tuple assignment for _prepare_charmhub_app_config to match black's preferred style
- Remove log=False from juju.deploy() in test_cos_agent_relation; jubilant's deploy() does not accept a log parameter

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…hub_fixture was annotated as `str` instead of `dict`, causing a mypy error: Argument 'constraints' to 'deploy' of 'Juju' has incompatible type 'str'; expected 'Mapping[str, str] | None'. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
juju ssh can fail transiently due to SSH connection issues (connection not ready, brief drops). Add a juju_ssh() helper in helpers.py that wraps jubilant.Juju.ssh with tenacity retry (5 attempts, 30s apart), catching jubilant.CLIError. Replace all direct juju.ssh() calls in test_charm.py and test_upgrade.py with juju_ssh(). Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
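The retry pattern described above can be sketched without tenacity as a plain loop. This is an illustrative stand-in, not the repo's code: the real helper wraps `jubilant.Juju.ssh` and catches `jubilant.CLIError`, while here `ssh_fn` is any callable and a generic exception is caught so the sketch stays dependency-free:

```python
import time


def juju_ssh(ssh_fn, unit: str, command: str, attempts: int = 5, delay: float = 30.0):
    """Retry a flaky SSH call up to ``attempts`` times, sleeping between tries.

    Stand-in for a tenacity-based helper: ``ssh_fn`` would be
    ``juju.ssh`` and the except clause would catch ``jubilant.CLIError``.
    """
    last_exc = None
    for attempt in range(attempts):
        try:
            return ssh_fn(unit, command)
        except Exception as exc:  # real code: except jubilant.CLIError
            last_exc = exc
            if attempt < attempts - 1:
                time.sleep(delay)
    raise last_exc
```

The 5-attempt / 30-second schedule mentioned in the commit maps to `attempts=5, delay=30.0`.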
Extract the juju_id_rsa key generation into a dedicated module-scoped fixture juju_ssh_key_path in conftest.py. The juju fixture now depends on it to register the public key with juju. Update juju_ssh() helper to accept ssh_key_path and pass it via ssh_options=['-i', str(ssh_key_path)], so the correct private key is explicitly used rather than relying on the SSH client's default key discovery. Update all juju_ssh call sites in test_charm.py and test_upgrade.py to receive and forward the juju_ssh_key_path fixture. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
src/image.py: Observer typed charm as ops.CharmBase, but called self.charm._setup_proxy_environment() which only exists on the concrete GithubRunnerImageBuilderCharm. Add TYPE_CHECKING import of the concrete class and update the type annotation, resolving the mypy attr-defined error. tests/unit/test_image.py: test__on_image_relation_joined used the real charm fixture (image_observer), so _setup_proxy_environment ran against a MagicMock proxy config, causing TypeError (str expected). Add monkeypatch for _setup_proxy_environment to match the pattern used by the other observer tests. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The test_build_image test was passing as soon as an image record appeared in OpenStack (created_at >= dispatch_time), even if the image was still being uploaded (status != active). This caused a race condition: - First image record created at e.g. 08:22:31 (upload in progress, ~3 min) - test_build_image passed at 08:22:55 (record exists, but not yet active) - test2 integrated at 08:22:56 (25s after record creation) - Charm called get_latest_images (uses 'latest-build-id' CLI which only returns active images) -> returned empty -> triggered unnecessary rebuild By requiring image.status == 'active' in image_created_from_dispatch, test_build_image now waits until the image upload completes before proceeding. When test2 is subsequently integrated, the image is guaranteed to be active and get_latest_images finds it, preventing the unnecessary rebuild. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
When the charm finishes a build and writes the image ID to the relation, the requirer responds, triggering another image_relation_changed on the charm. At that moment, the image is still uploading to OpenStack and not yet in 'active' status. The charm's get_latest_images() (which uses the 'latest-build-id' CLI that only returns active images) returned empty, causing the charm to trigger an unnecessary rebuild. Add has_any_images() to builder.py using the OpenStack SDK (already a dependency) to check for images in any status. In _on_image_relation_changed, check has_any_images() before triggering _run(): if an image exists but is not yet active, log and skip the rebuild. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Replace the parallel has_any_images() function with an active_only parameter on get_latest_images(): - active_only=True (default): existing behaviour — uses the image-builder CLI (latest-build-id) which only returns active images - active_only=False: uses the OpenStack SDK via a new private _get_images_any_status() helper to return images regardless of upload status; has_any_images() now delegates to this path This removes the need for a separate public function while keeping the same fix: before triggering a rebuild in _on_image_relation_changed, the charm calls get_latest_images(active_only=False) to detect images that are still uploading. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…ive_only)" This reverts commit 2ff447a.
Add a new 'any-build-id' command to the image-builder application that returns the latest image ID regardless of upload status (active, saving, queued, etc.). This uses connection.image.images() which bypasses any implicit active-only filter in the Glance API. In the charm, adapt get_latest_images() to accept active_only=False which dispatches to a new _get_latest_image_any_status() helper that calls 'any-build-id' via subprocess — keeping all OpenStack SDK logic inside the application layer, not the charm layer. has_any_images() now delegates to get_latest_images(active_only=False) rather than calling the SDK directly from the charm. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…e uploads" This reverts commit 3b86e98.
Add active_only: bool = True to _get_sorted_images_by_created_at and get_latest_build_id. When False, uses connection.image.images() which returns images in any upload status (saving/queued/active) rather than connection.search_images() which only returns active images on this cloud. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
When --any-status is passed, delegates to store.get_latest_build_id with active_only=False, returning the most recently created image in any upload status rather than only active images. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…as_any_images
- Add active_only param to _get_latest_image; appends --any-status when False
- Add active_only param to get_latest_images; uses functools.partial to thread it
- Rewrite has_any_images to delegate to get_latest_images(active_only=False)
- Remove direct openstack SDK usage from builder

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
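The `functools.partial` threading described in that commit can be sketched as follows. Function names mirror the commit message, but the bodies are hypothetical: the real `_get_latest_image` shells out to the image-builder CLI, whereas here it just assembles the command so the control flow is visible:

```python
import functools


def _get_latest_image(app: str, arch: str, any_status: bool = False) -> str:
    """Hypothetical stand-in: the real code runs the image-builder CLI."""
    cmd = ["latest-build-id", app, arch]
    if any_status:
        cmd.append("--any-status")  # appended only when active_only=False
    return " ".join(cmd)


def get_latest_images(apps, arch: str, active_only: bool = True):
    # functools.partial threads the flag through without changing each call site
    fetch = functools.partial(_get_latest_image, arch=arch, any_status=not active_only)
    return [fetch(app) for app in apps]


def has_any_images(apps, arch: str) -> bool:
    # Delegates rather than talking to the OpenStack SDK directly
    return any(get_latest_images(apps, arch, active_only=False))
```

The point of the design is that the charm layer only ever calls `get_latest_images`; the "any status" behaviour stays behind one flag instead of a parallel public function.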
- Move functools import to module level in test_builder.py
- Parametrize test_get_latest_images with active_only cases
- Fix isort ordering in test_builder.py
- Move functools.partial before try block in get_latest_images

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Run black on app/tests/unit/test_store.py
- Remove unused functools import from tests/unit/test_builder.py

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The test was recording dispatch_time while the initial build's upload step was still in progress. The build creates an intermediate snapshot in the build cloud, then transfers to the upload cloud. test_build_image found the build-cloud snapshot early, causing the next test to start while the upload-cloud copy was still being written. This meant the final uploaded image appeared after dispatch_time and was incorrectly flagged as a spurious rebuild. Fix: wait for all agents idle before recording dispatch_time so the entire initial build (including final upload) is complete first. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The assertion was using hard-coded [0] index and KeyError-prone dict access to read image data from 'juju show-unit' output. This broke in Juju 4.0 where 'related-units' can be absent from the output (e.g. when no data has been written by the remote unit yet, or the key structure changed). Fix by: - Importing IMAGE_RELATION constant to find the relation by endpoint name instead of by index - Using .get() chains to safely navigate the nested structure - Extracting into _get_images_from_unit_data() helper for clarity Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Pull request overview
Copilot reviewed 38 out of 39 changed files in this pull request and generated 10 comments.
- Fix missing backslash line continuations in how-to docs
- Remove stray empty bullet in changelog
- Add missing openstack-user-domain-name to README deploy example
- Fix concurrency.group spanning two YAML lines in integration_test.yaml
- Update CONTRIBUTING.md to reference arch-specific OpenStack password env vars
- Fix test name typo: testsetup_ → test_setup_ in test_charm.py

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Applicable spec:
Overview
- `openstack-password-secret` configuration option to securely store OpenStack passwords using Juju secrets, replacing the deprecated `openstack-password` config option

Rationale
Module Changes
- `src/state.py`: replaced `openstack-password` config with `openstack-password-secret` Juju secret ID; the secret must be granted to the charm before deployment

Checklist
app/pyproject.toml