
OSPRH-16753: Create an independent vars for graphing's deployment #696

Closed
MiguelCarpio wants to merge 1 commit into openstack-k8s-operators:main from MiguelCarpio:OSPRH-16753

Conversation

@MiguelCarpio
Contributor

What does this PR do?

Creates independent vars for the graphing deployment.

Why do we need it?

An independent graphing deployment will allow us to run graphing-specific tests against it.

@openshift-ci-robot

openshift-ci-robot commented Jun 5, 2025

@MiguelCarpio: This pull request references OSPRH-16753 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the task to target the "4.20.0" version, but no target version was set.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci
Contributor

openshift-ci Bot commented Jun 5, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: MiguelCarpio
Once this PR has been reviewed and has the lgtm label, please assign paramite for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci
Contributor

openshift-ci Bot commented Jun 5, 2025

Hi @MiguelCarpio. Thanks for your PR.

I'm waiting for an openstack-k8s-operators member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/b8ed6640a289451491b99dd1c34d3c26

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 25m 48s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 20m 20s
✔️ telemetry-operator-multinode-default-telemetry SUCCESS in 57m 42s
❌ functional-graphing-tests-osp18 FAILURE in 1h 18m 48s (non-voting)
❌ functional-autoscaling-tests-osp18 FAILURE in 2h 02m 28s
✔️ functional-logging-tests-osp18 SUCCESS in 1h 22m 27s

@vyzigold
Contributor

vyzigold commented Jun 5, 2025

/ok-to-test

@vyzigold
Contributor

vyzigold commented Jun 5, 2025

The autoscaling failure is unrelated to the change, but it's interesting. What we're trying to do in the test is create a stack which adds a VM whenever the average CPU load across all VMs in the stack is above 50%. I can see the CPU load values for each minute in the log. We start with 1 VM, we create load, it gets to 100% CPU, and a second VM appears. Ideally the first VM should stay at 100% CPU and a third VM should appear, but instead the load fell to 0% or 1% (I can't distinguish which VM is which from the logs) even though the load on the first VM should still be the same. This is a little concerning and something we should keep an eye on going forward. It's also the first failure like this I've seen. I'll watch for it as the CI Watcher this week.
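For context, a minimal sketch of the scale-out rule the test exercises, with hypothetical names and illustrative values; the real test creates an OpenStack stack whose alarm-driven scaling policy implements this rule, not Python code:

```python
def should_scale_out(cpu_loads_percent: list[float], threshold: float = 50.0) -> bool:
    """Scale out when the average CPU load across all VMs in the
    stack exceeds the threshold (50% in the test)."""
    average = sum(cpu_loads_percent) / len(cpu_loads_percent)
    return average > threshold

# Illustrative values matching the behaviour described above:
print(should_scale_out([100.0]))        # True  -> a second VM appears
print(should_scale_out([100.0, 10.0]))  # True  -> a third VM should appear
print(should_scale_out([1.0, 10.0]))    # False -> the first VM's load dropped
                                        #          to ~1%, so no third VM
```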

@vyzigold
Contributor

vyzigold commented Jun 5, 2025

recheck

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/da0485d5d75a4c1589c09d3e604691d5

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 13m 30s
❌ podified-multinode-edpm-deployment-crc FAILURE in 1h 21m 53s
✔️ telemetry-operator-multinode-default-telemetry SUCCESS in 59m 50s
❌ functional-graphing-tests-osp18 FAILURE in 1h 17m 49s (non-voting)
❌ functional-autoscaling-tests-osp18 FAILURE in 1h 55m 27s
✔️ functional-logging-tests-osp18 SUCCESS in 1h 15m 10s

@vyzigold
Contributor

vyzigold commented Jun 6, 2025

Another failure. And it looks basically the same.

@vyzigold
Contributor

vyzigold commented Jun 6, 2025

recheck

Let's give it another chance.

@openshift-merge-robot
Contributor

PR needs rebase.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@MiguelCarpio
Contributor Author

The dashboardEnabled var is set in the autoscaling kustomization. #698

MiguelCarpio deleted the OSPRH-16753 branch July 7, 2025 09:35