feat(container)!: Update image ghcr.io/prometheus-community/charts/kube-prometheus-stack ( 77.14.0 ➔ 78.1.0 ) #3466

Merged
binaryn3xus merged 1 commit into main from renovate/ghcr.io-prometheus-community-charts-kube-prometheus-stack-78.x on Oct 11, 2025

Conversation

@unsc-oni-ancilla bot (Contributor) commented Oct 9, 2025

This PR contains the following updates:

| Package | Update | Change |
| --- | --- | --- |
| ghcr.io/prometheus-community/charts/kube-prometheus-stack (source) | major | 77.14.0 -> 78.1.0 |

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  - [ ] If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

@unsc-oni-ancilla bot (Contributor Author) commented:

--- HelmRelease: observability/kube-prometheus-stack ClusterRole: observability/kube-prometheus-stack-operator

+++ HelmRelease: observability/kube-prometheus-stack ClusterRole: observability/kube-prometheus-stack-operator

@@ -27,16 +27,19 @@

   - prometheusagents/finalizers
   - prometheusagents/status
   - thanosrulers
   - thanosrulers/finalizers
   - thanosrulers/status
   - scrapeconfigs
+  - scrapeconfigs/status
   - servicemonitors
   - servicemonitors/status
   - podmonitors
+  - podmonitors/status
   - probes
+  - probes/status
   - prometheusrules
   verbs:
   - '*'
 - apiGroups:
   - apps
   resources:
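
The three new `/status` entries line up with the operator bump to v0.86.0 in the Deployment diff below, which presumably now writes status for ScrapeConfig, PodMonitor, and Probe objects and therefore needs RBAC on their status subresources. A trimmed sketch of the updated rule; the `monitoring.coreos.com` apiGroup is assumed (standard for these CRDs, but it sits above the portion of the hunk shown here):

  # Trimmed sketch of the updated ClusterRole rule after this change.
  # The apiGroup is assumed; it is not visible in the hunk above.
  - apiGroups:
    - monitoring.coreos.com
    resources:
    - scrapeconfigs
    - scrapeconfigs/status
    - podmonitors
    - podmonitors/status
    - probes
    - probes/status
    verbs:
    - '*'
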
--- HelmRelease: observability/kube-prometheus-stack Deployment: observability/kube-prometheus-stack-operator

+++ HelmRelease: observability/kube-prometheus-stack Deployment: observability/kube-prometheus-stack-operator

@@ -31,20 +31,20 @@

         app: kube-prometheus-stack-operator
         app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
         app.kubernetes.io/component: prometheus-operator
     spec:
       containers:
       - name: kube-prometheus-stack
-        image: quay.io/prometheus-operator/prometheus-operator:v0.85.0
+        image: quay.io/prometheus-operator/prometheus-operator:v0.86.0
         imagePullPolicy: IfNotPresent
         args:
         - --kubelet-service=kube-system/kube-prometheus-stack-kubelet
         - --kubelet-endpoints=true
         - --kubelet-endpointslice=false
         - --localhost=127.0.0.1
-        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.85.0
+        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.86.0
         - --config-reloader-cpu-request=0
         - --config-reloader-cpu-limit=0
         - --config-reloader-memory-request=0
         - --config-reloader-memory-limit=0
         - --thanos-default-base-image=quay.io/thanos/thanos:v0.39.2
         - --secret-field-selector=type!=kubernetes.io/dockercfg,type!=kubernetes.io/service-account-token,type!=helm.sh/release.v1
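
The operator image and the `--prometheus-config-reloader` flag move in lockstep from v0.85.0 to v0.86.0, as they should: the operator injects the config-reloader as a sidecar into the Prometheus and Alertmanager pods it manages, so the two versions are released together. If these images ever need to be pinned independently of the chart default, kube-prometheus-stack exposes values roughly like the following (value paths assumed from the chart's conventions; they are not part of this PR):

  # Assumed value paths; check the chart's values.yaml before relying on them.
  prometheusOperator:
    image:
      repository: quay.io/prometheus-operator/prometheus-operator
      tag: v0.86.0
    prometheusConfigReloader:
      image:
        repository: quay.io/prometheus-operator/prometheus-config-reloader
        tag: v0.86.0
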
--- HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-alertmanager.rules

+++ HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-alertmanager.rules

@@ -21,13 +21,13 @@

           $labels.pod}}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedreload
         summary: Reloading an Alertmanager configuration has failed.
       expr: |-
         # Without max_over_time, failed scrapes could create false negatives, see
         # https://www.robustperception.io/alerting-on-gauges-in-prometheus-2-0 for details.
-        max_over_time(alertmanager_config_last_reload_successful{job="kube-prometheus-stack-alertmanager",namespace="observability"}[5m]) == 0
+        max_over_time(alertmanager_config_last_reload_successful{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[5m]) == 0
       for: 10m
       labels:
         severity: critical
     - alert: AlertmanagerMembersInconsistent
       annotations:
         description: Alertmanager {{ $labels.namespace }}/{{ $labels.pod}} has only
@@ -35,30 +35,30 @@

         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagermembersinconsistent
         summary: A member of an Alertmanager cluster has not found all other cluster
           members.
       expr: |-
         # Without max_over_time, failed scrapes could create false negatives, see
         # https://www.robustperception.io/alerting-on-gauges-in-prometheus-2-0 for details.
-          max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",namespace="observability"}[5m])
+          max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[5m])
         < on (namespace,service,cluster) group_left
-          count by (namespace,service,cluster) (max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",namespace="observability"}[5m]))
+          count by (namespace,service,cluster) (max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[5m]))
       for: 15m
       labels:
         severity: critical
     - alert: AlertmanagerFailedToSendAlerts
       annotations:
         description: Alertmanager {{ $labels.namespace }}/{{ $labels.pod}} failed
           to send {{ $value | humanizePercentage }} of notifications to {{ $labels.integration
           }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedtosendalerts
         summary: An Alertmanager instance failed to send notifications.
       expr: |-
         (
-          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",namespace="observability"}[15m])
+          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[15m])
         /
-          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",namespace="observability"}[15m])
+          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[15m])
         )
         > 0.01
       for: 5m
       labels:
         severity: warning
     - alert: AlertmanagerClusterFailedToSendAlerts
@@ -68,15 +68,15 @@

           humanizePercentage }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterfailedtosendalerts
         summary: All Alertmanager instances in a cluster failed to send notifications
           to a critical integration.
       expr: |-
         min by (namespace,service, integration) (
-          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",namespace="observability", integration=~`.*`}[15m])
+          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability", integration=~`.*`}[15m])
         /
-          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",namespace="observability", integration=~`.*`}[15m])
+          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability", integration=~`.*`}[15m])
         )
         > 0.01
       for: 5m
       labels:
         severity: critical
     - alert: AlertmanagerClusterFailedToSendAlerts
@@ -86,15 +86,15 @@

           humanizePercentage }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterfailedtosendalerts
         summary: All Alertmanager instances in a cluster failed to send notifications
           to a non-critical integration.
       expr: |-
         min by (namespace,service, integration) (
-          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",namespace="observability", integration!~`.*`}[15m])
+          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability", integration!~`.*`}[15m])
         /
-          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",namespace="observability", integration!~`.*`}[15m])
+          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability", integration!~`.*`}[15m])
         )
         > 0.01
       for: 5m
       labels:
         severity: warning
     - alert: AlertmanagerConfigInconsistent
@@ -102,13 +102,13 @@

         description: Alertmanager instances within the {{$labels.job}} cluster have
           different configurations.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerconfiginconsistent
         summary: Alertmanager instances within the same cluster have different configurations.
       expr: |-
         count by (namespace,service,cluster) (
-          count_values by (namespace,service,cluster) ("config_hash", alertmanager_config_hash{job="kube-prometheus-stack-alertmanager",namespace="observability"})
+          count_values by (namespace,service,cluster) ("config_hash", alertmanager_config_hash{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"})
         )
         != 1
       for: 20m
       labels:
         severity: critical
     - alert: AlertmanagerClusterDown
@@ -119,17 +119,17 @@

         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterdown
         summary: Half or more of the Alertmanager instances within the same cluster
           are down.
       expr: |-
         (
           count by (namespace,service,cluster) (
-            avg_over_time(up{job="kube-prometheus-stack-alertmanager",namespace="observability"}[5m]) < 0.5
+            avg_over_time(up{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[5m]) < 0.5
           )
         /
           count by (namespace,service,cluster) (
-            up{job="kube-prometheus-stack-alertmanager",namespace="observability"}
+            up{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}
           )
         )
         >= 0.5
       for: 5m
       labels:
         severity: critical
@@ -141,17 +141,17 @@

         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclustercrashlooping
         summary: Half or more of the Alertmanager instances within the same cluster
           are crashlooping.
       expr: |-
         (
           count by (namespace,service,cluster) (
-            changes(process_start_time_seconds{job="kube-prometheus-stack-alertmanager",namespace="observability"}[10m]) > 4
+            changes(process_start_time_seconds{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[10m]) > 4
           )
         /
           count by (namespace,service,cluster) (
-            up{job="kube-prometheus-stack-alertmanager",namespace="observability"}
+            up{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}
           )
         )
         >= 0.5
       for: 5m
       labels:
         severity: critical
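
Every expression change in this rule group is the same edit: the selector gains a `container="alertmanager"` matcher, presumably to restrict evaluation to series scraped from the Alertmanager container itself rather than any other container exposing overlapping series. Rendered out of diff form, the first rule above ends up shaped like this (a trimmed sketch of the generated object; the alert and group names are inferred from the runbook URL and the object name):

  apiVersion: monitoring.coreos.com/v1
  kind: PrometheusRule
  metadata:
    name: kube-prometheus-stack-alertmanager.rules
    namespace: observability
  spec:
    groups:
    - name: alertmanager.rules        # group name assumed from the object name
      rules:
      - alert: AlertmanagerFailedReload   # inferred from the runbook URL above
        expr: |-
          max_over_time(alertmanager_config_last_reload_successful{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="observability"}[5m]) == 0
        for: 10m
        labels:
          severity: critical
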

@unsc-oni-ancilla bot (Contributor Author) commented Oct 9, 2025

--- kubernetes/apps/observability/kube-prometheus-stack/app Kustomization: observability/kube-prometheus-stack OCIRepository: observability/kube-prometheus-stack

+++ kubernetes/apps/observability/kube-prometheus-stack/app Kustomization: observability/kube-prometheus-stack OCIRepository: observability/kube-prometheus-stack

@@ -11,9 +11,9 @@

 spec:
   interval: 5m
   layerSelector:
     mediaType: application/vnd.cncf.helm.chart.content.v1.tar+gzip
     operation: copy
   ref:
-    tag: 77.14.0
+    tag: 78.1.0
   url: oci://ghcr.io/prometheus-community/charts/kube-prometheus-stack
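
The only change on the cluster side is the OCIRepository tag; Flux re-fetches the chart artifact and reconciles whatever HelmRelease consumes it. That HelmRelease is outside this diff, but given the `operation: copy` layer selector above it would typically reference the repository via `chartRef`, roughly like this (a sketch, not the repo's actual spec):

  apiVersion: helm.toolkit.fluxcd.io/v2
  kind: HelmRelease
  metadata:
    name: kube-prometheus-stack
    namespace: observability
  spec:
    interval: 1h          # assumed; the real interval is not shown in this PR
    chartRef:
      kind: OCIRepository
      name: kube-prometheus-stack
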
 

@unsc-oni-ancilla bot force-pushed the renovate/ghcr.io-prometheus-community-charts-kube-prometheus-stack-78.x branch from 042220c to 9a59ac8 on October 10, 2025 19:07
@unsc-oni-ancilla bot changed the title from "feat(container)!: Update image ghcr.io/prometheus-community/charts/kube-prometheus-stack ( 77.14.0 ➔ 78.0.0 )" to "feat(container)!: Update image ghcr.io/prometheus-community/charts/kube-prometheus-stack ( 77.14.0 ➔ 78.1.0 )" on Oct 10, 2025
@binaryn3xus merged commit 7b30a00 into main on Oct 11, 2025
11 checks passed
@binaryn3xus deleted the renovate/ghcr.io-prometheus-community-charts-kube-prometheus-stack-78.x branch on October 11, 2025 05:05