@@ -1,25 +1,23 @@
---
title: Deploy Helm on Google Cloud C4A (Arm-based Axion VMs)
title: Install and validate Helm on Google Cloud C4A Arm-based VMs

minutes_to_complete: 30
minutes_to_complete: 45

draft: true
cascade:
draft: true
who_is_this_for: This is an introductory topic intended for developers who want to get hands-on experience using Helm on Linux Arm64 systems, specifically Google Cloud C4A virtual machines powered by Axion processors.

who_is_this_for: This learning path is intended for software developers deploying and optimizing Helm on Linux/Arm64 environments, specifically using Google Cloud C4A virtual machines powered by Axion processors.

learning_objectives:
- Provision an Arm-based SUSE SLES virtual machine on Google Cloud (C4A with Axion processors)
- Provision an Arm-based SUSE Linux Enterprise Server (SLES) virtual machine on Google Cloud (C4A with Axion processors)
- Install Helm and kubectl on a SUSE Arm64 (C4A) instance
- Create and validate a local Kubernetes cluster (KinD) on Arm64
- Verify Helm functionality by performing install, upgrade, and uninstall workflows
- Benchmark Helm concurrency behavior using parallel Helm CLI operations on Arm64
- Observe Helm behavior under concurrent CLI operations on an Arm64-based Kubernetes cluster

prerequisites:
- A [Google Cloud Platform (GCP)](https://cloud.google.com/free) account with billing enabled
- Basic familiarity with [Kubernetes concepts](https://kubernetes.io/docs/concepts/)
- Basic understanding of [Helm](https://helm.sh/docs/topics/architecture/) and Kubernetes manifests
- Familiarity with basic Linux command-line usage

author: Pareena Verma

@@ -35,6 +33,7 @@ tools_software_languages:
- Helm
- Kubernetes
- KinD
- kubectl

operatingsystems:
- Linux
@@ -1,27 +1,25 @@
---
title: Getting started with Helm on Google Axion C4A (Arm Neoverse-V2)
title: Get started with Helm on Google Axion C4A (Arm-based)

weight: 2

layout: "learningpathall"
---

## Google Axion C4A Arm instances in Google Cloud
## Explore Google Axion C4A instances in Google Cloud

Google Axion C4A is a family of Arm-based virtual machines built on Google’s custom Axion CPU, which is based on Arm Neoverse-V2 cores. Designed for high-performance and energy-efficient computing, these virtual machines offer strong performance for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications.
Google Axion C4A is a family of Arm-based virtual machines built on Google’s custom Axion CPU, which is based on Arm Neoverse-V2 cores. Designed to deliver high performance with improved energy efficiency, these virtual machines offer strong performance for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications.

The C4A series provides a cost-effective alternative to x86 virtual machines while leveraging the scalability and performance benefits of the Arm architecture in Google Cloud.
The C4A series provides an Arm-based alternative to x86 virtual machines, enabling developers to evaluate cost, performance, and efficiency trade-offs in Google Cloud. For Kubernetes users, Axion C4A instances provide a practical way to run Arm-native clusters and validate tooling such as Helm on modern cloud infrastructure.

To learn more about Google Axion, refer to the [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu) blog.
To learn more about Google Axion, see the Google blog [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu).

## Helm
## Explore Helm

Helm is the package manager for Kubernetes that simplifies application deployment, upgrades, rollbacks, and lifecycle management using reusable **charts**.
Helm is the package manager for Kubernetes. It simplifies application deployment, upgrades, rollbacks, and lifecycle management by packaging Kubernetes resources into reusable charts.

It allows teams to deploy applications consistently across environments and automate Kubernetes workflows.
Helm runs as a lightweight CLI that interacts directly with the Kubernetes API. Because it is architecture-agnostic, it works consistently across x86 and Arm64 clusters, including those running on Google Axion C4A instances.

Helm runs as a lightweight CLI and integrates directly with the Kubernetes API, making it well-suited for Arm-based platforms such as Google Axion C4A.
In this Learning Path, you use Helm to deploy and manage applications on an Arm-based Kubernetes environment and verify common workflows such as install, upgrade, and uninstall operations.
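
As a preview, the core Helm commands you run in later sections follow this general shape (the `nginx` release name and Bitnami chart shown here are the same examples used later in this Learning Path):

```console
# Install a chart as a named release, upgrade it, then remove it
helm install nginx bitnami/nginx
helm upgrade nginx bitnami/nginx
helm uninstall nginx
```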

It works efficiently on both x86 and Arm64 architectures and is widely used in production Kubernetes environments.

Learn more at the official [Helm website](https://helm.sh/).
For more information, see the [Helm website](https://helm.sh/).
@@ -1,15 +1,15 @@
---
title: Helm Baseline Testing on Google Axion C4A Arm Virtual Machine
title: Validate Helm workflows on a Google Axion C4A virtual machine
weight: 5

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Helm Baseline Testing on GCP SUSE VMs
This guide walks you through baseline testing to confirm that Helm works correctly on an Arm64-based Kubernetes cluster by validating core workflows such as install, upgrade, and uninstall.
## Overview
This section walks you through baseline testing to confirm that Helm works correctly on an Arm64-based Kubernetes cluster by validating core workflows such as install, upgrade, and uninstall.

### Add Helm Repository
## Add Helm repository
Add the Bitnami Helm chart repository and update the local index:

```console
@@ -25,15 +25,15 @@ Hang tight while we grab the latest from your chart repositories...
Update Complete. ⎈Happy Helming!⎈
```

### Install a Sample Application
## Install a sample application
Install a sample NGINX application using a Helm chart:

```console
helm install nginx bitnami/nginx
```
Deploy a simple test app to validate that Helm can create releases on the cluster.

You should see an output that contains text similar to this (please ignore any WARNINGS you receive):
The output is similar to the following (warnings can be safely ignored as they don't affect functionality):
```output
NAME: nginx
LAST DEPLOYED: Wed Dec 3 07:34:04 2025
@@ -48,7 +48,7 @@ APP VERSION: 1.29.3
```
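
You can also inspect the release at any time with `helm status`, which prints the current status, chart notes, and revision of the release:

```console
# Show the current status and revision of the nginx release
helm status nginx
```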


### Validate Deployment
## Validate deployment
Verify that the Helm release is created:

```console
@@ -57,7 +57,7 @@ helm list

Confirm Helm recorded the release and that the deployment exists.

You should see an output similar to:
The output is similar to:
```output
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
nginx default 1 2025-12-09 21:04:15.944165326 +0000 UTC deployed nginx-22.3.3 1.29.3
@@ -69,7 +69,7 @@ Check Kubernetes resources:
kubectl get pods
kubectl get svc
```
You should see an output similar to:
The output is similar to:
```output
NAME READY STATUS RESTARTS AGE
nginx-7b9564dc4b-2ghkw 1/1 Running 0 3m5s
@@ -78,34 +78,42 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4m28s
nginx LoadBalancer 10.96.216.137 <pending> 80:32708/TCP,443:31052/TCP 3m6s
```
All pods should be in the **Running** state. If the pods are in **Pending** state, please wait a bit and retry the commands above.
All pods should be in the **Running** state. If pods are in **Pending** state, wait 30-60 seconds for container images to download, then retry the commands above.
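
If you prefer to watch the pods transition to **Running** instead of retrying manually, you can optionally stream status updates with the `--watch` flag (press Ctrl+C to stop):

```console
# Stream pod status updates until interrupted with Ctrl+C
kubectl get pods --watch
```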


### Validate Helm Lifecycle
This step confirms that Helm supports the full application lifecycle on Arm64.
## Validate Helm lifecycle
Confirm that Helm supports the full application lifecycle on Arm64.

#### Upgrade the Release
### Upgrade the release

```console
helm upgrade nginx bitnami/nginx
```
Test Helm's ability to update an existing release to a new revision.

You should see an output similar (towards the top of the output...) to:
The output is similar to:
```output
Release "nginx" has been upgraded. Happy Helming!
```
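
If you want to exercise the upgrade path with an actual configuration change, one option is to pass a value override (the `service.type` chart value shown here is the same one used in the benchmarking section) and then inspect the revision history:

```console
# Upgrade the release with a value override, then list its revisions
helm upgrade nginx bitnami/nginx --set service.type=ClusterIP
helm history nginx
```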

#### Uninstall the Release
### Uninstall the release
Ensure Helm can cleanly remove the release and associated resources.

```console
helm uninstall nginx
```

You should see an output similar to:
The output is similar to:
```output
release "nginx" uninstalled
```
This confirms the successful execution of **install**, **upgrade**, and **delete** workflows using Helm on Arm64.
Helm is fully functional on the Arm64 Kubernetes cluster and ready for further experimentation or benchmarking.
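
To confirm the cleanup, you can optionally re-run the earlier verification commands; the release should no longer be listed and the NGINX pod should be gone or terminating:

```console
# Verify that the release and its resources have been removed
helm list
kubectl get pods
```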

## What you've accomplished and what's next

You've validated Helm's core functionality by:
- Installing a sample application using Helm charts
- Upgrading an existing release to a new revision
- Uninstalling releases and cleaning up resources
- Verifying that all workflows execute successfully on Arm64

Next, you'll benchmark Helm's performance by measuring concurrent operations and evaluating how well it handles parallel workloads on your Arm64 Kubernetes cluster.
@@ -1,58 +1,53 @@
---
title: Helm Benchmarking
title: Benchmark Helm concurrency on a Google Axion C4A virtual machine
weight: 6

### FIXED, DO NOT MODIFY
layout: learningpathall
---
## Overview

This section explains how to benchmark Helm CLI concurrency on an Arm64-based GCP SUSE virtual machine.

## Helm Benchmark on GCP SUSE Arm64 VM
This guide explains **how to benchmark Helm on an Arm64-based GCP SUSE VM** using only the **Helm CLI**.
Since Helm does not provide built-in performance metrics, we measure **concurrency behavior** by running multiple Helm commands in parallel and recording the total execution time.
Since Helm does not provide built-in performance metrics, concurrency behavior is measured by running multiple Helm commands in parallel and recording the total execution time.
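
The measurement pattern is plain shell job control: start several commands in the background with `&`, `wait` for all of them to finish, and wrap the whole group in `time`. A minimal sketch of the pattern, with placeholder commands, looks like this:

```console
# Generic timing pattern used in this section: run jobs in parallel,
# wait for all of them, and record the total elapsed (real) time
time (
  some-command-1 &
  some-command-2 &
  wait
)
```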

### Prerequisites

{{% notice Note %}} Ensure the local Kubernetes cluster created earlier is running and has sufficient resources to deploy multiple NGINX replicas.{{% /notice %}}

Before starting the benchmark, ensure Helm is installed and the Kubernetes cluster is accessible.

```console
helm version
kubectl get nodes
```

All nodes should be in `Ready` state.
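
If the nodes are not ready yet, you can optionally block until they are (the 120-second timeout shown here is an arbitrary choice):

```console
# Wait until every node reports the Ready condition
kubectl wait --for=condition=Ready nodes --all --timeout=120s
```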


### Add Helm Repository
Helm installs applications using “charts.”
This step tells Helm where to download those charts from and updates its local chart list.
### Add a Helm repository
Helm installs applications using "charts." Configure Helm to download charts from the Bitnami repository and update the local chart index.

```console
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```

### Create Benchmark Namespace
### Create a benchmark namespace
Isolate benchmark workloads from other cluster resources.

```console
kubectl create namespace helm-bench
```
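
You can optionally confirm that the namespace exists before continuing:

```console
# Verify the benchmark namespace was created
kubectl get namespace helm-bench
```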

### Warm-Up Run (Recommended)
This step prepares the cluster by pulling container images and initializing caches.
### Warm-up run (recommended)
Prepare the cluster by pulling container images and initializing caches.

```console
helm install warmup bitnami/nginx \
-n helm-bench \
--set service.type=ClusterIP \
--timeout 10m
```
The first install is usually slower because of following reasons:

- Images must be downloaded.
- Kubernetes initializes internal objects.

This warm-up ensures the real benchmark measures Helm performance, not setup overhead.
The first install is usually slower because images must be downloaded and Kubernetes needs to initialize internal objects. This warm-up run reduces image-pull and initialization overhead so the benchmark focuses more on Helm CLI concurrency and Kubernetes API behavior.

You should see output (near the top of the output) that is similar to:
```output
@@ -77,8 +72,7 @@ helm uninstall warmup -n helm-bench
{{% notice Note %}}
Helm does not provide native concurrency or throughput metrics. Concurrency benchmarking is performed by executing multiple Helm CLI operations in parallel and measuring overall completion time.
{{% /notice %}}

### Concurrent Helm Install Benchmark (No Wait)
### Concurrent Helm install benchmark (no wait)
Run multiple Helm installs in parallel using background jobs.

```console
@@ -99,7 +93,7 @@ What this measures:

* Helm concurrency handling
* Kubernetes API responsiveness
* Arm64 client-side performance
* Helm CLI client-side execution behavior on Arm64

You should see an output similar to:
```output
@@ -108,12 +102,9 @@ user 0m12.798s
sys 0m0.339s
```
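
If you want to reproduce a comparable run, the sketch below shows one way to launch five parallel installs without waiting for readiness. The `bench-1` through `bench-5` release names are illustrative; the chart, namespace, and flags match the warm-up step above:

```console
# Hypothetical reproduction: five background installs timed as one group
time (
  for i in 1 2 3 4 5; do
    helm install bench-$i bitnami/nginx \
      -n helm-bench \
      --set service.type=ClusterIP \
      --timeout 10m &
  done
  wait
)
```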

### Verify Deployments

This confirms:
### Verify deployments

- Helm reports that all components were installed successfully
- Kubernetes actually created and started the applications
Confirm that Helm reports all components were installed successfully and that Kubernetes created and started the applications:

```console
helm list -n helm-bench
@@ -125,8 +116,8 @@ Expected:
* All releases in `deployed` state
* Pods in `Running` status

### Concurrent Helm Install Benchmark (With `--wait`)
This benchmark includes workload readiness time.
### Concurrent Helm install benchmark (with --wait)
Run a benchmark that includes workload readiness time.

```console
time (
@@ -141,34 +132,44 @@
)
```

What this measures:

* Helm concurrency plus scheduler and image-pull contention
* End-to-end readiness impact
Measure Helm concurrency combined with scheduler and image-pull contention to understand end-to-end readiness impact.

You should see an output similar to:
The output is similar to:
```output
real 0m12.924s
user 0m7.333s
sys 0m0.312s
```
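
A comparable sketch for the `--wait` variant, using three parallel installs and a 15-minute timeout to match the summary table below (release names again illustrative), is:

```console
# Hypothetical reproduction: three installs that block until workloads are Ready
time (
  for i in 1 2 3; do
    helm install bench-wait-$i bitnami/nginx \
      -n helm-bench \
      --set service.type=ClusterIP \
      --wait \
      --timeout 15m &
  done
  wait
)
```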

### Metrics to Record
### Metrics to record

- **Total elapsed time**: Overall time taken to complete all installs.
- **Number of parallel installs**: Number of Helm installs run at the same time.
- **Failures**: Any Helm failures or Kubernetes API errors.
- **Pod readiness delay**: Time pods take to become Ready (resource pressure)
- Total elapsed time: overall time taken to complete all installs.
- Number of parallel installs: number of Helm installs run at the same time.
- Failures: any Helm failures or Kubernetes API errors.
- Pod readiness delay: time pods take to become Ready, an indicator of resource pressure (one way to capture this is shown below).
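
One way to capture the pod readiness delay, for example, is to time a `kubectl wait` across the benchmark namespace after the installs return:

```console
# Measure how long it takes every benchmark pod to report Ready
time kubectl wait --for=condition=Ready pods --all -n helm-bench --timeout=15m
```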

### Benchmark summary
Results from the earlier run on the `c4a-standard-4` (4 vCPU, 16 GB memory) Arm64 VM in GCP (SUSE):

| Test Case | Parallel Installs | `--wait` Used | Timeout | Total Time (real) |
| ---------------------------- | ----------------- | ------------- | ------- | ----------------- |
| Parallel Install (No Wait) | 5 | No | 10m | **3.99 s** |
| Parallel Install (With Wait) | 3 | Yes | 15m | **12.92 s** |

Key observations:
- In this configuration, Helm CLI operations complete efficiently on an Arm64-based Axion C4A virtual machine, establishing a baseline for further testing.
- The `--wait` flag significantly increases total execution time because Helm waits for workloads to reach a Ready state, reflecting scheduler and image-pull delays rather than Helm CLI overhead.
- For this baseline test, parallel Helm installs complete with minimal contention, indicating that client-side execution and Kubernetes API handling are not bottlenecks at this scale.
- End-to-end workload readiness dominates total deployment time, showing that cluster resource availability and container image pulls have a greater impact than Helm CLI execution.

## What you've accomplished

You have successfully benchmarked Helm concurrency on a Google Axion C4A Arm64 virtual machine. The benchmarks demonstrated that:

- Helm CLI operations execute efficiently on Arm64 architecture with the Axion processor
- Parallel Helm installs complete in under 4 seconds when not waiting for pod readiness
- Using the `--wait` flag extends deployment time to reflect actual workload initialization
- Kubernetes API and client-side performance scale well under concurrent load
- Image pulling and resource scheduling have more impact on total deployment time than Helm CLI execution

- **Arm64 shows faster Helm execution** for both warm and ready states, indicating efficient CLI and Kubernetes API handling on Arm-based GCP instances.
- **The `--wait` flag significantly increases total execution time** because Helm waits for pods and services to reach a Ready state, revealing scheduler latency and image-pull delays rather than Helm CLI overhead.
- **Parallel Helm installs scale well on Arm64**, with minimal contention observed even at higher concurrency levels.
- **End-to-end workload readiness dominates benchmark results**, showing that cluster resource availability and container image pulls have a greater impact on total deployment time than Helm CLI execution.
These results establish a performance baseline for deploying containerized workloads with Helm on Arm64-based cloud infrastructure, helping you make informed decisions about deployment strategies and resource allocation.