diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/_index.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/_index.md
index 593e1b801..b3968c92b 100644
--- a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/_index.md
@@ -1,25 +1,23 @@
---
-title: Deploy Helm on Google Cloud C4A (Arm-based Axion VMs)
+title: Install and validate Helm on Google Cloud C4A Arm-based VMs

-minutes_to_complete: 30
+minutes_to_complete: 45

-draft: true
-cascade:
- draft: true
+who_is_this_for: This is an introductory topic intended for developers who want to get hands-on experience using Helm on Linux Arm64 systems, specifically Google Cloud C4A virtual machines powered by Axion processors.

-who_is_this_for: This learning path is intended for software developers deploying and optimizing Helm on Linux/Arm64 environments, specifically using Google Cloud C4A virtual machines powered by Axion processors.

learning_objectives:
- - Provision an Arm-based SUSE SLES virtual machine on Google Cloud (C4A with Axion processors)
+ - Provision an Arm-based SUSE Linux Enterprise Server (SLES) virtual machine on Google Cloud (C4A with Axion processors)
 - Install Helm and kubectl on a SUSE Arm64 (C4A) instance
 - Create and validate a local Kubernetes cluster (KinD) on Arm64
 - Verify Helm functionality by performing install, upgrade, and uninstall workflows
- - Benchmark Helm concurrency behavior using parallel Helm CLI operations on Arm64
+ - Observe Helm behavior under concurrent CLI operations on an Arm64-based Kubernetes cluster

prerequisites:
 - A [Google Cloud Platform (GCP)](https://cloud.google.com/free) account with billing enabled
 - Basic familiarity with [Kubernetes concepts](https://kubernetes.io/docs/concepts/)
 - Basic understanding of [Helm](https://helm.sh/docs/topics/architecture/) and Kubernetes manifests
+ - Familiarity with basic Linux command-line usage

author: Pareena Verma

@@ -35,6 +33,7 @@ tools_software_languages:
- Helm
- Kubernetes
- KinD
+ - kubectl

operatingsystems:
- Linux
diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/background.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/background.md
index 92424cf68..15439ea56 100644
--- a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/background.md
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/background.md
@@ -1,27 +1,25 @@
---
-title: Getting started with Helm on Google Axion C4A (Arm Neoverse-V2)
+title: Get started with Helm on Google Axion C4A (Arm-based)

weight: 2
layout: "learningpathall"
---

-## Google Axion C4A Arm instances in Google Cloud
+## Explore Google Axion C4A instances in Google Cloud

-Google Axion C4A is a family of Arm-based virtual machines built on Google’s custom Axion CPU, which is based on Arm Neoverse-V2 cores. Designed for high-performance and energy-efficient computing, these virtual machines offer strong performance for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications.
+Google Axion C4A is a family of Arm-based virtual machines built on Google’s custom Axion CPU, which is based on Arm Neoverse-V2 cores. Designed to deliver high performance with improved energy efficiency, these virtual machines are well suited to modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications.
-The C4A series provides a cost-effective alternative to x86 virtual machines while leveraging the scalability and performance benefits of the Arm architecture in Google Cloud. +The C4A series provides an Arm-based alternative to x86 virtual machines, enabling developers to evaluate cost, performance, and efficiency trade-offs in Google Cloud. For Kubernetes users, Axion C4A instances provide a practical way to run Arm-native clusters and validate tooling such as Helm on modern cloud infrastructure. -To learn more about Google Axion, refer to the [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu) blog. +To learn more about Google Axion, see the Google blog [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu). -## Helm +## Explore Helm -Helm is the package manager for Kubernetes that simplifies application deployment, upgrades, rollbacks, and lifecycle management using reusable **charts**. +Helm is the package manager for Kubernetes. It simplifies application deployment, upgrades, rollbacks, and lifecycle management by packaging Kubernetes resources into reusable charts. -It allows teams to deploy applications consistently across environments and automate Kubernetes workflows. +Helm runs as a lightweight CLI that interacts directly with the Kubernetes API. Because it is architecture-agnostic, it works consistently across x86 and Arm64 clusters, including those running on Google Axion C4A instances. -Helm runs as a lightweight CLI and integrates directly with the Kubernetes API, making it well-suited for Arm-based platforms such as Google Axion C4A. +In this Learning Path, you use Helm to deploy and manage applications on an Arm-based Kubernetes environment and verify common workflows such as install, upgrade, and uninstall operations. -It works efficiently on both x86 and Arm64 architectures and is widely used in production Kubernetes environments. - -Learn more at the official [Helm website](https://helm.sh/). +For more information, see the [Helm website](https://helm.sh/). diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/baseline.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/baseline.md index b74848cf0..c0a637020 100644 --- a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/baseline.md +++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/baseline.md @@ -1,15 +1,15 @@ --- -title: Helm Baseline Testing on Google Axion C4A Arm Virtual Machine +title: Validate Helm workflows on a Google Axion C4A virtual machine weight: 5 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Helm Baseline Testing on GCP SUSE VMs -This guide walks you through baseline testing to confirm that Helm works correctly on an Arm64-based Kubernetes cluster by validating core workflows such as install, upgrade, and uninstall. +## Overview +This section walks you through baseline testing to confirm that Helm works correctly on an Arm64-based Kubernetes cluster by validating core workflows such as install, upgrade, and uninstall. -### Add Helm Repository +## Add Helm repository Add the Bitnami Helm chart repository and update the local index: ```console @@ -25,7 +25,7 @@ Hang tight while we grab the latest from your chart repositories... Update Complete. 
⎈Happy Helming!⎈ ``` -### Install a Sample Application +## Install a sample application Install a sample NGINX application using a Helm chart: ```console @@ -33,7 +33,7 @@ helm install nginx bitnami/nginx ``` Deploy a simple test app to validate that Helm can create releases on the cluster. -You should see an output that contains text similar to this (please ignore any WARNINGS you receive): +The output is similar to the following (warnings can be safely ignored as they don't affect functionality): ```output NAME: nginx LAST DEPLOYED: Wed Dec 3 07:34:04 2025 @@ -48,7 +48,7 @@ APP VERSION: 1.29.3 ``` -### Validate Deployment +## Validate deployment Verify that the Helm release is created: ```console @@ -57,7 +57,7 @@ helm list Confirm Helm recorded the release and that the deployment exists. -You should see an output similar to: +The output is similar to: ```output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nginx default 1 2025-12-09 21:04:15.944165326 +0000 UTC deployed nginx-22.3.3 1.29.3 @@ -69,7 +69,7 @@ Check Kubernetes resources: kubectl get pods kubectl get svc ``` -You should see an output similar to: +The output is similar to: ```output NAME READY STATUS RESTARTS AGE nginx-7b9564dc4b-2ghkw 1/1 Running 0 3m5s @@ -78,34 +78,42 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) kubernetes ClusterIP 10.96.0.1 443/TCP 4m28s nginx LoadBalancer 10.96.216.137 80:32708/TCP,443:31052/TCP 3m6s ``` -All pods should be in the **Running** state. If the pods are in **Pending** state, please wait a bit and retry the commands above. +All pods should be in the **Running** state. If pods are in **Pending** state, wait 30-60 seconds for container images to download, then retry the commands above. -### Validate Helm Lifecycle -This step confirms that Helm supports the full application lifecycle on Arm64. +## Validate Helm lifecycle +Confirm that Helm supports the full application lifecycle on Arm64. -#### Upgrade the Release +### Upgrade the release ```console helm upgrade nginx bitnami/nginx ``` Test Helm's ability to update an existing release to a new revision. -You should see an output similar (towards the top of the output...) to: +The output is similar to: ```output Release "nginx" has been upgraded. Happy Helming! ``` -#### Uninstall the Release +### Uninstall the release Ensure Helm can cleanly remove the release and associated resources. ```console helm uninstall nginx ``` -You should see an output similar to: +The output is similar to: ```output release "nginx" uninstalled ``` -This confirms the successful execution of **install**, **upgrade**, and **delete** workflows using Helm on Arm64. -Helm is fully functional on the Arm64 Kubernetes cluster and ready for further experimentation or benchmarking. + +## What you've accomplished and what's next + +You've validated Helm's core functionality by: +- Installing a sample application using Helm charts +- Upgrading an existing release to a new revision +- Uninstalling releases and cleaning up resources +- Verifying that all workflows execute successfully on Arm64 + +Next, you'll benchmark Helm's performance by measuring concurrent operations and evaluating how well it handles parallel workloads on your Arm64 Kubernetes cluster. 
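+
+Before moving on, you can optionally replay the same lifecycle as a single script. The following is a minimal sketch, assuming the `bitnami` repository added earlier; the release name is illustrative:
+
+```bash
+#!/usr/bin/env bash
+# Sketch: exercise the install -> upgrade -> uninstall lifecycle end to end.
+set -euo pipefail
+
+RELEASE=lifecycle-check   # illustrative release name
+CHART=bitnami/nginx
+
+helm install "$RELEASE" "$CHART"   # creates revision 1
+helm status "$RELEASE"             # confirms the release is deployed
+helm upgrade "$RELEASE" "$CHART"   # creates revision 2
+helm history "$RELEASE"            # lists both revisions
+helm uninstall "$RELEASE"          # removes the release and its resources
+```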
diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/benchmarking.md index a8bd169d8..e1aa5e051 100644 --- a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/benchmarking.md +++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/benchmarking.md @@ -1,45 +1,45 @@ --- -title: Helm Benchmarking +title: Benchmark Helm concurrency on a Google Axion C4A virtual machine weight: 6 ### FIXED, DO NOT MODIFY layout: learningpathall --- +## Overview +This section explains how to benchmark Helm CLI concurrency on an Arm64-based GCP SUSE virtual machine. -## Helm Benchmark on GCP SUSE Arm64 VM -This guide explains **how to benchmark Helm on an Arm64-based GCP SUSE VM** using only the **Helm CLI**. -Since Helm does not provide built-in performance metrics, we measure **concurrency behavior** by running multiple Helm commands in parallel and recording the total execution time. +Since Helm does not provide built-in performance metrics, concurrency behavior is measured by running multiple Helm commands in parallel and recording the total execution time. ### Prerequisites + +{{% notice Note %}} Ensure the local Kubernetes cluster created earlier is running and has sufficient resources to deploy multiple NGINX replicas.{{% /notice %}} + Before starting the benchmark, ensure Helm is installed and the Kubernetes cluster is accessible. ```console helm version kubectl get nodes ``` - All nodes should be in `Ready` state. - -### Add Helm Repository -Helm installs applications using “charts.” -This step tells Helm where to download those charts from and updates its local chart list. +### Add a Helm repository +Helm installs applications using "charts." Configure Helm to download charts from the Bitnami repository and update the local chart index. ```console helm repo add bitnami https://charts.bitnami.com/bitnami helm repo update ``` -### Create Benchmark Namespace +### Create a benchmark namespace Isolate benchmark workloads from other cluster resources. ```console kubectl create namespace helm-bench ``` -### Warm-Up Run (Recommended) -This step prepares the cluster by pulling container images and initializing caches. +### Warm-up run (recommended) +Prepare the cluster by pulling container images and initializing caches. ```console helm install warmup bitnami/nginx \ @@ -47,12 +47,7 @@ helm install warmup bitnami/nginx \ --set service.type=ClusterIP \ --timeout 10m ``` -The first install is usually slower because of following reasons: - -- Images must be downloaded. -- Kubernetes initializes internal objects. - -This warm-up ensures the real benchmark measures Helm performance, not setup overhead. +The first install is usually slower because images must be downloaded and Kubernetes needs to initialize internal objects. This warm-up run reduces image-pull and initialization overhead so the benchmark focuses more on Helm CLI concurrency and Kubernetes API behavior. You should see output (near the top of the output) that is similar to: ```output @@ -77,8 +72,7 @@ helm uninstall warmup -n helm-bench {{% notice Note %}} Helm does not provide native concurrency or throughput metrics. Concurrency benchmarking is performed by executing multiple Helm CLI operations in parallel and measuring overall completion time. {{% /notice %}} - -### Concurrent Helm Install Benchmark (No Wait) +### Concurrent Helm install benchmark (no wait) Run multiple Helm installs in parallel using background jobs. 
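+
+In outline, the pattern wraps several backgrounded `helm install` commands in a single `time ( ... wait )` block. The sketch below is illustrative only: the release names and loop count are assumptions, while the chart, namespace, and flags match the warm-up step:
+
+```bash
+# Sketch: time five concurrent Helm installs without waiting for readiness.
+time (
+  for i in 1 2 3 4 5; do
+    helm install "bench-$i" bitnami/nginx \
+      --namespace helm-bench \
+      --set service.type=ClusterIP \
+      --timeout 10m &   # launch each install as a background job
+  done
+  wait                  # return once every background install has finished
+)
+```
+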
```console @@ -99,7 +93,7 @@ What this measures: * Helm concurrency handling * Kubernetes API responsiveness -* Arm64 client-side performance +* Helm CLI client-side execution behavior on Arm64 You should see an output similar to: ```output @@ -108,12 +102,9 @@ user 0m12.798s sys 0m0.339s ``` -### Verify Deployments - -This confirms: +### Verify deployments -- Helm reports that all components were installed successfully -- Kubernetes actually created and started the applications +Confirm that Helm reports all components were installed successfully and that Kubernetes created and started the applications: ```console helm list -n helm-bench @@ -125,8 +116,8 @@ Expected: * All releases in `deployed` state * Pods in `Running` status -### Concurrent Helm Install Benchmark (With `--wait`) -This benchmark includes workload readiness time. +### Concurrent Helm install benchmark (with --wait) +Run a benchmark that includes workload readiness time. ```console time ( @@ -141,34 +132,44 @@ wait ) ``` -What this measures: - -* Helm concurrency plus scheduler and image-pull contention -* End-to-end readiness impact +Measure Helm concurrency combined with scheduler and image-pull contention to understand end-to-end readiness impact. -You should see an output similar to: +The output is similar to: ```output real 0m12.924s user 0m7.333s sys 0m0.312s ``` -### Metrics to Record +### Metrics to record -- **Total elapsed time**: Overall time taken to complete all installs. -- **Number of parallel installs**: Number of Helm installs run at the same time. -- **Failures**: Any Helm failures or Kubernetes API errors. -- **Pod readiness delay**: Time pods take to become Ready (resource pressure) +- Total elapsed time: overall time taken to complete all installs. +- Number of parallel installs: number of Helm installs run at the same time. +- Failures: any Helm failures or Kubernetes API errors. +- Pod readiness delay: time pods take to become Ready (resource pressure) ### Benchmark summary Results from the earlier run on the `c4a-standard-4` (4 vCPU, 16 GB memory) Arm64 VM in GCP (SUSE): | Test Case | Parallel Installs | `--wait` Used | Timeout | Total Time (real) | | ---------------------------- | ----------------- | ------------- | ------- | ----------------- | -| Parallel Install (No Wait) | 5 | No | 10m | **3.99 s** | -| Parallel Install (With Wait) | 3 | Yes | 15m | **12.92 s** | +| Parallel Install (No Wait) | 5 | No | 10m | **3.99 s** | +| Parallel Install (With Wait) | 3 | Yes | 15m | **12.92 s** | + +Key observations: +- In this configuration, Helm CLI operations complete efficiently on an Arm64-based Axion C4A virtual machine, establishing a baseline for further testing. +- The --wait flag significantly increases total execution time because Helm waits for workloads to reach a Ready state, reflecting scheduler and image-pull delays rather than Helm CLI overhead. +- For this baseline test, parallel Helm installs complete with minimal contention, indicating that client-side execution and Kubernetes API handling are not bottlenecks at this scale. +- End-to-end workload readiness dominates total deployment time, showing that cluster resource availability and container image pulls have a greater impact than Helm CLI execution. + +## What you've accomplished + +You have successfully benchmarked Helm concurrency on a Google Axion C4A Arm64 virtual machine. 
The benchmarks demonstrated that:
+
+- Helm CLI operations execute efficiently on Arm64 architecture with the Axion processor
+- Parallel Helm installs complete in under 4 seconds when not waiting for pod readiness
+- Using the `--wait` flag extends deployment time to reflect actual workload initialization
+- Kubernetes API handling and client-side execution keep pace at this level of concurrency
+- Image pulling and resource scheduling have more impact on total deployment time than Helm CLI execution

-- **Arm64 shows faster Helm execution** for both warm and ready states, indicating efficient CLI and Kubernetes API handling on Arm-based GCP instances.
-- **The `--wait` flag significantly increases total execution time** because Helm waits for pods and services to reach a Ready state, revealing scheduler latency and image-pull delays rather than Helm CLI overhead.
-- **Parallel Helm installs scale well on Arm64**, with minimal contention observed even at higher concurrency levels.
-- **End-to-end workload readiness dominates benchmark results**, showing that cluster resource availability and container image pulls
+These results establish a performance baseline for deploying containerized workloads with Helm on Arm64-based cloud infrastructure, helping you make informed decisions about deployment strategies and resource allocation.
diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/installation.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/installation.md
index 3d8e1828d..93cbabc1e 100644
--- a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/installation.md
+++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/installation.md
@@ -6,27 +6,32 @@ weight: 4

layout: learningpathall
---

-## Install Helm on GCP VM
-This section covers preparing a SUSE Arm64 system and installing the required tools to work with Helm using a local Kubernetes cluster created with KinD.
+## Overview

-### System Preparation
-Update the system and install basic dependencies:
+In this section, you prepare a SUSE Linux Arm64 virtual machine to work with Helm by installing Docker, kubectl, Helm, and KinD. You then create and verify a local Kubernetes cluster that you use in later sections of this Learning Path to validate Helm workflows.
+
+## Prepare the system
+
+Update the system packages and install dependencies:
```console
sudo zypper refresh
sudo zypper update -y
sudo zypper install -y curl git tar gzip
```

-### Enable SUSE Containers Module
-This enables SUSE’s official container support, so Docker and container tools can work properly.
+
+## Enable SUSE Containers Module
+Enable the SUSE Containers Module to ensure that Docker and container-related tools are fully supported.
``` console
sudo SUSEConnect -p sle-module-containers/15.5/arm64
sudo SUSEConnect --list-extensions | grep Containers
```
-You should see "Activated" as part of the output from the above commands.
+Verify that the output shows the Containers module as **Activated**.

-### Install Docker
-Docker is required to run KinD and Kubernetes components as containers. This step installs Docker, starts it, and allows your user to run Docker without sudo.
+## Install Docker
+Docker is required to run KinD and the Kubernetes control plane components.
+ +Install Docker, start the service, and add your user to the docker group so that Docker commands can be run without sudo: ``` console sudo zypper refresh sudo zypper install -y docker @@ -35,20 +40,20 @@ sudo usermod -aG docker $USER exit ``` -Next, re-open a new shell into your VM and type the following: +Exit the current shell and reconnect to the virtual machine so that the group membership change takes effect. Then verify that Docker is running: ```console docker ps ``` -You should see the following output: +Output similar to the following indicates that Docker is installed and accessible: ```output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ``` -### Install kubectl -This step installs kubectl, the command-line tool used to interact with Kubernetes clusters, compiled for the Arm64 architecture. +## Install kubectl +Install kubectl, the command-line tool for interacting with Kubernetes clusters, compiled for the Arm64 architecture. ```console curl -LO https://dl.k8s.io/release/v1.30.1/bin/linux/arm64/kubectl @@ -56,22 +61,22 @@ chmod +x kubectl sudo mv kubectl /usr/local/bin/ ``` -### Verify Installation +## Verify kubectl -This step confirms that `kubectl` is installed correctly and accessible from the command line. +Confirm that kubectl is installed and accessible from the command line: ```console kubectl version --client ``` -You should see an output similar to: +Output similar to the following indicates that kubectl is installed correctly: ```output Client Version: v1.30.1 Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3 ``` -### Install Helm -This step installs Helm using the official Helm installation script, ensuring you get a verified and up-to-date release. +## Install Helm +Install Helm using the official Helm installation script to get a verified and up-to-date release. ```console curl -sSfL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh @@ -79,8 +84,8 @@ chmod 755 ./get_helm.sh ./get_helm.sh ``` -### Verify Installation -This step confirms that Helm is installed correctly and ready to be used. +## Verify Helm +Confirm that Helm is installed correctly and ready to use. ```console helm version @@ -91,8 +96,9 @@ You should see an output similar to: version.BuildInfo{Version:"v3.19.2", GitCommit:"8766e718a0119851f10ddbe4577593a45fadf544", GitTreeState:"clean", GoVersion:"go1.24.9"} ``` -### Create a Local Kubernetes Cluster (KinD) -This step installs KinD (Kubernetes-in-Docker), which allows you to run a lightweight Kubernetes cluster locally on your Arm64 VM. +## Install KinD + +Install KinD (Kubernetes-in-Docker) to run a lightweight Kubernetes cluster locally on your Arm64 virtual machine: ```console curl -Lo kind https://kind.sigs.k8s.io/dl/v0.30.0/kind-linux-arm64 @@ -100,15 +106,15 @@ chmod +x kind sudo mv kind /usr/local/bin/ ``` -**Create a local Kubernetes cluster:** +## Create a local Kubernetes cluster -This step creates a local Kubernetes cluster named helm-lab that will be used to deploy Helm charts. +Create a local Kubernetes cluster named helm-lab that you use to deploy Helm charts: ```console kind create cluster --name helm-lab ``` -### Verify Cluster Status +## Verify cluster status This step verifies that the Kubernetes cluster is operating correctly and is fully prepared to run workloads. ```console @@ -120,6 +126,15 @@ You should see an output similar to: NAME STATUS ROLES AGE VERSION helm-lab-control-plane Ready control-plane 23h v1.34.0 ``` -The node should be in the **Ready** state. 
If not, please retry the command again. +The node should be in the **Ready** state. If not, retry the command after waiting 30 seconds for the cluster to fully initialize. + +You now have a fully working local Kubernetes cluster running on an Arm64-based virtual machine. + +## What you've accomplished and what's next + +You've successfully set up your development environment by: +- Installing Docker, kubectl, and Helm on your Arm64 SUSE VM +- Creating a local Kubernetes cluster using KinD +- Verifying that all components are working correctly -You now have a fully working local Kubernetes environment on Arm64, ready for deploying applications using Helm. +Next, you'll validate Helm functionality by performing install, upgrade, and uninstall workflows on your Arm64 Kubernetes cluster. diff --git a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/instance.md b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/instance.md index e986d3a1b..b1e82e459 100644 --- a/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/instance.md +++ b/content/learning-paths/servers-and-cloud-computing/helm-on-gcp/instance.md @@ -1,5 +1,5 @@ --- -title: Create a Google Axion C4A Arm virtual machine on GCP +title: Create a Google Axion C4A virtual machine on Google Cloud weight: 3 ### FIXED, DO NOT MODIFY @@ -8,37 +8,39 @@ layout: learningpathall ## Overview -In this section, you will learn how to provision a Google Axion C4A Arm virtual machine on Google Cloud Platform (GCP) using the `c4a-standard-4` (4 vCPUs, 16 GB memory) machine type in the Google Cloud Console. +In this section, you provision a Google Axion C4A virtual machine on Google Cloud Platform (GCP) using the `c4a-standard-4` machine type, which provides 4 vCPUs and 16 GB of memory. {{% notice Note %}} -For support on GCP setup, see the Learning Path [Getting started with Google Cloud Platform](/learning-paths/servers-and-cloud-computing/csp/google/). +For general guidance on setting up a Google Cloud account and project, see the Learning Path [Getting started with Google Cloud Platform](/learning-paths/servers-and-cloud-computing/csp/google/). {{% /notice %}} -## Provision a Google Axion C4A Arm VM in Google Cloud Console +## Provision a Google Axion C4A VM in the Google Cloud Console -To create a virtual machine based on the C4A instance type: -- Navigate to the [Google Cloud Console](https://console.cloud.google.com/). -- Go to **Compute Engine > VM Instances** and select **Create Instance**. +To create a virtual machine using the C4A instance type: + +- Open the [Google Cloud Console](https://console.cloud.google.com/). +- Go to **Compute Engine** > **VM instances**, and then select **Create instance**. - Under **Machine configuration**: - - Populate fields such as **Instance name**, **Region**, and **Zone**. - - Set **Series** to `C4A`. - - Select `c4a-standard-4` for machine type. + - Specify an **Instance name**, **Region**, and **Zone**. + - Set **Series** to **C4A**. + - Select **c4a-standard-4** as the machine type. 
- ![Create a Google Axion C4A Arm virtual machine in the Google Cloud Console with c4a-standard-4 selected alt-text#center](images/gcp-vm.png "Creating a Google Axion C4A Arm virtual machine in Google Cloud Console")
+![Google Cloud Console VM creation page with the C4A machine series selected and the c4a-standard-4 machine type highlighted alt-text#center](images/gcp-vm.png "Creating a Google Axion C4A virtual machine in the Google Cloud Console")

+- Under **OS and storage**, select **Change**, and then choose an Arm64-based operating system image.
+  - For this Learning Path, select **SUSE Linux Enterprise Server**.
+  - For the license type, choose **Pay as you go**.
+  - Increase **Size (GB)** from **10** to **50** to allocate sufficient disk space.
+  - Select **Choose** to apply the changes.
+- Under **Networking**, enable **Allow HTTP traffic** and **Allow HTTPS traffic** to simplify access for later Kubernetes testing.
+- Select **Create** to launch the virtual machine.

-- Under **OS and Storage**, select **Change**, then choose an Arm64-based OS image. For this Learning Path, use **SUSE Linux Enterprise Server**.
-- If using use **SUSE Linux Enterprise Server**. Select "Pay As You Go" for the license type.
-- Edit the Disk size ("Size(GB)" Textfield...) below and change it from "10" to "50" to increase the disk size of the VM to 50 GB...
-- Once appropriately selected and configured, please Click **Select**.
-- Under **Networking**, enable **Allow HTTP traffic** as well as **Allow HTTPS traffic**.
-- Click **Create** to launch the instance.
-- Once created, you should see a "SSH" option to the right in your list of VM instances. Click on this to launch a SSH shell into your VM instance:
+After the instance starts, select **SSH** next to the VM in the instance list to open a browser-based terminal session.

-![Invoke a SSH session via your browser alt-text#center](images/gcp-ssh.png "Invoke a SSH session into your running VM instance")
+![Google Cloud Console VM instances list with the SSH button highlighted for a running C4A instance alt-text#center](images/gcp-ssh.png "Connecting to a running C4A virtual machine using SSH")

-- A window from your browser should come up and you should now see a shell into your VM instance:
+A new browser window opens with a terminal connected to your virtual machine.

-![Terminal Shell in your VM instance alt-text#center](images/gcp-shell.png "Terminal shell in your VM instance")
+![Browser-based terminal window showing a command prompt on a SUSE Linux VM running on Google Axion C4A alt-text#center](images/gcp-shell.png "Terminal session connected to the virtual machine")

-Next, let's install Helm!
\ No newline at end of file
+Next, install Helm on your virtual machine.
\ No newline at end of file