185 changes: 185 additions & 0 deletions .github/workflows/azure-dsh-testing.yaml
@@ -0,0 +1,185 @@
name: "[Azure] DevZero self-hosted deployment"

on:
push:
branches:
- garvit/azure-tf
workflow_dispatch:

jobs:
setup-and-test:
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write
steps:
- name: Checkout Repository
uses: actions/checkout@v4

- name: 'Az CLI login'
uses: azure/login@v1.6.1
with:
client-id: ${{ secrets.AZURE_CLIENT_ID }}
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

- name: Set Azure env for Terraform providers
run: |
echo "ARM_CLIENT_ID=${{ secrets.AZURE_CLIENT_ID }}" >> $GITHUB_ENV
echo "ARM_TENANT_ID=${{ secrets.AZURE_TENANT_ID }}" >> $GITHUB_ENV
echo "ARM_SUBSCRIPTION_ID=${{ secrets.AZURE_SUBSCRIPTION_ID }}" >> $GITHUB_ENV

- name: Set up Terraform
uses: hashicorp/setup-terraform@v3
with:
terraform_version: "1.11.3"

- name: Install yq
run: |
sudo wget https://github.com/mikefarah/yq/releases/download/v4.35.2/yq_linux_amd64 -O /usr/local/bin/yq
sudo chmod +x /usr/local/bin/yq

- name: Add SHORT_SHA Environment Variable
id: short-sha
shell: bash
run: echo "SHORT_SHA=$(git rev-parse --short HEAD)" >> $GITHUB_ENV

- name: Generate unique job identifier
id: job-identifier
shell: bash
run: |
SAFE_ID=$(echo "gha${SHORT_SHA}" | tr -cd 'a-z0-9' | cut -c1-20)
echo "JOB_IDENTIFIER=$SAFE_ID" >> $GITHUB_ENV
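The step above forces the identifier into a form that is safe for Azure resource names (lowercase alphanumerics, length-capped). The effect can be sanity-checked locally; the short SHA below is made up:

```shell
# Mirror of the sanitization above: keep only [a-z0-9], then cap at 20 characters.
SHORT_SHA="3F9-ab2C"   # hypothetical short SHA
SAFE_ID=$(echo "gha${SHORT_SHA}" | tr -cd 'a-z0-9' | cut -c1-20)
echo "$SAFE_ID"        # uppercase letters and '-' are stripped
```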

- name: Add Backend Override (Base Cluster)
run: |
cd terraform/examples/azure/base-cluster
cat <<EOF > backend_override.tf
terraform {
backend "azurerm" {
resource_group_name = "dev-test"
storage_account_name = "dshterraformstate"
container_name = "tfstate"
key = "${JOB_IDENTIFIER}/base-cluster/terraform.tfstate"
}
}
EOF
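Because the heredoc delimiter above is unquoted, `${JOB_IDENTIFIER}` is expanded when the file is written, so each run gets its own isolated state key. A minimal local check of that behavior (file path and value are illustrative):

```shell
# An unquoted EOF delimiter means the shell expands variables inside the heredoc.
JOB_IDENTIFIER="gha39ab2"   # hypothetical job id
cat <<EOF > /tmp/backend_override_demo.tf
key = "${JOB_IDENTIFIER}/base-cluster/terraform.tfstate"
EOF
grep "key" /tmp/backend_override_demo.tf   # shows the expanded per-job path
```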

- name: Initialize and Apply Terraform (Base Cluster)
run: |
cd terraform/examples/azure/base-cluster
terraform init
terraform apply -auto-approve -var="cluster_name=$JOB_IDENTIFIER"

- name: Configure Kubernetes Access
run: |
az aks get-credentials --resource-group dev-test --name $JOB_IDENTIFIER

- name: Set up Kata
run: |
cd terraform/examples/azure/base-cluster
kubectl apply -f kata-sa.yaml
kubectl apply -f daemonset.yaml
for NODE in $(kubectl get nodes -o name); do
kubectl label "$NODE" kata-runtime=running --overwrite
kubectl label "$NODE" node-role.kubernetes.io/kata-devpod-node=1 --overwrite
done

- name: Deploy Control Plane Dependencies (and modify domains)
run: |
DEFAULT_SC=$(kubectl get sc -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}')
kubectl get sc "$DEFAULT_SC" -o yaml | \
sed "s/name: $DEFAULT_SC/name: gp2/" | \
sed "/^ uid:/d; /^ resourceVersion:/d; /^ creationTimestamp:/d" | \
kubectl apply -f -

cd charts/dz-control-plane-deps

find . -name "values.yaml" -print0 | while IFS= read -r -d '' file; do
yq e -i '.. |= select(tag == "!!str" and test("example\\.com")) |= sub("example\\.com"; env(JOB_IDENTIFIER) + ".ci.selfzero.net")' "$file"
done

make install
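The first pipeline in this step clones the default StorageClass under the name `gp2`, stripping the server-assigned metadata fields that `kubectl apply` would otherwise reject. Its effect can be seen on a trimmed-down manifest (sample content only, no cluster needed):

```shell
# Minimal stand-in for the output of `kubectl get sc <name> -o yaml`.
cat <<'EOF' > /tmp/sc-demo.yaml
metadata:
  name: default
  uid: 1234-abcd
  resourceVersion: "42"
  creationTimestamp: "2024-01-01T00:00:00Z"
EOF
sed "s/name: default/name: gp2/" /tmp/sc-demo.yaml | \
  sed "/^  uid:/d; /^  resourceVersion:/d; /^  creationTimestamp:/d"
# Only `metadata:` and the renamed `name: gp2` line remain.
```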

- name: Update values.yaml for dz-control-plane
env:
BACKEND_LICENSE_KEY: ${{ secrets.BACKEND_LICENSE_KEY }}
run: |
# Set credentials.enable to false because the Docker Hub credentials are fed to the Kubernetes API explicitly below.
# Set image.pullSecrets to [] so the deployments don't try to pull their OCI images from this registry.
# The backend license key is required for the install.

yq e '.credentials.enable = false | .backend.licenseKey = strenv(BACKEND_LICENSE_KEY) | .image.pullSecrets = []' -i charts/dz-control-plane/values.yaml

- name: Deploy DevZero Control Plane (after configuring kubernetes to use dockerhub creds, and patching all the deployments to point to the right domain)
env:
DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
run: |
cd charts/dz-control-plane
make add-docker-creds

find . -name "values.yaml" -print0 | while IFS= read -r -d '' file; do
yq e -i '.. |= select(tag == "!!str" and test("example\\.com")) |= sub("example\\.com"; env(JOB_IDENTIFIER) + ".ci.selfzero.net")' "$file"
done

make install
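The yq expression used in this and the surrounding steps rewrites every string value containing `example.com` to the per-job domain. Its net effect on a single value can be sketched with plain sed (the URL is hypothetical):

```shell
JOB_IDENTIFIER="gha39ab2"              # hypothetical job id
URL="https://api.example.com/health"   # hypothetical values.yaml entry
echo "$URL" | sed "s/example\.com/${JOB_IDENTIFIER}.ci.selfzero.net/"
# -> https://api.gha39ab2.ci.selfzero.net/health
```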

- name: Validate Control Plane
run: |
.github/scripts/dsh-pod-test.sh

- name: Deploy Data Plane Dependencies
run: |
cd charts/dz-data-plane-deps

find . -name "values.yaml" -print0 | while IFS= read -r -d '' file; do
yq e -i '.. |= select(tag == "!!str" and test("example\\.com")) |= sub("example\\.com"; env(JOB_IDENTIFIER) + ".ci.selfzero.net")' "$file"
done

make install

- name: Deploy DevZero Data Plane
run: |
cd charts/dz-data-plane

find . -name "values.yaml" -print0 | while IFS= read -r -d '' file; do
yq e -i '.. |= select(tag == "!!str" and test("example\\.com")) |= sub("example\\.com"; env(JOB_IDENTIFIER) + ".ci.selfzero.net")' "$file"
done

make install

- name: Validate Data Plane
run: |
kubectl get pods -n devzero-self-hosted
kubectl get ingress -n devzero-self-hosted

- name: '[helm] Destroy data-plane'
if: always()
run: |
cd charts/dz-data-plane
make delete

- name: '[helm] Destroy data-plane-deps'
if: always()
run: |
cd charts/dz-data-plane-deps
make delete

- name: '[helm] Destroy control-plane'
if: always()
run: |
cd charts/dz-control-plane
make delete

- name: '[helm] Destroy control-plane-deps'
if: always()
run: |
cd charts/dz-control-plane-deps
make delete

- name: '[terraform] Destroy base-cluster'
if: always()
run: |
cd terraform/examples/azure/base-cluster
terraform destroy -auto-approve -var="cluster_name=$JOB_IDENTIFIER"
24 changes: 24 additions & 0 deletions charts/dz-control-plane-deps/values/vault.yaml
@@ -19,6 +19,21 @@ server:
dataStorage:
enabled: true

# The following is an example of how to configure the Vault server to use Azure Key Vault for auto-unsealing.
# Before using this configuration, you need to create a secret in the Kubernetes cluster that contains the Azure Key Vault credentials:
# kubectl create secret generic vault-azure-creds --from-literal=AZURE_TENANT_ID=<TENANT_ID> --from-literal=AZURE_CLIENT_ID=<CLIENT_ID> --from-literal=AZURE_CLIENT_SECRET=<CLIENT_SECRET> -n devzero

# extraSecretEnvironmentVars:
# - envName: AZURE_CLIENT_ID
# secretName: vault-azure-creds
# secretKey: AZURE_CLIENT_ID
# - envName: AZURE_CLIENT_SECRET
# secretName: vault-azure-creds
# secretKey: AZURE_CLIENT_SECRET
# - envName: AZURE_TENANT_ID
# secretName: vault-azure-creds
# secretKey: AZURE_TENANT_ID

# Disable Vault's pod anti-affinity (which would spread pods across nodes). This allows Vault to run on a single-node cluster.
affinity: ""

@@ -71,6 +86,15 @@ server:
# key_ring = "GCP_KEY_RING"
# crypto_key = "GCP_CRYPTO_KEY"
# }

# This is used to configure the Vault server to use Azure Key Vault for auto-unsealing.
# seal "azurekeyvault" {
# tenant_id = "<vault_tenant_id>"
# client_id = "<vault_sp_client_id>"
# client_secret = "<vault_sp_client_secret>"
# vault_name = "<vault_keyvault_name>"
# key_name = "<vault_key_name>"
# }

ingress:
enabled: true
117 changes: 117 additions & 0 deletions terraform/examples/azure/README.md
@@ -0,0 +1,117 @@
# DevZero Self-Hosted - Terraform Setup - Azure

This document provides a step-by-step guide for provisioning the Azure infrastructure required to self-host the DevZero Control Plane and Data Plane using Terraform.

## Pre-reading

For readers experienced with Terraform deployments at their companies, the [./examples](./examples/) directory contains end-to-end examples of a full DevZero deployment.
If you have your own Terraform environment and want to reuse our modules, refer to the [./modules](./modules/) directory and pick whichever components you need.

## Overview

The `terraform/` directory contains Infrastructure as Code (IaC) configurations that automate the provisioning of essential cloud resources such as VNet, AKS clusters, load balancers, and VPNs.

## Prerequisites

### Tools Required
- [Terraform](https://www.terraform.io/) (for managing infrastructure as code)
- [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) (for interacting with Azure resources)
- Access credentials for your Azure Account

### Permissions Required
Ensure your Azure user or service principal has sufficient permissions to create resources such as VNets, subnets, AKS clusters, VMs, and Key Vaults.

## Infrastructure Setup Guide

### 1. Clone the Repository

```bash
git clone https://github.com/devzero-inc/self-hosted.git
```

### 2. Navigate to the Base Cluster Directory

```bash
cd self-hosted/terraform/examples/azure/base-cluster
```

### 3. Configure Terraform Variables

Update `terraform.tfvars`.

#### Cluster Endpoint Access
- Set `cluster_endpoint_public_access = true` to allow public access.
- Set it to `false` for private access.

### 4. Initialize and Apply Terraform

```bash
terraform init
terraform apply
```

- This will create Azure resources such as VNet, AKS, VM, Key Vault, etc.
- Copy the output values such as `subscription_id`, `resource_group_name`, `location`, and `cluster_name` for the next steps.

## Extending the Cluster

### 5. Navigate to the Cluster Extensions Directory

```bash
cd ../cluster-extensions
```

### 6. Update `terraform.tfvars`

- Add the `subscription_id`, `resource_group_name`, `location`, and `cluster_name` from the previous step.
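For example (all values are illustrative; use the outputs from the base-cluster apply):

```hcl
subscription_id     = "00000000-0000-0000-0000-000000000000"
resource_group_name = "dev-test"
location            = "eastus"
cluster_name        = "devzero-selfhosted"
```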

### 7. Apply Terraform for Storage

```bash
terraform init
terraform apply
```

- This will create the StorageClasses and Azure Files shares.

## Post-Deployment Steps

### 8. Update kubeconfig

```bash
az aks get-credentials --resource-group <resource-group-name> --name <cluster-name>
```

### 9. Install Kata on the AKS Nodes

```bash
kubectl apply -f kata-sa.yaml
kubectl apply -f daemonset.yaml
```

### 10. Add the Labels

```bash
kubectl get nodes
kubectl label node <node-name> kata-runtime=running
kubectl label node <node-name> node-role.kubernetes.io/kata-devpod-node=1
```

Or you can automatically label all your nodes like this:

```bash
for NODE in $(kubectl get nodes -o name); do
kubectl label "$NODE" kata-runtime=running --overwrite
kubectl label "$NODE" node-role.kubernetes.io/kata-devpod-node=1 --overwrite
done
```

### 11. Install DevZero Self-Hosted

Refer to the [Charts README](../charts/README.md) for further steps to deploy the Control Plane and Data Plane.

## Troubleshooting

- Verify cloud credentials and permissions.
- Check Terraform state files for resource management.
- Use `terraform plan` to preview changes before applying.