Restart k8s deployments running new images #102
This change introduces automated deployment processes for both the backend and frontend applications using GitHub Actions. When a pull request is merged and the respective build succeeds, the workflow restarts the corresponding deployment on SAP BTP Kyma, ensuring the systems run the latest images.

Walkthrough
Model: gpt-4o | Prompt Tokens: 707 | Completion Tokens: 119
Here's a supportive code review enhanced by AI assistance. These observations offer guidance and suggestions rather than definitive answers, so please apply what feels most valuable to your work. You remain the expert and decision-maker; AI simply provides additional perspective to help you succeed.
Always critique what AI says. Do not let AI replace your own judgment.
Model: anthropic--claude-4-sonnet | Prompt Tokens: 1475 | Completion Tokens: 1348
```yaml
- name: Set up kubectl
  uses: azure/setup-kubectl@v3
  with:
    version: 'latest'

- name: Configure kubectl for SAP BTP Kyma
  run: |
    mkdir -p ~/.kube
    echo "${{ secrets.KUBECONFIG }}" | base64 -d > ~/.kube/config
    chmod 600 ~/.kube/config

- name: Restart Backend Deployment
  run: |
    echo "🔄 Restarting backend deployment to pull latest image..."
    kubectl rollout restart deployment/stars-backend -n stars
    kubectl rollout status deployment/stars-backend -n stars --timeout=10m
    echo "✅ Backend deployment restarted successfully"

deploy-frontend:
  name: Restart Frontend Deployment
  if: github.event.pull_request.merged && needs.build-frontend.result == 'success'
  needs: [check_version_update, build-frontend]
  runs-on: ubuntu-latest
  steps:
    - name: Set up kubectl
      uses: azure/setup-kubectl@v3
      with:
        version: 'latest'

    - name: Configure kubectl for SAP BTP Kyma
      run: |
        mkdir -p ~/.kube
        echo "${{ secrets.KUBECONFIG }}" | base64 -d > ~/.kube/config
        chmod 600 ~/.kube/config
```
Code Duplication: The kubectl setup and configuration steps are duplicated between the two jobs. Consider extracting this into a reusable action or composite action to follow DRY principles.
```yaml
# Create a composite action in .github/actions/setup-kubectl/action.yml.
# Note: composite actions cannot read the secrets context directly,
# so the kubeconfig is passed in as an input.
name: 'Setup kubectl for SAP BTP Kyma'
description: 'Sets up kubectl and configures it for SAP BTP Kyma'
inputs:
  kubeconfig:
    description: 'Base64-encoded kubeconfig'
    required: true
runs:
  using: 'composite'
  steps:
    - name: Set up kubectl
      uses: azure/setup-kubectl@v3
      with:
        version: 'latest'
    - name: Configure kubectl for SAP BTP Kyma
      shell: bash
      run: |
        mkdir -p ~/.kube
        echo "${{ inputs.kubeconfig }}" | base64 -d > ~/.kube/config
        chmod 600 ~/.kube/config
```

```yaml
# Then use it in both jobs:
- name: Setup kubectl
  uses: ./.github/actions/setup-kubectl
  with:
    kubeconfig: ${{ secrets.KUBECONFIG }}
```

```yaml
- name: Configure kubectl for SAP BTP Kyma
  run: |
    mkdir -p ~/.kube
    echo "${{ secrets.KUBECONFIG }}" | base64 -d > ~/.kube/config
    chmod 600 ~/.kube/config
```
Security Enhancement: The KUBECONFIG secret handling could be more secure by using environment variables instead of directly echoing secrets in the command line.
```yaml
- name: Configure kubectl for SAP BTP Kyma
  env:
    KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}
  run: |
    mkdir -p ~/.kube
    echo "$KUBECONFIG_DATA" | base64 -d > ~/.kube/config
    chmod 600 ~/.kube/config
```

```yaml
- name: Restart Backend Deployment
  run: |
    echo "🔄 Restarting backend deployment to pull latest image..."
    kubectl rollout restart deployment/stars-backend -n stars
    kubectl rollout status deployment/stars-backend -n stars --timeout=10m
    echo "✅ Backend deployment restarted successfully"
```
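The base64 handling of the KUBECONFIG secret can be sanity-checked locally. A minimal sketch of the round trip, assuming GNU coreutils (`base64 -w 0`; macOS's `base64` takes different flags) and a hypothetical scratch file:

```shell
# Hypothetical local check of the KUBECONFIG secret's base64 round trip:
# encode a minimal kubeconfig-like file the way it would be stored as a
# repository secret, then decode it the way the workflow step does.
printf 'apiVersion: v1\nkind: Config\n' > /tmp/demo-kubeconfig
ENCODED=$(base64 -w 0 /tmp/demo-kubeconfig)   # single-line base64, like the stored secret
echo "$ENCODED" | base64 -d                   # restores the original file content
```

Encoding with `-w 0` avoids line wrapping, which keeps the secret a single line and makes the decode step in the workflow reliable.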
Error Handling: The deployment restart commands lack proper error handling. Consider adding error handling and validation to ensure the operations complete successfully.
```yaml
- name: Restart Backend Deployment
  run: |
    echo "🔄 Restarting backend deployment to pull latest image..."
    if kubectl rollout restart deployment/stars-backend -n stars; then
      echo "✅ Restart command issued successfully"
    else
      echo "❌ Failed to restart deployment"
      exit 1
    fi
    if kubectl rollout status deployment/stars-backend -n stars --timeout=10m; then
      echo "✅ Backend deployment restarted successfully"
    else
      echo "❌ Deployment rollout failed or timed out"
      exit 1
    fi
```

```yaml
deploy-backend:
  name: Restart Backend Deployment
  if: github.event.pull_request.merged && needs.build-backend.result == 'success'
  needs: [check_version_update, build-backend]
  runs-on: ubuntu-latest
  steps:
    - name: Set up kubectl
      uses: azure/setup-kubectl@v3
      with:
        version: 'latest'

    - name: Configure kubectl for SAP BTP Kyma
      run: |
        mkdir -p ~/.kube
        echo "${{ secrets.KUBECONFIG }}" | base64 -d > ~/.kube/config
        chmod 600 ~/.kube/config

    - name: Restart Backend Deployment
      run: |
        echo "🔄 Restarting backend deployment to pull latest image..."
        kubectl rollout restart deployment/stars-backend -n stars
        kubectl rollout status deployment/stars-backend -n stars --timeout=10m
        echo "✅ Backend deployment restarted successfully"

deploy-frontend:
  name: Restart Frontend Deployment
  if: github.event.pull_request.merged && needs.build-frontend.result == 'success'
  needs: [check_version_update, build-frontend]
  runs-on: ubuntu-latest
  steps:
    - name: Set up kubectl
      uses: azure/setup-kubectl@v3
      with:
        version: 'latest'

    - name: Configure kubectl for SAP BTP Kyma
      run: |
        mkdir -p ~/.kube
        echo "${{ secrets.KUBECONFIG }}" | base64 -d > ~/.kube/config
        chmod 600 ~/.kube/config

    - name: Restart Frontend Deployment
      run: |
        echo "🔄 Restarting frontend deployment to pull latest image..."
        kubectl rollout restart deployment/stars-frontend -n stars
        kubectl rollout status deployment/stars-frontend -n stars --timeout=10m
        echo "✅ Frontend deployment restarted successfully"
```
Configuration as Variables: Hard-coded values like namespace, deployment names, and timeout should be extracted as variables or inputs for better maintainability.
```yaml
deploy-backend:
  name: Restart Backend Deployment
  if: github.event.pull_request.merged && needs.build-backend.result == 'success'
  needs: [check_version_update, build-backend]
  runs-on: ubuntu-latest
  env:
    NAMESPACE: stars
    DEPLOYMENT_NAME: stars-backend
    ROLLOUT_TIMEOUT: 10m
  steps:
    # ... kubectl setup steps ...
    - name: Restart Backend Deployment
      run: |
        echo "🔄 Restarting $DEPLOYMENT_NAME deployment to pull latest image..."
        kubectl rollout restart deployment/$DEPLOYMENT_NAME -n $NAMESPACE
        kubectl rollout status deployment/$DEPLOYMENT_NAME -n $NAMESPACE --timeout=$ROLLOUT_TIMEOUT
        echo "✅ $DEPLOYMENT_NAME deployment restarted successfully"
```

```yaml
- name: Restart Backend Deployment
  run: |
    echo "🔄 Restarting backend deployment to pull latest image..."
    kubectl rollout restart deployment/stars-backend -n stars
    kubectl rollout status deployment/stars-backend -n stars --timeout=10m
    echo "✅ Backend deployment restarted successfully"
```
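The parameterized restart step can be dry-run without a cluster. A sketch with `kubectl` stubbed out by a shell function (the stub is illustrative, not part of the workflow; the values match the suggested env block):

```shell
# Stub kubectl so the parameterized command wiring can be checked anywhere:
# the stub simply echoes the command it would have run.
kubectl() { echo "kubectl $*"; }

NAMESPACE=stars
DEPLOYMENT_NAME=stars-backend
ROLLOUT_TIMEOUT=10m

kubectl rollout restart "deployment/$DEPLOYMENT_NAME" -n "$NAMESPACE"
kubectl rollout status "deployment/$DEPLOYMENT_NAME" -n "$NAMESPACE" --timeout="$ROLLOUT_TIMEOUT"
# → kubectl rollout restart deployment/stars-backend -n stars
# → kubectl rollout status deployment/stars-backend -n stars --timeout=10m
```

Swapping `DEPLOYMENT_NAME` to `stars-frontend` reproduces the frontend job's commands, which is the maintainability win the review is pointing at.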
Validation Step: Consider adding a validation step to verify that the deployment is actually running and healthy after the restart.
```yaml
- name: Verify Deployment Health
  run: |
    echo "🔍 Verifying deployment health..."
    kubectl get pods -n stars -l app=stars-backend
    # Wait for pods to be ready
    kubectl wait --for=condition=ready pod -l app=stars-backend -n stars --timeout=5m
    # Optional: health check if your app has a health endpoint
    # kubectl port-forward service/stars-backend 8080:8080 -n stars &
    # sleep 5
    # curl -f http://localhost:8080/health || exit 1
    echo "✅ Deployment is healthy and ready"
```
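The `curl -f` gating in the commented-out health check can be exercised standalone. A sketch using a throwaway local HTTP server in place of the port-forwarded service (the port, path, and backing file are hypothetical):

```shell
# Serve a scratch directory locally and probe a /health path with curl -f,
# which exits non-zero on HTTP errors — the same gating the workflow would use.
mkdir -p /tmp/healthdemo
touch /tmp/healthdemo/health   # hypothetical file backing the /health endpoint
python3 -m http.server 8765 --directory /tmp/healthdemo >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
if curl -sf http://localhost:8765/health >/dev/null; then HEALTH=healthy; else HEALTH=unhealthy; fi
echo "$HEALTH"
kill "$SERVER_PID"
```

Because `curl -f` fails on 4xx/5xx responses, wiring it after `kubectl wait` catches apps whose pods are Ready but whose HTTP endpoint is not actually serving.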
When a new build is pushed to Artifactory, restart the current deployments to pull the latest images.