diff --git a/.gitmodules b/.gitmodules new file mode 100644 index 00000000..d7c6479b --- /dev/null +++ b/.gitmodules @@ -0,0 +1,9 @@ +[submodule "LDAP/pown"] + path = LDAP/pown + url = https://github.com/anishapant21/pown.sh.git +[submodule "LDAP/LDAPServer"] + path = LDAP/LDAPServer + url = https://github.com/mieweb/LDAPServer.git +[submodule "ci-cd automation/Proxmox-Launchpad"] + path = ci-cd automation/Proxmox-Launchpad + url = https://github.com/maxklema/proxmox-launchpad.git diff --git a/LDAP/LDAPServer b/LDAP/LDAPServer new file mode 160000 index 00000000..00b8dffc --- /dev/null +++ b/LDAP/LDAPServer @@ -0,0 +1 @@ +Subproject commit 00b8dffc95e015b4f9849a012afd1f1d7369aff1 diff --git a/LDAP/README.md b/LDAP/README.md new file mode 100644 index 00000000..f4d89aa8 --- /dev/null +++ b/LDAP/README.md @@ -0,0 +1 @@ +## LDAP \ No newline at end of file diff --git a/LDAP/pown b/LDAP/pown new file mode 160000 index 00000000..a2a19a50 --- /dev/null +++ b/LDAP/pown @@ -0,0 +1 @@ +Subproject commit a2a19a5005349dba9d5d39200c9552c41c985e20 diff --git a/README.md b/README.md index 51a226b4..282b8e82 100644 --- a/README.md +++ b/README.md @@ -1,65 +1,132 @@ # opensource-mieweb -Configuration storage for the [opensource.mieweb.com](https://opensource.mieweb.com) Proxmox project. +Configuration storage for the [opensource.mieweb.org](https://opensource.mieweb.org:8006) Proxmox project. -This repository contains configuration files and scripts for managing a Proxmox-based container hosting environment, including automated DNS, NGINX reverse proxy, and dynamic port mapping. +This repository contains configuration files and scripts for managing a Proxmox-based container hosting environment, including automated DNS, NGINX reverse proxy, dynamic port mapping, and the Proxmox LaunchPad GitHub Action for automated container deployment. 
+## Cluster Graph + +```mermaid + +graph TD + %% Repository Structure + REPO[opensource-mieweb Repository] + + %% All Main Folders + REPO --> CICD[ci-cd automation] + REPO --> CC[container creation] + REPO --> DNS[dnsmasq service] + REPO --> GW[gateway] + REPO --> LDAP[LDAP] + REPO --> NGINX[nginx reverse proxy] + REPO --> PL[proxmox-launchpad] + + %% Core Workflow Connections + CC --> |creates| CONTAINER[LXC Container] + CONTAINER --> |Updates Container Map| NGINX + CONTAINER --> |updates| DNS + CONTAINER --> | Updates IP Tables| GW + CONTAINER --> |authenticates via| LDAP + + %% CI/CD Operations + CICD --> |manages| CONTAINER + PL --> |automates| CC + PL --> |uses| CICD + + %% User Access Flow + USER[User Access] --> DNS + DNS --> NGINX + NGINX --> CONTAINER + + %% Styling + classDef folder fill:#e3f2fd,stroke:#1976d2,stroke-width:2px + classDef system fill:#f1f8e9,stroke:#689f38,stroke-width:2px + classDef user fill:#fff3e0,stroke:#f57c00,stroke-width:2px + + class CICD,CC,DNS,GW,LDAP,NGINX,PL folder + class CONTAINER system + class USER user ``` -│ README.md -│ -├───intern-dnsmasq -│ dnsmasq.conf -│ -├───intern-nginx -│ nginx.conf -│ port_map.js -│ reverse_proxy.conf -│ -└───intern-phxdc-pve1 - register-container.sh - register_proxy_hook.sh -``` -## Repository Structure -- [`intern-dnsmasq/dnsmasq.conf`](intern-dnsmasq/dnsmasq.conf): - Dnsmasq configuration for DHCP and DNS, including wildcard routing for the reverse proxy. +### Core Infrastructure + +- [`dnsmasq service/`](dnsmasq%20service/): + Contains Dnsmasq configuration for DHCP and DNS services, including wildcard routing for the reverse proxy and container network management. + +- [`nginx reverse proxy/`](nginx%20reverse%20proxy/): + Houses NGINX configuration files for the reverse proxy setup, including JavaScript modules for dynamic backend resolution and SSL certificate management. 
-- [`intern-nginx/nginx.conf`](intern-nginx/nginx.conf): -  Main NGINX configuration, loading the JavaScript module for dynamic backend resolution. +- [`gateway/`](gateway/): +  Gateway configuration and management scripts for network routing and access control between the internal container network and external traffic. Also contains daily cleanup scripts for the cluster. -- [`intern-nginx/reverse_proxy.conf`](intern-nginx/reverse_proxy.conf): -  NGINX reverse proxy config using dynamic JavaScript lookups for backend containers. +### Container Management -- [`intern-nginx/port_map.js`](intern-nginx/port_map.js): -  JavaScript module for NGINX to map subdomains to backend container IPs and ports using a JSON file. +- [`container creation/`](container%20creation/): +  Contains comprehensive scripts for LXC container lifecycle management, including creation, LDAP configuration, service deployment, and registration with the proxy infrastructure. -- [`intern-phxdc-pve1/register-container.sh`](intern-phxdc-pve1/register-container.sh): -  Proxmox hook script to register new containers with the NGINX proxy and assign HTTP/SSH ports. +- [`ci-cd automation/`](ci-cd%20automation/): +  Automation scripts for continuous integration and deployment workflows, including container existence checks, updates, and cleanup operations with helper utilities. -- [`intern-phxdc-pve1/register_proxy_hook.sh`](intern-phxdc-pve1/register_proxy_hook.sh): -  Proxmox event hook to trigger container registration on startup. +### Authentication & Directory Services + +- [`LDAP/`](LDAP/): +  Contains LDAP authentication infrastructure including a custom Node.js LDAP server that bridges database user management with LDAP protocols, and automated LDAP client configuration tools for seamless container authentication integration. 
LDAP Server configured to reference the [Proxmox VE Users @pve realm](https://pve.proxmox.com/wiki/User_Management) with optional [Push Notification 2FA](https://github.com/mieweb/mieweb_auth_app) + +### GitHub Action Integration + +- [`proxmox-launchpad/`](proxmox-launchpad/): + The Proxmox LaunchPad GitHub Action for automated container deployment directly from GitHub repositories, supporting both single and multi-component applications. ## How It Works -- **DNS**: All `*.opensource.mieweb.com` requests are routed to the NGINX proxy via Dnsmasq. -- **Reverse Proxy**: NGINX uses a JavaScript module to dynamically resolve the backend IP and port for each subdomain, based on `/etc/nginx/port_map.json`. -- **Container Registration**: When a new container starts, Proxmox runs a hook script that: - - Waits for the container to get a DHCP lease. - - Assigns available HTTP and SSH ports. - - Updates the NGINX port map and reloads NGINX. - - Sets up port forwarding for SSH access. +- **DNS**: All `*.opensource.mieweb.com` requests are routed to the NGINX proxy via Dnsmasq, providing automatic subdomain resolution for containers. +- **Reverse Proxy**: NGINX uses JavaScript modules to dynamically resolve backend IP addresses and ports for each subdomain, based on the container registry in `/etc/nginx/port_map.json`. +- **Container Lifecycle**: When containers start, Proxmox hooks automatically: + - Wait for DHCP lease assignment + - Allocate available HTTP and SSH ports + - Update the NGINX port mapping and reload configuration + - Configure iptables rules for SSH port forwarding +- **GitHub Integration**: The Proxmox LaunchPad action automates the entire process from repository push to live deployment, including dependency installation, service configuration, and application startup. 
+- **CI/CD Pipeline**: Automated scripts used by [Proxmox LaunchPad](#proxmox-launchpad) to handle container updates, existence checks, and cleanup operations to maintain a clean and efficient hosting environment. +- **LDAP Server**: All LXC container authentication is handled by a centralized LDAP server housed in the cluster. Each container is configured with SSSD, which communicates with the LDAP server to verify/authenticate user credentials. This approach is more secure than housing credentials locally. + -## Usage +## Proxmox LaunchPad + +The Proxmox LaunchPad is a powerful GitHub Action that automatically creates, manages, and deploys LXC containers on the Proxmox cluster based on your repository's branch activity. It supports: + +- **Automatic Container Creation**: Creates new containers when branches are created or pushed to +- **Multi-Component Deployments**: Supports applications with multiple services (e.g., frontend + backend) +- **Service Integration**: Automatically installs and configures services like MongoDB, Docker, Redis, PostgreSQL, and more +- **Branch-Based Environments**: Each branch gets its own isolated container environment +- **Automatic Cleanup**: Deletes containers when branches are deleted (e.g., after PR merges) + +The action integrates with the existing infrastructure to provide automatic DNS registration, reverse proxy configuration, and port mapping for seamless access to deployed applications. + +## Opensource Cluster Usage + +### For Infrastructure Management 1. **Clone this repository** to your Proxmox host or configuration management system. 2. **Deploy the configuration files** to their respective locations on your infrastructure. 3. **Ensure dependencies**: -   - Proxmox VE with container support. -   - NGINX with the `ngx_http_js_module`. -   - Dnsmasq. -   - `jq` for JSON manipulation. -4. **Register new containers** using the provided hook scripts for automatic proxy and DNS integration. 
+ - Proxmox VE with LXC container support + - NGINX with the `ngx_http_js_module` + - Dnsmasq for DNS and DHCP services +4. **Set up LDAP authentication** using the provided LDAP server and client configuration tools. +5. **Configure container templates** and network settings according to your environment. +6. **Register new containers** using the provided hook scripts for automatic proxy and DNS integration. + +### For GitHub Action Deployment + +1. **Add the Proxmox LaunchPad action** to your repository's workflow file. +2. **Configure repository secrets** for Proxmox credentials and optionally a GitHub PAT. +3. **Set up trigger events** for push, create, and delete operations in your workflow. +4. **Configure deployment properties** in your workflow file for automatic application deployment. +5. **Push to your repository** and watch as containers are automatically created and your application is deployed. + +See the [`proxmox-launchpad/README.md`](proxmox-launchpad/README.md) for detailed setup instructions and configuration options. 
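The container registry the reverse proxy and the helper scripts consult lives at `/etc/nginx/port_map.json` and is queried with `jq`. Below is a minimal, self-contained sketch of that lookup; the JSON shape is an assumption for illustration (only the `user` field is taken from the ownership-check helper script in this repository), and `example-app`/`jdoe` are hypothetical values:

```shell
#!/bin/bash
# Illustrative lookup against the proxy's container registry.
# The real file lives at /etc/nginx/port_map.json on the gateway; here a
# hypothetical entry is inlined so the sketch is self-contained. Only the
# "user" field is taken from the helper scripts; other fields are assumed.
PORT_MAP='{"example-app": {"user": "jdoe", "http_port": 32000, "ssh_port": 2344}}'

CONTAINER_NAME="example-app"

# Same shape of query that verify_container_ownership.sh performs:
#   jq '."<container-name>".user' /etc/nginx/port_map.json
OWNER=$(echo "$PORT_MAP" | jq -r --arg n "$CONTAINER_NAME" '.[$n].user')

if [ "$OWNER" = "null" ]; then
  echo "Container \"$CONTAINER_NAME\" is not registered with the proxy."
else
  echo "Container \"$CONTAINER_NAME\" is owned by $OWNER."
fi
```

A `null` result means the name has no entry in the registry, which is the condition the ownership check treats as "not owned".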
--- -*Current SME: Carter Myers and other contributors to opensource.mieweb.com * +Contributors: Carter Myers, Maxwell Klema, and Anisha Pant \ No newline at end of file diff --git a/ci-cd automation/Proxmox-Launchpad b/ci-cd automation/Proxmox-Launchpad new file mode 160000 index 00000000..038aff5a --- /dev/null +++ b/ci-cd automation/Proxmox-Launchpad @@ -0,0 +1 @@ +Subproject commit 038aff5ad0eacd9f77935ae8819ab59da13fc981 diff --git a/ci-cd automation/README.md b/ci-cd automation/README.md new file mode 100644 index 00000000..b8b97510 --- /dev/null +++ b/ci-cd automation/README.md @@ -0,0 +1 @@ +# CI/CD Automation \ No newline at end of file diff --git a/ci-cd automation/check-container-exists.sh b/ci-cd automation/check-container-exists.sh new file mode 100644 index 00000000..77c8fd8f --- /dev/null +++ b/ci-cd automation/check-container-exists.sh @@ -0,0 +1,46 @@ +#!/bin/bash +# Script to check if a container exists, and if so, whether it needs to be updated or cloned. +# Last Modified by Maxwell Klema on July 13th, 2025 +# ----------------------------------------------------- + +outputError() { +	echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" +	echo -e "${BOLD}${MAGENTA}❌ Script Failed. Exiting... ${RESET}" +	echo -e "$2" +	echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" +	exit $1 +} + +RESET="\033[0m" +BOLD="\033[1m" +MAGENTA='\033[35m' + +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" +echo -e "${BOLD}${MAGENTA}🔎 Check Container Exists ${RESET}" +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" + +TYPE_RUNNER="true" +source /var/lib/vz/snippets/helper-scripts/PVE_user_authentication.sh +source /var/lib/vz/snippets/helper-scripts/verify_container_ownership.sh + +STATUS=$? +if [ "$STATUS" != 0 ]; then +	exit 1; +fi + +REPO_BASE_NAME=$(basename -s .git "$PROJECT_REPOSITORY") + +# Check if the container's update log is present (i.e. it was deployed before). 
+if [ "$PVE1" == "true" ]; then + if pct exec $CONTAINER_ID -- test -f /root/container-updates.log; then + exit 2; # Update Repository + else + exit 0; # Clone Repository + fi +else + if ssh 10.15.0.5 "pct exec $CONTAINER_ID -- test -f /root/container-updates.log"; then + exit 2; # Update Repository + else + exit 0; # Clone Repository + fi +fi \ No newline at end of file diff --git a/ci-cd automation/delete-container.sh b/ci-cd automation/delete-container.sh new file mode 100644 index 00000000..d46ce23b --- /dev/null +++ b/ci-cd automation/delete-container.sh @@ -0,0 +1,29 @@ +#!/bin/bash +# Script to delete a container permanently +# Last Modified by Maxwell Klema on July 13th, 2025 +# ----------------------------------------------------- + +RESET="\033[0m" +BOLD="\033[1m" +MAGENTA='\033[35m' + +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" +echo -e "${BOLD}${MAGENTA}🗑️ Delete Container ${RESET}" +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" + +CMD=( +bash /var/lib/vz/snippets/helper-scripts/delete-runner.sh +"$PROJECT_REPOSITORY" +"$GITHUB_PAT" +"$PROXMOX_USERNAME" +"$PROXMOX_PASSWORD" +"$CONTAINER_NAME" +) + +# Safely quote each argument for the shell +QUOTED_CMD=$(printf ' %q' "${CMD[@]}") + +tmux new-session -d -s delete-runner "$QUOTED_CMD" + +echo "✅ Container with name \"$CONTAINER_NAME\" will be permanently deleted." 
+exit 0 # Container Deleted Successfully \ No newline at end of file diff --git a/ci-cd automation/helper-scripts/PVE_user_authentication.sh b/ci-cd automation/helper-scripts/PVE_user_authentication.sh new file mode 100755 index 00000000..a7b84f34 --- /dev/null +++ b/ci-cd automation/helper-scripts/PVE_user_authentication.sh @@ -0,0 +1,23 @@ +#!/bin/bash +# Script that checks if a user is authenticated in Proxmox PVE Realm @ opensource.mieweb.org +# Last Modified by Maxwell Klema on July 13th, 2025 +# ----------------------------------------------------- + +# Authenticate User (Only Valid Users can Create Containers) + +if [ -z "$PROXMOX_USERNAME" ]; then +	read -p "Enter Proxmox Username → " PROXMOX_USERNAME +fi + +if [ -z "$PROXMOX_PASSWORD" ]; then +	read -sp "Enter Proxmox Password → " PROXMOX_PASSWORD +	echo "" +fi + +USER_AUTHENTICATED=$(ssh root@10.15.234.122 "node /root/bin/js/runner.js authenticateUser \"$PROXMOX_USERNAME\" \"$PROXMOX_PASSWORD\"") + +if [ "$USER_AUTHENTICATED" == "false" ]; then +	outputError 1 "Your Proxmox account, $PROXMOX_USERNAME@pve, was not authenticated. Retry with valid credentials." +fi + +echo "🎉 Your Proxmox account, $PROXMOX_USERNAME@pve, has been authenticated" \ No newline at end of file diff --git a/ci-cd automation/helper-scripts/create-template.sh b/ci-cd automation/helper-scripts/create-template.sh new file mode 100755 index 00000000..54d5e1ea --- /dev/null +++ b/ci-cd automation/helper-scripts/create-template.sh @@ -0,0 +1,56 @@ +#!/bin/bash +# Creates a template of a LXC container +# Last modified by Maxwell Klema on July 23rd, 2025. +# -------------------------------------------------- + +if [ "${DEPLOY_ON_START^^}" != "Y" ] || [ "${GH_ACTION^^}" != "Y" ]; then +	return 0 +fi + +DEFAULT_BRANCH=$(curl -s https://api.github.com/repos/$REPO_BASE_NAME_WITH_OWNER/$REPO_BASE_NAME | jq -r '.default_branch') + +if [ "$DEFAULT_BRANCH" != "$PROJECT_BRANCH" ]; then +	return 0 +fi + +echo "📝 Creating Container Template..." 
+ +# Check if template already exists, and if it does, destroy it ===== + +TEMPLATE_NAME="template-$REPO_BASE_NAME-$REPO_BASE_NAME_WITH_OWNER" +TEMPLATE_CONTAINER_ID=$( { pct list; ssh root@10.15.0.5 'pct list'; } | awk -v name="$TEMPLATE_NAME" '$3 == name {print $1}') + +if [ ! -z "$TEMPLATE_CONTAINER_ID" ]; then +	pct destroy $TEMPLATE_CONTAINER_ID || true +fi + +# Clone LXC container and convert it into a template ===== + +NEXT_ID=$(pvesh get /cluster/nextid) + +if (( $CONTAINER_ID % 2 == 0 )); then # even CTIDs live on the other node (matches verify_container_ownership.sh) +	ssh root@10.15.0.5 " +	pct clone $CONTAINER_ID $NEXT_ID \ 	--hostname "$TEMPLATE_NAME" \ 	--full true +	pct migrate $NEXT_ID intern-phxdc-pve1 --target-storage containers-pve1 +	" > /dev/null 2>&1 +else +	pct clone $CONTAINER_ID $NEXT_ID \ 	--hostname "$TEMPLATE_NAME" \ 	--full true +fi + +# AUTH_TOKEN_RESPONSE=$(curl --location --request POST https://api.github.com/repos/$REPO_BASE_NAME_WITH_OWNER/$REPO_BASE_NAME/actions/runners/registration-token --header "Authorization: token $GITHUB_PAT") +# TOKEN=$(echo "$AUTH_TOKEN_RESPONSE" | jq -r '.token') + +# Remove rsa keys ==== +pct start $NEXT_ID +pct enter $NEXT_ID < /dev/null 2>&1 +	else +		ssh root@10.15.0.5 "pct destroy $CONTAINER_ID" > /dev/null 2>&1 +	fi +else +	if pct status "$CONTAINER_ID" | grep -q "status: running"; then +		pct stop "$CONTAINER_ID" && pct destroy "$CONTAINER_ID" > /dev/null 2>&1 +	else +		pct destroy "$CONTAINER_ID" > /dev/null 2>&1 +	fi +fi + +source /usr/local/bin/prune_iptables.sh + +REPO_BASE_NAME=$(basename -s .git "$PROJECT_REPOSITORY") +REPO_BASE_NAME_WITH_OWNER=$(echo "$PROJECT_REPOSITORY" | cut -d'/' -f4) + +RUNNERS=$(curl --location https://api.github.com/repos/$REPO_BASE_NAME_WITH_OWNER/$REPO_BASE_NAME/actions/runners --header "Authorization: token $GITHUB_PAT") + +while read -r RUNNER; do +	RUNNER_NAME=$(echo "$RUNNER" | jq -r '.name') +	if [ "$RUNNER_NAME" == "$CONTAINER_NAME" ]; then +		RUNNER_ID=$(echo "$RUNNER" | jq -r '.id') +		curl --location --request DELETE 
"https://api.github.com/repos/$REPO_BASE_NAME_WITH_OWNER/$REPO_BASE_NAME/actions/runners/$RUNNER_ID" \ + --header "Authorization: token $GITHUB_PAT" + fi +done < <(echo "$RUNNERS" | jq -c '.runners[]') \ No newline at end of file diff --git a/ci-cd automation/helper-scripts/repository_status.sh b/ci-cd automation/helper-scripts/repository_status.sh new file mode 100755 index 00000000..5ec73b76 --- /dev/null +++ b/ci-cd automation/helper-scripts/repository_status.sh @@ -0,0 +1,37 @@ +#!/bin/bash +# Helper script to determine if container needs to clone repository or simply update it +# Last Modified by Maxwell Klema on July 21st, 2025 +# ------------------------------------------------- + +set +e +TYPE_RUNNER="true" +source /var/lib/vz/snippets/helper-scripts/PVE_user_authentication.sh +source /var/lib/vz/snippets/helper-scripts/verify_container_ownership.sh + +STATUS=$? + +if [ "$STATUS" != 0 ]; then + exit 1; +fi + +REPO_BASE_NAME=$(basename -s .git "$PROJECT_REPOSITORY") + +# Check if repository folder is present. 
+ +if [ "$PVE1" == "true" ]; then + if pct exec $CONTAINER_ID -- test -d /root/$REPO_BASE_NAME; then + echo "Update" + exit 2; # Update Repository + else + echo "Clone" + exit 0; # Clone Repository + fi +else + if ssh 10.15.0.5 "pct exec $CONTAINER_ID -- test -d /root/$REPO_BASE_NAME"; then + echo "Update" + exit 2; # Update Repository + else + echo "Clone" + exit 0; # Clone Repository + fi +fi \ No newline at end of file diff --git a/ci-cd automation/helper-scripts/verify_container_ownership.sh b/ci-cd automation/helper-scripts/verify_container_ownership.sh new file mode 100755 index 00000000..f99849ae --- /dev/null +++ b/ci-cd automation/helper-scripts/verify_container_ownership.sh @@ -0,0 +1,24 @@ +#!/bin/bash +# Script to verify container ownership based on name and CTID +# Last Modified by Maxwell Klema on August 5th, 2025 +# ----------------------------------------------------- + +CONTAINER_NAME="${CONTAINER_NAME,,}" +CONTAINER_ID=$( { pct list; ssh root@10.15.0.5 'pct list'; } | awk -v name="$CONTAINER_NAME" '$3 == name {print $1}') + +if [ -z "$CONTAINER_ID" ]; then + echo "✅ Container with name \"$CONTAINER_NAME\" is available for use." + return 1 +fi + +CONTAINER_OWNERSHIP=$(ssh root@10.15.20.69 -- "jq '.\"$CONTAINER_NAME\".user' /etc/nginx/port_map.json") +if [ "$TYPE_RUNNER" == "true" ] && (( $CONTAINER_ID % 2 == 0 )); then + PVE1="false" +elif [ "$TYPE_RUNNER" == "true" ] && (( $CONTAINER_ID % 2 != 0 )); then + PVE1="true" +fi + +if [ "$CONTAINER_OWNERSHIP" == "null" ]; then + echo "❌ You do not own the container with name \"$CONTAINER_NAME\"." + outputError 1 "You do not own the container with name \"$CONTAINER_NAME\"." 
+fi \ No newline at end of file diff --git a/ci-cd automation/proxmox-launchpad/.gitignore b/ci-cd automation/proxmox-launchpad/.gitignore new file mode 100644 index 00000000..e69de29b diff --git a/ci-cd automation/proxmox-launchpad/README.md b/ci-cd automation/proxmox-launchpad/README.md new file mode 100644 index 00000000..17c2a011 --- /dev/null +++ b/ci-cd automation/proxmox-launchpad/README.md @@ -0,0 +1,308 @@ +# Proxmox LaunchPad + +This GitHub Action uses MIE's open source cluster to manage LXC containers derived from your GitHub repository source code. + +> [!NOTE] +> This project is new and is in an early version. There are likely bugs. If you encounter any, please create an issue. + +## Table of Contents +1. [Video Walkthroughs](#video-walkthroughs) +2. [Sequence Diagram](#sequence-diagram) +3. [Prerequisites](#prerequisites) +4. [Getting Started](#getting-started) +    - [Create-Runner Job](#create-runner-workflow-job) +    - [Personal Access Token](#creating-a-github-pat-for-your-workflow) +    - [Runner Job](#runner-job) +    - [Manage-Container Job](#manage-container-workflow-job) +5. [Configurations](#configurations) +    - [Basic Properties](#basic-properties) +    - [Automatic Deployment Properties](#automatic-deployment-properties) +6. [Important Notes for Automatic Deployment](#important-notes-for-automatic-deployment) +7. [Output](#output) +8. [Sample Workflow File](#sample-workflow-file) +9. [Misc.](#misc) + +## Video Walkthroughs + +I have created a series of videos to walk you through automatic deployment, both in GitHub and via the command line. + +**[Long-Form]** Proxmox LaunchPad Walkthrough: [Video](https://youtu.be/Xa2L1o-atEM)
+**[Short-Form]** Proxmox LaunchPad Demonstration: [Short](https://youtube.com/shorts/SuK73Jej5j4)
+**[Long-Form]** Automatic Deployment through Command Line: [Video](https://youtu.be/acDW-a32Yr8)
+**[Long-Form]** Getting Started with Creating LXC Containers with Proxmox: [Video](https://youtu.be/sVW3dkBqs4E) + +## Sequence Diagram + +The sequence diagram below shows the sequence of events executed by this GitHub Action. + +```mermaid +sequenceDiagram +    participant Dev as Developer +    participant GH as GitHub +    participant GHAR as GitHub Actions Runner (hosted) +    participant Prox as Proxmox Cluster +    participant LXC as LXC Container (Self-hosted Runner) + +    Dev->>GH: Push/Create/Delete branch +    GH->>GHAR: Trigger workflow + +    alt Push/Create event +        GHAR->>Prox: Check if LXC container exists for branch +        alt Container does not exist +            GHAR->>Prox: Clone template, create LXC container +            Prox->>LXC: Start container, configure self-hosted runner +            GHAR->>LXC: Register self-hosted runner +            GHAR->>LXC: Run manage container job (install deps, clone repo, install services, deploy app) +        else Container exists +            GHAR->>Prox: Call update script +            Prox->>LXC: Update container contents, restart with latest branch +        end +    else Delete event +        GHAR->>LXC: Call delete-container script +        LXC->>Prox: Remove runner and delete LXC container +    end +``` + +## Prerequisites +- A Proxmox data cluster setup that mirrors/forks [https://github.com/mieweb/opensource-server](https://github.com/mieweb/opensource-server). +- A valid Proxmox account. + +## Getting Started + +> [!WARNING] +> This GitHub Action requires you to pass your GitHub Personal Access Token in order to create runners. If you are comfortable doing this, see [Create-Runner Job](#create-runner-workflow-job). If you are not, you may supply your own self-hosted runner and skip to [Manage-Container Job](#manage-container-workflow-job). 
To use this action in your repository, you need to add the following trigger events in a workflow file: + +```yaml +on: +  push: +  create: +  delete: +``` + +This allows a container to be created/updated on a push, created when a new branch is created, and deleted when a branch is deleted (as in the case of an accepted PR). + +### Create-Runner Workflow Job + +> [!CAUTION] +> If you choose to pass in your GitHub Personal Access Token, keep it in a secure place and do not share it with anyone. + +#### Creating a GitHub PAT for your Workflow + +This GitHub Action requires you to pass your GitHub Personal Access Token in order to create runners. To create a PAT, navigate to your GitHub account settings, then click Developer settings at the bottom of the left-hand sidebar and navigate to Personal Access Tokens (classic). Click Generate new token, then give your token a name and an expiration date. Select the `manage_runners:org` or `manage_runners:enterprise` permission, depending on where your repository is housed, and generate the token. Store the token somewhere secure, then add it as a repository secret in the repository that you want to run your workflow file in. + +#### Runner Job + +Before a container can be managed, a self-hosted runner must be installed on the LXC container to complete future workflow jobs. To do this, a GitHub-supplied runner needs to create the container and install/start a custom runner on it that is linked to your repository. 
+The create-runner job in your workflow file should look similar to this: + +```yaml +setup-runner: +  runs-on: ubuntu-latest +  steps: +    - name: Install Dependencies +      run: | +        sudo apt install -y sshpass jq + +    - uses: maxklema/proxmox-launchpad@main +      with: +        proxmox_password: ${{ secrets.PROXMOX_PASSWORD }} +        proxmox_username: ${{ secrets.PROXMOX_USERNAME }} +        github_pat: ${{ secrets.GH_PAT }} +``` + +The GitHub runner needs to install sshpass (used to authenticate into another host using password authentication) and jq (a popular package for managing/parsing JSON data). + +In the second step, three fields are required: `proxmox_username`, `proxmox_password`, and `github_pat`. + +For an explanation of these fields, see [Basic Properties](#basic-properties). + + +### Manage-Container Workflow Job + +The second job in your workflow file should look similar to this: + +> [!NOTE] +> If you chose to run this on your own self-hosted runner instead of the action creating one for you, this will be your first job. Therefore, the `needs` parameter is not needed. + +```yaml +  manage-container: +    runs-on: self-hosted +    needs: setup-runner +    steps: +      - uses: maxklema/proxmox-launchpad@main +        with: +          proxmox_password: ${{ secrets.PROXMOX_PASSWORD }} +          proxmox_username: ${{ secrets.PROXMOX_USERNAME }} +``` + + + +## Configurations + +At the very minimum, two configuration settings are required to create any container. With these two properties specified, you can create an empty container for a branch. + +### Basic Properties + +| Property | Required? | Description | Supplied by GitHub? | +| ---------------- | ------ | ---------------------------------------------- | ------ | +| `proxmox_username` | Yes | The Proxmox username assigned to you. | N/A +| `proxmox_password` | Yes | The Proxmox password assigned to you. | N/A +| `http_port` | No | The HTTP port for your container to listen on. It must be between `80` and `60000`. Default value is `3000`. 
| N/A +| `linux_distribution` | No | The Linux distribution that runs on your container. Currently, `rocky` (Rocky 9.5) and `debian` (Debian 12) are available. Default value is `debian`. | N/A +| `github_pat` | Conditional | Your GitHub Personal Access Token. This is used to manage runners in your containers. This is **only required if you want the workflow to create runners for you.** | Yes. Accessible in developer settings. | + + +There are a few other properties that are not required, but can still be specified in the workflow file: +
+ +| Property | Required? | Description | Supplied by GitHub? | +| --------- | ----- | ------------------------------------ | ------ | +| `public_key` | No | Your machine's public key that will be stored in the `~/.ssh/authorized_keys` file of your container. This allows you to SSH into your container without a password. It is more secure and recommended. | N/A + +### Automatic Deployment Properties + +This GitHub Action can *attempt* to automatically deploy services on your container. This is done by fetching your repository contents on the branch that the script is being run in, installing dependencies/services, and running build and start commands in the background. + +Additionally, with automatic deployment enabled, your container will update on every push automatically, preventing you from having to SSH into the container and set it up manually. + +> [!NOTE] +> The properties below are required only if you want to automatically deploy your project. If not, none of these properties are needed. + +| Property | Required? | Description | +| --------- | ----- | ------------------------------------ | +| `project_root` | No | The root directory of your project to deploy from. Example: `/flask-server`. If the root directory is the same as the GitHub root directory, leave blank. +| `services` | No | A JSON array of services to add to your container. Example: ```services: '["mongodb", "docker"]'```. These services will automatically install and start up on container creation. **NOTE**: All services in this list must belong on the list of available services below. If you need a service that is not on the list, see `custom_services`.

Available Services: `meteor`, `mongodb`, `docker`, `redis`, `postgresql`, `apache`, `nginx`, `rabbitmq`, `memcached`, `mariadb`. +| `custom_services` | No | A 2D JSON array of custom service installation commands to install any custom service(s) not in `services`.

Example: ```custom_services: [["sudo apt-get install -y service", "sudo systemctl enable service", "sudo systemctl start service"], ["sudo apt-get install -y service2", "sudo systemctl enable service2", "sudo systemctl start service2"]]``` + + +There are two types of deployments: single-component and multi-component deployment. Single-component deployment involves deploying only a single service (e.g., a single Flask server, React application, MCP server, etc.). Multi-component deployment involves deploying more than one service at the same time (e.g., a Flask backend and a Vite.js frontend). + +> [!IMPORTANT] +> In Multi-Component applications, each top-layer key represents the file path, relative to the root directory, to the component (service) to place those variables/commands in. + +| Property | Required? | Description | Single Component | Multi-Component | +| --------- | ----- | ------------------------------------ | ---- | --- | +| `container_env_vars` | No | Key-value environment variable pairs. | Dictionary in the form of: `{ "api_key": "123", "password": "abc"}` | Dictionary in the form of: `'{"/frontend": { "api_key": "123"}, "/backend": { "password": "abc123" }}'`. +| `install_command` | Yes | Commands to install all project dependencies | String of the installation command, e.g. `npm install`. | Dictionary in the form of: `'{"/frontend": "npm install", "/backend": "pip install -r ../requirements.txt"}'`. +| `build_command` | No | Commands to build project components | String of the build command, e.g. `npm build`. | Dictionary in the form of: `'{"/frontend": "npm build", "/backend": "python3 build.py"}'`. +| `start_command` | Yes | Commands to start project components. | String of the start command, e.g. `npm run`. | Dictionary in the form of: `'{"/frontend": "npm run", "/backend": "flask run"}'`. +| `runtime_language` | Yes | Runtime language of each project component, which can either be `nodejs` or `python`. | String of runtime environment, e.g. 
`nodejs` | Dictionary in the form of: `'{"/frontend": "nodejs", "/backend": "python"}'`. +| `root_start_command` | No | Command to run at the project directory root for **multi-component applications**. | N/A | String of the command, e.g. `docker run` + +## Important Notes for Automatic Deployment + +Below are some important things to keep in mind if you want your application to be automatically deployed: +- If you are using Meteor, you must start your application with the flags ``--allow-superuser`` and `--port 0.0.0.0:`. +    - Meteor is a large package, so deploying it may take more time than other applications. +- When running a service, ensure it is listening on `0.0.0.0` (all interfaces) instead of only locally at `127.0.0.1`. +- The GitHub Action will fail with an exit code and message if a property is not set up correctly. + + +## Output + +When a container is successfully created (the GitHub Action succeeds), you will see an output with all of your container details. This includes all your ports, container ID, container IP address (internal in the 10.15.x.x subnet), public domain name, and SSH command to access your container. 
+ +See an example output below: + +``` +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +🔔 COPY THESE PORTS DOWN — For External Access +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +📌 Note: Your container listens on SSH Port 22 internally, + but EXTERNAL traffic must use the SSH port listed below: +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +✅ Hostname Registration: polyglot-test-maxklema-pull-request → 10.15.129.23 +🔐 SSH Port : 2344 +🌐 HTTP Port : 32000 +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +📦 Container ID : 136 +🌐 Internal IP : 10.15.129.23 +🔗 Domain Name : https://polyglot-test-maxklema-pull-request.opensource.mieweb.org +🛠️ SSH Access : ssh -p 2344 root@polyglot-test-maxklema-pull-request.opensource.mieweb.org +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +NOTE: Additional background scripts are being run in detached terminal sessions. +Wait up to two minutes for all processes to complete. +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +Still not working? Contact Max K. at maxklema@gmail.com +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +``` + +> [!NOTE] +> Even if your GitHub Action workflow has finished, *your application may not be accessible right away. Background tasks (migration, template cloning, cleanup, etc.) are still running in detached terminal sessions*. Wait a few minutes for all tasks to complete.
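The public domain name in the output is derived from your GitHub owner, repository, and branch names. As a rough sketch of the sanitization the action's shell steps apply (the owner/repo/branch values below are hypothetical):

```shell
# Sketch of how Proxmox LaunchPad derives the container name and public
# domain (mirrors the sanitization steps in action.yml; the sample
# values below are hypothetical).
OWNER="maxklema"
REPO_NAME="polyglot-test"
TARGET_BRANCH="Pull_Request"

CONTAINER_NAME="${OWNER}-${REPO_NAME}-${TARGET_BRANCH}"
CONTAINER_NAME=${CONTAINER_NAME,,}                                # lowercase
CONTAINER_NAME=$(echo "$CONTAINER_NAME" | sed 's/[^a-z0-9-]/-/g') # replace disallowed chars with '-'
URL="https://${CONTAINER_NAME}.opensource.mieweb.org"
echo "$URL"   # → https://maxklema-polyglot-test-pull-request.opensource.mieweb.org
```

Knowing this mapping lets you predict your container's URL for a given branch before the workflow finishes.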
+ +## Sample Workflow File + +The workflow file below is an example that deploys a single-component Node.js application with a MongoDB service (a multi-component application would instead pass JSON dictionaries for `install_command`, `start_command`, and `runtime_language`, as described above): + +**With PAT:** + +```yaml +name: Proxmox Container Management + +on: + push: + create: + delete: + +jobs: + setup-runner: + runs-on: ubuntu-latest + steps: + - name: Install Dependencies + run: | + sudo apt install -y sshpass jq + - uses: maxklema/proxmox-launchpad@test + with: + proxmox_password: ${{ secrets.PROXMOX_PASSWORD }} + proxmox_username: ${{ secrets.PROXMOX_USERNAME }} + github_pat: ${{ secrets.GH_PAT }} + manage-container: + runs-on: self-hosted + needs: setup-runner + steps: + - uses: maxklema/proxmox-launchpad@test + with: + proxmox_password: ${{ secrets.PROXMOX_PASSWORD }} + proxmox_username: ${{ secrets.PROXMOX_USERNAME }} + public_key: ${{ secrets.PUBLIC_KEY }} + container_env_vars: '{"API_KEY": "1234"}' + install_command: npm i + start_command: npm start + runtime_language: nodejs + services: '["mongodb"]' +``` + +**Without PAT:** + +```yaml +name: Proxmox Container Management + +on: + push: + create: + delete: + +jobs: + manage-container: + runs-on: self-hosted + steps: + - uses: maxklema/proxmox-launchpad@test + with: + proxmox_password: ${{ secrets.PROXMOX_PASSWORD }} + proxmox_username: ${{ secrets.PROXMOX_USERNAME }} + public_key: ${{ secrets.PUBLIC_KEY }} + container_env_vars: '{"API_KEY": "1234"}' + install_command: npm i + start_command: npm start + runtime_language: nodejs + services: '["mongodb"]' +``` + + +## Misc. +Feel free to submit a PR/issue here or in [opensource-server](https://github.com/mieweb/opensource-server).
+Author: [@maxklema](https://github.com/maxklema) diff --git a/ci-cd automation/proxmox-launchpad/action.yml b/ci-cd automation/proxmox-launchpad/action.yml new file mode 100644 index 00000000..79f26b59 --- /dev/null +++ b/ci-cd automation/proxmox-launchpad/action.yml @@ -0,0 +1,494 @@ +# action.yml +name: Proxmox LaunchPad +description: Manage Proxmox Containers for your Repository. +author: maxklema +branding: + icon: "package" + color: "purple" + +inputs: + proxmox_username: + required: true + proxmox_password: + required: true + container_password: + required: false + public_key: + required: false + http_port: + required: false + project_root: + required: false + container_env_vars: + required: false + install_command: + required: false + build_command: + required: false + start_command: + required: false + runtime_language: + required: false + services: + required: false + custom_services: + required: false + linux_distribution: + required: false + multi_component: + required: false + root_start_command: + required: false + github_pat: + required: false + +runs: + using: "composite" + steps: + - name: Check if action should run + shell: bash + id: should-run + env: + GITHUB_EVENT_NAME: ${{ github.event_name }} + GITHUB_EVENT_CREATED: ${{ github.event.created }} + run: | + if [[ "$GITHUB_EVENT_NAME" != "push" ]] || [[ "$GITHUB_EVENT_CREATED" == "false" ]]; then + echo "should_run=true" >> $GITHUB_OUTPUT + else + echo "should_run=false" >> $GITHUB_OUTPUT + echo "Skipping action: Push event with created=true" + fi + + - name: Determine Target Branch Name + shell: bash + id: branch-name + if: steps.should-run.outputs.should_run == 'true' + env: + GITHUB_EVENT_NAME: ${{ github.event_name }} + GITHUB_REF_NAME: ${{ github.ref_name }} + GITHUB_EVENT_REF: ${{ github.event.ref }} + run: | + if [[ "$GITHUB_EVENT_NAME" == "delete" ]]; then + TARGET_BRANCH="$GITHUB_EVENT_REF" + echo "Using deleted branch name: $TARGET_BRANCH" + else + TARGET_BRANCH="$GITHUB_REF_NAME" + 
echo "Using current branch name: $TARGET_BRANCH" + fi + echo "target_branch=$TARGET_BRANCH" >> $GITHUB_OUTPUT + + - name: Create Runner (If Needed) + shell: bash + id: create-runner + if: steps.should-run.outputs.should_run == 'true' + env: + GITHUB_REPOSITORY_FULL: ${{ github.repository }} + GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }} + TARGET_BRANCH: ${{ steps.branch-name.outputs.target_branch }} + CONTAINER_PASSWORD: ${{ inputs.container_password }} + PROXMOX_USERNAME: ${{ inputs.proxmox_username }} + PROXMOX_PASSWORD: ${{ inputs.proxmox_password }} + GITHUB_PAT: ${{ inputs.github_pat }} + GITHUB_API: ${{ github.api_url }} + LINUX_DISTRIBUTION: ${{ inputs.linux_distribution }} + PROJECT_REPOSITORY: ${{ github.server_url }}/${{ github.repository }} + GITHUB_JOB: ${{ github.job }} + run: | + REPO_NAME=$(basename "$GITHUB_REPOSITORY_FULL") + CONTAINER_NAME="${GITHUB_REPOSITORY_OWNER}-${REPO_NAME}-${TARGET_BRANCH}" + CONTAINER_NAME=${CONTAINER_NAME,,} + CONTAINER_NAME=$(echo "$CONTAINER_NAME" | sed 's/[^a-z0-9-]/-/g') + export CONTAINER_NAME + + # Auto-detect if this is a runner setup job based on job name or if no container inputs are provided + CREATE_RUNNER_JOB="N" + if [[ "$GITHUB_JOB" == *"setup"* ]] || [[ "$GITHUB_JOB" == *"runner"* ]]; then + CREATE_RUNNER_JOB="Y" + echo "CREATE_RUNNER_JOB=true" >> $GITHUB_OUTPUT + fi + + if [ ! -z "$GITHUB_PAT" ]; then + RESPONSE=$(curl --location ${GITHUB_API}/repos/${GITHUB_REPOSITORY_OWNER}/${REPO_NAME}/actions/runners --header "Authorization: token $GITHUB_PAT") + + while read -r RUN; do + RUNNER_NAME=$(echo "$RUN" | jq -r '.name') + if [ "$RUNNER_NAME" == "$CONTAINER_NAME" ]; then + if [ "${CREATE_RUNNER_JOB^^}" == "N" ]; then + exit 0 #Runner exists, continue to next steps + else + echo "STOP_SCRIPT=true" >> $GITHUB_OUTPUT + exit 0 # Runner exists, continue to next job. + fi + fi + done < <(echo "$RESPONSE" | jq -c '.runners[]') + + echo "Creating a Runner..." + set +e + sshpass -p 'mie123!' 
ssh \ + -T \ + -o StrictHostKeyChecking=no \ + -o UserKnownHostsFile=/dev/null \ + -o SendEnv="CONTAINER_NAME CONTAINER_PASSWORD PROXMOX_USERNAME PROXMOX_PASSWORD GITHUB_PAT LINUX_DISTRIBUTION PROJECT_REPOSITORY" \ + setup-runner@opensource.mieweb.org + + EXIT_STATUS=$? + + # Exit if a container exists but an associated runner does not. + if [ $EXIT_STATUS != 3 ]; then + echo "Something went wrong with creating/using a runner." + exit 1 + fi + + echo "STOP_SCRIPT=true" >> $GITHUB_OUTPUT + fi + + - name: Container Creation for Branch (If Needed) + id: create-lxc + shell: bash + if: ${{ (github.event_name == 'create' || github.event_name == 'push') && steps.should-run.outputs.should_run == 'true' }} + env: + GITHUB_EVENT: ${{ github.event_name }} + GITHUB_REPOSITORY_FULL: ${{ github.repository }} + GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }} + TARGET_BRANCH: ${{ steps.branch-name.outputs.target_branch }} + CONTAINER_PASSWORD: ${{ inputs.container_password }} + PROXMOX_USERNAME: ${{ inputs.proxmox_username }} + PROXMOX_PASSWORD: ${{ inputs.proxmox_password }} + PUBLIC_KEY: ${{ inputs.public_key }} + HTTP_PORT: ${{ inputs.http_port }} + DEPLOY_ON_START: ${{ inputs.deploy_on_start }} + PROJECT_REPOSITORY: ${{ github.server_url }}/${{ github.repository }} + PROJECT_BRANCH: ${{ steps.branch-name.outputs.target_branch }} + PROJECT_ROOT: ${{ inputs.project_root }} + REQUIRE_ENV_VARS: ${{ inputs.require_env_vars }} + CONTAINER_ENV_VARS: ${{ inputs.container_env_vars }} + INSTALL_COMMAND: ${{ inputs.install_command }} + START_COMMAND: ${{ inputs.start_command }} + BUILD_COMMAND: ${{ inputs.build_command }} + RUNTIME_LANGUAGE: ${{ inputs.runtime_language }} + REQUIRE_SERVICES: ${{ inputs.require_services }} + SERVICES: ${{ inputs.services }} + CUSTOM_SERVICES: ${{ inputs.custom_services }} + LINUX_DISTRIBUTION: ${{ inputs.linux_distribution }} + MULTI_COMPONENT: ${{ inputs.multi_component }} + ROOT_START_COMMAND: ${{ inputs.root_start_command }} + GITHUB_PAT: ${{ 
inputs.github_pat }} + GH_ACTION: y + run: | + set +e + REPO_NAME=$(basename "$GITHUB_REPOSITORY_FULL") + CONTAINER_NAME="${GITHUB_REPOSITORY_OWNER}-${REPO_NAME}-${TARGET_BRANCH}" + CONTAINER_NAME=${CONTAINER_NAME,,} + CONTAINER_NAME=$(echo "$CONTAINER_NAME" | sed 's/[^a-z0-9-]/-/g') + export CONTAINER_NAME + STOP_SCRIPT=${{ steps.create-runner.outputs.STOP_SCRIPT }} + if [ "$STOP_SCRIPT" != "true" ]; then + set +e + echo "Running Container Exists..." + + # Determine SSH target based on network location + EXTERNAL_IP=$(dig +short opensource.mieweb.org) + if [ "$EXTERNAL_IP" = "10.15.20.69" ]; then + SSH_TARGET="10.15.0.4" + else + SSH_TARGET="opensource.mieweb.org" + fi + + sshpass -p 'mie123!' ssh \ + -T \ + -o StrictHostKeyChecking=no \ + -o UserKnownHostsFile=/dev/null \ + -o SendEnv="PROXMOX_USERNAME PROXMOX_PASSWORD CONTAINER_NAME PROJECT_REPOSITORY" \ + container-exists@$SSH_TARGET + CONTAINER_EXISTS=$? + if [ $CONTAINER_EXISTS -eq 1 ]; then + echo "FAILED=1" >> $GITHUB_ENV # User does not own the container + elif [ $CONTAINER_EXISTS -eq 0 ]; then + echo "Cloning repository based on $PROJECT_BRANCH branch." + + sshpass -p 'mie123!' ssh \ + -T \ + -o StrictHostKeyChecking=no \ + -o UserKnownHostsFile=/dev/null \ + -o SendEnv="CONTAINER_NAME CONTAINER_PASSWORD PROXMOX_USERNAME PUBLIC_KEY PROXMOX_PASSWORD HTTP_PORT DEPLOY_ON_START PROJECT_REPOSITORY PROJECT_BRANCH PROJECT_ROOT REQUIRE_ENV_VARS CONTAINER_ENV_VARS INSTALL_COMMAND START_COMMAND RUNTIME_LANGUAGE REQUIRE_SERVICES SERVICES CUSTOM_SERVICES LINUX_DISTRIBUTION MULTI_COMPONENT ROOT_START_COMMAND GH_ACTION GITHUB_PAT" \ + create-container@$SSH_TARGET + + CONTAINER_CREATED=$? 
+ echo "CONTAINER_CREATED=true" >> $GITHUB_OUTPUT + if [ $CONTAINER_CREATED -ne 0 ]; then + echo "FAILED=1" >> $GITHUB_ENV + fi + fi + fi + + - name: Container Update on Branch Push + shell: bash + if: ${{ (github.event_name == 'push' && steps.create-lxc.outputs.CONTAINER_CREATED != 'true') && steps.should-run.outputs.should_run == 'true' }} + env: + GITHUB_EVENT: ${{ github.event_name }} + GITHUB_REPOSITORY_FULL: ${{ github.repository }} + GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }} + TARGET_BRANCH: ${{ steps.branch-name.outputs.target_branch }} + PROXMOX_USERNAME: ${{ inputs.proxmox_username }} + PROXMOX_PASSWORD: ${{ inputs.proxmox_password }} + PROJECT_REPOSITORY: ${{ github.server_url }}/${{ github.repository }} + PROJECT_BRANCH: ${{ steps.branch-name.outputs.target_branch }} + PROJECT_ROOT: ${{ inputs.project_root }} + INSTALL_COMMAND: ${{ inputs.install_command }} + START_COMMAND: ${{ inputs.start_command }} + BUILD_COMMAND: ${{ inputs.build_command }} + RUNTIME_LANGUAGE: ${{ inputs.runtime_language }} + MULTI_COMPONENT: ${{ inputs.multi_component }} + SERVICES: ${{ inputs.services }} + CUSTOM_SERVICES: ${{ inputs.custom_services }} + REQUIRE_SERVICES: ${{ inputs.require_services }} + LINUX_DISTRIBUTION: ${{ inputs.linux_distribution }} + DEPLOY_ON_START: ${{ inputs.deploy_on_start }} + ROOT_START_COMMAND: ${{ inputs.root_start_command }} + GITHUB_PAT: ${{ inputs.github_pat }} + HTTP_PORT: ${{ inputs.http_port }} + GH_ACTION: y + run: | + set +e + echo "Running Container Update..." 
+ REPO_NAME=$(basename "$GITHUB_REPOSITORY_FULL") + CONTAINER_NAME="${GITHUB_REPOSITORY_OWNER}-${REPO_NAME}-${TARGET_BRANCH}" + CONTAINER_NAME=${CONTAINER_NAME,,} + CONTAINER_NAME=$(echo "$CONTAINER_NAME" | sed 's/[^a-z0-9-]/-/g') + export CONTAINER_NAME + echo "$LINUX_DISTRIBUTION" + STOP_SCRIPT=${{ steps.create-runner.outputs.STOP_SCRIPT }} + if [ "$STOP_SCRIPT" != true ]; then + # Determine SSH target based on network location + EXTERNAL_IP=$(dig +short opensource.mieweb.org) + if [ "$EXTERNAL_IP" = "10.15.20.69" ]; then + SSH_TARGET="10.15.0.4" + else + SSH_TARGET="opensource.mieweb.org" + fi + + sshpass -p 'mie123!' ssh \ + -T \ + -o StrictHostKeyChecking=no \ + -o UserKnownHostsFile=/dev/null \ + -o SendEnv="CONTAINER_NAME PROXMOX_USERNAME PROXMOX_PASSWORD PROJECT_REPOSITORY PROJECT_BRANCH PROJECT_ROOT INSTALL_COMMAND START_COMMAND BUILD_COMMAND RUNTIME_LANGUAGE MULTI_COMPONENT ROOT_START_COMMAND DEPLOY_ON_START SERVICES CUSTOM_SERVICES REQUIRE_SERVICES LINUX_DISTRIBUTION GH_ACTION HTTP_PORT" \ + update-container@$SSH_TARGET + UPDATE_EXIT=$? 
+ if [ $UPDATE_EXIT -ne 0 ]; then + echo "FAILED=1" >> $GITHUB_ENV + fi + fi + + - name: Container Deletion on Branch Deletion (Check) + shell: bash + if: ${{ github.event_name == 'delete' && steps.should-run.outputs.should_run == 'true' }} + env: + GITHUB_EVENT: ${{ github.event_name }} + GITHUB_REPOSITORY_FULL: ${{ github.repository }} + GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }} + TARGET_BRANCH: ${{ steps.branch-name.outputs.target_branch }} + PROXMOX_USERNAME: ${{ inputs.proxmox_username }} + PROXMOX_PASSWORD: ${{ inputs.proxmox_password }} + PROJECT_REPOSITORY: ${{ github.server_url }}/${{ github.repository }} + GITHUB_PAT: ${{ inputs.github_pat }} + run: | + set +e + REPO_NAME=$(basename "$GITHUB_REPOSITORY_FULL") + CONTAINER_NAME="${GITHUB_REPOSITORY_OWNER}-${REPO_NAME}-${TARGET_BRANCH}" + CONTAINER_NAME=${CONTAINER_NAME,,} + CONTAINER_NAME=$(echo "$CONTAINER_NAME" | sed 's/[^a-z0-9-]/-/g') + export CONTAINER_NAME + STOP_SCRIPT=${{ steps.create-runner.outputs.STOP_SCRIPT }} + if [ "$STOP_SCRIPT" != true ]; then + # Determine SSH target based on network location + EXTERNAL_IP=$(dig +short opensource.mieweb.org) + if [ "$EXTERNAL_IP" = "10.15.20.69" ]; then + SSH_TARGET="10.15.0.4" + else + SSH_TARGET="opensource.mieweb.org" + fi + + sshpass -p 'mie123!' ssh \ + -T \ + -o StrictHostKeyChecking=no \ + -o UserKnownHostsFile=/dev/null \ + -o SendEnv="PROXMOX_USERNAME PROXMOX_PASSWORD CONTAINER_NAME GITHUB_PAT PROJECT_REPOSITORY" \ + delete-container@$SSH_TARGET + DELETE_EXIT=$? 
+ if [ $DELETE_EXIT -ne 0 ]; then + echo "FAILED=1" >> $GITHUB_ENV + fi + fi + + - name: Check if branch is part of a PR and comment + shell: bash + id: check-pr + if: steps.should-run.outputs.should_run == 'true' && steps.create-runner.outputs.CREATE_RUNNER_JOB != 'true' && env.FAILED != '1' + env: + GITHUB_TOKEN: ${{ inputs.github_pat }} + GITHUB_REPOSITORY: ${{ github.repository }} + TARGET_BRANCH: ${{ steps.branch-name.outputs.target_branch }} + GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }} + RUN_ID: ${{ github.run_id }} + run: | + if [ -z "$GITHUB_TOKEN" ]; then + echo "pr_number=" >> $GITHUB_OUTPUT + echo "is_pr=false" >> $GITHUB_OUTPUT + echo "No GitHub token provided, skipping PR detection" + exit 0 + fi + + # Check if this branch has an open PR + PR_DATA=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \ + "https://api.github.com/repos/$GITHUB_REPOSITORY/pulls?state=open&head=${{ github.repository_owner }}:$TARGET_BRANCH") + + PR_NUMBER=$(echo "$PR_DATA" | jq -r '.[0].number // empty') + + if [ -n "$PR_NUMBER" ] && [ "$PR_NUMBER" != "null" ]; then + echo "pr_number=$PR_NUMBER" >> $GITHUB_OUTPUT + echo "is_pr=true" >> $GITHUB_OUTPUT + echo "Branch $TARGET_BRANCH is part of PR #$PR_NUMBER" + + # Generate container name + REPO_NAME=$(basename "$GITHUB_REPOSITORY") + CONTAINER_NAME="${GITHUB_REPOSITORY_OWNER}-${REPO_NAME}-${TARGET_BRANCH}" + CONTAINER_NAME=${CONTAINER_NAME,,} + CONTAINER_NAME=$(echo "$CONTAINER_NAME" | sed 's/[^a-z0-9-]/-/g') + + # Create initial comment on PR + CONTAINER_URL="https://${CONTAINER_NAME}.opensource.mieweb.org" + + COMMENT_BODY="## 🚀 Proxmox LaunchPad Action + **Expected URL**: [$CONTAINER_NAME]($CONTAINER_URL) *(will be available once deployment completes)* + **Status**: ✅ Application was deployed according to workflow configurations. 
+ **Branch**: \`$TARGET_BRANCH\` + **Run ID**: [\`$RUN_ID\`](https://github.com/$GITHUB_REPOSITORY/actions/runs/$RUN_ID) + **Container Name**: \`$CONTAINER_NAME\` + + > This comment was automatically generated by Proxmox LaunchPad: The fastest way to deploy your repository code. To use Proxmox in your own repository, see: [Proxmox LaunchPad](https://github.com/marketplace/actions/proxmox-launchpad)." + + # Use jq to safely build the JSON payload from the variable + JSON_PAYLOAD=$(jq -n --arg body "$COMMENT_BODY" '{body: $body}') + + # Post the initial comment + curl -s -X POST \ + -H "Authorization: token $GITHUB_TOKEN" \ + -H "Content-Type: application/json" \ + -H "Accept: application/vnd.github.v3+json" \ + "https://api.github.com/repos/$GITHUB_REPOSITORY/issues/$PR_NUMBER/comments" \ + -d "$JSON_PAYLOAD" > /dev/null + + echo "Initial comment posted to PR #$PR_NUMBER" + else + echo "pr_number=" >> $GITHUB_OUTPUT + echo "is_pr=false" >> $GITHUB_OUTPUT + echo "Branch $TARGET_BRANCH is not part of any open PR" + fi + + - name: Comment on PR on Failure + if: env.FAILED == '1' + shell: bash + env: + GITHUB_TOKEN: ${{ inputs.github_pat }} + GITHUB_REPOSITORY: ${{ github.repository }} + TARGET_BRANCH: ${{ steps.branch-name.outputs.target_branch }} + RUN_ID: ${{ github.run_id }} + GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }} + run: | + if [ -z "$GITHUB_TOKEN" ]; then + echo "Cannot comment on PR: missing token" + exit 1 + fi + + # Check if this branch has an open PR + PR_DATA=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \ + "https://api.github.com/repos/$GITHUB_REPOSITORY/pulls?state=open&head=${GITHUB_REPOSITORY_OWNER}:$TARGET_BRANCH") + + PR_NUMBER=$(echo "$PR_DATA" | jq -r '.[0].number // empty') + + if [ -z "$PR_NUMBER" ] || [ "$PR_NUMBER" == "null" ]; then + echo "Not a pull request, skipping failure comment." 
+ exit 0 + fi + + REPO_NAME=$(basename "$GITHUB_REPOSITORY") + CONTAINER_NAME="${GITHUB_REPOSITORY_OWNER}-${REPO_NAME}-${TARGET_BRANCH}" + CONTAINER_NAME=${CONTAINER_NAME,,} + CONTAINER_NAME=$(echo "$CONTAINER_NAME" | sed 's/[^a-z0-9-]/-/g') + + CONTAINER_URL="https://${CONTAINER_NAME}.opensource.mieweb.org" + + COMMENT_BODY="## 🚀 Proxmox LaunchPad Action + **Expected URL**: [$CONTAINER_NAME]($CONTAINER_URL) *(will be available once deployment completes)* + **Status**: ❌ Application failed to deploy. View [\`$RUN_ID\`](https://github.com/$GITHUB_REPOSITORY/actions/runs/$RUN_ID) to see logs. + **Branch**: \`$TARGET_BRANCH\` + **Run ID**: [\`$RUN_ID\`](https://github.com/$GITHUB_REPOSITORY/actions/runs/$RUN_ID) + **Container Name**: \`$CONTAINER_NAME\` + + > This comment was automatically generated by Proxmox LaunchPad: The fastest way to deploy your repository code. To use Proxmox in your own repository, see: [Proxmox LaunchPad](https://github.com/marketplace/actions/proxmox-launchpad)." + + JSON_PAYLOAD=$(jq -n --arg body "$COMMENT_BODY" '{body: $body}') + + # Post the comment + curl -s -X POST \ + -H "Authorization: token $GITHUB_TOKEN" \ + -H "Content-Type: application/json" \ + -H "Accept: application/vnd.github.v3+json" \ + "https://api.github.com/repos/$GITHUB_REPOSITORY/issues/$PR_NUMBER/comments" \ + -d "$JSON_PAYLOAD" > /dev/null + + echo "Failure comment posted to PR #$PR_NUMBER" + exit 1 + + - name: Create GitHub Deployment (Default Branch) + if: github.ref == format('refs/heads/{0}', github.event.repository.default_branch) && env.FAILED != '1' + shell: bash + env: + GITHUB_TOKEN: ${{ inputs.github_pat }} + GITHUB_REPOSITORY: ${{ github.repository }} + GITHUB_SHA: ${{ github.sha }} + GITHUB_REF: ${{ github.ref }} + GITHUB_REPOSITORY_OWNER: ${{ github.repository_owner }} + run: | + REPO_NAME=$(basename "$GITHUB_REPOSITORY") + CONTAINER_NAME="${GITHUB_REPOSITORY_OWNER}-${REPO_NAME}-${GITHUB_REF#refs/heads/}" + CONTAINER_NAME=${CONTAINER_NAME,,} + 
CONTAINER_NAME=$(echo "$CONTAINER_NAME" | sed 's/[^a-z0-9-]/-/g') + CONTAINER_URL="https://${CONTAINER_NAME}.opensource.mieweb.org" + DEPLOYMENT_RESPONSE=$(curl -s -X POST \ + -H "Authorization: token $GITHUB_TOKEN" \ + -H "Content-Type: application/json" \ + -H "Accept: application/vnd.github.v3+json" \ + "https://api.github.com/repos/$GITHUB_REPOSITORY/deployments" \ + -d '{ + "ref": "'${GITHUB_REF#refs/heads/}'", + "required_contexts": [], + "environment": "Preview - '$GITHUB_REPOSITORY'", + "description": "Deployment triggered from Proxmox LaunchPad action.", + "sha": "'$GITHUB_SHA'" + }') + DEPLOYMENT_ID=$(echo "$DEPLOYMENT_RESPONSE" | jq -r '.id') + if [ "$DEPLOYMENT_ID" != "null" ] && [ -n "$DEPLOYMENT_ID" ]; then + curl -s -X POST \ + -H "Authorization: token $GITHUB_TOKEN" \ + -H "Content-Type: application/json" \ + -H "Accept: application/vnd.github.v3+json" \ + "https://api.github.com/repos/$GITHUB_REPOSITORY/deployments/$DEPLOYMENT_ID/statuses" \ + -d '{ + "state": "success", + "description": "Deployment completed successfully.", + "environment": "Preview - '$GITHUB_REPOSITORY'", + "environment_url": "'$CONTAINER_URL'" + }' > /dev/null + echo "Deployment created and marked as successful for default branch: ${GITHUB_REF#refs/heads/}" + echo "Deployment URL: $CONTAINER_URL" + else + echo "Deployment creation failed." + fi + + - name: Catch All Failure Step + if: env.FAILED == '1' + shell: bash + run: | + echo "Workflow failed. See previous steps for details." 
+ exit 1 diff --git a/ci-cd automation/update-container.sh b/ci-cd automation/update-container.sh new file mode 100644 index 00000000..484f4996 --- /dev/null +++ b/ci-cd automation/update-container.sh @@ -0,0 +1,266 @@ +#!/bin/bash +# Script to automatically fetch new contents from a branch, push them to container, and restart intern +# Last Modified on August 5th, 2025 by Maxwell Klema +# ---------------------------------------- + +RESET="\033[0m" +BOLD="\033[1m" +MAGENTA='\033[35m' + +outputError() { + echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" + echo -e "${BOLD}${MAGENTA}❌ Script Failed. Exiting... ${RESET}" + echo -e "$2" + echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" + exit $1 +} + + +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" +echo -e "${BOLD}${MAGENTA}🔄 Update Container Contents ${RESET}" +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" + +if [ -z "${RUNTIME_LANGUAGE^^}" ]; then + echo "Skipping container update because there is nothing to update." + exit 0 +fi + +source /var/lib/vz/snippets/helper-scripts/PVE_user_authentication.sh +source /var/lib/vz/snippets/helper-scripts/verify_container_ownership.sh + +# Get Project Details + +CONTAINER_NAME="${CONTAINER_NAME,,}" + +if [ -z "$PROJECT_REPOSITORY" ]; then + read -p "🚀 Paste the link to your project repository → " PROJECT_REPOSITORY +else + DEPLOY_ON_START="y" +fi + +CheckRepository() { + PROJECT_REPOSITORY_SHORTENED=${PROJECT_REPOSITORY#*github.com/} + PROJECT_REPOSITORY_SHORTENED=${PROJECT_REPOSITORY_SHORTENED%.git} + REPOSITORY_EXISTS=$(curl -s -o /dev/null -w "%{http_code}" https://github.com/$PROJECT_REPOSITORY_SHORTENED) +} + +CheckRepository + +if [ "$REPOSITORY_EXISTS" != "200" ]; then + outputError 1 "The repository link you provided, \"$PROJECT_REPOSITORY\" was not valid." +fi + +echo "✅ The repository link you provided, \"$PROJECT_REPOSITORY\", was valid." 
+ +# Get Project Branch + +if [ -z "$PROJECT_BRANCH" ]; then + PROJECT_BRANCH="main" +fi + +REPOSITORY_BRANCH_EXISTS=$(curl -s -o /dev/null -w "%{http_code}" https://api.github.com/repos/$PROJECT_REPOSITORY_SHORTENED/branches/$PROJECT_BRANCH) + +if [ "$REPOSITORY_BRANCH_EXISTS" != "200" ]; then + outputError 1 "The branch you provided, \"$PROJECT_BRANCH\", does not exist on repository at \"$PROJECT_REPOSITORY\"." +fi + + +# Get Project Root Directory + +if [ "$PROJECT_ROOT" == "." ] || [ -z "$PROJECT_ROOT" ]; then + PROJECT_ROOT="/" +fi + +VALID_PROJECT_ROOT=$(ssh root@10.15.234.122 "node /root/bin/js/runner.js authenticateRepo \"$PROJECT_REPOSITORY\" \"$PROJECT_BRANCH\" \"$PROJECT_ROOT\"") + +if [ "$VALID_PROJECT_ROOT" == "false" ]; then + outputError 1 "The root directory you provided, \"$PROJECT_ROOT\", does not exist on branch, \"$PROJECT_BRANCH\", on repository at \"$PROJECT_REPOSITORY\"." +fi + +REPO_BASE_NAME=$(basename -s .git "$PROJECT_REPOSITORY") +REPO_BASE_NAME_WITH_OWNER=$(echo "$PROJECT_REPOSITORY" | cut -d'/' -f4) + +if [ "$PROJECT_ROOT" == "" ] || [ "$PROJECT_ROOT" == "/" ]; then + PROJECT_ROOT="." +fi + +# Install Services ==== + +echo "🛎️ Installing Services..." + +if [ -z "$LINUX_DISTRIBUTION" ]; then + LINUX_DISTRIBUTION="debian" +fi + +if [ ! -z "$SERVICES" ] || [ ! -z "$CUSTOM_SERVICES" ]; then + REQUIRE_SERVICES="y" +fi + +SERVICE_COMMANDS=$(ssh -o SendEnv="LINUX_DISTRIBUTION SERVICES CUSTOM_SERVICES REQUIRE_SERVICES" \ + root@10.15.234.122 \ + "/root/bin/deployment-scripts/gatherServices.sh true") + +echo "$SERVICE_COMMANDS" | while read -r line; do + pct exec $CONTAINER_ID -- bash -c "$line || true" > /dev/null 2>&1 +done + +# Change HTTP port if necessary ==== + +if [ ! -z "$HTTP_PORT" ]; then + if [ "$HTTP_PORT" -lt 80 ] || [ "$HTTP_PORT" -gt 60000 ]; then + outputError 1 "Invalid HTTP port: $HTTP_PORT. Must be between 80 and 60000."
+ fi + ssh root@10.15.20.69 -- \ +"jq \ '.[\"$CONTAINER_NAME\"].ports.http = $HTTP_PORT' \ + /etc/nginx/port_map.json > /tmp/port_map.json.new \ + && mv -f /tmp/port_map.json.new /etc/nginx/port_map.json " +fi + + +# Clone repository if needed ==== + +if (( "$CONTAINER_ID" % 2 == 0 )); then + ssh root@10.15.0.5 " + pct enter $CONTAINER_ID < /dev/null +fi +EOF + " +else + pct enter $CONTAINER_ID < /dev/null +fi +EOF +fi + +# Update Container with New Contents from repository ===== + +startComponentPVE1() { + + RUNTIME="$1" + BUILD_CMD="$2" + START_CMD="$3" + COMP_DIR="$4" + INSTALL_CMD="$5" + + if [ "${RUNTIME^^}" == "NODEJS" ]; then + pct set $CONTAINER_ID --memory 4096 --swap 0 --cores 4 + pct exec $CONTAINER_ID -- bash -c "cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/ && git fetch origin && git reset --hard origin/$PROJECT_BRANCH && git pull" > /dev/null 2>&1 + pct exec $CONTAINER_ID -- bash -c "cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/$COMP_DIR && $INSTALL_CMD && $BUILD_CMD" > /dev/null 2>&1 + pct set $CONTAINER_ID --memory 2048 --swap 0 --cores 2 + elif [ "${RUNTIME^^}" == "PYTHON" ]; then + pct set $CONTAINER_ID --memory 4096 --swap 0 --cores 4 + pct exec $CONTAINER_ID -- bash -c "cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/ && git fetch origin && git reset --hard origin/$PROJECT_BRANCH && git pull" > /dev/null 2>&1 + pct exec $CONTAINER_ID -- bash -c "cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/$COMP_DIR && source venv/bin/activate && $INSTALL_CMD && $BUILD_CMD" > /dev/null 2>&1 + pct set $CONTAINER_ID --memory 2048 --swap 0 --cores 2 + fi +} + +startComponentPVE2() { + + RUNTIME="$1" + BUILD_CMD="$2" + START_CMD="$3" + COMP_DIR="$4" + INSTALL_CMD="$5" + + if [ "${RUNTIME^^}" == "NODEJS" ]; then + ssh root@10.15.0.5 " + pct set $CONTAINER_ID --memory 4096 --swap 0 --cores 4 && + pct exec $CONTAINER_ID -- bash -c 'cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/ && git fetch origin && git reset --hard origin/$PROJECT_BRANCH && git pull' > /dev/null 2>&1 + pct exec $CONTAINER_ID -- bash 
-c 'cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/$COMP_DIR && $INSTALL_CMD && $BUILD_CMD' > /dev/null 2>&1 + pct set $CONTAINER_ID --memory 2048 --swap 0 --cores 2 + " + elif [ "${RUNTIME^^}" == "PYTHON" ]; then + ssh root@10.15.0.5 " + pct set $CONTAINER_ID --memory 4096 --swap 0 --cores 4 && + pct exec $CONTAINER_ID -- bash -c 'cd /root/$REPO_BASE_NAME/$PROJECT_ROOT && git fetch origin && git reset --hard origin/$PROJECT_BRANCH && git pull' > /dev/null 2>&1 + pct exec $CONTAINER_ID -- bash -c 'cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/$COMP_DIR && source venv/bin/activate && $INSTALL_CMD && $BUILD_CMD' > /dev/null 2>&1 + pct set $CONTAINER_ID --memory 2048 --swap 0 --cores 2 + " + fi +} + + +if [ ! -z "$RUNTIME_LANGUAGE" ] && echo "$RUNTIME_LANGUAGE" | jq . >/dev/null 2>&1; then # If RUNTIME_LANGUAGE is set and is valid JSON + MULTI_COMPONENT="Y" +fi + +if [ "${MULTI_COMPONENT^^}" == "Y" ]; then + for COMPONENT in $(echo "$START_COMMAND" | jq -r 'keys[]'); do + START=$(echo "$START_COMMAND" | jq -r --arg k "$COMPONENT" '.[$k]') + RUNTIME=$(echo "$RUNTIME_LANGUAGE" | jq -r --arg k "$COMPONENT" '.[$k]') + BUILD=$(echo "$BUILD_COMMAND" | jq -r --arg k "$COMPONENT" '.[$k]') + INSTALL=$(echo "$INSTALL_COMMAND" | jq -r --arg k "$COMPONENT" '.[$k]') + if [ "$BUILD" == "null" ]; then + BUILD="" + fi + + if (( "$CONTAINER_ID" % 2 == 0 )); then + startComponentPVE2 "$RUNTIME" "$BUILD" "$START" "$COMPONENT" "$INSTALL" + else + startComponentPVE1 "$RUNTIME" "$BUILD" "$START" "$COMPONENT" "$INSTALL" + fi + done + if [ ! -z "$ROOT_START_COMMAND" ]; then + if (( $CONTAINER_ID % 2 == 0 )); then + ssh root@10.15.0.5 "pct exec $CONTAINER_ID -- bash -c 'cd /root/$REPO_BASE_NAME/$PROJECT_ROOT && $ROOT_START_COMMAND'" + else + pct exec $CONTAINER_ID -- bash -c "cd /root/$REPO_BASE_NAME/$PROJECT_ROOT && $ROOT_START_COMMAND" + fi + fi + # startComponent "$RUNTIME_LANGUAGE" "$BUILD_COMMAND" "$START_COMMAND" "."
+else + if (( $CONTAINER_ID % 2 == 0 )); then + startComponentPVE2 "$RUNTIME_LANGUAGE" "$BUILD_COMMAND" "$START_COMMAND" "." "$INSTALL_COMMAND" + else + startComponentPVE1 "$RUNTIME_LANGUAGE" "$BUILD_COMMAND" "$START_COMMAND" "." "$INSTALL_COMMAND" + fi +fi + +# Update Log File + +if (( "$CONTAINER_ID" % 2 == 0 )); then + ssh root@10.15.0.5 "pct exec $CONTAINER_ID -- bash -c 'echo \"[$(date)]\" >> /root/container-updates.log'" +else + pct exec $CONTAINER_ID -- bash -c "echo \"[$(date)]\" >> /root/container-updates.log" +fi + +# Create new template if on default branch ===== + +UPDATE_CONTAINER="true" +BUILD_COMMAND_B64=$(echo -n "$BUILD_COMMAND" | base64) +RUNTIME_LANGUAGE_B64=$(echo -n "$RUNTIME_LANGUAGE" | base64) +START_COMMAND_B64=$(echo -n "$START_COMMAND" | base64) + +CMD=( +bash /var/lib/vz/snippets/start_services.sh +"$CONTAINER_ID" +"$CONTAINER_NAME" +"$REPO_BASE_NAME" +"$REPO_BASE_NAME_WITH_OWNER" +"$SSH_PORT" +"$CONTAINER_IP" +"$PROJECT_ROOT" +"$ROOT_START_COMMAND" +"$DEPLOY_ON_START" +"$MULTI_COMPONENT" +"$START_COMMAND_B64" +"$BUILD_COMMAND_B64" +"$RUNTIME_LANGUAGE_B64" +"$GH_ACTION" +"$PROJECT_BRANCH" +"$UPDATE_CONTAINER" +) + +# Safely quote each argument for the shell +QUOTED_CMD=$(printf ' %q' "${CMD[@]}") + +tmux new-session -d -s "$CONTAINER_NAME" "$QUOTED_CMD" +echo "✅ Container $CONTAINER_ID has been updated with new contents from branch \"$PROJECT_BRANCH\" on repository \"$PROJECT_REPOSITORY\"." 
+exit 0 + diff --git a/container creation/README.md b/container creation/README.md new file mode 100644 index 00000000..a2ca2a56 --- /dev/null +++ b/container creation/README.md @@ -0,0 +1 @@ +# Container Creation \ No newline at end of file diff --git a/container creation/configureLDAP.sh b/container creation/configureLDAP.sh new file mode 100755 index 00000000..950601e2 --- /dev/null +++ b/container creation/configureLDAP.sh @@ -0,0 +1,34 @@ +#!/bin/bash +# Script to connect a container to the LDAP server via SSSD +# Last Modified by Maxwell Klema on July 29th, 2025 +# ----------------------------------------------------- + +# Curl Pown.sh script to install SSSD and configure LDAP +pct enter $CONTAINER_ID < /dev/null 2>&1 && \ +chmod +x pown.sh +EOF + +# Copy .env file to container +ENV_FILE="/var/lib/vz/snippets/.env" +pct enter $CONTAINER_ID < /root/.env +$(cat "$ENV_FILE") +EOT +EOF + +# Run the pown.sh script to configure LDAP +pct exec $CONTAINER_ID -- bash -c "cd /root && ./pown.sh" > /dev/null 2>&1 + +# remove ldap_tls_cert from /etc/sssd/sssd.conf +pct exec $CONTAINER_ID -- sed -i '/ldap_tls_cacert/d' /etc/sssd/sssd.conf > /dev/null 2>&1 + +# Add TLS_REQCERT to never in ROCKY + +if [ "${LINUX_DISTRO^^}" == "ROCKY" ]; then + pct exec $CONTAINER_ID -- bash -c "echo 'TLS_REQCERT never' >> /etc/openldap/ldap.conf" > /dev/null 2>&1 + pct exec $CONTAINER_ID -- bash -c "authselect select sssd --force" > /dev/null 2>&1 + pct exec $CONTAINER_ID -- bash -c "systemctl restart sssd" > /dev/null 2>&1 +fi diff --git a/container creation/create-container.sh b/container creation/create-container.sh new file mode 100644 index 00000000..17d2148b --- /dev/null +++ b/container creation/create-container.sh @@ -0,0 +1,213 @@ +#!/bin/bash +# Script to create the pct container, run register container, and migrate container accordingly. 
+# Last Modified on August 5th, 2025 by Maxwell Klema
+# -----------------------------------------------------
+
+BOLD='\033[1m'
+BLUE='\033[34m'
+MAGENTA='\033[35m'
+GREEN='\033[32m'
+RESET='\033[0m'
+
+# Run cleanup commands in case script is interrupted
+
+cleanup()
+{
+
+    echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}"
+    echo "⚠️ Script exited abruptly. Running cleanup tasks."
+    echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}"
+    pct unlock $CTID_TEMPLATE
+    for file in \
+        "/var/lib/vz/snippets/container-public-keys/$PUB_FILE" \
+        "/var/lib/vz/snippets/container-port-maps/$PROTOCOL_FILE" \
+        "/var/lib/vz/snippets/container-env-vars/$ENV_BASE_FOLDER" \
+        "/var/lib/vz/snippets/container-services/$SERVICES_BASE_FILE"
+    do
+        [ -e "$file" ] && rm -rf "$file" # -e, not -f: $ENV_BASE_FOLDER is a directory
+    done
+    exit 1
+}
+
+# Echo Container Details
+echoContainerDetails() {
+    echo -e "📦 ${BLUE}Container ID :${RESET} $CONTAINER_ID"
+    echo -e "🌐 ${MAGENTA}Internal IP :${RESET} $CONTAINER_IP"
+    echo -e "🔗 ${GREEN}Domain Name :${RESET} https://$CONTAINER_NAME.opensource.mieweb.org"
+    echo -e "🛠️ ${BLUE}SSH Access :${RESET} ssh -p $SSH_PORT $PROXMOX_USERNAME@$CONTAINER_NAME.opensource.mieweb.org"
+    echo -e "🔑 ${BLUE}Container Password :${RESET} Your Proxmox account password"
+    echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}"
+    echo -e "${BOLD}${MAGENTA}NOTE: Additional background scripts are being run in detached terminal sessions.${RESET}"
+    echo -e "${BOLD}${MAGENTA}Wait up to two minutes for all processes to complete.${RESET}"
+    echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}"
+    echo -e "${BOLD}${BLUE}Still not working? Contact Max K. 
at maxklema@gmail.com${RESET}"
+    echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}"
+
+}
+
+trap cleanup SIGINT SIGTERM SIGHUP
+
+CONTAINER_NAME="$1"
+CONTAINER_NAME="${CONTAINER_NAME,,}" # lowercase after reading the argument, not before
+GH_ACTION="$2"
+HTTP_PORT="$3"
+PROXMOX_USERNAME="$4"
+USERNAME_ONLY="${PROXMOX_USERNAME%@*}"
+PUB_FILE="$5"
+PROTOCOL_FILE="$6"
+
+# Deployment ENVS
+DEPLOY_ON_START="$7"
+PROJECT_REPOSITORY="$8"
+PROJECT_BRANCH="$9"
+PROJECT_ROOT="${10}"
+INSTALL_COMMAND=$(echo "${11}" | base64 -d)
+BUILD_COMMAND=$(echo "${12}" | base64 -d)
+START_COMMAND=$(echo "${13}" | base64 -d)
+RUNTIME_LANGUAGE=$(echo "${14}" | base64 -d)
+ENV_BASE_FOLDER="${15}"
+SERVICES_BASE_FILE="${16}"
+LINUX_DISTRO="${17}"
+MULTI_COMPONENTS="${18}"
+ROOT_START_COMMAND="${19}"
+
+# Pick the correct template to clone =====
+
+REPO_BASE_NAME=$(basename -s .git "$PROJECT_REPOSITORY")
+REPO_BASE_NAME_WITH_OWNER=$(echo "$PROJECT_REPOSITORY" | cut -d'/' -f4)
+
+TEMPLATE_NAME="template-$REPO_BASE_NAME-$REPO_BASE_NAME_WITH_OWNER"
+CTID_TEMPLATE=$( { pct list; ssh root@10.15.0.5 'pct list'; } | awk -v name="$TEMPLATE_NAME" '$3 == name {print $1}')
+
+case "${LINUX_DISTRO^^}" in
+    DEBIAN) PACKAGE_MANAGER="apt-get" ;;
+    ROCKY) PACKAGE_MANAGER="dnf" ;;
+esac
+
+# If no template ID was provided, assign a default based on distro
+
+if [ -z "$CTID_TEMPLATE" ]; then
+    case "${LINUX_DISTRO^^}" in
+        DEBIAN) CTID_TEMPLATE="160" ;;
+        ROCKY) CTID_TEMPLATE="138" ;;
+    esac
+fi
+
+# Create the Container Clone ====
+
+if [ "${GH_ACTION^^}" != "Y" ]; then
+    CONTAINER_ID=$(pvesh get /cluster/nextid) # Get the next available LXC ID
+
+    echo "⏳ Cloning Container..."
+    pct clone $CTID_TEMPLATE $CONTAINER_ID \
+        --hostname $CONTAINER_NAME \
+        --full true > /dev/null 2>&1
+
+    # Set Container Options
+
+    echo "⏳ Setting Container Properties..." 
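The template lookup above keys on the third column of `pct list` output, matching by exact name and printing the CTID. The same awk idiom can be exercised against a canned sample (the container names below are made up for illustration):

```shell
# Sample rows in the shape of `pct list` output: CTID, status, name
SAMPLE_LIST='101 running template-myapp-alice
102 stopped webserver
103 running template-myapp-bob'

# Same pattern as the CTID_TEMPLATE lookup: match column 3, print column 1
TEMPLATE_NAME="webserver"
CTID=$(echo "$SAMPLE_LIST" | awk -v name="$TEMPLATE_NAME" '$3 == name {print $1}')
echo "$CTID"
```

Passing the name via `awk -v` rather than interpolating it into the program avoids quoting problems when the name contains shell metacharacters.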
+ pct set $CONTAINER_ID \ + --tags "$PROXMOX_USERNAME" \ + --tags "$LINUX_DISTRO" \ + --tags "LDAP" \ + --onboot 1 > /dev/null 2>&1 + + pct start $CONTAINER_ID > /dev/null 2>&1 + pveum aclmod /vms/$CONTAINER_ID --user "$PROXMOX_USERNAME@pve" --role PVEVMUser > /dev/null 2>&1 + + # Get the Container IP Address and install some packages + + echo "⏳ Waiting for DHCP to allocate IP address to container..." + sleep 5 +else + CONTAINER_ID=$( { pct list; ssh root@10.15.0.5 'pct list'; } | awk -v name="$CONTAINER_NAME" '$3 == name {print $1}') +fi + +if [ -f "/var/lib/vz/snippets/container-public-keys/$PUB_FILE" ]; then + echo "⏳ Appending Public Key..." + pct exec $CONTAINER_ID -- touch ~/.ssh/authorized_keys > /dev/null 2>&1 + pct exec $CONTAINER_ID -- bash -c "cat > ~/.ssh/authorized_keys"< /var/lib/vz/snippets/container-public-keys/$PUB_FILE > /dev/null 2>&1 + rm -rf /var/lib/vz/snippets/container-public-keys/$PUB_FILE > /dev/null 2>&1 +fi + +CONTAINER_IP="" +attempts=0 +max_attempts=10 + +while [[ -z "$CONTAINER_IP" && $attempts -lt $max_attempts ]]; do + CONTAINER_IP=$(pct exec "$CONTAINER_ID" -- hostname -I | awk '{print $1}') + [[ -z "$CONTAINER_IP" ]] && sleep 2 && ((attempts++)) +done + +if [[ -z "$CONTAINER_IP" ]]; then + echo "❌ Timed out waiting for container to get an IP address." + exit 1 +fi + +# Set up SSSD to communicate with LDAP server ==== +echo "⏳ Configuring LDAP connection via SSSD..." 
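The IP wait used during container creation is a bounded poll: retry until the lookup returns something non-empty or the attempt budget runs out. The same loop shape can be tested without Proxmox by swapping `pct exec ... hostname -I` for a stub; `fake_lookup` below is invented for the sketch and only answers after a couple of polls, like a container waiting on DHCP:

```shell
attempts=0
max_attempts=10
RESULT=""

# Stub for `pct exec $CONTAINER_ID -- hostname -I`: empty until the
# third poll, then a made-up address.
fake_lookup() { [ "$attempts" -ge 2 ] && echo "10.0.0.7"; }

# Bounded retry loop, same structure as the CONTAINER_IP wait
while [ -z "$RESULT" ] && [ "$attempts" -lt "$max_attempts" ]; do
    RESULT=$(fake_lookup)
    [ -z "$RESULT" ] && attempts=$((attempts+1))
done
```

The bound matters: without `max_attempts` a container that never gets a lease would hang the provisioning script forever instead of failing with a clear error.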
+source /var/lib/vz/snippets/helper-scripts/configureLDAP.sh
+
+# Attempt to Automatically Deploy Project Inside Container
+
+if [ "${DEPLOY_ON_START^^}" == "Y" ]; then
+    source /var/lib/vz/snippets/helper-scripts/deployOnStart.sh
+
+    # cleanup
+    for file in \
+        "/var/lib/vz/snippets/container-env-vars/$ENV_BASE_FOLDER" \
+        "/var/lib/vz/snippets/container-services/$SERVICES_BASE_FILE"
+    do
+        [ -e "$file" ] && rm -rf "$file" > /dev/null 2>&1 # -e, not -f: $ENV_BASE_FOLDER is a directory
+    done
+fi
+
+# Create Log File ====
+
+pct exec $CONTAINER_ID -- bash -c "cd /root && touch container-updates.log"
+
+# Run Container Provision Script to add container to port_map.json
+echo "⏳ Running Container Provision Script..."
+if [ -f "/var/lib/vz/snippets/container-port-maps/$PROTOCOL_FILE" ]; then
+    /var/lib/vz/snippets/register-container.sh $CONTAINER_ID $HTTP_PORT /var/lib/vz/snippets/container-port-maps/$PROTOCOL_FILE "$USERNAME_ONLY"
+    rm -rf /var/lib/vz/snippets/container-port-maps/$PROTOCOL_FILE > /dev/null 2>&1
+else
+    /var/lib/vz/snippets/register-container.sh $CONTAINER_ID $HTTP_PORT "" "$PROXMOX_USERNAME"
+fi
+
+
+SSH_PORT=$(iptables -t nat -S PREROUTING | grep "to-destination $CONTAINER_IP:22" | awk -F'--dport ' '{print $2}' | awk '{print $1}' | head -n 1 || true)
+
+# Output container details and start services if necessary =====
+
+echoContainerDetails
+
+BUILD_COMMAND_B64=$(echo -n "$BUILD_COMMAND" | base64)
+RUNTIME_LANGUAGE_B64=$(echo -n "$RUNTIME_LANGUAGE" | base64)
+START_COMMAND_B64=$(echo -n "$START_COMMAND" | base64)
+
+CMD=(
+bash /var/lib/vz/snippets/start_services.sh
+"$CONTAINER_ID"
+"$CONTAINER_NAME"
+"$REPO_BASE_NAME"
+"$REPO_BASE_NAME_WITH_OWNER"
+"$SSH_PORT"
+"$CONTAINER_IP"
+"$PROJECT_ROOT"
+"$ROOT_START_COMMAND"
+"$DEPLOY_ON_START"
+"$MULTI_COMPONENTS"
+"$START_COMMAND_B64"
+"$BUILD_COMMAND_B64"
+"$RUNTIME_LANGUAGE_B64"
+"$GH_ACTION"
+"$PROJECT_BRANCH"
+)
+
+# Safely quote each argument for the shell
+QUOTED_CMD=$(printf ' %q' "${CMD[@]}")
+
+tmux new-session -d -s "$CONTAINER_NAME" 
"$QUOTED_CMD" +exit 0 diff --git a/container creation/deployOnStart.sh b/container creation/deployOnStart.sh new file mode 100755 index 00000000..82d01a62 --- /dev/null +++ b/container creation/deployOnStart.sh @@ -0,0 +1,86 @@ +#!/bin/bash +# Automation Script for attempting to automatically deploy projects and services on a container +# Last Modifided by Maxwell Klema on July 16th, 2025 +# ----------------------------------------------------- + +echo "🚀 Attempting Automatic Deployment" +REPO_BASE_NAME=$(basename -s .git "$PROJECT_REPOSITORY") + +# Clone github repository from correct branch ==== + +echo "Repo base name: $REPO_BASE_NAME" + +pct enter $CONTAINER_ID < /dev/null +else +cd /root/$REPO_BASE_NAME && git fetch && git pull && \ +git checkout $PROJECT_BRANCH +fi +EOF + +pct exec $CONTAINER_ID -- bash -c "chmod 700 ~/.bashrc" # enable full R/W/X permissions + +# Copy over ENV variables ==== + +ENV_BASE_FOLDER="/var/lib/vz/snippets/container-env-vars/${ENV_BASE_FOLDER}" + +if [ ! -d "$ENV_BASE_FOLDER"]; then + if [ "${MULTI_COMPONENTS^^}" == "Y" ]; then + for FILE in $ENV_BASE_FOLDER/*; do + FILE_BASENAME=$(basename "$FILE") + FILE_NAME="${FILE_BASENAME%.*}" + ENV_ROUTE=$(echo "$FILE_NAME" | tr '_' '/') # acts as the route to the correct folder to place .env file in. 
+
+            ENV_VARS=$(cat $ENV_BASE_FOLDER/$FILE_BASENAME)
+            printf '%s\n' "$ENV_VARS" | pct exec $CONTAINER_ID -- bash -c "cat > /root/$REPO_BASE_NAME/$PROJECT_ROOT/$ENV_ROUTE/.env" > /dev/null 2>&1
+        done
+    else
+        ENV_FOLDER_BASE_NAME=$(basename "$ENV_BASE_FOLDER")
+        ENV_VARS=$(cat $ENV_BASE_FOLDER/$ENV_FOLDER_BASE_NAME.txt || true)
+        printf '%s\n' "$ENV_VARS" | pct exec $CONTAINER_ID -- bash -c "cat > /root/$REPO_BASE_NAME/$PROJECT_ROOT/.env" > /dev/null 2>&1
+    fi
+fi
+
+# Run Installation Commands ====
+
+runInstallCommands() {
+
+    RUNTIME="$1"
+    COMP_DIR="$2"
+
+    if [ "${RUNTIME^^}" == "NODEJS" ]; then
+        pct exec $CONTAINER_ID -- bash -c "cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/$COMP_DIR && sudo $INSTALL_CMD" > /dev/null 2>&1
+    elif [ "${RUNTIME^^}" == "PYTHON" ]; then
+        pct enter $CONTAINER_ID <<EOF > /dev/null
+cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/$COMP_DIR && \
+python3 -m venv venv && source venv/bin/activate && \
+pip install --upgrade pip && \
+$INSTALL_CMD
+EOF
+    fi
+}
+
+if [ "${MULTI_COMPONENTS^^}" == "Y" ]; then
+    for COMPONENT in $(echo "$RUNTIME_LANGUAGE" | jq -r 'keys[]'); do
+        RUNTIME=$(echo "$RUNTIME_LANGUAGE" | jq -r --arg k "$COMPONENT" '.[$k]') # get runtime env
+        INSTALL_CMD=$(echo "$INSTALL_COMMAND" | jq -r --arg k "$COMPONENT" '.[$k]') # get install command
+        if [ "$INSTALL_CMD" != "null" ]; then
+            runInstallCommands "$RUNTIME" "$COMPONENT"
+        fi
+    done
+else
+    INSTALL_CMD=$INSTALL_COMMAND
+    runInstallCommands "$RUNTIME_LANGUAGE" "." 
+fi + +# Install Services ==== + +if [ -f "/var/lib/vz/snippets/container-services/$SERVICES_BASE_FILE" ]; then + while read line; do + pct exec $CONTAINER_ID -- bash -c "$line" > /dev/null 2>&1 + done < "/var/lib/vz/snippets/container-services/$SERVICES_BASE_FILE" +fi \ No newline at end of file diff --git a/container creation/deployment-scripts/gatherEnvVars.sh b/container creation/deployment-scripts/gatherEnvVars.sh new file mode 100644 index 00000000..9453c065 --- /dev/null +++ b/container creation/deployment-scripts/gatherEnvVars.sh @@ -0,0 +1,111 @@ +#!/bin/bash + +# Helper function to gather environment variables +gatherEnvVars(){ + TEMP_ENV_FILE_PATH="$1" + + read -p "🔑 Enter Environment Variable Key → " ENV_VAR_KEY + read -p "🔑 Enter Environment Variable Value → " ENV_VAR_VALUE + + while [ "$ENV_VAR_KEY" == "" ] || [ "$ENV_VAR_VALUE" == "" ]; do + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "Key and value cannot be empty. Please try again." + writeLog "Empty environment variable key or value entered (GH_ACTION mode)" + exit 15 + fi + echo "⚠️ Key or value cannot be empty. Try again." + writeLog "Empty environment variable key or value entered" + read -p "🔑 Enter Environment Variable Key → " ENV_VAR_KEY + read -p "🔑 Enter Environment Variable Value → " ENV_VAR_VALUE + done + + echo "$ENV_VAR_KEY=$ENV_VAR_VALUE" >> $TEMP_ENV_FILE_PATH + + read -p "🔑 Do you want to enter another Environment Variable? (y/n) → " ENTER_ANOTHER_ENV + + while [ "${ENTER_ANOTHER_ENV^^}" == "Y" ]; do + gatherEnvVars "$TEMP_ENV_FILE_PATH" + done +} + +if [ -z "$REQUIRE_ENV_VARS" ]; then + read -p "🔑 Does your application require environment variables? (y/n) → " REQUIRE_ENV_VARS +fi + +while [ "${REQUIRE_ENV_VARS^^}" != "Y" ] && [ "${REQUIRE_ENV_VARS^^}" != "N" ] && [ "${REQUIRE_ENV_VARS^^}" != "" ]; do + echo "⚠️ Invalid option. Please try again." 
+ writeLog "Invalid environment variables requirement option entered: $REQUIRE_ENV_VARS" + read -p "🔑 Does your application require environment variables? (y/n) → " REQUIRE_ENV_VARS +done + +if [ "${GH_ACTION^^}" == "Y" ]; then + if [ ! -z "$CONTAINER_ENV_VARS" ]; then + REQUIRE_ENV_VARS="Y" + fi +fi + +if [ "${REQUIRE_ENV_VARS^^}" == "Y" ]; then + # generate random temp .env folder to store all env files for different components + RANDOM_NUM=$(shuf -i 100000-999999 -n 1) + ENV_FOLDER="env_$RANDOM_NUM" + ENV_FOLDER_PATH="/root/bin/env/$ENV_FOLDER" + mkdir -p "$ENV_FOLDER_PATH" + + if [ "${MULTI_COMPONENT^^}" == "Y" ]; then + if [ ! -z "$CONTAINER_ENV_VARS" ]; then # Environment Variables + if echo "$CONTAINER_ENV_VARS" | jq -e > /dev/null 2>&1; then #if exit status of jq is 0 (valid JSON) // success + for key in $(echo "$CONTAINER_ENV_VARS" | jq -r 'keys[]'); do + gatherComponentDir "Enter the path of your component to enter environment variables" "$key" + ENV_FILE_NAME=$(echo "$COMPONENT_PATH" | tr '/' '_') + ENV_FILE_NAME="$ENV_FILE_NAME.txt" + ENV_FILE_PATH="/root/bin/env/$ENV_FOLDER/$ENV_FILE_NAME" + touch "$ENV_FILE_PATH" + echo "$CONTAINER_ENV_VARS" | jq -r --arg key "$key" '.[$key] | to_entries[] | "\(.key)=\(.value)"' > "$ENV_FILE_PATH" + addComponent "$key" + done + else + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "Your \"CONTAINER_ENV_VARS\" is not valid JSON. Please re-format and try again." + writeLog "Invalid JSON in CONTAINER_ENV_VARS (GH_ACTION mode)" + exit 16 + fi + echo "⚠️ Your \"CONTAINER_ENV_VARS\" is not valid JSON. Please re-format and try again." 
+ writeLog "Invalid JSON in CONTAINER_ENV_VARS" + exit 16 + fi + else # No Environment Variables + gatherComponentDir "Enter the path of your component to enter environment variables" + + while [ "$COMPONENT_PATH" != "" ]; do + addComponent "$COMPONENT_PATH" + ENV_FILE_NAME=$(echo "$COMPONENT_PATH" | tr '/' '_') + ENV_FILE="$ENV_FILE_NAME.txt" + ENV_FILE_PATH="/root/bin/env/$ENV_FOLDER/$ENV_FILE" + touch "$ENV_FILE_PATH" + gatherEnvVars "$ENV_FILE_PATH" + gatherComponentDir "Enter the path of your component to enter environment variables" + done + fi + else # Single Component + ENV_FILE="env_$RANDOM_NUM.txt" + ENV_FILE_PATH="/root/bin/env/$ENV_FOLDER/$ENV_FILE" + touch "$ENV_FILE_PATH" + + if [ ! -z "$CONTAINER_ENV_VARS" ]; then # Environment Variables + if echo "$CONTAINER_ENV_VARS" | jq -e > /dev/null 2>&1; then #if exit status of jq is 0 (valid JSON) // success + echo "$CONTAINER_ENV_VARS " | jq -r 'to_entries[] | "\(.key)=\(.value)"' > "$ENV_FILE_PATH" #k=v pairs + else + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "Your \"CONTAINER_ENV_VARS\" is not valid JSON. Please re-format and try again." + writeLog "Invalid JSON in CONTAINER_ENV_VARS for single component (GH_ACTION mode)" + exit 16 + fi + echo "⚠️ Your \"CONTAINER_ENV_VARS\" is not valid JSON. Please re-format and try again." 
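The temporary env folders and files above get collision-resistant names from a six-digit `shuf` draw. The naming scheme in isolation:

```shell
# Random suffix in the same range the script uses
RANDOM_NUM=$(shuf -i 100000-999999 -n 1)

# Per-run folder and single-component file names derived from it
ENV_FOLDER="env_$RANDOM_NUM"
ENV_FILE="env_$RANDOM_NUM.txt"
```

Six random digits are enough to avoid clashes between concurrent provisioning runs in practice, though `mktemp -d` would give a hard uniqueness guarantee if that ever mattered.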
+ writeLog "Invalid JSON in CONTAINER_ENV_VARS for single component" + exit 16 + fi + else # No Environment Variables + gatherEnvVars "$ENV_FILE_PATH" + fi + fi +fi \ No newline at end of file diff --git a/container creation/deployment-scripts/gatherRuntimeLangs.sh b/container creation/deployment-scripts/gatherRuntimeLangs.sh new file mode 100755 index 00000000..508eff5c --- /dev/null +++ b/container creation/deployment-scripts/gatherRuntimeLangs.sh @@ -0,0 +1,76 @@ +#!/bin/bash + +gatherRunTime() { + COMPONENT_PATH="$1" + + if [ -z "${RUNTIME_LANGUAGE}" ] || [ "$RT_ENV_VAR" != "true" ]; then + read -p "🖥️ Enter the underlying runtime environment for \"$COMPONENT_PATH\" (e.g., 'nodejs', 'python') → " RUNTIME_LANGUAGE + fi + + while [ "${RUNTIME_LANGUAGE^^}" != "NODEJS" ] && [ "${RUNTIME_LANGUAGE^^}" != "PYTHON" ]; do + echo "⚠️ Sorry, that runtime environment is not yet supported. Only \"nodejs\" and \"python\" are currently supported." + writeLog "Unsupported runtime environment entered: $RUNTIME_LANGUAGE for component: $COMPONENT_PATH" + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "⚠️ Sorry, that runtime environment is not yet supported. Only \"nodejs\" and \"python\" are currently supported." + writeLog "Unsupported runtime environment entered: $RUNTIME_LANGUAGE (GH_ACTION mode)" + exit 17 + fi + read -p "🖥️ Enter the underlying runtime environment for \"$COMPONENT_PATH\" (e.g., 'nodejs', 'python') → " RUNTIME_LANGUAGE + done +} + +# Helper function to remove an item from a list +removeFromList() { + ITEM_TO_REMOVE="$1" + NEW_LIST=() + for ITEM in "${UNIQUE_COMPONENTS_CLONE[@]}"; do + if [ "$ITEM" != "$ITEM_TO_REMOVE" ]; then + NEW_LIST+=("$ITEM") + fi + done + UNIQUE_COMPONENTS_CLONE=("${NEW_LIST[@]}") +} + +UNIQUE_COMPONENTS_CLONE=("${UNIQUE_COMPONENTS[@]}") +RUNTIME_LANGUAGE_DICT={} + + +if [ "${MULTI_COMPONENT^^}" == 'Y' ]; then + if [ ! 
-z "$RUNTIME_LANGUAGE" ]; then # Environment Variable Passed + if echo "$RUNTIME_LANGUAGE" | jq -e > /dev/null 2>&1; then # Valid JSON + for key in $(echo "$RUNTIME_LANGUAGE" | jq -r 'keys[]'); do + removeFromList "$key" + done + if [ ${#UNIQUE_COMPONENTS_CLONE[@]} -gt 0 ]; then #if there are still components in the list, then not all runtimes were provided, so exit on error + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "You did not provide runtime languages for these components: \"${UNIQUE_COMPONENTS_CLONE[@]}\"." + writeLog "Missing runtime languages for components: ${UNIQUE_COMPONENTS_CLONE[@]} (GH_ACTION mode)" + exit 18 + fi + echo "⚠️ You did not provide runtime languages for these components: \"${UNIQUE_COMPONENTS_CLONE[@]}\"." + writeLog "Missing runtime languages for components: ${UNIQUE_COMPONENTS_CLONE[@]}" + exit 18 + fi + else + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "Your \"$RUNTIME_LANGUAGE\" is not valid JSON. Please re-format and try again." + writeLog "Invalid JSON in RUNTIME_LANGUAGE (GH_ACTION mode)" + exit 16 + fi + echo "⚠️ Your \"$RUNTIME_LANGUAGE\" is not valid JSON. Please re-format and try again." + writeLog "Invalid JSON in RUNTIME_LANGUAGE" + exit 16 + fi + else # No Environment Variable Passed + for CURRENT in "${UNIQUE_COMPONENTS[@]}"; do + gatherRunTime "$CURRENT" + RUNTIME_LANGUAGE_DICT=$(echo "$RUNTIME_LANGUAGE_DICT" | jq --arg k "$CURRENT" --arg v "$RUNTIME_LANGUAGE" '. + {($k): $v}') + done + RUNTIME_LANGUAGE=$RUNTIME_LANGUAGE_DICT + fi +else + if [ ! 
-z "$RUNTIME_LANGUAGE" ]; then + RT_ENV_VAR="true" + fi + gatherRunTime "$PROJECT_REPOSITORY" +fi \ No newline at end of file diff --git a/container creation/deployment-scripts/gatherServices.sh b/container creation/deployment-scripts/gatherServices.sh new file mode 100755 index 00000000..1cc06c39 --- /dev/null +++ b/container creation/deployment-scripts/gatherServices.sh @@ -0,0 +1,158 @@ +SERVICE_MAP="/root/bin/services/service_map_$LINUX_DISTRIBUTION.json" +APPENDED_SERVICES=() + +# Helper function to check if a user has added the same service twice +serviceExists() { + SERVICE="$1" + for CURRENT in "${APPENDED_SERVICES[@]}"; do + if [ "${SERVICE,,}" == "${CURRENT,,}" ]; then + return 0 + fi + done + return 1 +} + +processService() { + local SERVICE="$1" + local MODE="$2" # "batch" or "single" + + SERVICE_IN_MAP=$(jq -r --arg key "${SERVICE,,}" '.[$key] // empty' "$SERVICE_MAP") + if serviceExists "$SERVICE"; then + if [ "$MODE" = "batch" ]; then + return 0 # skip to next in batch mode + else + echo "⚠️ You already added \"$SERVICE\" as a service. Please try again." + writeLog "Duplicate service attempted: $SERVICE" + return 0 + fi + elif [ "${SERVICE^^}" != "C" ] && [ "${SERVICE^^}" != "" ] && [ -n "$SERVICE_IN_MAP" ]; then + jq -r --arg key "${SERVICE,,}" '.[$key][]' "$SERVICE_MAP" >> "$TEMP_SERVICES_FILE_PATH" + echo "sudo systemctl daemon-reload" >> "$TEMP_SERVICES_FILE_PATH" + echo "✅ ${SERVICE^^} added to your container." + APPENDED_SERVICES+=("${SERVICE^^}") + elif [ "${SERVICE^^}" == "C" ]; then + appendCustomService + elif [ "${SERVICE^^}" != "" ]; then + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "⚠️ Service \"$SERVICE\" does not exist." + writeLog "Invalid service entered: $SERVICE (GH_ACTION mode)" + exit 20 + fi + echo "⚠️ Service \"$SERVICE\" does not exist." + writeLog "Invalid service entered: $SERVICE" + [ "$MODE" = "batch" ] && exit 20 + fi +} + +# Helper function to append a new service to a container +appendService() { + if [ ! 
-z "$SERVICES" ]; then + for SERVICE in $(echo "$SERVICES" | jq -r '.[]'); do + processService "$SERVICE" "batch" + done + else + read -p "➡️ Enter the name of a service to add to your container or type \"C\" to set up a custom service installation (Enter to exit) → " SERVICE + processService "$SERVICE" "single" + fi +} + +appendCustomService() { + # If there is an env variable for custom services, iterate through each command and append it to temporary services file + if [ ! -z "$CUSTOM_SERVICES" ]; then + echo "$CUSTOM_SERVICES" | jq -c -r '.[]' | while read -r CUSTOM_SERVICE; do + echo "$CUSTOM_SERVICE" | jq -c -r '.[]' | while read -r CUSTOM_SERVICE_COMMAND; do + if [ ! -z "$CUSTOM_SERVICE_COMMAND" ]; then + echo "$CUSTOM_SERVICE_COMMAND" >> "$TEMP_SERVICES_FILE_PATH" + else + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "⚠️ Custom Service Installation Command cannot be empty in \"$CUSTOM_SERVICE\"." + writeLog "Empty custom service command in: $CUSTOM_SERVICE (GH_ACTION mode)" + exit 21 + fi + echo "⚠️ Command cannot be empty." + writeLog "Empty custom service command in: $CUSTOM_SERVICE" + exit 21; + fi + done + done + echo "✅ Custom Services appended." + else + echo "🛎️ Configuring Custom Service Installation. For each prompt, enter a command that is a part of the installation process for your service on Debian Bookworm. Do not forget to enable and start the service at the end. Once you have entered all of your commands, press enter to continue" + COMMAND_NUM=1 + read -p "➡️ Enter Command $COMMAND_NUM: " CUSTOM_COMMAND + + echo "$CUSTOM_COMMAND" >> "$TEMP_SERVICES_FILE_PATH" + + while [ "${CUSTOM_COMMAND^^}" != "" ]; do + ((COMMAND_NUM++)) + read -p "➡️ Enter Command $COMMAND_NUM: " CUSTOM_COMMAND + echo "$CUSTOM_COMMAND" >> "$TEMP_SERVICES_FILE_PATH" + done + fi +} + +# Helper function to see if a user wants to set up a custom service +setUpService() { + read -p "🛎️ Do you wish to set up a custom service installation? 
(y/n) " SETUP_CUSTOM_SERVICE_INSTALLATION + while [ "${SETUP_CUSTOM_SERVICE_INSTALLATION^^}" != "Y" ] && [ "${SETUP_CUSTOM_SERVICE_INSTALLATION^^}" != "N" ] && [ "${SETUP_CUSTOM_SERVICE_INSTALLATION^^}" != "" ]; do + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "⚠️ Invalid custom service installation option. Please try again." + writeLog "Invalid custom service installation option entered: $SETUP_CUSTOM_SERVICE_INSTALLATION (GH_ACTION mode)" + exit 22 + fi + echo "⚠️ Invalid option. Please try again." + writeLog "Invalid custom service installation option entered: $SETUP_CUSTOM_SERVICE_INSTALLATION" + read -p "🛎️ Do you wish to set up a custom service installation? (y/n) " SETUP_CUSTOM_SERVICE_INSTALLATION + done +} + +if [ -z "$REQUIRE_SERVICES" ]; then + read -p "🛎️ Does your application require special services (i.e. Docker, MongoDB, etc.) to run on the container? (y/n) → " REQUIRE_SERVICES +fi + +while [ "${REQUIRE_SERVICES^^}" != "Y" ] && [ "${REQUIRE_SERVICES^^}" != "N" ] && [ "${REQUIRE_SERVICES^^}" != "" ]; do + echo "⚠️ Invalid option. Please try again." + writeLog "Invalid service requirement option entered: $REQUIRE_SERVICES" + read -p "🛎️ Does your application require special services (i.e. Docker, MongoDB, etc.) to run on the container? (y/n) → " REQUIRE_SERVICES +done + +if [ "${GH_ACTION^^}" == "Y" ]; then + if [ ! -z "$SERVICES" ] || [ ! -z "$CUSTOM_SERVICES" ]; then + REQUIRE_SERVICES="Y" + fi +fi + +if [ "${REQUIRE_SERVICES^^}" == "Y" ]; then + + # Generate random (temporary) file to store install commands for needed services + RANDOM_NUM=$(shuf -i 100000-999999 -n 1) + SERVICES_FILE="services_$RANDOM_NUM.txt" + TEMP_SERVICES_FILE_PATH="/root/bin/services/$SERVICES_FILE" + touch "$TEMP_SERVICES_FILE_PATH" + + appendService + while [ "${SERVICE^^}" != "" ] || [ ! -z "$SERVICES" ]; do + if [ -z "$SERVICES" ]; then + appendService + else + if [ ! 
-z "$CUSTOM_SERVICES" ]; then # assumes both services and custom services passed as ENV vars + appendCustomService + else # custom services not passed as ENV var, so must prompt the user for their custom services + setUpService + while [ "${SETUP_CUSTOM_SERVICE_INSTALLATION^^}" == "Y" ]; do + appendCustomService + setUpService + done + fi + break + fi + done +fi + +# Used for updating container services in GH Actions + +UPDATING_CONTAINER="$1" +if [ "$UPDATING_CONTAINER" == "true" ]; then + cat "$TEMP_SERVICES_FILE_PATH" + rm -rf "$TEMP_SERVICES_FILE_PATH" +fi \ No newline at end of file diff --git a/container creation/deployment-scripts/gatherSetupCommands.sh b/container creation/deployment-scripts/gatherSetupCommands.sh new file mode 100644 index 00000000..6ff40ecb --- /dev/null +++ b/container creation/deployment-scripts/gatherSetupCommands.sh @@ -0,0 +1,57 @@ +#!/bin/bash +# This function gathers start up commands, such as build, install, and start, for both single and multiple component applications +# Last Modified by Maxwell Klema on July 15th, 2025 +# --------------------------------------------- + +gatherSetupCommands() { + + TYPE="$1" + PROMPT="$2" + TYPE_COMMAND="${TYPE}_COMMAND" + TYPE_COMMAND="${!TYPE_COMMAND}" # get value stored by TYPE_COMMAND + declare "COMMANDS_DICT={}" + + if [ "${MULTI_COMPONENT^^}" == "Y" ]; then + if [ ! -z "$TYPE_COMMAND" ]; then # Environment Variable Passed + if echo "$TYPE_COMMAND" | jq -e > /dev/null 2>&1; then # Valid JSON + for key in $(echo "$TYPE_COMMAND" | jq -r 'keys[]'); do + gatherComponentDir "Enter the path of your component to enter the ${TYPE,,} command" "$key" + addComponent "$key" + done + else + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "Your \"$TYPE_COMMAND\" is not valid JSON. Please re-format and try again." + writeLog "Invalid JSON in $TYPE_COMMAND (GH_ACTION mode)" + exit 10 + fi + echo "⚠️ Your \"$TYPE_COMMAND\" is not valid JSON. Please re-format and try again." 
+ writeLog "Invalid JSON in $TYPE_COMMAND" + exit 10 + fi + else # No Environment Variable Passed + gatherComponentDir "Enter the path of your component to enter the ${TYPE,,} command" + while [ "$COMPONENT_PATH" != "" ]; do + addComponent "$COMPONENT_PATH" + read -p "$PROMPT" COMMAND + + # Append Component:Command k:v pair to map + COMMANDS_DICT=$(echo "$COMMANDS_DICT" | jq --arg k "$COMPONENT_PATH" --arg v "$COMMAND" '. + {($k): $v}') + gatherComponentDir "Enter the path of your component to enter the ${TYPE,,} command" + done + TYPE_COMMAND=$COMMANDS_DICT + fi + else + if [ -z "$TYPE_COMMAND" ]; then + read -p "$PROMPT" TYPE_COMMAND + fi + fi + + # Write to correct command variable + if [ "$TYPE" == "BUILD" ]; then + BUILD_COMMAND=$TYPE_COMMAND + elif [ "$TYPE" == "INSTALL" ]; then + INSTALL_COMMAND=$TYPE_COMMAND + else + START_COMMAND=$TYPE_COMMAND + fi +} \ No newline at end of file diff --git a/container creation/get-deployment-details.sh b/container creation/get-deployment-details.sh new file mode 100755 index 00000000..992cd256 --- /dev/null +++ b/container creation/get-deployment-details.sh @@ -0,0 +1,222 @@ +#!/bin/bash +# Helper script to gather project details for automatic deployment +# Modified August 5th, 2025 by Maxwell Klema +# ------------------------------------------ + +# Define color variables (works on both light and dark backgrounds) +RESET="\033[0m" +BOLD="\033[1m" +MAGENTA='\033[35m' + +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" +echo -e "${BOLD}${MAGENTA}🌐 Let's Get Your Project Automatically Deployed ${RESET}" +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" + +writeLog "Starting deploy application script" + +# Get and validate project repository ======== + +if [ -z "$PROJECT_REPOSITORY" ]; then + read -p "🚀 Paste the link to your project repository → " PROJECT_REPOSITORY + writeLog "Prompted for project repository" +fi + +CheckRepository() { + 
PROJECT_REPOSITORY_SHORTENED=${PROJECT_REPOSITORY#*github.com/} + PROJECT_REPOSITORY_SHORTENED=${PROJECT_REPOSITORY_SHORTENED%.git} + REPOSITORY_EXISTS=$(curl -s -o /dev/null -w "%{http_code}" https://github.com/$PROJECT_REPOSITORY_SHORTENED) + writeLog "Checking repository existence for $PROJECT_REPOSITORY_SHORTENED" +} + +CheckRepository + +while [ "$REPOSITORY_EXISTS" != "200" ]; do + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "Invalid Repository Link. Make sure your repository is private." + writeLog "Invalid repository link entered: $PROJECT_REPOSITORY (GH_ACTION mode)" + exit 10 + fi + echo "⚠️ The repository link you provided, \"$PROJECT_REPOSITORY\" was not valid." + writeLog "Invalid repository link entered: $PROJECT_REPOSITORY" + read -p "🚀 Paste the link to your project repository → " PROJECT_REPOSITORY + CheckRepository +done + +writeLog "Repository validated: $PROJECT_REPOSITORY" + +# Get Repository Branch ======== + +if [ -z "$PROJECT_BRANCH" ]; then + read -p "🪾 Enter the project branch to deploy from (leave blank for \"main\") → " PROJECT_BRANCH + writeLog "Prompted for project branch" +fi + +if [ -z "$PROJECT_BRANCH" ]; then + PROJECT_BRANCH="main" + writeLog "Using default branch: main" +fi + +REPOSITORY_BRANCH_EXISTS=$(curl -s -o /dev/null -w "%{http_code}" https://github.com/$PROJECT_REPOSITORY_SHORTENED/tree/$PROJECT_BRANCH) +writeLog "Checking branch existence for $PROJECT_BRANCH" + +while [ "$REPOSITORY_BRANCH_EXISTS" != "200" ]; do + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "Invalid Branch. Make sure your branch exists on the repository." + writeLog "Invalid branch entered: $PROJECT_BRANCH (GH_ACTION mode)" + exit 11 + fi + echo "⚠️ The branch you provided, \"$PROJECT_BRANCH\", does not exist on repository at \"$PROJECT_REPOSITORY\"." 
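CheckRepository trims the repository URL with two parameter expansions before probing GitHub: strip everything through `github.com/`, then strip a trailing `.git`. The trimming on its own, with an illustrative URL:

```shell
# Made-up repository URL for the sketch
PROJECT_REPOSITORY="https://github.com/example-org/example-repo.git"

# Same two expansions as CheckRepository: drop the host prefix, drop .git
PROJECT_REPOSITORY_SHORTENED=${PROJECT_REPOSITORY#*github.com/}
PROJECT_REPOSITORY_SHORTENED=${PROJECT_REPOSITORY_SHORTENED%.git}
echo "$PROJECT_REPOSITORY_SHORTENED"
```

`#*github.com/` removes the shortest prefix matching the pattern and `%.git` removes the suffix, so the result is the bare `owner/repo` slug the later `curl` checks expect; a URL without a `.git` suffix passes through the second expansion unchanged.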
+    writeLog "Invalid branch entered: $PROJECT_BRANCH"
+    read -p "🪾 Enter the project branch to deploy from (leave blank for \"main\") → " PROJECT_BRANCH
+    if [ -z "$PROJECT_BRANCH" ]; then
+        PROJECT_BRANCH="main"
+    fi
+    REPOSITORY_BRANCH_EXISTS=$(curl -s -o /dev/null -w "%{http_code}" https://github.com/$PROJECT_REPOSITORY_SHORTENED/tree/$PROJECT_BRANCH)
+done
+
+writeLog "Branch validated: $PROJECT_BRANCH"
+
+# Get Project Root Directory ========
+
+if [ -z "$PROJECT_ROOT" ]; then
+    read -p "📁 Enter the project root directory (relative to repository root directory, or leave blank for root directory) → " PROJECT_ROOT
+    writeLog "Prompted for project root directory"
+fi
+
+VALID_PROJECT_ROOT=$(node /root/bin/js/runner.js authenticateRepo "$PROJECT_REPOSITORY" "$PROJECT_BRANCH" "$PROJECT_ROOT")
+writeLog "Validating project root directory: $PROJECT_ROOT"
+
+while [ "$VALID_PROJECT_ROOT" == "false" ]; do
+    if [ "${GH_ACTION^^}" == "Y" ]; then
+        outputError "Invalid Project Root Directory. Make sure your directory exists on the repository."
+        writeLog "Invalid project root directory entered: $PROJECT_ROOT (GH_ACTION mode)"
+        exit 12
+    fi
+    echo "⚠️ The root directory you provided, \"$PROJECT_ROOT\", does not exist on branch, \"$PROJECT_BRANCH\", on repository at \"$PROJECT_REPOSITORY\"."
+    writeLog "Invalid project root directory entered: $PROJECT_ROOT"
+    read -p "📁 Enter the project root directory (relative to repository root directory, or leave blank for root directory) → " PROJECT_ROOT
+    VALID_PROJECT_ROOT=$(node /root/bin/js/runner.js authenticateRepo "$PROJECT_REPOSITORY" "$PROJECT_BRANCH" "$PROJECT_ROOT")
+done
+
+writeLog "Project root directory validated: $PROJECT_ROOT"
+
+# Remove forward slash
+if [[ "$PROJECT_ROOT" == /* ]]; then
+    PROJECT_ROOT="${PROJECT_ROOT:1}"
+fi
+
+# Check if the App has multiple components (backend, frontend, multiple servers, etc.) 
========
+
+if [ -z "$MULTI_COMPONENT" ]; then
+    read -p "🔗 Does your app consist of multiple components that run independently, i.e. separate frontend and backend (y/n) → " MULTI_COMPONENT
+    writeLog "Prompted for multi-component option"
+fi
+
+while [ "${MULTI_COMPONENT^^}" != "Y" ] && [ "${MULTI_COMPONENT^^}" != "N" ] && [ "${MULTI_COMPONENT^^}" != "" ]; do
+    if [ "${GH_ACTION^^}" == "Y" ]; then
+        outputError "Invalid option for MULTI_COMPONENT. It must be 'y' or 'n'. Please try again."
+        writeLog "Invalid multi-component option entered: $MULTI_COMPONENT (GH_ACTION mode)"
+        exit 13
+    fi
+    echo "⚠️ Invalid option. Please try again."
+    writeLog "Invalid multi-component option entered: $MULTI_COMPONENT"
+    read -p "🔗 Does your app consist of multiple components that run independently, i.e. separate frontend and backend (y/n) → " MULTI_COMPONENT
+done
+
+if [ "${GH_ACTION^^}" == "Y" ]; then
+    if [ ! -z "$RUNTIME_LANGUAGE" ] && echo "$RUNTIME_LANGUAGE" | jq . >/dev/null 2>&1; then # If RUNTIME_LANGUAGE is set and is valid JSON
+        MULTI_COMPONENT="Y"
+    fi
+fi
+
+writeLog "Multi-component option set to: $MULTI_COMPONENT"
+
+# Gather Deployment Commands ========
+
+# Helper functions to gather and validate component directory
+gatherComponentDir() {
+
+    COMPONENT_PATH="$2"
+    if [ -z "$COMPONENT_PATH" ]; then
+        read -p "$1, relative to project root directory (To Continue, Press Enter) → " COMPONENT_PATH
+        writeLog "Prompted for component directory: $1"
+    fi
+    # Check that component path is valid
+    VALID_COMPONENT_PATH=$(node /root/bin/js/runner.js authenticateRepo "$PROJECT_REPOSITORY" "$PROJECT_BRANCH" "$COMPONENT_PATH")
+    writeLog "Validating component path: $COMPONENT_PATH"
+
+    while [ "$VALID_COMPONENT_PATH" == "false" ] && [ "$COMPONENT_PATH" != "" ]; do
+        if [ "${GH_ACTION^^}" == "Y" ]; then
+            outputError "Invalid Component Path: \"$COMPONENT_PATH\". Make sure your path exists on the repository." 
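gatherComponentDir strips a leading slash with `${COMPONENT_PATH:1}` guarded by a `[[ ... == /* ]]` glob, which is bash-specific. A POSIX-portable sketch of the same normalization:

```shell
# Illustrative user-entered component path with a leading slash
COMPONENT_PATH="/frontend/admin"

# case + ${var#/} does what [[ $var == /* ]] + ${var:1} does in bash
case "$COMPONENT_PATH" in
    /*) COMPONENT_PATH=${COMPONENT_PATH#/} ;; # remove leading slash
esac
echo "$COMPONENT_PATH"
```

`${COMPONENT_PATH#/}` removes only a single leading `/`, matching the bash `${COMPONENT_PATH:1}` behavior, and is a no-op when the path is already relative.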
+ writeLog "Invalid component path entered: $COMPONENT_PATH (GH_ACTION mode)" + exit 14 + fi + echo "⚠️ The component path you entered, \"$COMPONENT_PATH\", does not exist on branch, \"$PROJECT_BRANCH\", on repository at \"$PROJECT_REPOSITORY\"." + writeLog "Invalid component path entered: $COMPONENT_PATH" + if [ -z "$2" ]; then + read -p "$1, relative to project root directory (To Continue, Press Enter) → " COMPONENT_PATH + VALID_COMPONENT_PATH=$(node /root/bin/js/runner.js authenticateRepo "$PROJECT_REPOSITORY" "$PROJECT_BRANCH" "$COMPONENT_PATH") + else + exit 14 + fi + done + + if [[ "$COMPONENT_PATH" == /* ]]; then + COMPONENT_PATH="${COMPONENT_PATH:1}" # remove leading slash + fi + + if [ "$COMPONENT_PATH" != "" ]; then + writeLog "Component path validated: $COMPONENT_PATH" + fi +} + +UNIQUE_COMPONENTS=() + +# Helper function to add a component to unique components if its not already present +addComponent() { + COMPONENT="$1" + for CURRENT in "${UNIQUE_COMPONENTS[@]}"; do + if [ "${COMPONENT,,}" == "${CURRENT,,}" ]; then + return 0 + fi + done + UNIQUE_COMPONENTS+=("$COMPONENT") + writeLog "Added component: $COMPONENT" +} + +writeLog "Sourcing setup commands script" +source /root/bin/deployment-scripts/gatherSetupCommands.sh # Function to gather build, install, and start commands + +writeLog "Sourcing environment variables script" +source /root/bin/deployment-scripts/gatherEnvVars.sh # Gather Environment Variables + +writeLog "Gathering build commands" +gatherSetupCommands "BUILD" "🏗️ Enter the build command (leave blank if no build command) → " # Gather Build Command(s) + +writeLog "Gathering install commands" +gatherSetupCommands "INSTALL" "📦 Enter the install command (e.g., 'npm install') → " # Gather Install Command(s) + +writeLog "Gathering start commands" +gatherSetupCommands "START" "🚦 Enter the start command (e.g., 'npm start', 'python app.py') → " # Gather Start Command(s) + +if [ "${MULTI_COMPONENT^^}" == "Y" ]; then + if [ -z "$ROOT_START_COMMAND" 
]; then + read -p "📍 If your container requires a start command at the root directory, i.e. Docker run, enter it here (leave blank for no command) → " ROOT_START_COMMAND + writeLog "Prompted for root start command" + fi + if [ "$ROOT_START_COMMAND" != "" ]; then + writeLog "Root start command set: $ROOT_START_COMMAND" + fi +fi + +# Get Runtime Language ======== + +writeLog "Sourcing runtime languages script" +source /root/bin/deployment-scripts/gatherRuntimeLangs.sh + +# Get Services ======== +writeLog "Sourcing services script" +source /root/bin/deployment-scripts/gatherServices.sh + +writeLog "Deployment process finished successfully" +echo -e "\n✅ Deployment Process Finished.\n" \ No newline at end of file diff --git a/container creation/get-lxc-container-details.sh b/container creation/get-lxc-container-details.sh new file mode 100644 index 00000000..148900f2 --- /dev/null +++ b/container creation/get-lxc-container-details.sh @@ -0,0 +1,312 @@ +#!/bin/bash +# Main Container Creation Script +# Modified July 28th, 2025 by Maxwell Klema +# ------------------------------------------ + +LOG_FILE="/var/log/create-container.log" + +writeLog() { + echo "[$(date +'%Y-%m-%d %H:%M:%S')]: $1" >> "$LOG_FILE" +} + +# Define color variables (works on both light and dark backgrounds) +RESET="\033[0m" +BOLD="\033[1m" +MAGENTA='\033[35m' + +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" +echo -e "${BOLD}${MAGENTA}📦 MIE Container Creation Script ${RESET}" +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" + +# Authenticate User (Only Valid Users can Create Containers) + +outputError() { + echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" + echo -e "${BOLD}${MAGENTA}❌ Script Failed. Exiting... 
${RESET}" + echo -e "$1" + echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" +} + +writeLog "Starting Container Creation Script" + +if [ -z "$PROXMOX_USERNAME" ]; then + read -p "Enter Proxmox Username → " PROXMOX_USERNAME +fi + +if [ -z "$PROXMOX_PASSWORD" ]; then + read -sp "Enter Proxmox Password → " PROXMOX_PASSWORD + echo "" +fi + +USER_AUTHENTICATED=$(node /root/bin/js/runner.js authenticateUser "$PROXMOX_USERNAME" "$PROXMOX_PASSWORD") + +while [ "$USER_AUTHENTICATED" == "false" ]; do + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "Invalid Proxmox Credentials." + writeLog "Invalid Proxmox credentials entered for user: $PROXMOX_USERNAME (GH_ACTION mode)" + exit 2 + fi + echo "❌ Authentication Failed. Try again." + writeLog "Invalid Proxmox credentials entered for user: $PROXMOX_USERNAME" + read -p "Enter Proxmox Username → " PROXMOX_USERNAME + read -sp "Enter Proxmox Password → " PROXMOX_PASSWORD + echo "" + + USER_AUTHENTICATED=$(node /root/bin/js/runner.js authenticateUser "$PROXMOX_USERNAME" "$PROXMOX_PASSWORD") +done + +echo "🎉 Your Proxmox account, $PROXMOX_USERNAME@pve, has been authenticated" + +# Gather Container Hostname (hostname.opensource.mieweb.org) ===== + +if [ -z "$CONTAINER_NAME" ]; then + read -p "Enter Application Name (One-Word) → " CONTAINER_NAME +fi + +CONTAINER_NAME="${CONTAINER_NAME,,}" #convert to lowercase +HOST_NAME_EXISTS=$(ssh root@10.15.20.69 "node /etc/nginx/checkHostnameRunner.js checkHostnameExists ${CONTAINER_NAME}") + +while [[ $HOST_NAME_EXISTS == 'true' ]] || ! [[ "$CONTAINER_NAME" =~ ^[A-Za-z0-9-]+$ ]]; do + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "Invalid Container Hostname." + writeLog "Invalid container hostname entered: $CONTAINER_NAME (GH_ACTION mode)" + exit 3 + fi + echo "Sorry! Either that name has already been registered or your hostname is ill-formatted.
Try another name" + writeLog "Invalid container hostname entered: $CONTAINER_NAME (already exists or ill-formatted)" + read -p "Enter Application Name (One-Word) → " CONTAINER_NAME + HOST_NAME_EXISTS=$(ssh root@10.15.20.69 "node /etc/nginx/checkHostnameRunner.js checkHostnameExists ${CONTAINER_NAME}") + CONTAINER_NAME="${CONTAINER_NAME,,}" +done + +echo "✅ $CONTAINER_NAME is available" + +# Choose Linux Distribution + +if [ -z "$LINUX_DISTRIBUTION" ]; then + echo "🐧 Available Linux Distributions:" + echo "1. Debian 12 (Bookworm)" + echo "2. Rocky 9 " + read -p "➡️ Choose a Linux Distribution (debian/rocky) → " LINUX_DISTRIBUTION +fi + +if [ "${LINUX_DISTRIBUTION,,}" != "debian" ] && [ "${LINUX_DISTRIBUTION,,}" != "rocky" ]; then + LINUX_DISTRIBUTION="debian" +fi + +LINUX_DISTRIBUTION=${LINUX_DISTRIBUTION,,} + +# Attempt to detect public keys + +echo -e "\n🔑 Attempting to Detect SSH Public Key..." + +AUTHORIZED_KEYS="/root/.ssh/authorized_keys" +RANDOM_NUM=$(shuf -i 100000-999999 -n 1) +PUB_FILE="key_$RANDOM_NUM.pub" +TEMP_PUB_FILE="/root/bin/ssh/temp_pubs/$PUB_FILE" # in case two users are running this script at the same time, they do not overwrite each other's temp files +touch "$TEMP_PUB_FILE" +DETECT_PUBLIC_KEY=$(sudo /root/bin/ssh/detectPublicKey.sh "$SSH_KEY_FP" "$TEMP_PUB_FILE") + +if [ "$DETECT_PUBLIC_KEY" == "Public key found for create-container" ]; then + echo "🔐 Public Key Found!" +else + echo "🔍 Could not detect Public Key" + + if [ -z "$PUBLIC_KEY" ]; then + read -p "Enter Public Key (Allows Easy Access to Container) [OPTIONAL - LEAVE BLANK TO SKIP] → " PUBLIC_KEY + fi + + # Check if key is valid + + while [[ "$PUBLIC_KEY" != "" && $(echo "$PUBLIC_KEY" | ssh-keygen -l -f - 2>&1 | tr -d '\r') == "(stdin) is not a public key file." ]]; do + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "Invalid Public Key" + writeLog "Invalid public key entered (GH_ACTION mode)" + exit 5 + fi + echo "❌ \"$PUBLIC_KEY\" is not a valid key. 
Enter either a valid key or leave blank to skip." + writeLog "Invalid public key entered: $PUBLIC_KEY" + read -p "Enter Public Key (Allows Easy Access to Container) [OPTIONAL - LEAVE BLANK TO SKIP] → " PUBLIC_KEY + done + + if [ "$PUBLIC_KEY" != "" ]; then + echo "$PUBLIC_KEY" > "$AUTHORIZED_KEYS" && systemctl restart ssh + echo "$PUBLIC_KEY" > "$TEMP_PUB_FILE" + sudo /root/bin/ssh/publicKeyAppendJumpHost.sh "$PUBLIC_KEY" + fi +fi + +# Get HTTP Port Container Listens On + +if [ -z "$HTTP_PORT" ]; then + read -p "Enter HTTP Port for your container to listen on (80-60000) → " HTTP_PORT + if [ "${GH_ACTION^^}" == "Y" ]; then + HTTP_PORT="3000" # Default to 3000 if not set + fi +fi + +while ! [[ "$HTTP_PORT" =~ ^[0-9]+$ ]] || [ "$HTTP_PORT" -lt 80 ] || [ "$HTTP_PORT" -gt 60000 ]; do + if [ "${GH_ACTION^^}" == "Y" ]; then + outputError "Invalid HTTP Port. Must be between 80 and 60,000." + writeLog "Invalid HTTP port entered: $HTTP_PORT (GH_ACTION mode)" + exit 6 + fi + echo "❌ Invalid HTTP Port. It must be a number between 80 and 60,000." + writeLog "Invalid HTTP port entered: $HTTP_PORT" + read -p "Enter HTTP Port for your container to listen on (80-60000) → " HTTP_PORT +done + +echo "✅ HTTP Port is set to $HTTP_PORT" + +# Get any other protocols + +protocol_duplicate() { + PROTOCOL="$1" + shift #remaining params are part of list + LIST="$@" + + for item in $LIST; do + if [[ "$item" == "$PROTOCOL" ]]; then + return 0 # Protocol is a duplicate + fi + done + return 1 # Protocol is not a duplicate +} + +read -p "Does your Container require any protocols other than SSH and HTTP? (y/n) → " USE_OTHER_PROTOCOLS +while [ "${USE_OTHER_PROTOCOLS^^}" != "Y" ] && [ "${USE_OTHER_PROTOCOLS^^}" != "N" ] && [ "${USE_OTHER_PROTOCOLS^^}" != "" ]; do + echo "Please answer 'y' for yes or 'n' for no." + read -p "Does your Container require any protocols other than SSH and HTTP?
(y/n) → " USE_OTHER_PROTOCOLS +done + +if [ "${USE_OTHER_PROTOCOLS^^}" == "Y" ]; then + + RANDOM_NUM=$(shuf -i 100000-999999 -n 1) + PROTOCOL_BASE_FILE="protocol_list_$RANDOM_NUM.txt" + PROTOCOL_FILE="/root/bin/protocols/$PROTOCOL_BASE_FILE" + touch "$PROTOCOL_FILE" + + LIST_PROTOCOLS=() + read -p "Enter the protocol abbreviation (e.g., LDAP for Lightweight Directory Access Protocol). Type \"e\" to exit → " PROTOCOL_NAME + while [ "${PROTOCOL_NAME^^}" != "E" ]; do + FOUND=0 #keep track if protocol was found + while read line; do + PROTOCOL_ABBRV=$(echo "$line" | awk '{print $1}') + protocol_duplicate "$PROTOCOL_ABBRV" "${LIST_PROTOCOLS[@]}" + IS_PROTOCOL_DUPLICATE=$? + if [[ "$PROTOCOL_ABBRV" == "${PROTOCOL_NAME^^}" ]]; then + if [ "$IS_PROTOCOL_DUPLICATE" -eq 1 ]; then + LIST_PROTOCOLS+=("$PROTOCOL_ABBRV") + PROTOCOL_UNDERLYING_NAME=$(echo "$line" | awk '{print $3}') + PROTOCOL_DEFAULT_PORT=$(echo "$line" | awk '{print $2}') + echo "$PROTOCOL_ABBRV $PROTOCOL_UNDERLYING_NAME $PROTOCOL_DEFAULT_PORT" >> "$PROTOCOL_FILE" + echo "✅ Protocol ${PROTOCOL_NAME^^} added to container." + FOUND=1 #protocol was found + else + echo "❌ Protocol ${PROTOCOL_NAME^^} was already added to your container. Please try again." + FOUND=2 #protocol was a duplicate + fi + break + fi + done < <(grep "^${PROTOCOL_NAME^^}" "/root/bin/protocols/master_protocol_list.txt") + + if [ $FOUND -eq 0 ]; then #if no results found, let user know. + echo "❌ Protocol ${PROTOCOL_NAME^^} not found. Please try again." + fi + + read -p "Enter the protocol abbreviation (e.g., LDAP for Lightweight Directory Access Protocol). Type \"e\" to exit → " PROTOCOL_NAME + done +fi + +# Attempt to deploy application on start. + +if [ -z "$DEPLOY_ON_START" ]; then + read -p "🚀 Do you want to deploy your project automatically? (y/n) → " DEPLOY_ON_START +fi + +while [ "${DEPLOY_ON_START^^}" != "Y" ] && [ "${DEPLOY_ON_START^^}" != "N" ] && [ "${DEPLOY_ON_START^^}" != "" ]; do + echo "Please answer 'y' for yes or 'n' for no."
+ read -p "🚀 Do you want to deploy your project automatically? (y/n) → " DEPLOY_ON_START +done + +if [ "${GH_ACTION^^}" == "Y" ]; then + if [ ! -z "${RUNTIME_LANGUAGE^^}" ]; then + DEPLOY_ON_START="Y" + fi +fi + +if [ "${DEPLOY_ON_START^^}" == "Y" ]; then + source /root/bin/deploy-application.sh +fi + +# send public key, port mapping, env vars, and services to hypervisor + +send_file_to_hypervisor() { + local LOCAL_FILE="$1" + local REMOTE_FOLDER="$2" + if [ "$REMOTE_FOLDER" != "container-env-vars" ]; then + if [ -s "$LOCAL_FILE" ]; then + sftp root@10.15.0.4 > /dev/null <<EOF +put $LOCAL_FILE /var/lib/vz/snippets/$REMOTE_FOLDER/ +EOF + fi + else + if [ -d "$LOCAL_FILE" ]; then + sftp root@10.15.0.4 > /dev/null <<EOF +put -r $LOCAL_FILE /var/lib/vz/snippets/$REMOTE_FOLDER/ +EOF + else + ENV_FOLDER="null" + fi + fi +} + +send_file_to_hypervisor "$TEMP_PUB_FILE" "container-public-keys" +send_file_to_hypervisor "$PROTOCOL_FILE" "container-port-maps" +send_file_to_hypervisor "$ENV_FOLDER_PATH" "container-env-vars" +send_file_to_hypervisor "$TEMP_SERVICES_FILE_PATH" "container-services" + +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" +echo -e "${BOLD}${MAGENTA}🚀 Starting Container Creation...${RESET}" +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" + +# Encode JSON variables +INSTALL_COMMAND_B64=$(echo -n "$INSTALL_COMMAND" | base64) +BUILD_COMMAND_B64=$(echo -n "$BUILD_COMMAND" | base64) +RUNTIME_LANGUAGE_B64=$(echo -n "$RUNTIME_LANGUAGE" | base64) +START_COMMAND_B64=$(echo -n "$START_COMMAND" | base64) + +REMOTE_CMD=( +/var/lib/vz/snippets/create-container.sh +"$CONTAINER_NAME" +"$GH_ACTION" +"$HTTP_PORT" +"$PROXMOX_USERNAME" +"$PUB_FILE" +"$PROTOCOL_BASE_FILE" +"$DEPLOY_ON_START" +"$PROJECT_REPOSITORY" +"$PROJECT_BRANCH" +"$PROJECT_ROOT" +"$INSTALL_COMMAND_B64" +"$BUILD_COMMAND_B64" +"$START_COMMAND_B64" +"$RUNTIME_LANGUAGE_B64" +"$ENV_FOLDER" +"$SERVICES_FILE" +"$LINUX_DISTRIBUTION" +"$MULTI_COMPONENT" +"$ROOT_START_COMMAND" +) + 
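The `REMOTE_CMD` array above is shell-escaped with `printf ' %q'` before being handed to `ssh`, so each argument survives the extra round of parsing by the remote shell. A standalone sketch of that pattern (the command and argument values here are illustrative only):

```shell
# Each array element is escaped by printf %q so spaces, $, and ; are
# preserved through one extra level of shell parsing.
CMD=(/bin/echo "hello world" 'a$b' "semi;colon")
QUOTED=$(printf ' %q' "${CMD[@]}")
# A local "bash -c" stands in for the remote shell that ssh would invoke.
bash -c "$QUOTED"   # → hello world a$b semi;colon
```

Without the `%q` escaping, `ssh host "$*"` would re-split `hello world` into two arguments and expand `$b` on the remote side.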
+QUOTED_REMOTE_CMD=$(printf ' %q' "${REMOTE_CMD[@]}") + +ssh -t root@10.15.0.4 "bash -c \"$QUOTED_REMOTE_CMD\"" + +rm -rf "$PROTOCOL_FILE" +rm -rf "$TEMP_PUB_FILE" +rm -rf "$TEMP_SERVICES_FILE_PATH" +rm -rf "$ENV_FOLDER_PATH" + +unset CONFIRM_PASSWORD +unset PUBLIC_KEY +unset PROXMOX_PASSWORD diff --git a/container creation/js/authenticateRepo.js b/container creation/js/authenticateRepo.js new file mode 100644 index 00000000..37a57e7a --- /dev/null +++ b/container creation/js/authenticateRepo.js @@ -0,0 +1,23 @@ +const axios = require('axios'); + +function authenticateRepo(repositoryURL, branch, folderPath) { + + if (folderPath.indexOf('.') != -1) { + return Promise.resolve(false); //early exit if path points to a specific file + } + + repositoryURL = repositoryURL.replace('.git', ''); + if (folderPath.startsWith('/')) { + folderPath = folderPath.substring(1, folderPath.length); + } + fullURL = `${repositoryURL}/tree/${branch}/${folderPath}` + + const config = { + method: "get", + url: fullURL + } + + return axios.request(config).then((response) => response.status === 200).catch(() => false); +} + +module.exports = { authenticateRepo } \ No newline at end of file diff --git a/container-creation/js/authenticateUser.js b/container creation/js/authenticateUser.js similarity index 90% rename from container-creation/js/authenticateUser.js rename to container creation/js/authenticateUser.js index 61153f1e..36942c4f 100644 --- a/container-creation/js/authenticateUser.js +++ b/container creation/js/authenticateUser.js @@ -1,6 +1,3 @@ -// Script to authenticate a user into Proxmox -// Last updated June 24th, 2025 by Maxwell Klema - const axios = require('axios'); const qs = require('qs'); const https = require('https'); @@ -27,4 +24,5 @@ function authenticateUser(username, password) { return axios.request(config).then((response) => response.status === 200).catch(() => false); } + module.exports = { authenticateUser }; diff --git 
a/container-creation/js/authenticateUserRunner.js b/container creation/js/runner.js similarity index 54% rename from container-creation/js/authenticateUserRunner.js rename to container creation/js/runner.js index 5843dd9c..8fde30cb 100644 --- a/container-creation/js/authenticateUserRunner.js +++ b/container creation/js/runner.js @@ -1,11 +1,13 @@ -// Script to run authenticateUser in the shell -// Last updated June 24th, 2025 by Maxwell Klema - authenticateuser = require("./authenticateUser.js"); +authenticaterepo = require("./authenticateRepo.js") const [, , func, ...args] = process.argv; if (func == "authenticateUser") { authenticateuser.authenticateUser(...args).then((result) => { console.log(result); }); +} else if (func == "authenticateRepo") { + authenticaterepo.authenticateRepo(...args).then((result) => { + console.log(result); + }) } diff --git a/container creation/protocols/master_protocol_list.txt b/container creation/protocols/master_protocol_list.txt new file mode 100644 index 00000000..ba1d16e4 --- /dev/null +++ b/container creation/protocols/master_protocol_list.txt @@ -0,0 +1,145 @@ +TCPM 1 tcp +RJE 5 tcp +ECHO 7 tcp +DISCARD 9 tcp +DAYTIME 13 tcp +QOTD 17 tcp +MSP 18 tcp +CHARGEN 19 tcp +FTP 20 tcp +FTP 21 tcp +SSH 22 tcp +TELNET 23 tcp +SMTP 25 tcp +TIME 37 tcp +HNS 42 tcp +WHOIS 43 tcp +TACACS 49 tcp +DNS 53 tcp +BOOTPS 67 udp +BOOTPC 68 udp +TFTP 69 udp +GOPHER 70 tcp +FINGER 79 tcp +HTTP 80 tcp +KERBEROS 88 tcp +HNS 101 tcp +ISO-TSAP 102 tcp +POP2 109 tcp +POP3 110 tcp +RPC 111 tcp +AUTH 113 tcp +SFTP 115 tcp +UUCP-PATH 117 tcp +NNTP 119 tcp +NTP 123 udp +EPMAP 135 tcp +NETBIOS-NS 137 tcp +NETBIOS-DGM 138 udp +NETBIOS-SSN 139 tcp +IMAP 143 tcp +SQL-SRV 156 tcp +SNMP 161 udp +SNMPTRAP 162 udp +XDMCP 177 tcp +BGP 179 tcp +IRC 194 tcp +LDAP 389 tcp +NIP 396 tcp +HTTPS 443 tcp +SNPP 444 tcp +SMB 445 tcp +KPASSWD 464 tcp +SMTPS 465 tcp +ISAKMP 500 udp +EXEC 512 tcp +LOGIN 513 tcp +SYSLOG 514 udp +LPD 515 tcp +TALK 517 udp +NTALK 518 udp +RIP 520 udp 
+RIPNG 521 udp +RPC 530 tcp +UUCP 540 tcp +KLOGIN 543 tcp +KSHELL 544 tcp +DHCPV6-C 546 tcp +DHCPV6-S 547 tcp +AFP 548 tcp +RTSP 554 tcp +NNTPS 563 tcp +SUBMISSION 587 tcp +IPP 631 tcp +LDAPS 636 tcp +LDP 646 tcp +LINUX-HA 694 tcp +ISCSI 860 tcp +RSYNC 873 tcp +VMWARE 902 tcp +FTPS-DATA 989 tcp +FTPS 990 tcp +TELNETS 992 tcp +IMAPS 993 tcp +POP3S 995 tcp +SOCKS 1080 tcp +OPENVPN 1194 udp +OMGR 1311 tcp +MS-SQL-S 1433 tcp +MS-SQL-M 1434 udp +WINS 1512 tcp +ORACLE-SQL 1521 tcp +RADIUS 1645 tcp +RADIUS-ACCT 1646 tcp +L2TP 1701 udp +PPTP 1723 tcp +CISCO-ISL 1741 tcp +RADIUS 1812 udp +RADIUS-ACCT 1813 udp +NFS 2049 tcp +CPANEL 2082 tcp +CPANEL-SSL 2083 tcp +WHM 2086 tcp +WHM-SSL 2087 tcp +DA 2222 tcp +ORACLE-DB 2483 tcp +ORACLE-DBS 2484 tcp +XBOX 3074 tcp +HTTP-PROXY 3128 tcp +MYSQL 3306 tcp +RDP 3389 tcp +NDPS-PA 3396 tcp +SVN 3690 tcp +MSQL 4333 udp +METASPLOIT 4444 tcp +EMULE 4662 tcp +EMULE 4672 udp +RADMIN 4899 tcp +UPNP 5000 tcp +YMSG 5050 tcp +SIP 5060 tcp +SIP-TLS 5061 tcp +AIM 5190 tcp +XMPP-CLIENT 5222 tcp +XMPP-CLIENTS 5223 tcp +XMPP-SERVER 5269 tcp +POSTGRES 5432 tcp +VNC 5500 tcp +VNC-HTTP 5800 tcp +VNC 5900 tcp +X11 6000 tcp +BNET 6112 tcp +GNUTELLA 6346 tcp +SANE 6566 tcp +IRC 6667 tcp +IRCS 6697 tcp +BT 6881 tcp +HTTP-ALT 8000 tcp +HTTP-ALT 8008 tcp +HTTP-ALT 8080 tcp +HTTPS-ALT 8443 tcp +PDL-DS 9100 tcp +BACNET 9101 tcp +WEBMIN 10000 udp +MONGO 27017 tcp +TRACEROUTE 33434 udp \ No newline at end of file diff --git a/intern-phxdc-pve1/register-container.sh b/container creation/register-container.sh similarity index 100% rename from intern-phxdc-pve1/register-container.sh rename to container creation/register-container.sh diff --git a/container creation/services/service_map_debian.json b/container creation/services/service_map_debian.json new file mode 100644 index 00000000..2c99c524 --- /dev/null +++ b/container creation/services/service_map_debian.json @@ -0,0 +1,69 @@ +{ + "meteor": [ + "curl https://install.meteor.com/ | sh" + ], + "mongodb": [ + 
"sudo apt update -y", + "sudo apt install -y gnupg curl", + "curl -fsSL https://pgp.mongodb.com/server-7.0.asc | sudo gpg --dearmor -o /usr/share/keyrings/mongodb-server-7.0.gpg", + "echo \"deb [ signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/debian bookworm/mongodb-org/7.0 main\" | sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list", + "sudo apt update -y", + "sudo apt install -y mongodb-org", + "sudo systemctl enable mongod", + "sudo systemctl start mongod" + ], + "redis": [ + "sudo apt update -y", + "sudo apt install -y redis-server", + "sudo systemctl enable redis-server", + "sudo systemctl start redis-server" + ], + "postgresql": [ + "sudo apt update -y", + "sudo apt install -y postgresql postgresql-contrib", + "sudo systemctl enable postgresql", + "sudo systemctl start postgresql" + ], + "apache": [ + "sudo apt update -y", + "sudo apt install -y apache2", + "sudo systemctl enable apache2", + "sudo systemctl start apache2" + ], + "nginx": [ + "sudo apt update -y", + "sudo apt install -y nginx", + "sudo systemctl enable nginx", + "sudo systemctl start nginx" + ], + "docker": [ + "sudo apt update -y", + "sudo apt install -y lsb-release", + "sudo apt install -y ca-certificates curl gnupg lsb-release", + "sudo install -m 0755 -d /etc/apt/keyrings", + "curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg", + "echo 'deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian bookworm stable' | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null", + "sudo apt update -y", + "sudo apt install -y docker-ce docker-ce-cli containerd.io", + "sudo systemctl enable docker", + "sudo systemctl start docker" + ], + "rabbitmq": [ + "sudo apt update -y", + "sudo apt install -y rabbitmq-server", + "sudo systemctl enable rabbitmq-server", + "sudo systemctl start rabbitmq-server" + ], + "memcached": [ + "sudo apt update -y", + "sudo apt install -y 
memcached", + "sudo systemctl enable memcached", + "sudo systemctl start memcached" + ], + "mariadb": [ + "sudo apt update -y", + "sudo apt install -y mariadb-server", + "sudo systemctl enable mariadb", + "sudo systemctl start mariadb" + ] +} \ No newline at end of file diff --git a/container creation/services/service_map_rocky.json b/container creation/services/service_map_rocky.json new file mode 100644 index 00000000..1b661433 --- /dev/null +++ b/container creation/services/service_map_rocky.json @@ -0,0 +1,99 @@ +{ + "meteor": [ + "dnf install tar -y", + "curl https://install.meteor.com/ | sh" + ], + "mongodb": [ + "sudo dnf install -y epel-release", + "sudo dnf update -y", + "sudo dnf install -y gnupg curl", + "curl -fsSL https://pgp.mongodb.com/server-7.0.asc | sudo gpg --dearmor -o /etc/pki/rpm-gpg/RPM-GPG-KEY-mongodb", + "echo '[mongodb-org-7.0]' | sudo tee /etc/yum.repos.d/mongodb-org-7.0.repo", + "echo 'name=MongoDB Repository' >> /etc/yum.repos.d/mongodb-org-7.0.repo", + "echo 'baseurl=https://repo.mongodb.org/yum/redhat/9/mongodb-org/7.0/x86_64/' >> /etc/yum.repos.d/mongodb-org-7.0.repo", + "echo 'gpgcheck=1' >> /etc/yum.repos.d/mongodb-org-7.0.repo", + "echo 'enabled=1' >> /etc/yum.repos.d/mongodb-org-7.0.repo", + "echo 'gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mongodb' >> /etc/yum.repos.d/mongodb-org-7.0.repo", + "sudo dnf install -y mongodb-org", + "sudo systemctl enable mongod", + "sudo systemctl start mongod" + ], + "redis": [ + "sudo dnf install -y epel-release", + "sudo dnf update -y", + "sudo dnf install -y redis", + "sudo systemctl enable redis", + "sudo systemctl start redis" + ], + "postgresql": [ + "sudo dnf install -y epel-release", + "sudo dnf update -y", + "sudo dnf install -y postgresql-server postgresql-contrib", + "sudo postgresql-setup --initdb", + "sudo systemctl enable postgresql", + "sudo systemctl start postgresql" + ], + "apache": [ + "sudo dnf install -y epel-release", + "sudo dnf update -y", + "sudo dnf install -y httpd", + 
"sudo systemctl enable httpd", + "sudo systemctl start httpd" + ], + "httpd": [ + "sudo dnf install -y epel-release", + "sudo dnf update -y", + "sudo dnf install -y httpd", + "sudo systemctl enable httpd", + "sudo systemctl start httpd" + ], + "nginx": [ + "sudo dnf install -y epel-release", + "sudo dnf update -y", + "sudo dnf install -y nginx", + "sudo systemctl enable nginx", + "sudo systemctl start nginx" + ], + "docker": [ + "sudo dnf update -y", + "sudo dnf install -y yum-utils device-mapper-persistent-data lvm2", + "sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo", + "sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin", + "sudo systemctl enable docker", + "sudo systemctl start docker" + ], + "rabbitmq": [ + "sudo dnf install -y epel-release", + "sudo dnf install -y erlang", + "sudo dnf update -y", + "rpm --import 'https://github.com/rabbitmq/signing-keys/releases/download/3.0/rabbitmq-release-signing-key.asc'", + "rpm --import 'https://github.com/rabbitmq/signing-keys/releases/download/3.0/cloudsmith.rabbitmq-server.9F4587F226208342.key'", + "echo '[rabbitmq-server]' | sudo tee /etc/yum.repos.d/rabbitmq_rabbitmq-server.repo", + "echo 'name=rabbitmq-server' | sudo tee -a /etc/yum.repos.d/rabbitmq_rabbitmq-server.repo", + "echo 'baseurl=https://packagecloud.io/rabbitmq/rabbitmq-server/el/9/$basearch' | sudo tee /etc/yum.repos.d/rabbitmq_rabbitmq-server.repo", + "echo 'repo_gpgcheck=1' | sudo tee -a /etc/yum.repos.d/rabbitmq_rabbitmq-server.repo", + "echo 'gpgcheck=1' | sudo tee -a /etc/yum.repos.d/rabbitmq_rabbitmq-server.repo", + "echo 'enabled=1' | sudo tee -a /etc/yum.repos.d/rabbitmq_rabbitmq-server.repo", + "echo 'gpgkey=https://packagecloud.io/rabbitmq/rabbitmq-server/gpgkey' | sudo tee -a /etc/yum.repos.d/rabbitmq_rabbitmq-server.repo", + "echo 'sslverify=1' | sudo tee -a /etc/yum.repos.d/rabbitmq_rabbitmq-server.repo", + "echo 
'sslcacert=/etc/pki/tls/certs/ca-bundle.crt' | sudo tee -a /etc/yum.repos.d/rabbitmq_rabbitmq-server.repo", + "echo 'metadata_expire=300' | sudo tee -a /etc/yum.repos.d/rabbitmq_rabbitmq-server.repo", + "sudo dnf install -y rabbitmq-server", + "sudo systemctl enable rabbitmq-server", + "sudo systemctl start rabbitmq-server" + ], + "memcached": [ + "sudo dnf install -y epel-release", + "sudo dnf update -y", + "sudo dnf install -y memcached", + "sudo systemctl enable memcached", + "sudo systemctl start memcached" + ], + "mariadb": [ + "sudo dnf install -y epel-release", + "sudo dnf update -y", + "sudo dnf install -y mariadb-server", + "sudo systemctl enable mariadb", + "sudo systemctl start mariadb" + ] +} \ No newline at end of file diff --git a/container creation/setup-runner.sh b/container creation/setup-runner.sh new file mode 100644 index 00000000..382f4f87 --- /dev/null +++ b/container creation/setup-runner.sh @@ -0,0 +1,143 @@ +#!/bin/bash +# A script for cloning a Distro template, installing, and starting a runner on it. +# Last Modified by Maxwell Klema on August 5th, 2025 +# ------------------------------------------------ + +outputError() { + echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" + echo -e "${BOLD}${MAGENTA}❌ Script Failed. Exiting... ${RESET}" + echo -e "$2" + echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" + exit $1 +} + +BOLD='\033[1m' +RESET='\033[0m' + +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" +echo "🧬 Cloning a Template and installing a Runner" +echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" + +# Validating Container Name ===== + +source /var/lib/vz/snippets/helper-scripts/PVE_user_authentication.sh #Authenticate User +source /var/lib/vz/snippets/helper-scripts/verify_container_ownership.sh #Ensure container does not exist. + +if [ ! 
-z "$CONTAINER_OWNERSHIP" ]; then + outputError 1 "You already own a container with name \"$CONTAINER_NAME\". Please delete it before creating a new one." +fi + +# Cloning Container Template and Setting it up ===== + +REPO_BASE_NAME=$(basename -s .git "$PROJECT_REPOSITORY") +REPO_BASE_NAME_WITH_OWNER=$(echo "$PROJECT_REPOSITORY" | cut -d'/' -f4) + +TEMPLATE_NAME="template-$REPO_BASE_NAME-$REPO_BASE_NAME_WITH_OWNER" +CTID_TEMPLATE=$( { pct list; ssh root@10.15.0.5 'pct list'; } | awk -v name="$TEMPLATE_NAME" '$3 == name {print $1}') + +case "${LINUX_DISTRIBUTION^^}" in + ROCKY) PACKAGE_MANAGER="dnf" ;; + *) PACKAGE_MANAGER="apt-get" ;; +esac + +# If no template ID was found, assign a default based on distro (Debian is the fallback) + +if [ -z "$CTID_TEMPLATE" ]; then + case "${LINUX_DISTRIBUTION^^}" in + ROCKY) CTID_TEMPLATE="138" ;; + *) CTID_TEMPLATE="160" ;; + esac +fi + +if [ "${LINUX_DISTRIBUTION^^}" != "ROCKY" ]; then + LINUX_DISTRIBUTION="DEBIAN" +fi + +NEXT_ID=$(pvesh get /cluster/nextid) #Get the next available LXC ID + +# Create the Container Clone +echo "⏳ Cloning Container..." +pct clone $CTID_TEMPLATE $NEXT_ID \ + --hostname $CONTAINER_NAME \ + --full true > /dev/null 2>&1 + +# Set Container Options +echo "⏳ Setting Container Properties..." +pct set $NEXT_ID \ + --tags "$PROXMOX_USERNAME;$LINUX_DISTRIBUTION" \ + --onboot 1 \ + --cores 4 \ + --memory 4096 > /dev/null 2>&1 + +pct start $NEXT_ID > /dev/null 2>&1 +pveum aclmod /vms/$NEXT_ID --user "$PROXMOX_USERNAME@pve" --role PVEVMUser > /dev/null 2>&1 + +sleep 5 +echo "⏳ DHCP Allocating IP Address..." +CONTAINER_IP=$(pct exec $NEXT_ID -- hostname -I | awk '{print $1}') + +# Setting Up GitHub Runner ===== + +# Get Temporary Token +echo "🪙 Getting Authentication Token..."
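The token request that follows appends the HTTP status code to the response body via `--write-out "HTTPSTATUS:%{http_code}"` and then splits the two apart. The splitting logic can be exercised offline against a canned response (the JSON body and status here are made up):

```shell
# RESPONSE simulates what curl would emit with
# --write-out "HTTPSTATUS:%{http_code}" appended after the body.
RESPONSE='{"token":"AAAA"}HTTPSTATUS:201'
# Pull the trailing status code out, then strip it off to recover the body.
HTTP_STATUS=$(echo "$RESPONSE" | grep -o "HTTPSTATUS:[0-9]*" | cut -d: -f2)
BODY=$(echo "$RESPONSE" | sed 's/HTTPSTATUS:[0-9]*$//')
echo "$HTTP_STATUS"   # → 201
echo "$BODY"          # → {"token":"AAAA"}
```

This pattern avoids a second request just to learn the status, at the cost of assuming the body itself never ends in `HTTPSTATUS:<digits>`.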
+AUTH_TOKEN_RESPONSE=$(curl --location --request POST https://api.github.com/repos/$REPO_BASE_NAME_WITH_OWNER/$REPO_BASE_NAME/actions/runners/registration-token --header "Authorization: token $GITHUB_PAT" --write-out "HTTPSTATUS:%{http_code}" --silent) + +HTTP_STATUS=$(echo "$AUTH_TOKEN_RESPONSE" | grep -o "HTTPSTATUS:[0-9]*" | cut -d: -f2) +AUTH_TOKEN_BODY=$(echo "$AUTH_TOKEN_RESPONSE" | sed 's/HTTPSTATUS:[0-9]*$//') + +if [ "$HTTP_STATUS" != "201" ]; then + outputError 1 "Failed to get GitHub authentication token. HTTP Status: $HTTP_STATUS\nResponse: $AUTH_TOKEN_BODY" +fi + +TOKEN=$(echo "$AUTH_TOKEN_BODY" | jq -r '.token') + +pct enter $NEXT_ID <<EOF > /dev/null 2>&1 +rm -rf /root/container-updates.log || true && \ +cd /actions-runner && export RUNNER_ALLOW_RUNASROOT=1 && \ +runProcess=\$(ps aux | grep "[r]un.sh" | awk '{print \$2}' | head -n 1) && \ +if [ ! -z "\$runProcess" ]; then kill -9 \$runProcess || true; fi && \ +rm -rf .runner .credentials && rm -rf _work/* /var/log/runner/* 2>/dev/null || true && \ +export RUNNER_ALLOW_RUNASROOT=1 && \ +./config.sh --url $PROJECT_REPOSITORY --token $TOKEN --labels $CONTAINER_NAME --name $CONTAINER_NAME --unattended +EOF + +# Generate RSA Keys ===== + +echo "🔑 Generating RSA Key Pair..." +pct exec $NEXT_ID -- bash -c "ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa -q" +PUB_KEY=$(pct exec $NEXT_ID -- bash -c "cat /root/.ssh/id_rsa.pub") + +# Place public key in all necessary authorized_keys files +echo "$PUB_KEY" >> /home/create-container/.ssh/authorized_keys +echo "$PUB_KEY" >> /home/update-container/.ssh/authorized_keys +echo "$PUB_KEY" >> /home/delete-container/.ssh/authorized_keys +echo "$PUB_KEY" >> /home/container-exists/.ssh/authorized_keys + +ssh root@10.15.234.122 "echo \"$PUB_KEY\" >> /root/.ssh/authorized_keys" + +echo "🔑 Creating Service File..."
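The container-creation and start-services scripts in this PR pass the JSON command maps (`START_COMMAND`, `BUILD_COMMAND`, `RUNTIME_LANGUAGE`) between hosts as base64, which sidesteps quoting problems when JSON crosses `ssh`/`pct exec` argument boundaries. A round-trip sketch with made-up JSON content:

```shell
# Encode on the sending side (echo -n avoids a trailing newline in the
# encoded payload), decode on the receiving side.
START_COMMAND='{"frontend":"npm start","backend":"python app.py"}'
START_COMMAND_B64=$(echo -n "$START_COMMAND" | base64)
DECODED=$(echo "$START_COMMAND_B64" | base64 -d)
[ "$DECODED" = "$START_COMMAND" ] && echo "round-trip ok"   # → round-trip ok
```

The decoded string can then be fed to `jq -r` exactly as `start_services.sh` does when it iterates over the component keys.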
+pct exec $NEXT_ID -- bash -c "cat > /etc/systemd/system/github-runner.service <<EOF +[Unit] +Description=GitHub Actions Runner +After=network.target + +[Service] +Type=simple +WorkingDirectory=/actions-runner +Environment=\"RUNNER_ALLOW_RUNASROOT=1\" +ExecStart=/actions-runner/run.sh +Restart=always + +[Install] +WantedBy=multi-user.target +EOF" + +pct exec $NEXT_ID -- systemctl daemon-reload +pct exec $NEXT_ID -- systemctl enable github-runner +pct exec $NEXT_ID -- systemctl start github-runner + +exit 3 diff --git a/container-creation/ssh/detectPublicKey.sh b/container creation/ssh/detectPublicKey.sh similarity index 100% rename from container-creation/ssh/detectPublicKey.sh rename to container creation/ssh/detectPublicKey.sh diff --git a/container-creation/ssh/publicKeyAppendJumpHost.sh b/container creation/ssh/publicKeyAppendJumpHost.sh similarity index 100% rename from container-creation/ssh/publicKeyAppendJumpHost.sh rename to container creation/ssh/publicKeyAppendJumpHost.sh diff --git a/container creation/start_services.sh b/container creation/start_services.sh new file mode 100644 index 00000000..7942ad02 --- /dev/null +++ b/container creation/start_services.sh @@ -0,0 +1,135 @@ +#!/bin/bash +# Script run by a virtual terminal session to start services and migrate a container +# Script is only run on GH Action workflows when the runner disconnects +# Last Modified by Maxwell Klema on August 5th, 2025 +# ------------------------------------------------ + +CONTAINER_ID="$1" +CONTAINER_NAME="$2" +REPO_BASE_NAME="$3" +REPO_BASE_NAME_WITH_OWNER="$4" +SSH_PORT="$5" +CONTAINER_IP="$6" +PROJECT_ROOT="$7" +ROOT_START_COMMAND="$8" +DEPLOY_ON_START="$9" +MULTI_COMPONENT="${10}" +START_COMMAND=$(echo "${11}" | base64 -d) +BUILD_COMMAND=$(echo "${12}" | base64 -d) +RUNTIME_LANGUAGE=$(echo "${13}" | base64 -d) +GH_ACTION="${14}" +PROJECT_BRANCH="${15}" +UPDATE_CONTAINER="${16}" +CONTAINER_NAME="${CONTAINER_NAME,,}" + +if [ "${GH_ACTION^^}" == "Y" ]; then + sleep 8 # Wait for Job
to Complete
+fi
+
+if (( $CONTAINER_ID % 2 == 0 )) && [ "$UPDATE_CONTAINER" == "true" ]; then
+    ssh root@10.15.0.5 "pct stop $CONTAINER_ID" > /dev/null 2>&1
+else
+    pct stop $CONTAINER_ID > /dev/null 2>&1
+fi
+
+# Create template if on default branch ====
+source /var/lib/vz/snippets/helper-scripts/create-template.sh
+
+if (( $CONTAINER_ID % 2 == 0 )); then
+
+    if [ "$UPDATE_CONTAINER" != "true" ]; then
+        pct migrate $CONTAINER_ID intern-phxdc-pve2 --target-storage containers-pve2 --online > /dev/null 2>&1
+        sleep 5 # wait for migration to finish (fix this later)
+    fi
+
+    ssh root@10.15.0.5 "pct start $CONTAINER_ID"
+    ssh root@10.15.0.5 "pct exec $CONTAINER_ID -- bash -c 'chmod 700 ~/.bashrc'" # give the owner full read/write/execute permissions
+    ssh root@10.15.0.5 "pct set $CONTAINER_ID --memory 4096 --swap 0 --cores 4"
+
+    if [ "${GH_ACTION^^}" == "Y" ]; then
+        ssh root@10.15.0.5 "pct exec $CONTAINER_ID -- systemctl start github-runner"
+    fi
+
+    startProject() {
+
+        RUNTIME="$1"
+        BUILD_CMD="$2"
+        START_CMD="$3"
+        COMP_DIR="$4"
+
+        if [ -z "$BUILD_CMD" ]; then
+            BUILD_CMD="true"
+        fi
+
+        if [ "${RUNTIME^^}" == "NODEJS" ]; then
+            ssh root@10.15.0.5 "pct exec $CONTAINER_ID -- bash -c \"mkdir -p /tmp && chmod 1777 /tmp && mkdir -p /tmp/tmux-0 && chmod 700 /tmp/tmux-0 && TMUX_TMPDIR=/tmp tmux new-session -d 'export HOME=/root && export PATH=\\\$PATH:/usr/local/bin && cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/$COMP_DIR && $BUILD_CMD && $START_CMD'\"" > /dev/null 2>&1
+        elif [ "${RUNTIME^^}" == "PYTHON" ]; then
+            ssh root@10.15.0.5 "pct exec $CONTAINER_ID -- bash -c \"mkdir -p /tmp && chmod 1777 /tmp && mkdir -p /tmp/tmux-0 && chmod 700 /tmp/tmux-0 && TMUX_TMPDIR=/tmp tmux new-session -d 'export HOME=/root && export PATH=\\\$PATH:/usr/local/bin && cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/$COMP_DIR && source venv/bin/activate && $BUILD_CMD && $START_CMD'\"" > /dev/null 2>&1
+        fi
+
+    }
+
+    if [ "${DEPLOY_ON_START^^}" == "Y" ]; then
+        if [ "${MULTI_COMPONENT^^}" == "Y" ]; then
+            for COMPONENT in $(echo
"$START_COMMAND" | jq -r 'keys[]'); do
+                START=$(echo "$START_COMMAND" | jq -r --arg k "$COMPONENT" '.[$k]')
+                RUNTIME=$(echo "$RUNTIME_LANGUAGE" | jq -r --arg k "$COMPONENT" '.[$k]')
+                BUILD=$(echo "$BUILD_COMMAND" | jq -r --arg k "$COMPONENT" '.[$k]')
+                if [ "$BUILD" == "null" ]; then
+                    BUILD=""
+                fi
+                startProject "$RUNTIME" "$BUILD" "$START" "$COMPONENT"
+            done
+            if [ ! -z "$ROOT_START_COMMAND" ]; then
+                ssh root@10.15.0.5 "pct exec $CONTAINER_ID -- bash -c 'cd /root/$REPO_BASE_NAME/$PROJECT_ROOT && $ROOT_START_COMMAND'" > /dev/null 2>&1
+            fi
+        else
+            startProject "$RUNTIME_LANGUAGE" "$BUILD_COMMAND" "$START_COMMAND" "."
+        fi
+    fi
+
+# PVE 1
+else
+    pct start $CONTAINER_ID || true
+    sleep 5
+    if [ "${GH_ACTION^^}" == "Y" ]; then
+        pct exec $CONTAINER_ID -- bash -c "systemctl start github-runner"
+    fi
+
+    startComponent() {
+
+        RUNTIME="$1"
+        BUILD_CMD="$2"
+        START_CMD="$3"
+        COMP_DIR="$4"
+
+        if [ -z "$BUILD_CMD" ]; then
+            BUILD_CMD="true"
+        fi
+
+        if [ "${RUNTIME^^}" == "NODEJS" ]; then
+            pct exec "$CONTAINER_ID" -- bash -c "mkdir -p /tmp && chmod 1777 /tmp && mkdir -p /tmp/tmux-0 && chmod 700 /tmp/tmux-0 && TMUX_TMPDIR=/tmp/tmux-0 tmux new-session -d \"export HOME=/root && export PATH=\$PATH:/usr/local/bin && cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/$COMP_DIR && $BUILD_CMD && $START_CMD\""
+        elif [ "${RUNTIME^^}" == "PYTHON" ]; then
+            pct exec "$CONTAINER_ID" -- bash -c "mkdir -p /tmp && chmod 1777 /tmp && mkdir -p /tmp/tmux-0 && chmod 700 /tmp/tmux-0 && TMUX_TMPDIR=/tmp/tmux-0 tmux new-session -d \"export HOME=/root && export PATH=\$PATH:/usr/local/bin && cd /root/$REPO_BASE_NAME/$PROJECT_ROOT/$COMP_DIR && source venv/bin/activate && $BUILD_CMD && $START_CMD\""
+        fi
+    }
+
+    pct set $CONTAINER_ID --memory 4096 --swap 0 --cores 4 > /dev/null # temporarily bump up container resources for computation-hungry processes (e.g.
meteor) + if [ "${MULTI_COMPONENT^^}" == "Y" ]; then + for COMPONENT in $(echo "$START_COMMAND" | jq -r 'keys[]'); do + START=$(echo "$START_COMMAND" | jq -r --arg k "$COMPONENT" '.[$k]') + RUNTIME=$(echo "$RUNTIME_LANGUAGE" | jq -r --arg k "$COMPONENT" '.[$k]') + BUILD=$(echo "$BUILD_COMMAND" | jq -r --arg k "$COMPONENT" '.[$k]') + if [ "$BUILD" == "null" ]; then + BUILD="" + fi + + startComponent "$RUNTIME" "$BUILD" "$START" "$COMPONENT" + done + if [ ! -z "$ROOT_START_COMMAND" ]; then + pct exec $CONTAINER_ID -- bash -c "cd /root/$REPO_BASE_NAME/$PROJECT_ROOT && $ROOT_START_COMMAND" > /dev/null 2>&1 + fi + else + startComponent "$RUNTIME_LANGUAGE" "$BUILD_COMMAND" "$START_COMMAND" "." + fi +fi diff --git a/container-creation/create-container.sh b/container-creation/create-container.sh deleted file mode 100644 index 583be02a..00000000 --- a/container-creation/create-container.sh +++ /dev/null @@ -1,106 +0,0 @@ -#!/bin/bash -# Script to create the pct container, run register container, and migrate container accordingly. -# Last Modified by June 30th, 2025 by Maxwell Klema - -trap cleanup SIGINT SIGTERM SIGHUP - -CONTAINER_NAME="$1" -CONTAINER_PASSWORD="$2" -HTTP_PORT="$3" -PROXMOX_USERNAME="$4" -PUB_FILE="$5" -PROTOCOL_FILE="$6" -NEXT_ID=$(pvesh get /cluster/nextid) #Get the next available LXC ID - -# Run cleanup commands in case script is interrupted - -function cleanup() -{ - BOLD='\033[1m' - RESET='\033[0m' - - echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" - echo "⚠️ Script was abruptly exited. Running cleanup tasks." 
- echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" - pct unlock 114 - if [ -f "/var/lib/vz/snippets/container-public-keys/$PUB_FILE" ]; then - rm -rf /var/lib/vz/snippets/container-public-keys/$PUB_FILE - fi - if [ -f "/var/lib/vz/snippets/container-port-maps/$PROTOCOL_FILE" ]; then - rm -rf /var/lib/vz/snippets/container-port-maps/$PROTOCOL_FILE - fi - exit 1 -} - - -# Create the Container Clone - -echo "⏳ Cloning Container..." -pct clone 114 $NEXT_ID \ - --hostname $CONTAINER_NAME \ - --full true \ - -# Set Container Options - -echo "⏳ Setting Container Properties.." -pct set $NEXT_ID \ - --tags "$PROXMOX_USERNAME" \ - --onboot 1 \ - -pct start $NEXT_ID -pveum aclmod /vms/$NEXT_ID --user "$PROXMOX_USERNAME@pve" --role PVEVMUser -#pct delete $NEXT_ID - -# Get the Container IP Address and install some packages - -echo "⏳ Waiting for DHCP to allocate IP address to container..." -sleep 10 - -CONTAINER_IP=$(pct exec $NEXT_ID -- hostname -I | awk '{print $1}') -pct exec $NEXT_ID -- apt-get upgrade -pct exec $NEXT_ID -- apt install -y sudo -pct exec $NEXT_ID -- apt install -y git -if [ -f "/var/lib/vz/snippets/container-public-keys/$PUB_FILE" ]; then - pct exec $NEXT_ID -- touch ~/.ssh/authorized_keys - pct exec $NEXT_ID -- bash -c "cat > ~/.ssh/authorized_keys"< /var/lib/vz/snippets/container-public-keys/$PUB_FILE - rm -rf /var/lib/vz/snippets/container-public-keys/$PUB_FILE -fi - -# Set password inside the container - -pct exec $NEXT_ID -- bash -c "echo 'root:$CONTAINER_PASSWORD' | chpasswd" - -# Run Contianer Provision Script to add container to port_map.json - -if [ -f "/var/lib/vz/snippets/container-port-maps/$PROTOCOL_FILE" ]; then - echo "CONTAINS PROTOCOL FILE" - /var/lib/vz/snippets/register-container-test.sh $NEXT_ID $HTTP_PORT /var/lib/vz/snippets/container-port-maps/$PROTOCOL_FILE - rm -rf /var/lib/vz/snippets/container-port-maps/$PROTOCOL_FILE -else - /var/lib/vz/snippets/register-container-test.sh $NEXT_ID $HTTP_PORT -fi - 
-SSH_PORT=$(iptables -t nat -S PREROUTING | grep "to-destination $CONTAINER_IP:22" | awk -F'--dport ' '{print $2}' | awk '{print $1}' | head -n 1 || true) - -# Migrate to pve2 if Container ID is even - -if (( $NEXT_ID % 2 == 0 )); then - pct stop $NEXT_ID - pct migrate $NEXT_ID intern-phxdc-pve2 --target-storage containers-pve2 --online - ssh root@10.15.0.5 "pct start $NEXT_ID" -fi - -# Echo Container Details - -# Define friendly, high-contrast colors -BOLD='\033[1m' -BLUE='\033[34m' -MAGENTA='\033[35m' -GREEN='\033[32m' -RESET='\033[0m' - -echo -e "📦 ${BLUE}Container ID :${RESET} $NEXT_ID" -echo -e "🌐 ${MAGENTA}Internal IP :${RESET} $CONTAINER_IP" -echo -e "🔗 ${GREEN}Domain Name :${RESET} https://$CONTAINER_NAME.opensource.mieweb.org" -echo -e "🛠️ ${BLUE}SSH Access :${RESET} ssh -p $SSH_PORT root@$CONTAINER_NAME.opensource.mieweb.org" -echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" \ No newline at end of file diff --git a/container-creation/get-lxc-container-details.sh b/container-creation/get-lxc-container-details.sh deleted file mode 100644 index 9715de8c..00000000 --- a/container-creation/get-lxc-container-details.sh +++ /dev/null @@ -1,218 +0,0 @@ -#!/bin/bash -# Main Container Creation Script -# Modified June 23rd, 2025 by Maxwell Klema -# ------------------------------------------ - -# Define color variables (works on both light and dark backgrounds) -RESET="\033[0m" -BOLD="\033[1m" -MAGENTA='\033[35m' - -echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" -echo -e "${BOLD}${MAGENTA}📦 MIE Container Creation Script ${RESET}" -echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${RESET}" - -# Authenticate User (Only Valid Users can Create Containers) - -if [ -z "$PROXMOX_USERNAME" ]; then - read -p "Enter Proxmox Username → " PROXMOX_USERNAME -fi - -if [ -z "$PROXMOX_PASSWORD" ]; then - read -sp "Enter Proxmox Password → " PROXMOX_PASSWORD - echo "" -fi - -USER_AUTHENTICATED=$(node 
/root/bin/js/authenticateUserRunner.js authenticateUser "$PROXMOX_USERNAME" "$PROXMOX_PASSWORD") -RETRIES=3 - -while [ $USER_AUTHENTICATED == 'false' ]; do - if [ $RETRIES -gt 0 ]; then - echo "❌ Authentication Failed. Try Again" - read -p "Enter Proxmox Username → " PROXMOX_USERNAME - read -sp "Enter Proxmox Password → " PROXMOX_PASSWORD - echo "" - - USER_AUTHENTICATED=$(node /root/bin/js/authenticateUserRunner.js authenticateUser "$PROXMOX_USERNAME" "$PROXMOX_PASSWORD") - RETRIES=$(($RETRIES-1)) - else - echo "Too many incorrect attempts. Exiting..." - exit 0 - fi -done - -echo "🎉 Your proxmox account, $PROXMOX_USERNAME@pve, has been authenticated" - -# Gather Container Hostname (hostname.opensource.mieweb.org) - -if [ -z "$CONTAINER_NAME" ]; then - read -p "Enter Application Name (One-Word) → " CONTAINER_NAME -fi - -HOST_NAME_EXISTS=$(ssh root@10.15.20.69 "node /etc/nginx/checkHostnameRunner.js checkHostnameExists ${CONTAINER_NAME}") - -while [ $HOST_NAME_EXISTS == 'true' ]; do - echo "Sorry! That name has already been registered. Try another name" - read -p "Enter Application Name (One-Word) → " CONTAINER_NAME - HOST_NAME_EXISTS=$(ssh root@10.15.20.69 "node /etc/nginx/checkHostnameRunner.js checkHostnameExists ${CONTAINER_NAME}") -done - -echo "✅ $CONTAINER_NAME is available" - -# Gather Container Password - -if [ -z "$CONTAINER_PASSWORD" ]; then - read -sp "Enter Container Password → " CONTAINER_PASSWORD - echo - read -sp "Confirm Container Password → " CONFIRM_PASSWORD - echo - - while [[ "$CONFIRM_PASSWORD" != "$CONTAINER_PASSWORD" || ${#CONTAINER_PASSWORD} -lt 8 ]]; do - echo "Sorry, try again. Ensure passwords are at least 8 characters." - read -sp "Enter Container Password → " CONTAINER_PASSWORD - echo - read -sp "Confirm Container Password → " CONFIRM_PASSWORD - echo - done -else - while [ ${#CONTAINER_PASSWORD} -lt 8 ]; do - echo "Sorry, try again. Ensure passwords are at least 8 characters." 
- read -sp "Enter Container Password → " CONTAINER_PASSWORD - echo - read -sp "Confirm Container Password → " CONFIRM_PASSWORD - echo - done -fi - -# Attempt to detect public keys - -echo -e "\n🔑 Attempting to Detect SSH Public Key..." - -AUTHORIZED_KEYS="/root/.ssh/authorized_keys" -RANDOM_NUM=$(shuf -i 100000-999999 -n 1) -PUB_FILE="key_$RANDOM_NUM.pub" -TEMP_PUB_FILE="/root/bin/ssh/temp_pubs/$PUB_FILE" # in case two users are running this script at the same time, they do not overwrite each other's temp files -touch "$TEMP_PUB_FILE" -DETECT_PUBLIC_KEY=$(sudo /root/bin/ssh/detectPublicKey.sh "$SSH_KEY_FP" "$TEMP_PUB_FILE") - -if [ "$DETECT_PUBLIC_KEY" == "Public key found for create-container" ]; then - echo "🔐 Public Key Found!" -else - echo "🔍 Could not detect Public Key" - - if [ -z "$PUBLIC_KEY" ]; then - read -p "Enter Public Key (Allows Easy Access to Container) [OPTIONAL - LEAVE BLANK TO SKIP] → " PUBLIC_KEY - fi - - # Check if key is valid - - while [[ "$PUBLIC_KEY" != "" && $(echo "$PUBLIC_KEY" | ssh-keygen -l -f - 2>&1 | tr -d '\r') == "(stdin) is not a public key file." ]]; do - echo "❌ \"$PUBLIC_KEY\" is not a valid key. Enter either a valid key or leave blank to skip." - read -p "Enter Public Key (Allows Easy Access to Container) [OPTIONAL - LEAVE BLANK TO SKIP] → " PUBLIC_KEY - done - - if [ "$PUBLIC_KEY" != "" ]; then - echo "$PUBLIC_KEY" > "$AUTHORIZED_KEYS" && systemctl restart ssh - echo "$PUBLIC_KEY" > "$TEMP_PUB_FILE" - sudo /root/bin/ssh/publicKeyAppendJumpHost.sh "$PUBLIC_KEY" - fi -fi - -# Get HTTP Port Container Listens On - -if [ -z "$HTTP_PORT" ]; then - read -p "Enter HTTP Port for your container to listen on (80-9999) → " HTTP_PORT -fi - -while ! [[ "$HTTP_PORT" =~ ^[0-9]+$ ]] || [ "$HTTP_PORT" -lt 80 ] || [ "$HTTP_PORT" -gt 9999 ]; do - echo "❌ Invalid HTTP Port. It must be a number between 80 and 9,999." 
- read -p "Enter HTTP Port for your container to listen on (80-9999) → " HTTP_PORT -done - -echo "✅ HTTP Port is set to $HTTP_PORT" - -# Get any other protocols - -protocol_duplicate() { - PROTOCOL="$1" - shift #remaining params are part of list - LIST="$@" - - for item in $LIST; do - if [[ "$item" == "$PROTOCOL" ]]; then - return 0 # Protocol is a duplicate - fi - done - return 1 # Protocol is not a duplicate -} - -read -p "Does your Container require any protocols other than SSH and HTTP? (y/n) → " USE_OTHER_PROTOCOLS -while [ "${USE_OTHER_PROTOCOLS^^}" != "Y" ] && [ "${USE_OTHER_PROTOCOLS^^}" != "N" ]; do - echo "Please answer 'y' for yes or 'n' for no." - read -p "Does your Container require any protocols other than SSH and HTTP? (y/n) → " USE_OTHER_PROTOCOLS -done - -RANDOM_NUM=$(shuf -i 100000-999999 -n 1) -PROTOCOL_BASE_FILE="protocol_list_$RANDOM_NUM.txt" -PROTOCOL_FILE="/root/bin/protocols/$PROTOCOL_BASE_FILE" -touch "$PROTOCOL_FILE" - -if [ "${USE_OTHER_PROTOCOLS^^}" == "Y" ]; then - LIST_PROTOCOLS=() - read -p "Enter the protocol abbreviation (e.g, LDAP for Lightweight Directory Access Protocol). Type \"e\" to exit → " PROTOCOL_NAME - while [ "${PROTOCOL_NAME^^}" != "E" ]; do - FOUND=0 #keep track if protocol was found - while read line; do - PROTOCOL_ABBRV=$(echo "$line" | awk '{print $1}') - protocol_duplicate "$PROTOCOL_ABBRV" "${LIST_PROTOCOLS[@]}" - IS_PROTOCOL_DUPLICATE=$? - if [[ "$PROTOCOL_ABBRV" == "${PROTOCOL_NAME^^}" && "$IS_PROTOCOL_DUPLICATE" -eq 1 ]]; then - LIST_PROTOCOLS+=("$PROTOCOL_ABBRV") - PROTOCOL_UNDRLYING_NAME=$(echo "$line" | awk '{print $3}') - PROTOCOL_DEFAULT_PORT=$(echo "$line" | awk '{print $2}') - echo "$PROTOCOL_ABBRV $PROTOCOL_UNDRLYING_NAME $PROTOCOL_DEFAULT_PORT" >> "$PROTOCOL_FILE" - echo "✅ Protocol ${PROTOCOL_NAME^^} added to container." - FOUND=1 #protocol was found - break - else - echo "❌ Protocol ${PROTOCOL_NAME^^} was already added to your container. Please try again." 
- FOUND=2 #protocol was a duplicate - break - fi - done < <(cat "/root/bin/protocols/master_protocol_list.txt" | grep "^${PROTOCOL_NAME^^}") - - if [ $FOUND -eq 0 ]; then #if no results found, let user know. - echo "❌ Protocol ${PROTOCOL_NAME^^} not found. Please try again." - fi - - read -p "Enter the protocol abbreviation (e.g, LDAP for Lightweight Directory Access Protocol). Type \"e\" to exit → " PROTOCOL_NAME - done -fi - -# send public key file & port map file to hypervisor and ssh, Create the Container, run port mapping script - -if [ -s $TEMP_PUB_FILE ]; then -sftp root@10.15.0.4 <> "$LOG_FILE" +} + +# --- 1. Fetch port_map.json from remote host --- +log_message "Fetching port_map.json from $REMOTE_HOST..." +if ! scp "$REMOTE_HOST:$REMOTE_FILE" "$LOCAL_FILE" >/dev/null 2>&1; then + log_message "ERROR: Could not fetch $REMOTE_FILE from $REMOTE_HOST" + exit 1 +fi +log_message "Successfully fetched $REMOTE_FILE to $LOCAL_FILE." + +# --- 2. Build list of existing hostnames --- +EXISTING_HOSTNAMES="" +for node in "${PVE_NODES[@]}"; do + log_message "Checking containers on $node..." 
+ if [[ "$node" == "localhost" ]]; then + CTIDS=$(pct list | awk 'NR>1 {print $1}' || true) + log_message "DEBUG: Local CTIDs: [${CTIDS:-}]" + for id in $CTIDS; do + hn=$(pct config "$id" 2>/dev/null | grep -i '^hostname:' | awk '{print $2}' | tr -d '[:space:]' || true) + [[ -n "$hn" ]] && EXISTING_HOSTNAMES+="$hn"$'\n' + done + else + log_message "DEBUG: Checking remote node: $node" + CTIDS_CMD="pct list | awk 'NR>1 {print \$1}'" + CTIDS_OUTPUT=$(ssh "$node" "$CTIDS_CMD" 2>&1 || true) + if [[ "$CTIDS_OUTPUT" =~ "Permission denied" || "$CTIDS_OUTPUT" =~ "Connection refused" || "$CTIDS_OUTPUT" =~ "Host key verification failed" ]]; then + log_message "ERROR: SSH to $node failed: $CTIDS_OUTPUT" + continue + fi + log_message "DEBUG: CTIDs on $node: [${CTIDS_OUTPUT:-}]" + for id in $CTIDS_OUTPUT; do + HN_CMD="pct config $id 2>/dev/null | grep -i '^hostname:' | awk '{print \$2}'" + HN_OUTPUT=$(ssh "$node" "$HN_CMD" 2>&1 || true) + if [[ "$HN_OUTPUT" =~ "Permission denied" || "$HN_OUTPUT" =~ "No such file" ]]; then + log_message "ERROR: Failed to get hostname for $id on $node: $HN_OUTPUT" + continue + fi + hn=$(echo "$HN_OUTPUT" | tr -d '[:space:]') + [[ -n "$hn" ]] && EXISTING_HOSTNAMES+="$hn"$'\n' + done + fi +done + +# Remove any empty lines from EXISTING_HOSTNAMES +EXISTING_HOSTNAMES=$(echo "$EXISTING_HOSTNAMES" | sed '/^$/d') +log_message "Existing hostnames collected:" +log_message "$EXISTING_HOSTNAMES" + +# --- 3. Prune iptables and port_map.json --- +log_message "Pruning iptables and port_map.json..." 
+cp "$LOCAL_FILE" "$LOCAL_FILE.bak" +log_message "Created backup of $LOCAL_FILE at $LOCAL_FILE.bak" + +HOSTNAMES_IN_JSON=$(jq -r 'keys[]' "$LOCAL_FILE") +mapfile -t EXISTING_ARRAY <<< "$EXISTING_HOSTNAMES" + +# Helper function to check if a hostname exists in the collected list +hostname_exists() { + local h=$(echo "$1" | tr -d '[:space:]') + for existing in "${EXISTING_ARRAY[@]}"; do + if [[ "${h,,}" == "${existing,,}" ]]; then # Case-insensitive comparison + return 0 + fi + done + return 1 +} + +for hostname in $HOSTNAMES_IN_JSON; do + trimmed_hostname=$(echo "$hostname" | tr -d '[:space:]') + if hostname_exists "$trimmed_hostname"; then + log_message "Keeping entry: $trimmed_hostname" + else + ip=$(jq -r --arg h "$hostname" '.[$h].ip // "unknown"' "$LOCAL_FILE") + ports=$(jq -c --arg h "$hostname" '.[$h].ports // {}' "$LOCAL_FILE") + log_message "Stale entry detected: $hostname (IP: $ip, Ports: $ports) - removing..." + + # --- IPTABLES REMOVAL --- + # Capture rules into an array first to avoid subshell issues with 'while read' + mapfile -t RULES_TO_DELETE < <(sudo iptables -t nat -S | grep -w "$ip" || true) # Added sudo, || true to prevent pipefail if grep finds nothing + + if [[ ${#RULES_TO_DELETE[@]} -gt 0 ]]; then + log_message "Found ${#RULES_TO_DELETE[@]} iptables rules for $hostname. Attempting removal..." + for rule in "${RULES_TO_DELETE[@]}"; do + cleaned_rule=$(echo "$rule" | sed 's/^-A /-D /') + log_message "Attempting to remove iptables rule: sudo iptables -t nat $cleaned_rule" + if sudo iptables -t nat $cleaned_rule; then + log_message "Removed iptables rule: $cleaned_rule" + else + log_message "ERROR: Failed to remove iptables rule: $cleaned_rule (Exit status: $?)" + fi + done + else + log_message "No iptables rules found for $hostname to remove." + fi + + # --- JSON ENTRY REMOVAL --- + log_message "Attempting to remove $hostname from local port_map.json..." 
+ if jq "del(.\"$hostname\")" "$LOCAL_FILE" > "${LOCAL_FILE}.tmp"; then + if mv "${LOCAL_FILE}.tmp" "$LOCAL_FILE"; then + log_message "Successfully removed $hostname from local port_map.json." + else + log_message "ERROR: Failed to move temporary file to $LOCAL_FILE for $hostname." + exit 1 # Critical failure, exit + fi + else + log_message "ERROR: jq failed to delete $hostname from $LOCAL_FILE." + exit 1 # Critical failure, exit + fi + + # Confirm deletion from local file + if jq -e --arg h "$hostname" 'has($h)' "$LOCAL_FILE" >/dev/null; then + log_message "ERROR: $hostname still exists in local port_map.json after deletion attempt!" + else + log_message "Confirmed $hostname removed from local port_map.json." + fi + fi +done + +# --- 4. Upload and verify updated file on remote --- +log_message "Uploading updated port_map.json to $REMOTE_HOST..." +TEMP_REMOTE="/tmp/port_map.json" + +if scp "$LOCAL_FILE" "$REMOTE_HOST:$TEMP_REMOTE" >/dev/null 2>&1; then + log_message "Uploaded to $REMOTE_HOST:$TEMP_REMOTE" +else + log_message "ERROR: Failed to upload $TEMP_REMOTE to $REMOTE_HOST" + exit 1 +fi + +# Check if deleted hostnames still exist in uploaded file +log_message "Verifying remote file content..." +for hostname in $HOSTNAMES_IN_JSON; do + if ! hostname_exists "$hostname"; then # Only check for hostnames that *should* have been deleted + if ssh "$REMOTE_HOST" "grep -q '\"$hostname\"' $TEMP_REMOTE"; then + log_message "WARNING: $hostname still exists in uploaded $TEMP_REMOTE on $REMOTE_HOST!" + else + log_message "Verified $hostname was removed in uploaded file on $REMOTE_HOST." + fi + fi +done + +# Move uploaded file into place on the remote host +log_message "Moving uploaded file into final position on $REMOTE_HOST..." 
+if ssh "$REMOTE_HOST" "sudo cp $TEMP_REMOTE $REMOTE_FILE && sudo chown root:root $REMOTE_FILE && sudo chmod 644 $REMOTE_FILE && rm $TEMP_REMOTE"; then + log_message "Copied updated port_map.json to $REMOTE_FILE on $REMOTE_HOST" +else + log_message "ERROR: Failed to replace $REMOTE_FILE on $REMOTE_HOST" + exit 1 +fi + +log_message "Prune complete." \ No newline at end of file diff --git a/gateway/prune_temp_files.sh b/gateway/prune_temp_files.sh new file mode 100644 index 00000000..1b171fd1 --- /dev/null +++ b/gateway/prune_temp_files.sh @@ -0,0 +1,67 @@ +#!/bin/bash +# Script to prune all temporary files (env vars, protocols, services, and public keys) +# Last Updated July 28th 2025 Maxwell Klema + +LOG_FILE="/var/log/pruneTempFiles.log" + +writeLog() { + echo "[$(date +'%Y-%m-%d %H:%M:%S')]: $1" >> "$LOG_FILE" +} + +# Function to remove temporary environment variable Folders +removeTempEnvVars() { + TEMP_ENV_FOLDER="/var/lib/vz/snippets/container-env-vars" + while read -r line; do + if [[ "$line" == /var/lib/vz/snippets/container-env-vars/env_* ]]; then + rm -rf "$line" > /dev/null 2>&1 + writeLog "Removed temporary environment variable folder: $line" + fi + done < <(find "$TEMP_ENV_FOLDER" -maxdepth 1 -type d -name "env_*") +} + +# Function to remove temporary services file +removeTempServices() { + TEMP_SERVICES_FOLDER="/var/lib/vz/snippets/container-services" + while read -r line; do + if [[ "$line" == /var/lib/vz/snippets/container-services/services_* ]]; then + rm -f "$line" + writeLog "Removed temporary services file: $line" + fi + done < <(find "$TEMP_SERVICES_FOLDER" -maxdepth 1 -type f -name "services_*") +} + +# Function to remove temporary public key files +removeTempPublicKeys() { + TEMP_PUB_FOLDER="/var/lib/vz/snippets/container-public-keys" + while read -r line; do + if [[ "$line" == /var/lib/vz/snippets/container-public-keys/key_* ]]; then + rm -f "$line" + writeLog "Removed temporary public key file: $line" + fi + done < <(find "$TEMP_PUB_FOLDER" 
-maxdepth 1 -type f -name "key_*") +} + +# Function to remove temporary protocol files +removeTempProtocols() { + TEMP_PROTOCOL_FOLDER="/var/lib/vz/snippets/container-port-maps" + while read -r line; do + if [[ "$line" == /var/lib/vz/snippets/container-port-maps/protocol_list* ]]; then + rm -f "$line" + writeLog "Removed temporary protocol file: $line" + fi + done < <(find "$TEMP_PROTOCOL_FOLDER" -maxdepth 1 -type f -name "protocol_list*") +} + +# Main function to prune all temporary files +pruneTempFiles() { + writeLog "Starting to prune temporary files..." + removeTempEnvVars + removeTempServices + removeTempPublicKeys + removeTempProtocols + writeLog "Finished pruning temporary files." +} + +# Execute the main function +pruneTempFiles +exit 0 \ No newline at end of file diff --git a/intern-phxdc-pve1/register_proxy_hook.sh b/intern-phxdc-pve1/register_proxy_hook.sh deleted file mode 100644 index a4ade333..00000000 --- a/intern-phxdc-pve1/register_proxy_hook.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash -# /var/lib/vz/snippets/register_proxy_hook.sh - -echo "DEBUG: Hook script /var/lib/vz/snippets/register_proxy_hook.sh started. Event: $2, CTID: $1" >> /tmp/hook_debug.log - -# Hook script for container events -case "$2" in - post-start) - echo "DEBUG: Calling register-container.sh for CTID: $1" >> /tmp/hook_debug.log - /var/lib/vz/snippets/register-container.sh "$1" >> /tmp/hook_debug.log 2>&1 - echo "DEBUG: register-container.sh finished." >> /tmp/hook_debug.log - ;; - *) - echo "DEBUG: Unhandled hook event: $2 for CTID: $1" >> /tmp/hook_debug.log - ;; -esac -echo "DEBUG: Hook script /var/lib/vz/snippets/register_proxy_hook.sh finished." 
>> /tmp/hook_debug.log \ No newline at end of file diff --git a/nginx reverse proxy/README.md b/nginx reverse proxy/README.md new file mode 100644 index 00000000..f28dcebe --- /dev/null +++ b/nginx reverse proxy/README.md @@ -0,0 +1 @@ +# Nginx Reverse Proxy \ No newline at end of file diff --git a/intern-nginx/nginx.conf b/nginx reverse proxy/nginx.conf similarity index 100% rename from intern-nginx/nginx.conf rename to nginx reverse proxy/nginx.conf diff --git a/intern-nginx/port_map.js b/nginx reverse proxy/port_map.js similarity index 100% rename from intern-nginx/port_map.js rename to nginx reverse proxy/port_map.js diff --git a/intern-nginx/reverse_proxy.conf b/nginx reverse proxy/reverse_proxy.conf similarity index 100% rename from intern-nginx/reverse_proxy.conf rename to nginx reverse proxy/reverse_proxy.conf