diff --git a/dev/env/defaults/02-infra.env b/dev/env/defaults/02-infra.env
index 6283db9ea5..8b22e85c23 100644
--- a/dev/env/defaults/02-infra.env
+++ b/dev/env/defaults/02-infra.env
@@ -12,7 +12,7 @@ is_infra_cluster() {
         echo "Can't auto-detect if the selected cluster is an infra cluster because the infractl tool is not installed."
         return 1
     fi
-    if infractl list --json | jq -r '.Clusters[].URL' | grep -q $(oc whoami --show-console); then
+    if infractl list --json | jq -r '.Clusters[].URL' 2>/dev/null | grep -q $(oc whoami --show-console); then
         return 0
     fi
 fi
diff --git a/docs/development/setup-developer-osd-cluster.md b/docs/development/setup-developer-osd-cluster.md
deleted file mode 100644
index f49a4e6f2e..0000000000
--- a/docs/development/setup-developer-osd-cluster.md
+++ /dev/null
@@ -1,361 +0,0 @@
-# How-To setup developer OSD cluster (step by step copy/paste guide)
-
-### Pre-requirements
-
-You will require several commands in order to use simple copy/paste.
-1. `jq` and `yq` - JSON and YAML query CLI tools.
-2. `bw` - BitWarden CLI. We need this to get values from BitWarden directly without paste/copy.
-3. `ocm` - Openshift cluster manager CLI tool. We need it to create OSD cluster and manage it.
-4. `oc` - Openshift cluster CLI tool (similar to kubectl). We need it to deploy resource into OSD cluster.
-5. `ktunnel` - Reverse proxy to proxy service from kubernetes to local machine. You can find more info here: https://github.com/omrikiei/ktunnel
-6. `watch` - (optional) To repeatedly executes specific command.
-7. `grpcurl` - (optional) Requirement for execute gRPC calls.
-
-Additionally, you will also require `quay.io` credentials.
-
-### Intro
-
-All commands should be executed in root directory of `stackrox/acs-fleet-manager` project.
-
-### Create development OSD Cluster
-
-1. Create development OSD Cluster with `ocm`
-
-Export name for your cluster. Prefix it with your initials or something similar to avoid name collisions. i.e. `mt-osd-1307`
-```
-export OSD_CLUSTER_NAME=""
-```
-
-To create development OSD cluster in OCM staging platform, you should login to staging platform. You should use `rhacs-managed-service-dev` account. To retrieve token required to login via `ocm` command,
-you have to go to: https://console.redhat.com/openshift/token/show# - login there as `rhacs-managed-service-dev`. You can find `rhacs-managed-service-dev` login credentials in BitWarden.
-
-The `ocm` command is aware of differences and defining `--url staging` is all what is required in order to login to OCM staging platform.
-```
-ocm login --url staging --token="
-```
-Staging UI is accessible on this URL: https://qaprodauth.cloud.redhat.com
-
-To ensure that we have enough quota on the account, you can run the following command and see the output:
-```
-ocm list quota | grep -E "QUOTA|osd"
-```
-
-Create cluster with `ocm` command
-```
-# Get AWS Keys from BitWarden
-export AWS_REGION="us-east-1"
-export AWS_ACCOUNT_ID=$(bw get item "23a0e6d6-7b7d-44c8-b8d0-aecc00e1fa0a" | jq '.fields[] | select(.name | contains("AccountID")) | .value' --raw-output)
-export AWS_ACCESS_KEY_ID=$(bw get item "23a0e6d6-7b7d-44c8-b8d0-aecc00e1fa0a" | jq '.fields[] | select(.name | contains("AccessKeyID")) | .value' --raw-output)
-export AWS_SECRET_ACCESS_KEY=$(bw get item "23a0e6d6-7b7d-44c8-b8d0-aecc00e1fa0a" | jq '.fields[] | select(.name | contains("SecretAccessKey")) | .value' --raw-output)
-
-# Execute creation command
-ocm create cluster \
-  --ccs \
-  --aws-access-key-id "${AWS_ACCESS_KEY_ID}" \
-  --aws-account-id "${AWS_ACCOUNT_ID}" \
-  --aws-secret-access-key "${AWS_SECRET_ACCESS_KEY}" \
-  --region "${AWS_REGION}" \
-  --multi-az \
-  --compute-machine-type "m5a.xlarge" \
-  --version "4.11.2" \
-  "${OSD_CLUSTER_NAME}"
-```
-
-You will see output of command. Output should contain "ID" of the cluster. Export that ID to `CLUSTER_ID` environment variable.
-```
-export CLUSTER_ID=""
-```
-
-Now, you have to wait for cluster to be provisioned. Check status of cluster creation:
-```
-watch --interval 10 ocm cluster status ${CLUSTER_ID}
-```
-
-2. Add auth provider for OSD cluster
-
-This is required in order to be able to log-in to cluster. In UI or with `oc` command. You can pick your own admin pass, here we use `md5`.
-If you need password for UI login, be sure to store it somewhere.
-```
-export OSD_ADMIN_USER="osd-admin"
-export OSD_ADMIN_PASS=$(date | md5)
-echo $OSD_ADMIN_PASS > ./tmp-osd-admin-pass.txt
-
-ocm create idp \
-  --cluster "${CLUSTER_ID}" \
-  --type htpasswd \
-  --name HTPasswd \
-  --username "${OSD_ADMIN_USER}" \
-  --password "${OSD_ADMIN_PASS}"
-
-ocm create user \
-  --group cluster-admins \
-  --cluster "${CLUSTER_ID}" \
-  "${OSD_ADMIN_USER}"
-```
-
-3. Login to OSD cluster with `ocm` command. This will automatically set the correct context for the `oc` command.
-```
-ocm cluster login "${CLUSTER_ID}"
-```
-If login step fails, it can be the case that previously created auth provider and user are not applied yet on the cluster. You can wait few seconds and try again.
-
-### Prepare cluster for RHACS Operator
-
-4. Export defaults
-```
-export RHACS_OPERATOR_CATALOG_VERSION="3.71.0"
-export RHACS_OPERATOR_CATALOG_NAME="redhat-operators"
-```
-
-5. Check if the latest version of available ACS Operator is high enough for you. If that is OK for you, you can skip next steps prefixed with `(ACS operator from branch)`.
-
-Execute the following command in separate terminal (new shell).
-```
-oc port-forward -n openshift-marketplace svc/redhat-operators 50051:50051
-```
-If port-forward step fails with `Unable to connect to the server: x509: certificate signed by unknown authority`, wait few seconds and try again.
-
-```
-grpcurl -plaintext -d '{"name":"rhacs-operator"}' localhost:50051 api.Registry/GetPackage | jq '.channels[0].csvName'
-```
-You can stop port-forward after this.
-
-6. (ACS operator from branch) Prepare pull secret
-**Important** This will change cluster wide pull secrets. It's not advised to use on clusters where credentials can be compromized.
-
-**Pay attention:** `docker-credential-osxkeychain` is specific for MacOS. For Linux please check `docker-credential-secretservice`.
-```
-export QUAY_REGISTRY_AUTH_BASIC=$(docker-credential-osxkeychain get <<<"https://quay.io" | jq -r '"\(.Username):\(.Secret)"')
-
-oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > ./tmp-pull-secret.json
-oc registry login --registry="quay.io/rhacs-eng" --auth-basic="${QUAY_REGISTRY_AUTH_BASIC}" --to=./tmp-pull-secret.json
-oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./tmp-pull-secret.json
-```
-
-7. (ACS operator from branch) Deploy catalog
-
-You should find catalog build from your branch or from master branch of `stackrox/stackrox` repository. You should look at CircleCI job with name `build-operator` and step `Build and push images for quay.io/rhacs-eng`. In log, you can find image tag. Something like `v3.71.0-16-g3f8fcd60c6`. Export that value without `v`
-```
-export RHACS_OPERATOR_CATALOG_VERSION=""
-```
-
-Run the following command to register new ACS Observability operator catalog.
-```
-export RHACS_OPERATOR_CATALOG_NAME="rhacs-operators"
-
-oc apply -f - < "./${CLUSTER_ID}.yaml"
----
-clusters:
-  - name: '${OC_CURRENT_CONTEXT}'
-    cluster_id: '${CLUSTER_ID}'
-    cloud_provider: aws
-    region: ${AWS_REGION}
-    schedulable: true
-    status: ready
-    multi_az: true
-    central_instance_limit: 10
-    provider_type: standalone
-    supported_instance_type: "eval,standard"
-    cluster_dns: '${OSD_CLUSTER_NAME}.${OSD_CLUSTER_DOMAIN}'
-EOF
-```
-
-15. Build, setup and start local fleet manager
-
-Execute the following command in separate terminal (new shell). Ensure that you have same exported `CLUSTER_ID`.
-```
-# Build binary
-make binary
-
-# Setup DB
-make db/teardown db/setup db/migrate
-
-# Start local fleet manager
-./fleet-manager serve --dataplane-cluster-config-file "./${CLUSTER_ID}.yaml"
-```
-
-### Install central
-
-16. Prepare default values
-```
-# Copy static token from BitWarden
-export STATIC_TOKEN=$(bw get item "64173bbc-d9fb-4d4a-b397-aec20171b025" | jq '.fields[] | select(.name | contains("JWT")) | .value' --raw-output)
-
-export AWS_REGION="us-east-1"
-```
-
-17. Call curl to install central
-
-```
-export CENTRAL_ID=$(curl --location --request POST "http://localhost:8000/api/rhacs/v1/centrals?async=true" --header "Content-Type: application/json" --header "Accept: application/json" --header "Authorization: Bearer ${STATIC_TOKEN}" --data-raw "{\"name\":\"test-on-cluster\",\"cloud_provider\":\"aws\",\"region\":\"${AWS_REGION}\",\"multi_az\":true}" | jq '.id' --raw-output)
-```
-
-18. Check if new namespace is created and if all pods are up and running
-```
-export CENTRAL_NAMESPACE="${NAMESPACE}-${CENTRAL_ID}"
-
-oc get pods --namespace "${CENTRAL_NAMESPACE}"
-```
-
-### Install sensor to same data plane cluster where central is installed
-
-19. Fetch sensor configuration
-```
-export ROX_ADMIN_PASSWORD=$(oc get secrets -n "${CENTRAL_NAMESPACE}" central-htpasswd -o yaml | yq .data.password | base64 --decode)
-roxctl sensor generate openshift --openshift-version=4 --endpoint "https://central-${CENTRAL_NAMESPACE}.apps.${OSD_CLUSTER_NAME}.${OSD_CLUSTER_DOMAIN}:443" --insecure-skip-tls-verify -p "${ROX_ADMIN_PASSWORD}" --admission-controller-listen-on-events=false --disable-audit-logs=true --central="https://central-${CENTRAL_NAMESPACE}.apps.${OSD_CLUSTER_NAME}.${OSD_CLUSTER_DOMAIN}:443" --collection-method=none --name osd-cluster-sensor
-```
-
-20. Install sensor
-
-This step requires `quay.io` username and password. Have that prepared.
-```
-./sensor-osd-cluster-sensor/sensor.sh
-```
-
-21. Check that sensor is up and running
-
-Sensor uses `stackrox` namespace by default.
-```
-oc get pods -n stackrox
-```
-
-### Run local front-end (UI project)
-
-The front-end is located in the following repo: https://github.com/RedHatInsights/acs-ui. Clone that repo locally.
-
-22. Prepare `/etc/hosts` file. Add development host to the hosts file. The grep command ensures that entry is added only once.
-```
-sudo sh -c 'grep -qxF "127.0.0.1 stage.foo.redhat.com" /etc/hosts || echo "127.0.0.1 stage.foo.redhat.com" >> /etc/hosts'
-```
-**Note:** If you are unsure what the command will do, be free to manually add the entry `127.0.0.1 stage.foo.redhat.com` in the `/etc/hosts` file.
-
-23. Install the UI project
-
-Execute the following commands in the root directory of the UI project:
-```
-npm install
-```
-
-24. Start the UI project
-
-Execute the following commands in the root directory of the UI project:
-```
-export FLEET_MANAGER_API_ENDPOINT=http://localhost:8000
-
-npm run start:beta
-```
-After that, you can open the following URL in your browser: https://stage.foo.redhat.com:1337/beta/application-services/acs
-
-**Note:** Since staging External RedHat SSO is used for authentication, you may have to create your personal account.
-
-### Extend development OSD cluster lifetime to 7 days
-
-By default, staging cluster will be up for 2 days. You can extend it to 7 days. To do that, execute the following command for MacOS:
-```
-echo "{\"expiration_timestamp\":\"$(date -v+7d -u +'%Y-%m-%dT%H:%M:%SZ')\"}" | ocm patch "/api/clusters_mgmt/v1/clusters/${CLUSTER_ID}"
-```
-
-Or on Linux:
-```
-echo "{\"expiration_timestamp\":\"$(date --iso-8601=seconds -d '+7 days')\"}" | ocm patch "/api/clusters_mgmt/v1/clusters/${CLUSTER_ID}"
-```
-
-### Re-deploy new Fleetshard synchronizer
-
-To deploy a new build of Fleetshard synchronizer, you can simply re-build and push the image and after that rollout restart of deployment is sufficient.
-```
-GOARCH=amd64 GOOS=linux CGO_ENABLED=0 make image/build/push/internal
-oc rollout restart -n "${NAMESPACE}" deployment fleetshard-sync
-```
-
-### Re-start new local Fleetshard manager
-
-```
-make binary
-./fleet-manager serve --dataplane-cluster-config-file "./${CLUSTER_ID}.yaml"
-```
diff --git a/docs/development/setup-developer-rosa-cluster.md b/docs/development/setup-developer-rosa-cluster.md
new file mode 100644
index 0000000000..fa774568c7
--- /dev/null
+++ b/docs/development/setup-developer-rosa-cluster.md
@@ -0,0 +1,221 @@
+# How-To set up a developer ROSA cluster (step-by-step copy/paste guide)
+
+### Prerequisites
+
+You will require several CLI tools in order to follow this copy/paste guide.
+1. [rosa](https://console.redhat.com/openshift/downloads) - CLI for Red Hat OpenShift Service on AWS.
+1. [aws-saml.py](https://gitlab.corp.redhat.com/compute/aws-automation) - helper tool for authenticating in AWS using SAML.
+1. `bw` - BitWarden CLI. We need this to get values from BitWarden directly without copy/paste.
+1. `oc` - OpenShift cluster CLI tool (similar to kubectl). We need it to deploy resources into the ROSA cluster.
+1. `ocm` - OpenShift cluster manager CLI tool. We need it to extend cluster lifetime and create Centrals.
+1. `roxctl` - StackRox CLI for managing Central and downloading cluster registration secrets.
+
+### Intro
+
+All commands should be executed in the root directory of the `stackrox/acs-fleet-manager` project.
+
+### Create development ROSA Cluster with `rosa` CLI
+
+1. Login to OCM staging
+
+   Export a name for your cluster. Prefix it with your initials or something similar to avoid name collisions, e.g. `mt-rosa-1307`:
+   ```shell
+   ROSA_CLUSTER_NAME="johndoe-test" # use your name
+   ```
+
+   To create a development ROSA cluster on the OCM staging platform, log in to staging using the `rhacs-managed-service-dev` account.
+   To log in via the `rosa` command:
+   ```shell
+   rosa login --url staging --use-device-code
+   ```
+   Follow the instructions to open the browser and enter the code from the command output. Log in there as `rhacs-managed-service-dev`. You can find the `rhacs-managed-service-dev` login credentials in BitWarden.
+   The `rosa` command is aware of the environment differences; passing `--url staging` is all that is required to log in to the OCM staging platform.
+
+1. Login to AWS dev account
+
+   Run `aws-saml.py` and select the dev account (see [secret-management.md](./secret-management.md)).
+
+1. Select the desired AWS region
+   ```shell
+   export AWS_REGION="us-east-1"
+   ```
+
+1. Preflight checks
+
+   Double-check the selected AWS account, AWS region, OCM user and OCM staging API:
+   ```shell
+   rosa whoami
+   ```
+
+1. Create cluster
+   ```shell
+   rosa create cluster --cluster-name "${ROSA_CLUSTER_NAME}" --sts --mode auto
+   ```
+   Follow the interactive instructions.
+   If prompted for the IAM account roles, create them:
+   ```shell
+   rosa create account-roles
+   ```
+   Now you have to wait for the cluster to be provisioned. Check the status of cluster creation:
+   ```shell
+   rosa logs install -c "${ROSA_CLUSTER_NAME}" --watch
+   ```
+
+1. Create admin user
+
+   This is required in order to be able to log in to the cluster in the UI or with the `oc` command.
+
+   ```shell
+   rosa create admin -c $ROSA_CLUSTER_NAME
+   ```
+
+   The command will output the `oc login` command containing the cluster kube API URL, username and password.
+   Please store this generated password securely. If you lose this password you can delete and recreate the cluster admin user.
+
+1. Login to the cluster
+
+   Use the command output from the previous step:
+   ```shell
+   oc login --username cluster-admin --password <..generated..>
+   ```
+   If the login step fails, it can be the case that the previously created admin user is not applied yet on the cluster. Wait a few seconds and try again.
+
+### Deploy ACSCS
+
+```shell
+export CLUSTER_TYPE=infra-openshift
+make deploy/bootstrap deploy/dev
+```
+See [setup-test-environment.md](setup-test-environment.md) for more info.
+
+### Install central
+1. Prerequisites:
+   1. [step](https://github.com/smallstep/cli) CLI
+   2. `roxctl`
+1. Log in with your personal account to **stage** RH SSO and capture the OAuth token
+   ```shell
+   OAUTH_TOKEN=$(step oauth --bare \
+     --client-id="cloud-services" \
+     --provider="https://sso.stage.redhat.com/auth/realms/redhat-external" \
+     --scope="openid")
+   ```
+1. Create Central
+   ```shell
+   curl -X POST -H "Authorization: Bearer ${OAUTH_TOKEN}" -H "Content-Type: application/json" \
+     http://127.0.0.1:8000/api/rhacs/v1/centrals\?async\=true \
+     -d '{"name": "rosa-test", "multi_az": true, "cloud_provider": "standalone", "region": "standalone"}'
+   ```
+1. Capture the `id` JSON field from the command output
+   ```shell
+   CENTRAL_ID=
+   ```
+1. Set the `CENTRAL_NAMESPACE` environment variable
+   ```shell
+   CENTRAL_NAMESPACE="rhacs-${CENTRAL_ID}"
+   ```
+1. Check that the new namespace is created and all pods are up and running
+   ```shell
+   oc get pods -n "$CENTRAL_NAMESPACE"
+   ```
+1. Set the `CENTRAL_ENDPOINT` environment variable
+   ```shell
+   CENTRAL_ENDPOINT="$(oc get route managed-central-reencrypt -n $CENTRAL_NAMESPACE -o jsonpath="{.spec.host}"):443"
+   ```
+
+### Install sensor to the same data plane cluster where central is installed
+1. Login to Central
+   ```shell
+   roxctl --endpoint "${CENTRAL_ENDPOINT}" --insecure-skip-tls-verify central login
+   ```
+1. Generate a CRS (cluster registration secret)
+   ```shell
+   roxctl --endpoint "${CENTRAL_ENDPOINT}" --insecure-skip-tls-verify central crs generate rosa-test-secured-cluster --output /tmp/rosa-test-secured-cluster-crs.yaml
+   ```
+1. Install sensor
+   ```shell
+   oc create ns rhacs-secured-cluster
+   oc apply -n rhacs-secured-cluster -f /tmp/rosa-test-secured-cluster-crs.yaml
+   oc apply -n rhacs-secured-cluster -f - <<EOF
+   EOF
+   ```
+
+### Run local front-end (UI project)
+
+1. Prepare the `/etc/hosts` file. Add the development host to the hosts file. The grep command ensures that the entry is added only once.
+   ```
+   sudo sh -c 'grep -qxF "127.0.0.1 stage.foo.redhat.com" /etc/hosts || echo "127.0.0.1 stage.foo.redhat.com" >> /etc/hosts'
+   ```
+   **Note:** If you are unsure what the command will do, feel free to manually add the entry `127.0.0.1 stage.foo.redhat.com` to the `/etc/hosts` file.
+1. Install the UI project.
+   Execute the following commands in the root directory of the UI project:
+   ```
+   npm install
+   ```
+1. Start the UI project. Execute the following commands in the root directory of the UI project:
+   ```
+   export FLEET_MANAGER_API_ENDPOINT=http://localhost:8000
+
+   npm run start:beta
+   ```
+   After that, you can open the following URL in your browser: https://stage.foo.redhat.com:1337/beta/application-services/acs
+
+   **Note:** Since the staging external Red Hat SSO is used for authentication, you may have to create your personal account.
+
+### Extend development ROSA cluster lifetime to 7 days
+
+By default, a staging cluster will be up for 2 days. You can extend this to 7 days.
+
+Determine the cluster's ID:
+```shell
+rosa describe cluster -c $ROSA_CLUSTER_NAME
+```
+Capture the ID value:
+```shell
+CLUSTER_ID=
+```
+
+Execute the following command on macOS:
+```
+echo "{\"expiration_timestamp\":\"$(date -v+7d -u +'%Y-%m-%dT%H:%M:%SZ')\"}" | ocm patch "/api/clusters_mgmt/v1/clusters/${CLUSTER_ID}"
+```
+
+Or on Linux:
+```
+echo "{\"expiration_timestamp\":\"$(date --iso-8601=seconds -d '+7 days')\"}" | ocm patch "/api/clusters_mgmt/v1/clusters/${CLUSTER_ID}"
+```
+
+## See also
+1. [ROSA quick start guide](https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/install_clusters/rosa-hcp-quickstart-guide)
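The `2>/dev/null` added to the `is_infra_cluster()` pipeline in `02-infra.env` matters when `infractl list --json` emits something `jq` cannot parse (for example an error message): the parse error no longer leaks to the terminal, and the `grep` simply fails to match, so the function falls through cleanly. A minimal sketch of that behavior, using a fake payload and a placeholder console URL in place of real `infractl`/`oc whoami --show-console` output:

```shell
#!/usr/bin/env bash
# Sketch of the fixed pipeline from is_infra_cluster().
# "not valid json" stands in for the output of a failing `infractl list --json`;
# the URL below stands in for `oc whoami --show-console`. Both are placeholders.
fake_infractl_output='not valid json'
console_url='https://console.example.com'

# With 2>/dev/null, jq's parse error stays off the terminal; grep sees empty
# input, matches nothing, and the else branch is taken.
if echo "$fake_infractl_output" | jq -r '.Clusters[].URL' 2>/dev/null | grep -q "$console_url"; then
  echo "infra cluster"
else
  echo "not an infra cluster"
fi
```

Running the sketch prints `not an infra cluster` with no jq error noise, which is exactly the silent-failure behavior the diff introduces.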