OSDOCS-986 GCP UPI shared VPC #22332
[id="installing-gcp-user-infra-vpc"]
= Installing a cluster with shared VPC on user-provisioned infrastructure in GCP by using Deployment Manager templates
include::modules/common-attributes.adoc[]
:context: installing-gcp-user-infra-vpc

toc::[]

In {product-title} version {product-version}, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP) that uses infrastructure that you provide.

The steps for performing a user-provisioned infrastructure installation into a shared VPC are outlined here. Several link:https://cloud.google.com/deployment-manager/docs[Deployment Manager] templates are provided to assist in completing these steps or to help model your own. You can also create the required resources through other methods; the templates are only an example.
.Prerequisites

* Review details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes.
* If you use a firewall and plan to use telemetry, you must xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configure the firewall to allow the sites] that your cluster requires access to.
+
[NOTE]
====
Be sure to also review this site list if you are configuring a proxy.
====
[id="csr-management-gcp-vpc"]
== Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The `kube-controller-manager` only approves the kubelet client CSRs. The `machine-approver` cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
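After installation, you can list pending CSRs with `oc get csr` and approve them with `oc adm certificate approve`. A minimal sketch follows; the `approve_pending_csrs` helper name is hypothetical, and in production you should verify each serving-certificate request against your own machine records before approving it:

```shell
# Sketch: approve every CSR that is still in Pending state (no .status set).
# Assumes `oc` is installed and logged in to the cluster; blindly approving
# serving CSRs is unsafe unless you have verified the requesting machines.
approve_pending_csrs() {
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
        | while read -r csr; do
            oc adm certificate approve "$csr"
        done
}
```

Typically you run such a loop once after the control plane machines boot and again when the worker serving certificates are requested.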
[id="installation-gcp-user-infra-config-project-vpc"]
== Configuring the GCP project that hosts your cluster

Before you can install {product-title}, you must configure a Google Cloud Platform (GCP) project to host it.
include::modules/installation-gcp-project.adoc[leveloffset=+2]
include::modules/installation-gcp-enabling-api-services.adoc[leveloffset=+2]
include::modules/installation-gcp-limits.adoc[leveloffset=+2]
include::modules/installation-gcp-service-account.adoc[leveloffset=+2]
include::modules/installation-gcp-permissions.adoc[leveloffset=+3]
include::modules/installation-gcp-regions.adoc[leveloffset=+2]
include::modules/installation-gcp-install-cli.adoc[leveloffset=+2]

include::modules/installation-gcp-user-infra-config-host-project-vpc.adoc[leveloffset=+1]
include::modules/installation-gcp-dns.adoc[leveloffset=+2]
include::modules/installation-creating-gcp-vpc.adoc[leveloffset=+2]
include::modules/installation-deployment-manager-vpc.adoc[leveloffset=+3]

include::modules/installation-user-infra-generate.adoc[leveloffset=+1]

include::modules/installation-initializing-manual.adoc[leveloffset=+2]

include::modules/installation-gcp-user-infra-shared-vpc-config-yaml.adoc[leveloffset=+2]

include::modules/installation-configure-proxy.adoc[leveloffset=+2]

//include::modules/installation-three-node-cluster.adoc[leveloffset=+2]

include::modules/installation-user-infra-generate-k8s-manifest-ignition.adoc[leveloffset=+2]

.Additional resources
[id="installation-gcp-user-infra-exporting-common-variables-vpc"]
== Exporting common variables

include::modules/installation-extracting-infraid.adoc[leveloffset=+2]
include::modules/installation-user-infra-exporting-common-variables.adoc[leveloffset=+2]

include::modules/installation-creating-gcp-lb.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-ext-lb.adoc[leveloffset=+2]
include::modules/installation-deployment-manager-int-lb.adoc[leveloffset=+2]

include::modules/installation-creating-gcp-private-dns.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-private-dns.adoc[leveloffset=+2]

include::modules/installation-creating-gcp-firewall-rules-vpc.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-firewall-rules.adoc[leveloffset=+2]

include::modules/installation-creating-gcp-iam-shared-vpc.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-iam-shared-vpc.adoc[leveloffset=+2]

include::modules/installation-gcp-user-infra-rhcos.adoc[leveloffset=+1]

include::modules/installation-creating-gcp-bootstrap.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-bootstrap.adoc[leveloffset=+2]

include::modules/installation-creating-gcp-control-plane.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-control-plane.adoc[leveloffset=+2]

include::modules/installation-gcp-user-infra-wait-for-bootstrap.adoc[leveloffset=+1]

include::modules/installation-creating-gcp-worker.adoc[leveloffset=+1]
include::modules/installation-deployment-manager-worker.adoc[leveloffset=+2]

include::modules/cli-installing-cli.adoc[leveloffset=+1]

include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]

include::modules/installation-approve-csrs.adoc[leveloffset=+1]

include::modules/installation-gcp-user-infra-adding-ingress.adoc[leveloffset=+1]
[id="installation-gcp-user-infra-vpc-adding-firewall-rules"]
== Adding ingress firewall rules

The cluster requires several firewall rules. If you do not use a shared VPC, the Ingress Controller creates these rules through the GCP cloud provider. When you use a shared VPC, you can either create cluster-wide firewall rules for all services now or create each rule based on events, when the cluster requests access. By creating each rule when the cluster requests access, you know exactly which firewall rules are required. By creating cluster-wide firewall rules, you can apply the same rule set across multiple clusters.

If you choose to create each rule based on events, you must create firewall rules after you provision the cluster and during the life of the cluster when the console notifies you that rules are missing. Events that are similar to the following event are displayed, and you must add the firewall rules that are required:

----
Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description "{\"kubernetes.io/service-name\":\"openshift-ingress/router-default\", \"kubernetes.io/service-ip\":\"35.237.236.234\"}" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`
----
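Each event embeds the exact `gcloud compute firewall-rules create` command to run. If you opt for cluster-wide rules instead, the equivalent rule covers ports 80 and 443 for all source ranges up front. The following sketch assembles such a command for review; the rule name, the `build_ingress_rule` helper, and the sample values are illustrative, not part of the official procedure:

```shell
# Assemble a cluster-wide variant of the ingress rule shown in the event
# above. INFRA_ID, NETWORK, and PROJECT are sample values for illustration;
# in a real run they come from your environment.
INFRA_ID=exampl-fqzq7
NETWORK=example-network
PROJECT=example-project

build_ingress_rule() {
    printf 'gcloud compute firewall-rules create %s-ingress-http-https --network %s --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags %s-master,%s-worker --project %s\n' \
        "$1" "$2" "$1" "$1" "$3"
}

# Print the command so it can be reviewed before executing it.
build_ingress_rule "$INFRA_ID" "$NETWORK" "$PROJECT"
```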

If you encounter issues with the event-based rules, you can configure cluster-wide firewall rules while your cluster is running.

include::modules/installation-creating-gcp-shared-vpc-cluster-wide-firewall-rules.adoc[leveloffset=+2]

//include::modules/installation-creating-gcp-shared-vpc-ingress-firewall-rules.adoc[leveloffset=+1]
include::modules/installation-gcp-user-infra-completing.adoc[leveloffset=+1]

.Next steps

* xref:../../installing/install_config/customizations.adoc#customizations[Customize your cluster].
* If necessary, you can xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting_opting-out-remote-health-reporting[opt out of remote health reporting].

@@ -3,6 +3,10 @@
// * installing/installing_gcp/installing-gcp-user-infra.adoc
// * installing/installing_gcp/installing-restricted-networks-gcp.adoc

ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
:shared-vpc:
endif::[]

[id="installation-creating-gcp-bootstrap_{context}"]
= Creating the bootstrap machine in GCP

@@ -32,15 +36,37 @@ have to contact Red Hat support with your installation logs.
section of this topic and save it as `04_bootstrap.py` on your computer. This template describes the bootstrap machine that your cluster requires.

. Export the variables that the deployment template uses:
//You need these variables before you deploy the load balancers for the shared VPC case, so the export statements that are if'd out for shared-vpc are in the load balancer module.
.. Export the control plane subnet location:
+
ifndef::shared-vpc[]
----
$ export CONTROL_SUBNET=`gcloud compute networks subnets describe ${INFRA_ID}-master-subnet --region=${REGION} --format json | jq -r .selfLink`
----
endif::shared-vpc[]

.. Export the location of the {op-system-first} image that the installation program requires:
+
----
$ export CLUSTER_IMAGE=`gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink`
----

ifndef::shared-vpc[]
.. Export each zone that the cluster uses:
+
----
$ export ZONE_0=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`
----
+
----
$ export ZONE_1=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`
----
+
----
$ export ZONE_2=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`
----
endif::shared-vpc[]
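The `jq -r .zones[0] | cut -d "/" -f9` pipeline in the exports above works because `gcloud compute regions describe` returns each zone as a full resource URL, and the bare zone name is the ninth `/`-separated field. A quick offline illustration; the URL is a sample of that shape, not real output:

```shell
# Example of the zone URL shape returned in the .zones array by
# `gcloud compute regions describe` (sample value for illustration only).
zone_url="https://www.googleapis.com/compute/v1/projects/example-project/zones/us-central1-a"

# The same extraction the ZONE_0/1/2 exports perform: take field 9 when
# the URL is split on "/".
zone=$(printf '%s\n' "$zone_url" | cut -d "/" -f9)
echo "$zone"   # prints: us-central1-a
```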

. Create a bucket and upload the `bootstrap.ign` file:
+

@@ -82,8 +108,8 @@ resources:
EOF
----
<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
<2> `region` is the region to deploy the cluster into, for example `us-central1`.
<3> `zone` is the zone to deploy the bootstrap instance into, for example `us-central1-b`.
<4> `cluster_network` is the `selfLink` URL to the cluster network.
<5> `control_subnet` is the `selfLink` URL to the control subnet.
<6> `image` is the `selfLink` URL to the {op-system} image.

@@ -96,6 +122,7 @@ EOF
----
$ gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap --config 04_bootstrap.yaml
----

ifndef::shared-vpc[]
. The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the bootstrap machine manually:
+

@@ -105,3 +132,22 @@ $ gcloud compute target-pools add-instances \
$ gcloud compute target-pools add-instances \
    ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
----
endif::shared-vpc[]

ifdef::shared-vpc[]
. Add the bootstrap instance to the internal load balancer instance group:
+
----
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-bootstrap-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-bootstrap
----

. Add the bootstrap instance group to the internal load balancer backend service:
+
----
$ gcloud compute backend-services add-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-instance-group --instance-group-zone=${ZONE_0}
----
endif::shared-vpc[]

ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
:!shared-vpc:
endif::[]

@@ -2,6 +2,11 @@
//
// * installing/installing_gcp/installing-gcp-user-infra.adoc
// * installing/installing_gcp/installing-restricted-networks-gcp.adoc
// * installing/installing_gcp/installing-gcp-user-infra-vpc.adoc

ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
:shared-vpc:
endif::[]
[id="installation-creating-gcp-control-plane_{context}"]
= Creating the control plane machines in GCP

@@ -68,8 +73,8 @@
EOF
----
<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
<2> `region` is the region to deploy the cluster into, for example `us-central1`.
<3> `zones` are the zones to deploy the control plane machines into, for example `us-central1-a`, `us-central1-b`, and `us-central1-c`.
<4> `control_subnet` is the `selfLink` URL to the control subnet.
<5> `image` is the `selfLink` URL to the {op-system} image.
<6> `machine_type` is the machine type of the instance, for example `n1-standard-4`.

@@ -85,6 +90,7 @@ $ gcloud deployment-manager deployments create ${INFRA_ID}-control-plane --confi
. The templates do not manage DNS entries due to limitations of Deployment Manager, so you must add the etcd entries manually:
+
ifndef::shared-vpc[]
----
$ export MASTER0_IP=`gcloud compute instances describe ${INFRA_ID}-m-0 --zone ${ZONE_0} --format json | jq -r .networkInterfaces[0].networkIP`
$ export MASTER1_IP=`gcloud compute instances describe ${INFRA_ID}-m-1 --zone ${ZONE_1} --format json | jq -r .networkInterfaces[0].networkIP`

@@ -101,7 +107,27 @@ $ gcloud dns record-sets transaction add \
    --name _etcd-server-ssl._tcp.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type SRV --zone ${INFRA_ID}-private-zone
$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
----
endif::shared-vpc[]
ifdef::shared-vpc[]
----
$ export MASTER0_IP=`gcloud compute instances describe ${INFRA_ID}-m-0 --zone ${ZONE_0} --format json | jq -r .networkInterfaces[0].networkIP`
$ export MASTER1_IP=`gcloud compute instances describe ${INFRA_ID}-m-1 --zone ${ZONE_1} --format json | jq -r .networkInterfaces[0].networkIP`
$ export MASTER2_IP=`gcloud compute instances describe ${INFRA_ID}-m-2 --zone ${ZONE_2} --format json | jq -r .networkInterfaces[0].networkIP`
$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
$ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction add ${MASTER0_IP} --name etcd-0.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction add ${MASTER1_IP} --name etcd-1.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction add ${MASTER2_IP} --name etcd-2.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction add \
    "0 10 2380 etcd-0.${CLUSTER_NAME}.${BASE_DOMAIN}." \
    "0 10 2380 etcd-1.${CLUSTER_NAME}.${BASE_DOMAIN}." \
    "0 10 2380 etcd-2.${CLUSTER_NAME}.${BASE_DOMAIN}." \
    --name _etcd-server-ssl._tcp.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type SRV --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}
----
endif::shared-vpc[]
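The quoted values in the SRV transaction above follow the DNS SRV record data format `priority weight port target`: priority `0`, weight `10`, the etcd peer port `2380`, and one target per control plane host. A small helper, hypothetical and for illustration only, that generates the three values:

```shell
# Generate the three SRV record values used in the transaction above.
etcd_srv_records() {
    cluster=$1
    domain=$2
    for i in 0 1 2; do
        printf '0 10 2380 etcd-%s.%s.%s.\n' "$i" "$cluster" "$domain"
    done
}

etcd_srv_records mycluster example.com
```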

ifndef::shared-vpc[]
. The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually:
+

@@ -113,3 +139,32 @@ $ gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instan
$ gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1
$ gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2
----
endif::shared-vpc[]
ifdef::shared-vpc[]
. The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually.
** For an internal cluster, use the following commands:
+
----
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-m-0
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-instance-group --zone=${ZONE_1} --instances=${INFRA_ID}-m-1
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-instance-group --zone=${ZONE_2} --instances=${INFRA_ID}-m-2
----

** For an external cluster, use the following commands:
+
----
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-m-0
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-instance-group --zone=${ZONE_1} --instances=${INFRA_ID}-m-1
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-instance-group --zone=${ZONE_2} --instances=${INFRA_ID}-m-2

$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-m-0
$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1
$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2
----
endif::shared-vpc[]
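The instance group names above follow the `<INFRA_ID>-master-<zone>-instance-group` pattern that the internal load balancer template creates, one group per zone. A helper, illustrative only, that derives those names:

```shell
# Derive the per-zone unmanaged instance group names used above from the
# infrastructure ID and a list of zones.
master_instance_groups() {
    infra_id=$1
    shift
    for zone in "$@"; do
        printf '%s-master-%s-instance-group\n' "$infra_id" "$zone"
    done
}

master_instance_groups exampl-fqzq7 us-central1-a us-central1-b us-central1-c
```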

ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
:!shared-vpc:
endif::[]