From d9d6e4bd80cedca7b59223e185ee002e24499914 Mon Sep 17 00:00:00 2001 From: "W. Trevor King" Date: Thu, 18 Apr 2019 10:21:51 -0700 Subject: [PATCH 1/5] Drop AWS UPI control-plane Machines and compute MachineSets Folks are free to opt-in to the machine API during a UPI flow, but creating Machine(Set)s that match their host environment requires matching a few properties (subnet, securityGroups, ...). Our default templates are unlikely to do that out of the box, so just remove them with the standard flow. Users who want to wade in can do so, and I've adjusted our CloudFormation templates to set the same tags as our IPI assets to make this easier. But with the rm call, other folks don't have to worry about broken Machine(Set)s in their cluster confusing the machine API or other admins. The awkward join syntax for subnet names is because YAML doesn't support nesting !s [1]: You can't nest short form functions consecutively, so a pattern like !GetAZs !Ref is invalid. Also fix a few unrelated nits, e.g. the unused VpcId property in 06_cluster_worker_node.yaml. [1]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getavailabilityzones.html#w2ab1c21c24c36c17b8 --- docs/user/aws/install_upi.md | 195 ++++++++--------------------------- 1 file changed, 45 insertions(+), 150 deletions(-) diff --git a/docs/user/aws/install_upi.md b/docs/user/aws/install_upi.md index a14d9b9d7fe..2a4b1e8c697 100644 --- a/docs/user/aws/install_upi.md +++ b/docs/user/aws/install_upi.md @@ -6,19 +6,40 @@ resources through other methods; the CloudFormation templates are just an exampl ## Create Ignition Configs -The machines will be started manually. Therefore, it is required to generate the bootstrap and machine Ignition configs -and store them for later steps. +The machines will be started manually. +Therefore, it is required to generate the bootstrap and machine Ignition configs and store them for later steps. 
+Use [a staged install](../overview.md#multiple-invocations) to remove the control-plane Machines and compute MachineSets, because we'll be providing those ourselves and don't want to involve [the machine-API operator][machine-api-operator]. ```console -$ openshift-install-linux-amd64 create ignition-configs +$ openshift-install create install-config ? SSH Public Key /home/user_id/.ssh/id_rsa.pub ? Platform aws -? Region us-east-1 +? Region us-east-2 ? Base Domain example.com ? Cluster Name openshift ? Pull Secret [? for help] ``` +Create manifests to get access to the control-plane Machines and compute MachineSets: + +```console +$ openshift-install create manifests +INFO Consuming "Install Config" from target directory +``` + +From the manifest assets, remove the control-plane Machines and the compute MachineSets: + +```console +$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machinesets-*.yaml +``` + +You are free to leave the compute MachineSets in if you want to create compute machines via the machine API, but if you do you may need to update the various references (`subnet`, etc.) to match your environment. +Now we can create the bootstrap Ignition configs: + +```console +$ openshift-install create ignition-configs +``` + After running the command, several files will be available in the directory. ```console @@ -114,162 +135,35 @@ INFO Waiting up to 30m0s for the bootstrap-complete event... At this point, you should delete the bootstrap resources. If using the CloudFormation template, you would [delete the stack][delete-stack] created for the bootstrap to clean up all the temporary resources. -## Cleanup Machine API Resources - -By querying the Machine API, you'll notice the cluster is attempting to reconcile the predefined -Machine and MachineSet definitions. We will begin to correct that here. In this step, we delete -the pre-defined master nodes. Our masters are not controlled by the Machine API. 
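As a sanity check for the `rm` invocation added above: the glob removes only the control-plane Machine and compute MachineSet manifests, leaving the installer's other manifests in place. A quick sketch with placeholder files (the file names are illustrative, following the installer's naming scheme; real installs will have different counts and suffixes):

```shell
# Work in a scratch directory so nothing real is touched.
cd "$(mktemp -d)"

# Recreate a plausible manifests layout with empty placeholder files.
mkdir -p openshift
touch openshift/99_openshift-cluster-api_master-machines-0.yaml \
      openshift/99_openshift-cluster-api_master-machines-1.yaml \
      openshift/99_openshift-cluster-api_worker-machinesets-0.yaml \
      openshift/99_openshift-cluster-api_master-user-data-secret.yaml

# The removal from the docs: only Machines and compute MachineSets go.
rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml \
      openshift/99_openshift-cluster-api_worker-machinesets-*.yaml

# The user-data secret manifest (and anything else) survives.
ls openshift/
```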
-
-### Example: Deleting Master Machine Definitions
-
-```console
-$ export KUBECONFIG=auth/kubeconfig
-$ oc get machines --namespace openshift-machine-api
-NAME INSTANCE STATE TYPE REGION ZONE AGE
-test-tkh7l-master-0 m4.xlarge us-east-2 us-east-2a 9m22s
-test-tkh7l-master-1 m4.xlarge us-east-2 us-east-2b 9m22s
-test-tkh7l-master-2 m4.xlarge us-east-2 us-east-2c 9m21s
-test-tkh7l-worker-us-east-2a-qjcxq m4.large us-east-2 us-east-2a 8m6s
-test-tkh7l-worker-us-east-2b-nq8zs m4.large us-east-2 us-east-2b 8m6s
-test-tkh7l-worker-us-east-2c-ww6c6 m4.large us-east-2 us-east-2c 8m7s
-$ oc delete machine --namespace openshift-machine-api test-tkh7l-master-0
-machine.machine.openshift.io "test-tkh7l-master-0" deleted
-$ oc delete machine --namespace openshift-machine-api test-tkh7l-master-1
-machine.machine.openshift.io "test-tkh7l-master-1" deleted
-$ oc delete machine --namespace openshift-machine-api test-tkh7l-master-2
-machine.machine.openshift.io "test-tkh7l-master-2" deleted
-```
-
-## Launch Additional Worker Nodes
-
-To launch workers, you are able to launch individual EC2 instances discretely or by automated processes outside the
-cluster (e.g. Auto Scaling Groups). However, you are also able to take advantage of the built in cluster scaling mechanisms
-and the machine API in OCP.
-
-### Option 1: Dynamic Compute using Machine API
+## Launch Additional Compute Nodes
-By default, MachineSets are created and will have failed to launch. We can correct the desired subnet filter,
-target security group, RHEL CoreOS AMI and EC2 instance profile.
+You may create compute nodes by launching individual EC2 instances discretely or by automated processes outside the cluster (e.g. Auto Scaling Groups).
+You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift, as mentioned [above](#create-ignition-configs).
+In this example, we'll manually launch instances via the CloudFormation template [here](../../../upi/aws/cloudformation/06_cluster_worker_node.yaml).
+You can launch a CloudFormation stack to manage each individual compute node (you should launch at least two for a high-availability ingress router).
+A similar launch configuration could be used by outside automation or AWS auto scaling groups.
-```console
-$ oc get machinesets --namespace openshift-machine-api
-NAME DESIRED CURRENT READY AVAILABLE AGE
-test-tkh7l-worker-us-east-2a 1 1 11m
-test-tkh7l-worker-us-east-2b 1 1 11m
-test-tkh7l-worker-us-east-2c 1 1 11m
-```
-
-```console
-$ oc get machineset --namespace openshift-machine-api test-tkh7l-worker-us-east-2a -o yaml
-apiVersion: machine.openshift.io/v1beta1
-kind: MachineSet
-metadata:
- creationTimestamp: 2019-03-14T14:03:03Z
- generation: 1
- labels:
- machine.openshift.io/cluster-api-cluster: test-tkh7l
- machine.openshift.io/cluster-api-machine-role: worker
- machine.openshift.io/cluster-api-machine-type: worker
- name: test-tkh7l-worker-us-east-2a
- namespace: openshift-machine-api
- resourceVersion: "2350"
- selfLink: /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/test-tkh7l-worker-us-east-2a
- uid: e2a6c8a6-4661-11e9-a9b0-0296069fd3a2
-spec:
- replicas: 1
- selector:
- matchLabels:
- machine.openshift.io/cluster-api-cluster: test-tkh7l
- machine.openshift.io/cluster-api-machineset: test-tkh7l-worker-us-east-2a
- template:
- metadata:
- creationTimestamp: null
- labels:
- machine.openshift.io/cluster-api-cluster: test-tkh7l
- machine.openshift.io/cluster-api-machine-role: worker
- machine.openshift.io/cluster-api-machine-type: worker
- machine.openshift.io/cluster-api-machineset: test-tkh7l-worker-us-east-2a
- spec:
- metadata:
- creationTimestamp: null
- providerSpec:
- value:
- ami:
- id: ami-0eecbb884c8b35b1e
- apiVersion: awsproviderconfig.openshift.io/v1beta1
- blockDevices:
- - ebs:
- iops: 0
- volumeSize: 120
- 
volumeType: gp2 - credentialsSecret: - name: aws-cloud-credentials - deviceIndex: 0 - iamInstanceProfile: - id: test-tkh7l-worker-profile - instanceType: m4.large - kind: AWSMachineProviderConfig - metadata: - creationTimestamp: null - placement: - availabilityZone: us-east-2a - region: us-east-2 - publicIp: null - securityGroups: - - filters: - - name: tag:Name - values: - - test-tkh7l-worker-sg - subnet: - filters: - - name: tag:Name - values: - - test-tkh7l-private-us-east-2a - tags: - - name: kubernetes.io/cluster/test-tkh7l - value: owned - userDataSecret: - name: worker-user-data - versions: - kubelet: "" -status: - fullyLabeledReplicas: 1 - observedGeneration: 1 - replicas: 1 -``` - -At this point, you'd edit the YAML to update the relevant values to match your UPI installation. - -```console -$ oc edit machineset --namespace openshift-machine-api test-tkh7l-worker-us-east-2a -machineset.machine.openshift.io/test-tkh7l-worker-us-east-2a edited -``` +#### Approving the CSR requests for nodes -Once the Machine API has a chance to reconcile and begin launching hosts with the correct attributes, you -should start to see new output in your EC2 console and oc commands. +The CSR requests for client and server certificates for nodes joining the cluster will need to be approved by the administrator. 
+You can view them with: ```console -$ oc get machines --namespace openshift-machine-api -NAME INSTANCE STATE TYPE REGION ZONE AGE -test-tkh7l-worker-us-east-2a-hxlqn i-0e7f3a52b2919471e pending m4.4xlarge us-east-2 us-east-2a 3s +$ oc get csr +NAME AGE REQUESTOR CONDITION +csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued +csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued +csr-b96j4 25s system:node:ip-10-0-52-215.us-east-2.compute.internal Approved,Issued +csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending +csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending +... ``` -### Option 2: Manually Launching Worker Instances - -The worker launch is provided within a CloudFormation template [here](../../../upi/aws/cloudformation/06_cluster_worker_node.yaml). -You can launch a CloudFormation stack to manage each individual worker. A similar launch configuration could be used by -outside automation or AWS auto scaling groups. - -#### Approving the CSR requests for nodes - -The CSR requests for client and server certificates for nodes joining the cluster will need to be approved by the administrator. - Administrators should carefully examine each CSR request and approve only the ones that belong to the nodes created by them. 
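When many node CSRs are outstanding, the filtering step can be scripted before any approvals are made. A sketch that extracts the names of Pending CSRs with `awk`, shown here against a captured sample of the output above (in practice you would pipe `oc get csr --no-headers` instead of the here-document; the CSR names are illustrative):

```shell
# Select the first column (the CSR name) of every row whose last
# column is exactly "Pending"; Approved,Issued rows are skipped.
pending=$(awk '$NF == "Pending" {print $1}' <<'EOF'
csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-b96j4 25s system:node:ip-10-0-52-215.us-east-2.compute.internal Approved,Issued
csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending
csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending
EOF
)

echo "$pending"
```

Each resulting name could then be passed to `oc adm certificate approve`, but only after the administrator has examined the request as described above.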
- -The CSR can be approved by using +CSRs can be approved by name, for example: ```sh -oc adm certificate approve +oc adm certificate approve csr-bfd72 ``` ## Configure Router for UPI @@ -343,3 +237,4 @@ openshift-service-catalog-controller-manager-operator openshift-service-catalo [cloudformation]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html [delete-stack]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html +[machine-api-operator]: https://github.com/openshift/machine-api-operator From 79b5cece46a5746f3de1f1443b8174584c31085c Mon Sep 17 00:00:00 2001 From: "W. Trevor King" Date: Tue, 23 Apr 2019 22:04:07 -0700 Subject: [PATCH 2/5] upi/aws/cloudformation: api-int Route 53 record Catching up with 13e4b702f7 (data/aws: create an api-int dns name, 2019-04-11, #1601), now that 052fceeeaf (asset/manifests: use internal apiserver name, 2019-04-17, #1633) has moved some internal assets over to that name. --- docs/user/aws/install_upi.md | 4 ++-- upi/aws/cloudformation/02_cluster_infra.yaml | 11 ++++++++++- upi/aws/cloudformation/05_cluster_master_nodes.yaml | 2 +- upi/aws/cloudformation/06_cluster_worker_node.yaml | 2 +- 4 files changed, 14 insertions(+), 5 deletions(-) diff --git a/docs/user/aws/install_upi.md b/docs/user/aws/install_upi.md index 2a4b1e8c697..afe16d3b312 100644 --- a/docs/user/aws/install_upi.md +++ b/docs/user/aws/install_upi.md @@ -97,8 +97,8 @@ external to the cluster and nodes within the cluster. Port 22623 must be accessi ### Optional: Manually Create Route53 Hosted Zones & Records -For the cluster name identified earlier in [Create Ignition Configs](#create-ignition-configs), you must create a DNS -entry which resolves to your created load balancer. The entry `api.$clustername.$domain` should point to the load balancer. 
+For the cluster name identified earlier in [Create Ignition Configs](#create-ignition-configs), you must create a DNS entry which resolves to your created load balancer. +The entry `api.$clustername.$domain` should point to the external load balancer and `api-int.$clustername.$domain` should point to the internal load balancer. ## Create Security Groups and IAM Roles diff --git a/upi/aws/cloudformation/02_cluster_infra.yaml b/upi/aws/cloudformation/02_cluster_infra.yaml index b25babb76a5..39dd275244d 100644 --- a/upi/aws/cloudformation/02_cluster_infra.yaml +++ b/upi/aws/cloudformation/02_cluster_infra.yaml @@ -132,6 +132,15 @@ Resources: AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName + - Name: + !Join [ + ".", + ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], + ] + Type: A + AliasTarget: + HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID + DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener @@ -356,7 +365,7 @@ Outputs: Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server - Needed for ignition configs - Value: !Join [".", ["api", !Ref ClusterName, !Ref HostedZoneName]] + Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register/deregister IP targets for these load balancers Value: !GetAtt RegisterNlbIpTargets.Arn diff --git a/upi/aws/cloudformation/05_cluster_master_nodes.yaml b/upi/aws/cloudformation/05_cluster_master_nodes.yaml index 42bd81e5b07..69544e92c87 100644 --- a/upi/aws/cloudformation/05_cluster_master_nodes.yaml +++ b/upi/aws/cloudformation/05_cluster_master_nodes.yaml @@ -38,7 +38,7 @@ Parameters: Description: The master security group ID to associate with master nodes. 
Type: AWS::EC2::SecurityGroup::Id
IgnitionLocation:
- Default: https://api.$CLUSTER_NAME.$DOMAIN:22623/config/master
+ Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
Description: Location to fetch bootstrap ignition from. (Recommend to use the autocreated ignition config location.)
Type: String
CertificateAuthorities:
diff --git a/upi/aws/cloudformation/06_cluster_worker_node.yaml b/upi/aws/cloudformation/06_cluster_worker_node.yaml
index e1f873531b0..5f9b72caf2f 100644
--- a/upi/aws/cloudformation/06_cluster_worker_node.yaml
+++ b/upi/aws/cloudformation/06_cluster_worker_node.yaml
@@ -19,7 +19,7 @@ Parameters:
Description: The master security group ID to associate with master nodes.
Type: AWS::EC2::SecurityGroup::Id
IgnitionLocation:
- Default: https://api.$CLUSTER_NAME.$DOMAIN:22623/config/worker
+ Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
Description: Location to fetch bootstrap ignition from. (Recommend to use the autocreated ignition config location.)
Type: String
CertificateAuthorities:
From 9572a06d2046e81adf5d9d36a25c39a485b6cb76 Mon Sep 17 00:00:00 2001
From: "W. Trevor King"
Date: Wed, 1 May 2019 10:38:08 -0700
Subject: [PATCH 3/5] upi/aws/cloudformation: Remove unnecessary stuff from templates

The point of these templates is to provide the bare minimum needed to get our cluster off the ground. Things like resource names and auxiliary tags are nice to have in a production deploy for admin orientation, but you can have a healthy cluster without them, so I'm culling them in this commit. Users, who may not be using our CloudFormation templates at all, should now have an easier time seeing what they need to set, and where they can go their own way.

We need to keep the kubernetes.io/cluster/... tags on instances to avoid:

May 01 21:31:53 ip-10-0-57-198 hyperkube[2311]: E0501 21:31:53.061462 2311 tags.go:95] Tag "KubernetesCluster" nor "kubernetes.io/cluster/..." not found; Kubernetes may behave unexpectedly. 
--- upi/aws/cloudformation/01_vpc.yaml | 74 ------------------- .../cloudformation/03_cluster_security.yaml | 8 -- .../cloudformation/04_cluster_bootstrap.yaml | 7 -- .../05_cluster_master_nodes.yaml | 23 +----- .../06_cluster_worker_node.yaml | 17 +---- 5 files changed, 7 insertions(+), 122 deletions(-) diff --git a/upi/aws/cloudformation/01_vpc.yaml b/upi/aws/cloudformation/01_vpc.yaml index 0c7c94bd882..fdf66ae1497 100644 --- a/upi/aws/cloudformation/01_vpc.yaml +++ b/upi/aws/cloudformation/01_vpc.yaml @@ -54,13 +54,6 @@ Resources: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Public - - Key: Name - Value: !Ref "AWS::StackName" PublicSubnet: Type: "AWS::EC2::Subnet" Properties: @@ -69,11 +62,6 @@ Resources: AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Public PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 @@ -83,11 +71,6 @@ Resources: AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Public PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 @@ -97,19 +80,8 @@ Resources: AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Public InternetGateway: Type: "AWS::EC2::InternetGateway" - Properties: - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Public GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: @@ -119,11 +91,6 @@ Resources: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Public PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet @@ -152,11 +119,6 @@ Resources: Type: 
"AWS::EC2::NetworkAcl" Properties: VpcId: !Ref VPC - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Public InboundHTTPPublicNetworkAclEntry: Type: "AWS::EC2::NetworkAclEntry" Properties: @@ -242,22 +204,10 @@ Resources: AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Private - - Key: kubernetes.io/role/internal-elb - Value: "" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Private PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: @@ -294,23 +244,11 @@ Resources: AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Private - - Key: kubernetes.io/role/internal-elb - Value: "" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Private PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 @@ -351,23 +289,11 @@ Resources: AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Private - - Key: kubernetes.io/role/internal-elb - Value: "" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC - Tags: - - Key: Application - Value: !Ref "AWS::StackName" - - Key: Network - Value: Private PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 diff --git a/upi/aws/cloudformation/03_cluster_security.yaml b/upi/aws/cloudformation/03_cluster_security.yaml index 14295de65fc..e05a143723e 100644 --- a/upi/aws/cloudformation/03_cluster_security.yaml +++ 
b/upi/aws/cloudformation/03_cluster_security.yaml @@ -49,7 +49,6 @@ Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: - GroupName: !Join ["-", ["master-sg", !Ref InfrastructureName]] GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp @@ -73,7 +72,6 @@ Resources: WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: - GroupName: !Join ["-", ["worker-sg", !Ref InfrastructureName]] GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp @@ -259,7 +257,6 @@ Resources: MasterIamRole: Type: AWS::IAM::Role Properties: - RoleName: !Join ["-", [!Ref InfrastructureName, "master", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -269,7 +266,6 @@ Resources: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" - Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: @@ -291,14 +287,12 @@ Resources: MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: - Path: "/" Roles: - Ref: "MasterIamRole" WorkerIamRole: Type: AWS::IAM::Role Properties: - RoleName: !Join ["-", [!Ref InfrastructureName, "worker", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -308,7 +302,6 @@ Resources: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" - Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: @@ -321,7 +314,6 @@ Resources: WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: - Path: "/" Roles: - Ref: "WorkerIamRole" diff --git a/upi/aws/cloudformation/04_cluster_bootstrap.yaml b/upi/aws/cloudformation/04_cluster_bootstrap.yaml index b9af63c4a26..5374bc084d3 100644 --- a/upi/aws/cloudformation/04_cluster_bootstrap.yaml +++ b/upi/aws/cloudformation/04_cluster_bootstrap.yaml @@ -103,7 +103,6 @@ Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: - RoleName: !Join ["-", [!Ref InfrastructureName, "bootstrap", 
"role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: @@ -142,7 +141,6 @@ Resources: BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: - GroupName: !Join ["-", ["bootstrap-sg", !Ref InfrastructureName]] GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp @@ -174,11 +172,6 @@ Resources: - { S3Loc: !Ref BootstrapIgnitionLocation } - Tags: - - Key: "Name" - Value: !Join ["-", [!Ref InfrastructureName, "bootstrap"]] - - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] - Value: "Owned" RegisterBootstrapApiTarget: Condition: DoRegistration diff --git a/upi/aws/cloudformation/05_cluster_master_nodes.yaml b/upi/aws/cloudformation/05_cluster_master_nodes.yaml index 69544e92c87..b8982169064 100644 --- a/upi/aws/cloudformation/05_cluster_master_nodes.yaml +++ b/upi/aws/cloudformation/05_cluster_master_nodes.yaml @@ -7,7 +7,7 @@ Parameters: MaxLength: 32 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter and a maximum of 32 characters - Description: A short, unique cluster ID used to tag cloud resources and identify items owned/used by the cluster. + Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. 
Type: String RhcosAmi: Description: Current RHEL CoreOS AMI to use for boostrap @@ -89,11 +89,6 @@ Parameters: Metadata: AWS::CloudFormation::Interface: ParameterGroups: - - Label: - default: "Cluster Information" - Parameters: - - ClusterName - - InfrastructureName - Label: default: "Host Information" Parameters: @@ -126,10 +121,6 @@ Metadata: - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: - ClusterName: - default: "Cluster Name" - InfrastructureName: - default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: @@ -184,10 +175,8 @@ Resources: CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - - Key: "Name" - Value: !Join ["-", [!Ref InfrastructureName, "master", "0"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] - Value: "owned" + Value: "shared" RegisterMaster0: Condition: DoRegistration @@ -233,10 +222,8 @@ Resources: CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - - Key: "Name" - Value: !Join ["-", [!Ref InfrastructureName, "master", "1"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] - Value: "owned" + Value: "shared" RegisterMaster1: Condition: DoRegistration @@ -282,10 +269,8 @@ Resources: CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - - Key: "Name" - Value: !Join ["-", [!Ref InfrastructureName, "master", "2"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] - Value: "owned" + Value: "shared" RegisterMaster2: Condition: DoRegistration diff --git a/upi/aws/cloudformation/06_cluster_worker_node.yaml b/upi/aws/cloudformation/06_cluster_worker_node.yaml index 5f9b72caf2f..269b9a015cc 100644 --- a/upi/aws/cloudformation/06_cluster_worker_node.yaml +++ b/upi/aws/cloudformation/06_cluster_worker_node.yaml @@ -7,7 +7,7 @@ Parameters: MaxLength: 32 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter and a maximum of 32 characters - Description: A short, unique cluster ID used to tag cloud resources and 
identify items owned/used by the cluster. + Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current RHEL CoreOS AMI to use for boostrap @@ -55,10 +55,6 @@ Parameters: Metadata: AWS::CloudFormation::Interface: ParameterGroups: - - Label: - default: "Cluster Information" - Parameters: - - InfrastructureName - Label: default: "Host Information" Parameters: @@ -71,13 +67,8 @@ Metadata: - Label: default: "Network Configuration" Parameters: - - VpcId - WorkerSubnet ParameterLabels: - InfrastructureName: - default: "Infrastructure Name" - VpcId: - default: "VPC ID" WorkerSubnet: default: "Worker Subnet" WorkerInstanceType: @@ -86,7 +77,7 @@ Metadata: default: "Worker Instance Profile Name" RhcosAmi: default: "RHEL CoreOS AMI ID" - BootstrapIgnitionLocation: + IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" @@ -114,7 +105,5 @@ Resources: CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - - Key: "Name" - Value: !Join ["-", [!Ref InfrastructureName, "worker"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] - Value: "owned" + Value: "shared" From c8951d9995ac4b745d67faf8c47a1ba4511db58f Mon Sep 17 00:00:00 2001 From: "W. Trevor King" Date: Wed, 1 May 2019 15:15:58 -0700 Subject: [PATCH 4/5] upi/aws/cloudformation/05_cluster_master_nodes: Output private IPs To make it easier to recover these for: $ openshift-install gather bootstrap ... 
--- upi/aws/cloudformation/05_cluster_master_nodes.yaml | 9 +++++++++ upi/aws/cloudformation/06_cluster_worker_node.yaml | 5 +++++ 2 files changed, 14 insertions(+) diff --git a/upi/aws/cloudformation/05_cluster_master_nodes.yaml b/upi/aws/cloudformation/05_cluster_master_nodes.yaml index b8982169064..76264d5234c 100644 --- a/upi/aws/cloudformation/05_cluster_master_nodes.yaml +++ b/upi/aws/cloudformation/05_cluster_master_nodes.yaml @@ -350,3 +350,12 @@ Resources: - !GetAtt Master2.PrivateIp TTL: 60 Type: A + +Outputs: + PrivateIPs: + Description: The control-plane node private IP addresses + Value: + !Join [ + ",", + [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] + ] diff --git a/upi/aws/cloudformation/06_cluster_worker_node.yaml b/upi/aws/cloudformation/06_cluster_worker_node.yaml index 269b9a015cc..2a418d2012a 100644 --- a/upi/aws/cloudformation/06_cluster_worker_node.yaml +++ b/upi/aws/cloudformation/06_cluster_worker_node.yaml @@ -107,3 +107,8 @@ Resources: Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" + +Outputs: + PrivateIP: + Description: The compute node private IP address + Value: !GetAtt Worker0.PrivateIp From c22d042fe16ab06c217b7f4d5c1507743c7737a2 Mon Sep 17 00:00:00 2001 From: "W. Trevor King" Date: Thu, 2 May 2019 09:57:35 -0700 Subject: [PATCH 5/5] docs/user/aws/install_upi: Add 'sed' call to zero compute replicas This isn't strictly required, because we're removing the resulting MachineSets right afterwards. It's setting the stage for a future where 'replicas: 0' means "no MachineSets" instead of "we'll make you some dummy MachineSets". And we can always remove the sed later if that future ends up not happening. The sed is based on [1], to replace 'replicas' only for the compute pool (and not the control-plane pool). 
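The behavior of that `sed` range can be checked against a minimal install-config fragment (the values below are illustrative, not a complete install-config; `sed -i` as used here follows the GNU form, matching the command the patch adds to the docs):

```shell
cd "$(mktemp -d)"

# 'compute' sorts before 'controlPlane', so the range '1,/replicas: /'
# closes at the compute pool's replicas line and the substitution
# never reaches the control-plane pool.
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
EOF

sed -i '1,/replicas: / s/replicas: .*/replicas: 0/' install-config.yaml

# Prints both replicas lines: compute is now 0, controlPlane still 3.
grep -n 'replicas:' install-config.yaml
```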
While it should be POSIX-compliant (and not specific to GNU sed or other implementations), it is a bit finicky for a few reasons:

* The range matching will not detect matches in the first line, but 'replicas' will always follow its parent 'compute', so we don't have to worry about first-line matches.
* 'compute' sorts before 'controlPlane', so we don't have to worry about their 'replicas: ' coming first.
* 'baseDomain' is the only other property that sorts before 'compute', but 'replicas: ' is not a legal substring for its domain-name value, so we don't have to worry about accidentally matching that.
* While all of the above mean we're safe for now, this approach could break down if we add additional properties in the future that sort before 'compute' but do allow 'replicas: ' as a valid substring.

[1]: https://stackoverflow.com/a/33416489
--- docs/user/aws/install_upi.md | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/docs/user/aws/install_upi.md b/docs/user/aws/install_upi.md
index afe16d3b312..3960a7f6b8b 100644
--- a/docs/user/aws/install_upi.md
+++ b/docs/user/aws/install_upi.md
@@ -20,6 +20,12 @@ $ openshift-install create install-config
? Pull Secret [? for help]
```
+Edit the resulting `install-config.yaml` to set `replicas` to 0 for the `compute` pool:
+
+```console
+$ sed -i '1,/replicas: / s/replicas: .*/replicas: 0/' install-config.yaml
+```
+
Create manifests to get access to the control-plane Machines and compute MachineSets:

```console