
aws-gp3 support: update vendor aws-sdk-go and terraform-provider-aws#5373

Merged
openshift-merge-robot merged 2 commits into openshift:master from mtulio:splat343-aws-gp3-dep
Dec 10, 2021
Conversation

@mtulio
Contributor

@mtulio mtulio commented Nov 11, 2021

The motivation for this PR is to update the vendored dependencies to support gp3 volumes by default, as used in the original PR to change the default IPI volume type to gp3, and to leave it ready for the installer team to decide on the next steps.

Vendors:

  • aws-sdk-go: v1.35.37 is the minimal version that supports gp3 volumes
  • terraform-provider-aws (openshift's fork): only minimal validation was added, meaning that not all gp3 features (IOPS and throughput) are supported; that is out of the scope of 5239.
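To illustrate what "minimal gp3 support" means in practice, here is a hypothetical Terraform sketch (not taken from this PR's diff; the resource name, AMI variable, instance type, and size are invented): the bumped provider accepts gp3 as a volume_type, while the gp3-only tuning knobs stay out of scope:

```hcl
# Hypothetical sketch, not from this PR's diff. With the bumped
# provider, "gp3" passes volume_type validation for the root device.
resource "aws_instance" "master" {
  ami           = var.rhcos_ami   # invented variable name
  instance_type = "m5.xlarge"     # invented instance type

  root_block_device {
    volume_type = "gp3"
    volume_size = 120             # invented size
    # iops / throughput are intentionally omitted: the minimal
    # validation in the fork does not cover gp3-specific tuning.
  }
}
```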

Blockers / dependencies (terraform-provider-aws bump):

Reference:

@openshift-ci
Contributor

openshift-ci Bot commented Nov 11, 2021

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci openshift-ci Bot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. labels Nov 11, 2021
@openshift-ci openshift-ci Bot requested review from jhixson74 and rna-afk November 11, 2021 02:58
Comment thread go.mod Outdated
@openshift-ci openshift-ci Bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 26, 2021
Bump AWS SDK to v1.35.37 to support the gp3 volume type.

Required to change the default volumes on the control plane; reference:
- openshift#5239
@mtulio mtulio force-pushed the splat343-aws-gp3-dep branch from 00246dd to 421ad25 Compare December 9, 2021 14:04
@openshift-ci openshift-ci Bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 9, 2021
@mtulio mtulio force-pushed the splat343-aws-gp3-dep branch from 421ad25 to c8886e9 Compare December 9, 2021 14:52
@mtulio
Contributor Author

mtulio commented Dec 9, 2021

I just launched a new cluster and everything looks good:

$ oc get co
NAME                                       VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.10.0-0.nightly-2021-12-06-201335   True        False         False      42m     
baremetal                                  4.10.0-0.nightly-2021-12-06-201335   True        False         False      59m     
cloud-controller-manager                   4.10.0-0.nightly-2021-12-06-201335   True        False         False      61m     
cloud-credential                           4.10.0-0.nightly-2021-12-06-201335   True        False         False      61m     
cluster-api                                4.10.0-0.nightly-2021-12-06-201335   True        False         False      61m     
cluster-autoscaler                         4.10.0-0.nightly-2021-12-06-201335   True        False         False      59m     
config-operator                            4.10.0-0.nightly-2021-12-06-201335   True        False         False      61m     
console                                    4.10.0-0.nightly-2021-12-06-201335   True        False         False      44m     
csi-snapshot-controller                    4.10.0-0.nightly-2021-12-06-201335   True        False         False      60m     
dns                                        4.10.0-0.nightly-2021-12-06-201335   True        False         False      59m     
etcd                                       4.10.0-0.nightly-2021-12-06-201335   True        False         False      59m     
image-registry                             4.10.0-0.nightly-2021-12-06-201335   True        False         False      51m     
ingress                                    4.10.0-0.nightly-2021-12-06-201335   True        False         False      51m     
insights                                   4.10.0-0.nightly-2021-12-06-201335   True        False         False      54m     
kube-apiserver                             4.10.0-0.nightly-2021-12-06-201335   True        False         False      57m     
kube-controller-manager                    4.10.0-0.nightly-2021-12-06-201335   True        False         False      58m     
kube-scheduler                             4.10.0-0.nightly-2021-12-06-201335   True        False         False      58m     
kube-storage-version-migrator              4.10.0-0.nightly-2021-12-06-201335   True        False         False      61m     
machine-api                                4.10.0-0.nightly-2021-12-06-201335   True        False         False      57m     
machine-approver                           4.10.0-0.nightly-2021-12-06-201335   True        False         False      60m     
machine-config                             4.10.0-0.nightly-2021-12-06-201335   True        False         False      59m     
marketplace                                4.10.0-0.nightly-2021-12-06-201335   True        False         False      60m     
monitoring                                 4.10.0-0.nightly-2021-12-06-201335   True        False         False      49m     
network                                    4.10.0-0.nightly-2021-12-06-201335   True        False         False      61m     
node-tuning                                4.10.0-0.nightly-2021-12-06-201335   True        False         False      60m     
openshift-apiserver                        4.10.0-0.nightly-2021-12-06-201335   True        False         False      51m     
openshift-controller-manager               4.10.0-0.nightly-2021-12-06-201335   True        False         False      59m     
openshift-samples                          4.10.0-0.nightly-2021-12-06-201335   True        False         False      50m     
operator-lifecycle-manager                 4.10.0-0.nightly-2021-12-06-201335   True        False         False      60m     
operator-lifecycle-manager-catalog         4.10.0-0.nightly-2021-12-06-201335   True        False         False      60m     
operator-lifecycle-manager-packageserver   4.10.0-0.nightly-2021-12-06-201335   True        False         False      51m     
service-ca                                 4.10.0-0.nightly-2021-12-06-201335   True        False         False      61m     
storage                                    4.10.0-0.nightly-2021-12-06-201335   True        False         False      60m   

@mtulio mtulio marked this pull request as ready for review December 9, 2021 17:06
@openshift-ci openshift-ci Bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 9, 2021
@mtulio
Contributor Author

mtulio commented Dec 9, 2021

@staebler @jstuever @jhixson74 continuing the change from terraform-provider-aws, this is ready for review. PTAL?

@openshift-ci openshift-ci Bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 9, 2021
@mtulio
Contributor Author

mtulio commented Dec 9, 2021

/retest-required

@staebler
Contributor

Need to run go mod tidy.
/test e2e-gcp

Bump terraform-provider-aws to pseudo-version v3.1.0-openshift-1 for minimal
support of the gp3 volume type.

This commit should be updated once the fork's PR is merged and the new
tag 'v3.1.0-openshift-1' is created in the correct repo.

Dependency:
- openshift/terraform-provider-aws#13

Required to change the default volumes on control-plane, reference:
- openshift#5239
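For context, a bump like this is typically expressed in go.mod via a require entry plus a replace directive pointing the upstream module path at OpenShift's fork; the fragment below is an illustrative sketch under that assumption, not the PR's exact diff:

```
// go.mod sketch (illustrative; the PR's exact entries may differ)
require github.com/aws/aws-sdk-go v1.35.37

replace github.com/terraform-providers/terraform-provider-aws => github.com/openshift/terraform-provider-aws v3.1.0-openshift-1
```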
@mtulio mtulio force-pushed the splat343-aws-gp3-dep branch from c8886e9 to d0c9922 Compare December 10, 2021 13:14
@mtulio
Contributor Author

mtulio commented Dec 10, 2021

Need to run go mod tidy

@staebler Thanks, I missed that. Done!

Contributor

@staebler staebler left a comment


/lgtm
/approve

@openshift-ci openshift-ci Bot added the lgtm Indicates that a PR is ready to be merged. label Dec 10, 2021
@openshift-ci
Contributor

openshift-ci Bot commented Dec 10, 2021

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: staebler

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci Bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 10, 2021
@mtulio
Contributor Author

mtulio commented Dec 10, 2021

/unhold

@openshift-ci openshift-ci Bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Dec 10, 2021
@mtulio
Contributor Author

mtulio commented Dec 10, 2021

/retest-required

@mtulio
Contributor Author

mtulio commented Dec 10, 2021

I will review the failing tests and report here if they are flakes.
/hold

@openshift-ci openshift-ci Bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Dec 10, 2021
@gregsheremeta
Contributor

I am so excited about this! Do we have any idea about which version of OCP will get this? Any chance of a backport to 4.9?

@staebler
Contributor

I am so excited about this! Do we have any idea about which version of OCP will get this? Any chance of a backport to 4.9?

4.10. A backport is unlikely.

@mtulio
Contributor Author

mtulio commented Dec 10, 2021

/test okd-e2e-aws

@mtulio
Contributor Author

mtulio commented Dec 10, 2021

I just launched a new cluster with the latest version [1]. I am OK with removing the WIP labels if CI is fine.

Looking at the failed jobs, I am concerned only about okd-e2e-aws, specifically this error. I am not sure whether it's directly related to this PR. WDYT @staebler @rvanderp3?

[1]

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2021-12-10-123024   True        False         11m     Cluster version is 4.10.0-0.nightly-2021-12-10-123024

$ oc get co
NAME                                       VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.10.0-0.nightly-2021-12-10-123024   True        False         False      11m     
baremetal                                  4.10.0-0.nightly-2021-12-10-123024   True        False         False      29m     
cloud-controller-manager                   4.10.0-0.nightly-2021-12-10-123024   True        False         False      31m     
cloud-credential                           4.10.0-0.nightly-2021-12-10-123024   True        False         False      30m     
cluster-api                                4.10.0-0.nightly-2021-12-10-123024   True        False         False      30m     
cluster-autoscaler                         4.10.0-0.nightly-2021-12-10-123024   True        False         False      29m     
config-operator                            4.10.0-0.nightly-2021-12-10-123024   True        False         False      30m     
console                                    4.10.0-0.nightly-2021-12-10-123024   True        False         False      17m     
csi-snapshot-controller                    4.10.0-0.nightly-2021-12-10-123024   True        False         False      30m     
dns                                        4.10.0-0.nightly-2021-12-10-123024   True        False         False      29m     
etcd                                       4.10.0-0.nightly-2021-12-10-123024   True        False         False      29m     
image-registry                             4.10.0-0.nightly-2021-12-10-123024   True        False         False      21m     
ingress                                    4.10.0-0.nightly-2021-12-10-123024   True        False         False      23m     
insights                                   4.10.0-0.nightly-2021-12-10-123024   True        False         False      24m     
kube-apiserver                             4.10.0-0.nightly-2021-12-10-123024   True        False         False      23m     
kube-controller-manager                    4.10.0-0.nightly-2021-12-10-123024   True        False         False      29m     
kube-scheduler                             4.10.0-0.nightly-2021-12-10-123024   True        False         False      29m     
kube-storage-version-migrator              4.10.0-0.nightly-2021-12-10-123024   True        False         False      30m     
machine-api                                4.10.0-0.nightly-2021-12-10-123024   True        False         False      26m     
machine-approver                           4.10.0-0.nightly-2021-12-10-123024   True        False         False      29m     
machine-config                             4.10.0-0.nightly-2021-12-10-123024   True        False         False      21m     
marketplace                                4.10.0-0.nightly-2021-12-10-123024   True        False         False      29m     
monitoring                                 4.10.0-0.nightly-2021-12-10-123024   True        False         False      18m     
network                                    4.10.0-0.nightly-2021-12-10-123024   True        False         False      31m     
node-tuning                                4.10.0-0.nightly-2021-12-10-123024   True        False         False      30m     
openshift-apiserver                        4.10.0-0.nightly-2021-12-10-123024   True        False         False      22m     
openshift-controller-manager               4.10.0-0.nightly-2021-12-10-123024   True        False         False      30m     
openshift-samples                          4.10.0-0.nightly-2021-12-10-123024   True        False         False      21m     
operator-lifecycle-manager                 4.10.0-0.nightly-2021-12-10-123024   True        False         False      30m     
operator-lifecycle-manager-catalog         4.10.0-0.nightly-2021-12-10-123024   True        False         False      30m     
operator-lifecycle-manager-packageserver   4.10.0-0.nightly-2021-12-10-123024   True        False         False      21m     
service-ca                                 4.10.0-0.nightly-2021-12-10-123024   True        False         False      30m     
storage                                    4.10.0-0.nightly-2021-12-10-123024   True        False         False      30m   

@rvanderp3
Contributor

I just launched a new cluster with the latest version [1]. I am OK with removing the WIP labels if CI is fine.

Looking at the failed jobs, I am concerned only about okd-e2e-aws, specifically this error. I am not sure whether it's directly related to this PR. WDYT @staebler @rvanderp3?


@mtulio this looks like a possible flake similar to what was reported in hashicorp/terraform-provider-aws#16142. I'm not sure it's related, but let's see how the currently running okd install goes.

@staebler
Contributor

@mtulio this looks like a possible flake similar to what was reported in hashicorp/terraform-provider-aws#16142. I'm not sure it's related, but let's see how the currently running okd install goes.

Yes, I concur.

@staebler
Contributor

/hold cancel

@openshift-ci openshift-ci Bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Dec 10, 2021
@staebler staebler removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 10, 2021
@openshift-ci
Contributor

openshift-ci Bot commented Dec 10, 2021

@mtulio: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-aws-workers-rhel8 d0c9922 link false /test e2e-aws-workers-rhel8
ci/prow/e2e-aws-fips d0c9922 link false /test e2e-aws-fips
ci/prow/e2e-gcp c8886e9 link false /test e2e-gcp
ci/prow/e2e-crc d0c9922 link false /test e2e-crc
ci/prow/e2e-aws-workers-rhel7 d0c9922 link false /test e2e-aws-workers-rhel7
ci/prow/e2e-libvirt d0c9922 link false /test e2e-libvirt
ci/prow/e2e-aws-single-node d0c9922 link false /test e2e-aws-single-node

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-bot
Contributor

/retest-required

Please review the full test history for this PR and help us cut down flakes.

7 similar comments

Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. lgtm Indicates that a PR is ready to be merged.


6 participants