Closed
77 changes: 10 additions & 67 deletions test/e2e/e2e-template.yml
Contributor


I suggest comparing this with the boilerplate template and minimizing any non-necessary changes, as these will need to be updated in all PD pipelines.

@@ -1,5 +1,5 @@
 # THIS FILE IS GENERATED BY BOILERPLATE. DO NOT EDIT.
-# Temporarily add S3 upload capability to the osde2e template for testing
+# Native osde2e S3 upload is enabled - no sidecar needed
 apiVersion: template.openshift.io/v1
 kind: Template
 metadata:
@@ -30,7 +30,7 @@ parameters:
   value: ''
   required: true
 - name: LOG_BUCKET
-  value: 'osde2e-logs'
+  value: 'osde2e-loki-logs'
Contributor


what's the difference between these 2 buckets? are they in different accounts?

Contributor Author


No big difference. osde2e-loki-logs is the existing bucket; using it just avoids creating a new one.

Contributor


osde2e-logs is an existing bucket as well, specified in all PD pipelines.

 - name: USE_EXISTING_CLUSTER
   value: 'TRUE'
 - name: CAD_PAGERDUTY_ROUTING_KEY
@@ -43,14 +43,6 @@ parameters:
   required: false
 - name: SLACK_NOTIFY
   required: false
-- name: S3_RESULTS_BUCKET
-  value: 'osde2e-loki-logs'
-- name: S3_RESULTS_REGION
-  value: 'us-east-1'
-- name: ENABLE_S3_UPLOAD
-  value: 'true'
-- name: OPERATOR_NAME
-  value: 'osd-example-operator'
 objects:
 - apiVersion: batch/v1
   kind: Job
@@ -61,21 +53,18 @@ objects:
     template:
       spec:
         restartPolicy: Never
-        volumes:
-        - name: test-results
-          emptyDir: {}
         containers:
         - name: osde2e
           image: quay.io/redhat-services-prod/osde2e-cicada-tenant/osde2e:latest
           command:
-          - /bin/sh
-          - -c
-          - |
-            /osde2e test --only-health-check-nodes --skip-destroy-cluster --skip-must-gather --log-analysis-enable --configs ${OSDE2E_CONFIGS}
-            TEST_EXIT_CODE=$?
-            cp -r /test-run-results/* /shared-results/ 2>/dev/null || true
-            echo "$TEST_EXIT_CODE" > /shared-results/.test-complete
-            exit $TEST_EXIT_CODE
+          - /osde2e
+          - test
+          - --only-health-check-nodes
+          - --skip-destroy-cluster
+          - --skip-must-gather
+          - --log-analysis-enable
+          - --configs
+          - ${OSDE2E_CONFIGS}
           securityContext:
             runAsNonRoot: true
             allowPrivilegeEscalation: false
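The replaced `sh -c` wrapper existed mainly to capture the test binary's exit code, copy results for the sidecar, and re-exit with that code; with exec-form `command` items, Kubernetes runs the binary directly and its exit status becomes the container's status, so none of that plumbing is needed. A minimal local sketch of the old exit-code plumbing (the `false` stand-in and the `WRAPPER_STATUS` name are illustrative, not from the template):

```shell
# Simulate the old wrapper: run a failing "test", capture its exit code,
# do post-processing, then re-exit with the captured code so the
# container status still reflects the test result.
sh -c '
  false                       # stand-in for the /osde2e test invocation
  TEST_EXIT_CODE=$?
  echo "captured exit code: $TEST_EXIT_CODE"
  exit $TEST_EXIT_CODE
' || WRAPPER_STATUS=$?
echo "wrapper exited with: ${WRAPPER_STATUS:-0}"
```

With the new exec-form args the shell layer disappears entirely, and signals such as SIGTERM on pod deletion reach the test binary directly instead of the wrapper shell.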
@@ -135,49 +124,3 @@ objects:
                 key: cad-pagerduty-routing-key
           - name: SLACK_NOTIFY
             value: ${SLACK_NOTIFY}
-          volumeMounts:
-          - name: test-results
-            mountPath: /shared-results
-        - name: s3-uploader
-          image: quay.io/app-sre/aws-cli
-          command:
-          - /bin/sh
-          - -c
-          - |
-            while [ ! -f /shared-results/.test-complete ]; do sleep 10; done
-            if [ "${ENABLE_S3_UPLOAD}" != "true" ]; then exit 0; fi
-            DATE=$(date -u +%Y-%m-%d)
-            S3_PREFIX="test-results/${OPERATOR_NAME}/${DATE}/${IMAGE_TAG}-${JOBID}"
-            aws s3 sync /shared-results/ "s3://${S3_RESULTS_BUCKET}/${S3_PREFIX}/" --exclude ".test-complete" --no-progress
-            echo "Uploaded to s3://${S3_RESULTS_BUCKET}/${S3_PREFIX}/"
-          securityContext:
-            runAsNonRoot: true
-            allowPrivilegeEscalation: false
-            capabilities:
-              drop: ["ALL"]
-            seccompProfile:
-              type: RuntimeDefault
-          env:
-          - name: AWS_ACCESS_KEY_ID
-            valueFrom:
-              secretKeyRef:
-                name: osde2e-aws-credentials
-                key: aws-access-key-id
-          - name: AWS_SECRET_ACCESS_KEY
-            valueFrom:
-              secretKeyRef:
-                name: osde2e-aws-credentials
-                key: aws-secret-access-key
-          - name: AWS_DEFAULT_REGION
-            value: ${S3_RESULTS_REGION}
-          - name: S3_RESULTS_BUCKET
-            value: ${S3_RESULTS_BUCKET}
-          - name: OPERATOR_NAME
-            value: ${OPERATOR_NAME}
-          - name: IMAGE_TAG
-            value: ${IMAGE_TAG}
-          - name: ENABLE_S3_UPLOAD
-            value: ${ENABLE_S3_UPLOAD}
-          volumeMounts:
-          - name: test-results
-            mountPath: /shared-results
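The deleted sidecar coordinated with the test container through a `.test-complete` marker in the shared `emptyDir`: the test container wrote its results, then created the marker last, and the uploader polled for the marker before running `aws s3 sync`. The handoff pattern in isolation (a temp directory stands in for the shared volume; the file names are illustrative):

```shell
# Producer/consumer handoff via a marker file, as the removed sidecar did.
RESULTS_DIR=$(mktemp -d)

# Producer (the test container's role): write results first,
# create the marker only after everything else is in place.
(
  echo "test output" > "$RESULTS_DIR/junit.xml"
  touch "$RESULTS_DIR/.test-complete"
) &

# Consumer (the sidecar's role): poll until the marker appears;
# only then is it safe to read or upload the results.
while [ ! -f "$RESULTS_DIR/.test-complete" ]; do sleep 0.1; done
cat "$RESULTS_DIR/junit.xml"
rm -rf "$RESULTS_DIR"
```

Creating the marker only after the results write is what makes the poll safe; moving the upload natively into osde2e removes this synchronization concern altogether.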