Eventing upgrade tests prober fully configurable #4421
Conversation
Codecov Report
```
@@           Coverage Diff           @@
##           master    #4421   +/-  ##
=======================================
  Coverage   81.07%   81.07%
=======================================
  Files         281      281
  Lines        7963     7963
=======================================
  Hits         6456     6456
  Misses       1121     1121
  Partials      386      386
```
Continue to review full report at Codecov.
```go
// Config represents a configuration for prober.
type Config struct {
	Wathola WatholaConfig
```
Would it make sense to inline/embed this so that we don't have so many levels of inspection?
How would you like to have it? Can you give me an example?
`p.config.Wathola.Config.MountPoint` -> `p.config.ConfigMapMountPoint`?
In my opinion the first one is more descriptive, and it has Go struct comments.
Yes, basically just remove the Wathola name, so that when we use the struct we don't have so many named levels of indirection, kind of like k8s TypeMeta and ObjectMeta.
```go
// WatholaConfig represents options related strictly to wathola testing tool.
type WatholaConfig struct {
	Config ConfigMapConfig
```
Would it make sense to inline/embed this so that we don't have so many levels of inspection?
Please add a README (or update the existing one) on how to override these values, with a couple of examples of how to override them. You can do that in a follow-on. /lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: cardil, vaikas. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@vaikas A README paragraph was already prepared before, see: https://github.com/knative/eventing/blob/master/test/upgrade/README.md#probe-test-configuration. Let me know whether that is sufficient or a follow-up PR is needed to clarify something. Thx.
Yeah, I think that's fine. I just recall in our Slack convo you were saying that some of these were a bit tricky, but you have an example there. Perhaps add an example of how the embedded fields can be specified?
* Eventing upgrade tests prober fully configurable
* Embedding configuration structs

* Eventing upgrade tests prober fully configurable (knative#4421)
  * Eventing upgrade tests prober fully configurable
  * Embedding configuration structs
* Reduce a test name length to prevent DNS label too long error (knative#4442)

  Having too long a namespace or kservice name can lead to an error like:

  ```
  $ host wathola-receiver-test-continuous-events-propagation-with-prober-zxmkp.apps.example.org
  host: 'wathola-receiver-test-continuous-events-propagation-with-prober-zxmkp.apps.example.org' is not a legal IDN name (domain label longer than 63 characters), use +noidnin
  ```

  In this case my namespace is test-continuous-events-propagation-with-prober-zxmkp and the knative service name is wathola-receiver. The namespace is taken from the Go test method name. The limit is 63 characters; in this example the subdomain is 69 characters. This affects OpenShift Serverless, as kservices there have a URL format of `${ksvc.name}-${ksvc.namespace}` to enable usage of TLS wildcard certificates. Reducing this test method name length will help fit within this strict limit of 63 chars.
* Use deployment to avoid disparity in effective user (knative#4445)

  On OpenShift we've observed a disparity when using pods vs. deployments. Using both of those can lead to a different effective user for bare pods and for pods managed by a deployment. That leads to differences in reading a config file by wathola components, as `~` points to different places for the sender and the receiver+forwarder. This changes the code to avoid using bare pods for wathola components.
* Refactor fetching of wathola receiver's delivery report using special batch Job (knative#4460)
  * Reimplementing fetching of wathola report with K8s job

    This change targets the problem of how to get the report from the cluster. Clusters may have different networking setups, and it might not be possible to make an HTTP request directly from outside the cluster. The previous approach used to guess an external address of the cluster, which for sure fails on OpenShift deployed on AWS. This approach deploys a special Job that, being inside the cluster, can download the report and print it in its logs. The test client can then fetch the logs of the completed job, parse them, and process the report further.
  * Removal of unneeded external node address package
  * Fixing lints & boilerplate
  * spec.template.spec.restartPolicy=never
  * Apply @devguyio suggestions for test/upgrade/README.md
  * Changes after review

  Co-authored-by: Ahmed Abdalla Abdelrehim <aabdelre@redhat.com>
Fixes #4420
Proposed Changes
`configuration.go` file