Test race condition #27
Changes from all commits: a19068f, 031bd0f, 40fe31f, d02a699
```diff
@@ -333,10 +333,6 @@ impl KubeClient {
                 .any(|condition| condition.type_ == condition_type && condition.status == "True")
         };
 
-        if is_condition_true(&pod) {
-            return Ok(());
-        }
-
         let timeout_secs = self.timeouts.verify_pod_condition.as_secs() as u32;
         let pods: Api<Pod> = Api::namespaced(self.client.clone(), &self.namespace);
```
```diff
@@ -345,6 +341,12 @@ impl KubeClient {
             .timeout(timeout_secs);
         let mut stream = pods.watch(&lp, "0").await?.boxed();
 
+        let pod = pods.get_status(&pod.name()).await?;
+
+        if is_condition_true(&pod) {
+            return Ok(());
+        }
+
         while let Some(status) = stream.try_next().await? {
             if let WatchEvent::Modified(pod) = status {
                 if is_condition_true(&pod) {
```
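Taken together with the previous hunk, this reordering appears to be what closes the race: the current status is now read only after the watch stream has been opened, so a condition that flips to true between the status check and the start of the watch can no longer go unnoticed. A sketch of the resulting flow, re-using the calls from the diff with explanatory comments added (the comments are mine, not part of the change):

```rust
// Open the watch first, so no Modified event can happen unobserved.
let mut stream = pods.watch(&lp, "0").await?.boxed();

// Only then read the current status; if the condition already holds, return early.
let pod = pods.get_status(&pod.name()).await?;
if is_condition_true(&pod) {
    return Ok(());
}

// Any later transition is delivered through the already-open stream.
while let Some(status) = stream.try_next().await? {
    if let WatchEvent::Modified(pod) = status {
        if is_condition_true(&pod) {
            return Ok(());
        }
    }
}
```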
```diff
@@ -442,3 +444,13 @@ pub fn get_node_taints(node: &Node) -> Vec<Taint> {
         .and_then(|spec| spec.taints.clone())
         .unwrap_or_else(Vec::new)
 }
+
+/// Returns the number of allocatable pods of the given node.
+pub fn get_allocatable_pods(node: &Node) -> u32 {
+    node.status
+        .as_ref()
+        .and_then(|status| status.allocatable.as_ref())
+        .and_then(|allocatable| allocatable.get("pods"))
+        .and_then(|allocatable_pods| allocatable_pods.0.parse().ok())
+        .unwrap_or_default()
+}
```

Member: This is not anything wrong with this test, just something I noticed when playing around:

Author: I was aware of this behavior but too lazy to write a comment. I will do that.

Author: Comment added. Thanks for the thorough review.
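A side note on the new helper: `allocatable` maps resource names to `Quantity` values, and `.0` is the quantity's raw `String`; for the `pods` resource this is a plain integer (typically `"110"`), so the `parse()` normally succeeds and `unwrap_or_default()` only returns 0 when the status or the entry is missing or malformed. A hypothetical usage sketch (only `get_allocatable_pods` is from this PR; the function below and its name are illustrative):

```rust
use k8s_openapi::api::core::v1::Node;

// Hypothetical pre-flight check for the stress test: refuse to schedule more
// pods than the target node advertises as allocatable.
fn node_can_hold(node: &Node, requested_pods: u32) -> bool {
    // get_allocatable_pods() returns 0 when the node status is absent or the
    // "pods" quantity cannot be parsed, so the check stays conservative.
    get_allocatable_pods(node) >= requested_pods
}
```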
A general comment on this test: my understanding is that this is meant to stress test a single agent, correct?
In that case it might be useful to pick one of the available nodes and explicitly assign all pods to that node, either by setting nodeName directly in the spec or by adding a nodeSelector that targets that host.
Alternatively, maybe make it "NUM_PODS_PER_NODE" and multiply by the number of nodes, but that might create additional issues depending on how the scheduler assigns them.
Currently this runs fairly dissimilar tests from the perspective of an individual agent, depending on whether it is run against a cluster with 1, 3, or 10 nodes.
All pods are now scheduled on the same node via nodeName.
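For illustration, a minimal sketch of what pinning every test pod to one node via nodeName can look like with the k8s-openapi types already used in this crate (the test code itself is not part of this diff, so the function name, image, and command below are assumptions):

```rust
use k8s_openapi::api::core::v1::Pod;

// Hypothetical pod builder: setting spec.nodeName bypasses the scheduler, so
// every pod constructed this way is placed directly on the chosen node.
fn stress_pod(name: &str, target_node: &str) -> Result<Pod, serde_json::Error> {
    serde_json::from_value(serde_json::json!({
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": { "name": name },
        "spec": {
            "nodeName": target_node,
            "containers": [{
                "name": "sleep",
                "image": "busybox",
                "command": ["sleep", "3600"]
            }]
        }
    }))
}
```

The nodeSelector alternative (e.g. on `kubernetes.io/hostname`) keeps the scheduler in the loop, whereas nodeName places the pods unconditionally, which is closer to what a single-agent stress test needs.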