test/extended: Add MultiNetworkPolicy test case #27449
Conversation
/retest-required
```go
err = oc.AsAdmin().Run("create").Args("-f", nad_yaml).Execute()
o.Expect(err).NotTo(o.HaveOccurred())

g.By("launching pod with an annotation to use the net-attach-def")
```
Would our tests be simpler if we added some NAD-related helper functions?
```go
err = exutil.CreateNetworkAttachmentDefinition("macvlan-nad.yml")
exutil.SetNADAnnotation(pod, "macvlan1-nad", map[string]interface{}{
	"ips": []string{"2.2.2.1/24"},
})
```
etc? Or something? (I don't remember exactly what the overlap is with the other NAD-related tests, but I know there are a lot of NAD-related tests at this point...)
(If you were going to do this, it would be best to have one commit first that adds the helpers and ports the existing tests to use them, and then a second commit adding the new MultiNetworkPolicy test, using the helpers.)
um, wait, there's already a createNetworkAttachmentDefinition method in test/extended/networking/utils.go
Yep, saw it. But it uses the dynamic client instead of YAML files. Is it OK to turn every NAD creation (I found 4 in the code base) into a call to createNetworkAttachmentDefinition(), and then remove the YAML files?
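For what it's worth, a YAML-based helper along the lines suggested above could stay very thin. A minimal sketch, where the function name and signature are hypothetical (not an existing exutil API) and the create call simply mirrors the oc invocation already used in this test:

```go
// Hypothetical helper: create a NetworkAttachmentDefinition from a YAML
// fixture file. Name and signature are illustrative, not existing exutil API.
func createNADFromFixture(oc *exutil.CLI, fixturePath string) error {
	// Mirrors the oc invocation used elsewhere in this test.
	return oc.AsAdmin().Run("create").Args("-f", fixturePath).Execute()
}
```

Porting the four existing call sites to such a helper in a first commit, then adding the new MultiNetworkPolicy test on top, would match the commit split proposed above.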
```go
]`

frameworkpod.CreateExecPodOrFail(f.ClientSet, ns, podName, func(pod *v1.Pod) {
	pod.Spec.Containers[0].Args = []string{"net", "--serve", "2.2.2.1:8889"}
```
It would be better to use an IP in a reserved range, rather than an IP that actually does belong to someone in the real world. I think for the ovn-kube egress IP tests we use IPs from the "reserved for documentation examples" range?
Cool, I didn't know that: https://www.rfc-editor.org/rfc/rfc5737.
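For reference, RFC 5737 reserves three IPv4 blocks for documentation: 192.0.2.0/24 (TEST-NET-1), 198.51.100.0/24 (TEST-NET-2), and 203.0.113.0/24 (TEST-NET-3). A sketch of the change, with the concrete address picked from TEST-NET-1 purely as an illustration:

```go
// Serve on a documentation-reserved address (RFC 5737 TEST-NET-1) instead of
// an IP that actually belongs to someone on the real Internet.
frameworkpod.CreateExecPodOrFail(f.ClientSet, ns, podName, func(pod *v1.Pod) {
	pod.Spec.Containers[0].Args = []string{"net", "--serve", "192.0.2.1:8889"}
})
```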
```go
o.Eventually(func() error {
	return oc.AsAdmin().Run("get").Args("multi-networkpolicies.k8s.cni.cncf.io").Execute()
}, "30s", "2s").Should(o.Succeed())
```
This doesn't reliably indicate that MNP enforcement is in effect... that depends on deploying a DaemonSet as well, right?
The timeouts help, but it would be better if there was a good way of ensuring that the DaemonSet is fully deployed, so the test doesn't end up flaking under heavy load. (But also, we don't want to make too many assumptions about exactly what CNO is doing when you enable the feature, especially since we might change to a different MNP implementation in the future... I'm not sure what the best approach here is...)
I agree with your considerations; it's not easy to decide on the best approach. From the user's perspective, turning on the feature means that the MNP resource becomes available.
If the DaemonSet (or any other component) is not working or has not started, then some other assertion will fail.
As a further example, in the future we could also decide to keep the DaemonSet up and running but not create iptables rules when the flag is set to false.
That said, I would keep it as simple as possible.
```go
err = oc.AsAdmin().Run("create").Args("-f", multinetpolicy_yaml).Execute()
o.Expect(err).To(o.Succeed())

g.By("checking podB can NOT connect to podA")
```
I think you need to test more than this. As written, this test would pass even if the actual behavior was "activating MNP completely breaks all secondary networks" 😬
Good point! To achieve it, I need to:

- add a third pod (podC)
- change the policy from a deny-all style to something like:

```yaml
...
podSelector:
  matchLabels:
    pod: a
policyTypes:
- Ingress
ingress:
- from:
  - podSelector:
      matchLabels:
        pod: c
```

- check pod-C can connect to pod-A

But it will make the test a little more complicated. Do you have any suggestions for this?
I refactored the test a little: now there is a deny-all-ingress rule applied only to pod-A, and I added another server, pod-C. This way I can test that pod-B can't connect to A but can still reach C. WDYT?
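The resulting assertions could then look roughly like the sketch below. Pod names, target addresses, and the curl timeout are illustrative; the agnhost-server-plus-cURL approach is the one described in this PR's commit message:

```go
// curl from one pod to a server address on the secondary network.
curl := func(fromPod, target string) error {
	return oc.AsAdmin().Run("exec").
		Args(fromPod, "--", "curl", "--connect-timeout", "5", target).Execute()
}

g.By("checking podB can NOT connect to podA (deny-all-ingress applies to pod-a)")
o.Consistently(func() error { return curl("pod-b", "192.0.2.1:8889") },
	"15s", "3s").ShouldNot(o.Succeed())

g.By("checking podB can still connect to podC (secondary network still works)")
o.Eventually(func() error { return curl("pod-b", "192.0.2.3:8889") },
	"30s", "3s").Should(o.Succeed())
```

Asserting both the denied and the allowed path guards against the failure mode mentioned above, where MNP activation silently breaks all secondary networks.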
| }, "30s", "2s").Should(o.Succeed()) | ||
| } | ||
|
|
||
| func disablMultiNetworkPolicy(oc *exutil.CLI) { |
typo "disabl"
| g.By("enabling MultiNetworkPolicies on cluster") | ||
| enableMultiNetworkPolicy(oc) | ||
| defer disablMultiNetworkPolicy(oc) |
You're assuming that the feature is disabled by default in all clusters where the e2e suite runs, but that doesn't seem safe. I'd check to see if it's already enabled, and only do the "enable and defer disable" if it's not.
Sure, I added logic to store the initial useMultiNetworkPolicy state and restore it afterward.
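A sketch of that store-and-restore logic, assuming the flag is `spec.useMultiNetworkPolicy` on the `networks.operator.openshift.io/cluster` object (the field name comes from the commit message; the enable/disable helpers are the ones defined in this PR, with the typo fixed):

```go
// Record the initial state so the test leaves the cluster as it found it.
initial, err := oc.AsAdmin().Run("get").
	Args("networks.operator.openshift.io", "cluster",
		"-o", "jsonpath={.spec.useMultiNetworkPolicy}").Output()
o.Expect(err).NotTo(o.HaveOccurred())

if initial != "true" {
	enableMultiNetworkPolicy(oc)
	defer disableMultiNetworkPolicy(oc) // restore only what we changed
}
```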
Force-pushed 99e897d to fb04f7a, then fb04f7a to c7d0849.
/retest-required

@danwinship can you please take another look at this?

/retest-required
Force-pushed c7d0849 to cfcede3.
Test case uses the Cluster Network Operator field `useMultiNetworkPolicy` and a basic `deny-all-ingress` policy to ensure the feature is correctly set up. The fixture involves a macvlan net-attach-def and a network policy. Connectivity tests are implemented using an agnhost HTTP server and cURL. Signed-off-by: Andrea Panattoni <apanatto@redhat.com>
Force-pushed cfcede3 to 64404c5.
/retest-required

2 similar comments:

/retest-required

/retest-required
@zeeke: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests.
All the required tests are passing.
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: cgoncalves, zeeke.
@s1061123 @dougbtv @danwinship
s1061123 left a comment:
Almost everything seems to be good. Thanks!
```go
import (
	"fmt"
```
should be removed due to CI failure (unused import)
```yaml
annotations:
  k8s.v1.cni.cncf.io/policy-for: macvlan1-nad
spec:
  podSelector:
```
should we add policyTypes?
sure!
@s1061123 thanks for reviewing. Since this PR has been stale for a while, I opened an IPv6 / IPv4 version in:

I addressed your comment there. Please have a look.
Closing in favor of: