Cleanup monitoring resource files #1017
Conversation
/retest

/assign @matzew

/retest

@markusthoemmes could you review pls? :)
markusthoemmes
left a comment
Some drive-by comments. I was wondering if we should even create manifests in this case vs. applying the resources with the client "directly". I guess we save the fetch-if-not-found-update-else-create type logic.
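For what it's worth, a rough sketch of the lookup-then-create-or-update boilerplate that the manifest approach saves us, assuming a controller-runtime client; `reconcileService`, `c` and `desired` are illustrative names, not from this PR:

```go
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// reconcileService is a hypothetical helper showing the per-resource
// boilerplate needed when talking to the API server directly.
func reconcileService(ctx context.Context, c client.Client, desired *corev1.Service) error {
	existing := &corev1.Service{}
	key := types.NamespacedName{Name: desired.Name, Namespace: desired.Namespace}

	err := c.Get(ctx, key, existing)
	switch {
	case apierrors.IsNotFound(err):
		// Not found: create the desired resource.
		return c.Create(ctx, desired)
	case err != nil:
		return err
	default:
		// Found: overwrite the spec and update in place.
		existing.Spec = desired.Spec
		return c.Update(ctx, existing)
	}
}
```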
	RoleRef: rbacv1.RoleRef{
		APIGroup: "rbac.authorization.k8s.io",
		Kind:     "Role",
		Name:     "knative-serving-prometheus-k8s",

Suggested change:
-	Name: "knative-serving-prometheus-k8s",
+	Name: role.Name,
	path := os.Getenv(envVar)
	if path == "" {
		return defaultVal

	if *smManifest, err = smManifest.Transform(mf.InjectOwner(srv)); err != nil {
Why not use the instance as an owner here too? Both are created from instance technically. The ServiceMonitor isn't owned by the service.
That way you could prevent the interim fetching and also create just one manifest as with the RBAC stuff.
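A minimal sketch of that idea, assuming `instance` is the reconciled custom resource and the monitoring resources are bundled into a single manifestival manifest; the helper name and parameters are illustrative, not from this PR:

```go
import (
	mf "github.com/manifestival/manifestival"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// applyMonitoringResources is a hypothetical helper: the instance owns every
// monitoring resource, so one Transform/Apply pass covers them all and no
// interim fetch of the Service is needed.
func applyMonitoringResources(instance mf.Owner, ns string, client mf.Client, resources []unstructured.Unstructured) error {
	manifest, err := mf.ManifestFrom(mf.Slice(resources), mf.UseClient(client))
	if err != nil {
		return err
	}
	manifest, err = manifest.Transform(
		mf.InjectOwner(instance), // instance owns Service, ServiceMonitor and RBAC alike
		mf.InjectNamespace(ns),
	)
	if err != nil {
		return err
	}
	return manifest.Apply()
}
```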
One way to do this is to have the service depend on the instance deployment and the service monitor on the service: when the deployment is gone there is nothing to scrape from, so the service goes away, and then the service monitor, which depends on the existence of the service, goes away too. This is how it was done on the operator-sdk side.
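A hedged illustration of such an ownership chain with plain owner references; `deployment`, `svc` and `sm` are illustrative variables and the monitoring import path is an assumption, not taken from this PR:

```go
import (
	monitoringv1 "github.com/coreos/prometheus-operator/pkg/apis/monitoring/v1"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// chainOwnership is a hypothetical helper wiring deployment -> service ->
// servicemonitor, so deleting the deployment cascades via garbage collection.
func chainOwnership(deployment *appsv1.Deployment, svc *corev1.Service, sm *monitoringv1.ServiceMonitor) {
	svc.OwnerReferences = []metav1.OwnerReference{
		*metav1.NewControllerRef(deployment, appsv1.SchemeGroupVersion.WithKind("Deployment")),
	}
	sm.OwnerReferences = []metav1.OwnerReference{
		*metav1.NewControllerRef(svc, corev1.SchemeGroupVersion.WithKind("Service")),
	}
}
```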
Meh, I guess it's a detail anyway but I personally don't see a reason to build a requirement chain here as both resources are owned by the same thing, technically.
Ok, I can remove it, no problem for me.
What is happening here?
I thought about this because I was tempted to remove them completely. So I thought of two advantages:
@markusthoemmes comments addressed, let's see if all tests pass. At some point I need to add e2e tests for source metrics.

@skonto: The following test failed, say

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

infra

/retest
markusthoemmes
left a comment
/lgtm
/approve
Some fly-by comments that are really just nits, though. Feel free to unhold if you don't think we should change stuff.
/hold
		MatchLabels: map[string]string{"name": depName},
	},
}}
sm.Labels["name"] = sm.Name
Does the ServiceMonitor even need this label?
Added for consistency with the service labels as done in that operator framework.
		MatchNames: []string{ns},
	},
	Selector: metav1.LabelSelector{
		MatchLabels: map[string]string{"name": depName},
It'd be cool to "declaratively" connect this with the service above, for example like so:
selector := map[string]string{"name": depName}

sms := v1.Service{
	ObjectMeta: metav1.ObjectMeta{
		Name:      depName,
		Namespace: ns,
		Labels:    kmeta.UnionMaps(labels, selector),
	},
	Spec: v1.ServiceSpec{
		Ports: []v1.ServicePort{{
			Name:       "http-metrics",
			Port:       9090,
			TargetPort: intstr.FromInt(9090),
			Protocol:   "TCP",
		}},
		Selector: kmeta.CopyMap(labels),
	},
}

sm := monitoringv1.ServiceMonitor{
	ObjectMeta: metav1.ObjectMeta{
		Name:      depName,
		Namespace: ns,
		Labels:    kmeta.CopyMap(labels),
	},
	Spec: monitoringv1.ServiceMonitorSpec{
		Endpoints: []monitoringv1.Endpoint{{Port: "http-metrics"}},
		NamespaceSelector: monitoringv1.NamespaceSelector{
			MatchNames: []string{ns},
		},
		Selector: metav1.LabelSelector{
			MatchLabels: selector,
		},
	},
}

	ObjectMeta: metav1.ObjectMeta{
		Name:      depName,
		Namespace: ns,
		Labels:    kmeta.CopyMap(labels),
Any particular reason we apply the same labels here that we match on? Same for the ServiceMonitor. Do we need labels at all (modulo the one to match on for the ServiceMonitor)?
These labels come from the deployment, so I use them to tag the svc/sm too. It is also a way to filter resources from a CLI perspective.
They actually are the selector of the deployment though, right? So they'd be the same labels as on the pods.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: markusthoemmes, skonto

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
@markusthoemmes I will unhold this one and will update anything needed in another PR. I am planning to refactor more stuff anyway.

/unhold

WFM, thanks! 👍
Note: This is the first of a series of PRs to make things a bit more compact. There is still code that can be shared between knative-operator and openshift-knative-operator.