Registry daemonset #7596
Conversation
cmd.Flags().StringVar(&cfg.Credentials, "credentials", "", "Path to a .kubeconfig file that will contain the credentials the registry should use to contact the master.")
cmd.Flags().StringVar(&cfg.ServiceAccount, "service-account", cfg.ServiceAccount, "Name of the service account to use to run the registry pod.")
cmd.Flags().StringVar(&cfg.Selector, "selector", cfg.Selector, "Selector used to filter nodes on deployment. Used to run registries on a specific set of nodes.")
cmd.Flags().BoolVar(&cfg.DaemonSet, "daemonset", cfg.DaemonSet, "Use a daemonset instead of a deployment config.")
This is not specific to this pull, but do we have a plan for rolling out updates to daemonsets like DCs allow?
Orchestration of daemonset updates is in progress upstream at:
Force-pushed from a16537a to 5654f67
pkg/cmd/admin/registry/registry.go (outdated)
// which only supports DCs.
if cfg.DaemonSet {
	daemonSet := generateDaemonSet(podTemplate, name, label)
	objects = replaceDeploymentConfig(objects, daemonSet)
Can't we just append the desired object in both cases instead of replacing the unwanted one?
The referenced code for adding a service would need to change; it only supports DCs at this point. That isn't a huge deal in and of itself, but I hesitated to make the change. I can look into it though.
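For context, a minimal sketch of the replace approach under discussion, assuming a generated `objects []runtime.Object` list like the one the registry command builds. The helper name `replaceWithDaemonSet` and the `isDeploymentConfig` predicate are illustrative stand-ins, not the PR's actual code, and the import paths are current upstream ones rather than the vendored paths Origin used at the time.

```go
package example

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// replaceWithDaemonSet drops anything the predicate identifies as a
// DeploymentConfig from the generated object list and appends the
// DaemonSet in its place, which is roughly what the diff above does.
func replaceWithDaemonSet(objects []runtime.Object, ds *appsv1.DaemonSet, isDeploymentConfig func(runtime.Object) bool) []runtime.Object {
	kept := make([]runtime.Object, 0, len(objects))
	for _, obj := range objects {
		if isDeploymentConfig(obj) {
			continue // the DaemonSet replaces the DC
		}
		kept = append(kept, obj)
	}
	return append(kept, ds)
}
```

The append-only alternative suggested above would instead skip generating the DC entirely and append whichever workload object was requested, at the cost of teaching the service-generation code about DaemonSets.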
I'd really, really, really want all our commands to follow the upstream Complete-Validate-Run scheme. Since you're touching this command, can you please refactor it appropriately? I just did that with image-import in #7581.
sure, I'll take a look since I'm already refactoring it.
🎉
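As a rough illustration of the Complete-Validate-Run scheme requested above, a command following that convention typically takes the shape below. `RegistryOptions`, `NewCmdRegistry`, and the fields shown are placeholders for this sketch, not the real registry options struct, which carries many more settings (credentials, images, service account, ...).

```go
package example

import (
	"fmt"

	"github.com/spf13/cobra"
)

// RegistryOptions is an illustrative options struct for the sketch.
type RegistryOptions struct {
	DaemonSet bool
	Selector  string
}

// Complete derives any remaining values from flags and arguments.
func (o *RegistryOptions) Complete(cmd *cobra.Command, args []string) error {
	if len(args) != 0 {
		return fmt.Errorf("no arguments are expected")
	}
	return nil
}

// Validate checks the completed options before any work is done.
func (o *RegistryOptions) Validate() error {
	// real validation of the completed options would go here
	return nil
}

// Run performs the actual work using only completed, validated options.
func (o *RegistryOptions) Run() error {
	// generate the registry objects, then print or create them
	return nil
}

// NewCmdRegistry wires the three phases together in the cobra handler.
func NewCmdRegistry() *cobra.Command {
	o := &RegistryOptions{}
	cmd := &cobra.Command{
		Use: "registry",
		RunE: func(cmd *cobra.Command, args []string) error {
			if err := o.Complete(cmd, args); err != nil {
				return err
			}
			if err := o.Validate(); err != nil {
				return err
			}
			return o.Run()
		},
	}
	cmd.Flags().BoolVar(&o.DaemonSet, "daemonset", o.DaemonSet, "Use a daemonset instead of a deployment config.")
	cmd.Flags().StringVar(&o.Selector, "selector", o.Selector, "Selector used to filter nodes on deployment.")
	return cmd
}
```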
Force-pushed from 5654f67 to 63ef8d1
Force-pushed from bb87e04 to c5ce23d
[test]

LGTM. re-[test]

LGTM

[test]

[test]

[test]

rebased and re[test]

any other comments here? @deads2k would you mind taking a glance at the e2e additions?
pkg/cmd/admin/registry/registry.go (outdated)
}

// if we're using a daemon set we'll add it here and remove the DC.
// We allow the dc to be created above to get the goodness of app.AddServices
That doesn't look too bad to fix. Why wouldn't we make the update in that function?
test/cmd/admin.sh (outdated)
os::cmd::expect_success_and_text "oadm registry --daemonset -o yaml --credentials=${KUBECONFIG}" 'DaemonSet'
os::cmd::expect_success "oadm registry --daemonset --credentials=${KUBECONFIG} --images='${USE_IMAGES}'"
os::cmd::expect_success_and_text 'oadm registry --daemonset' 'service exists'
os::cmd::expect_success_and_text 'oc describe ds/docker-registry' 'Desired Number of Nodes Scheduled:\s*1'
oc get ds/docker-registry with a template will be more stable.
Minor comments. lgtm otherwise.

We should document the fact that there is no automatic rolling update functionality currently. Even the initial implementation discussed upstream will be in the client. Oh, and we don't even use

@smarterclayton do we have any plans for having process-based deployments for DaemonSets?

Rolling update is in the graduation list: kubernetes/kubernetes#15310
We need a "generic trigger" controller for image streams.

@deads2k updates for your comments in commit two. If that looks good I'll squash and merge. Thanks!
pkg/generate/app/pipeline.go (outdated)
// the labels on the pod template
selector := t.Spec.Template.Labels
if t.Spec.Selector != nil {
	selector = t.Spec.Selector.MatchLabels
I don't think we default in this direction though. The daemonset node selector can be destroyed via defaulting, but the pod template spec (the one that the service has to match) is never defaulted, right?
Right: in deployments/deploymentconfigs (and I imagine it's the same for daemonsets), you can have a nil selector and it will be defaulted to the pod template labels.
> I don't think we default in this direction though. The daemonset node selector can be destroyed via defaulting, but the pod template spec (the one that the service has to match) is never defaulted, right?

I've seen the error of my ways regarding node selection, but I still think we should base our service selector on the podtemplatespec labels, not the ds labels.
Still seems weird to me that your service selector would not be the exact same selector that the DS uses to determine what pods it is managing. If we create it based on pod labels the only weirdness that could happen is that a DS is still managing the pod but it no longer appears in the service. I don't think we can run into a scenario where a pod not managed by the DS shows up in the service without some deliberate updates to the service.
Will change.
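A small sketch of the direction settled on in this thread: derive the service selector from the pod template labels, with the DaemonSet's own selector only as a fallback. `serviceSelectorFor` is a hypothetical helper and the types are the current `k8s.io/api` ones, used here purely for illustration.

```go
package example

import appsv1 "k8s.io/api/apps/v1"

// serviceSelectorFor picks the label set a generated service should use.
// Preferring the pod template labels means the service matches the pods
// the DaemonSet actually creates, which is the conclusion reached above.
func serviceSelectorFor(ds *appsv1.DaemonSet) map[string]string {
	if labels := ds.Spec.Template.Labels; len(labels) > 0 {
		return labels
	}
	// Fallback only: generators normally always label the pod template.
	if ds.Spec.Selector != nil {
		return ds.Spec.Selector.MatchLabels
	}
	return nil
}
```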
updated
Force-pushed from 2be1972 to d19a633
squashed. Will merge on green unless there are any other comments.

doc'd in openshift/openshift-docs#2118
lgtm

Still LGTM.

re[test]

[merge]

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_origin/5944/) (Image: devenv-rhel7_4236)
Force-pushed from d19a633 to 899062e
Evaluated for origin merge up to 899062e

Evaluated for origin test up to 899062e

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/3915/)
Adds the ability to run the registry as a daemonset via the oadm command.
TODO:
- if this looks ok then we will add the output of this command as a template in contrib. This command is pretty complicated now. I don't think a template is high priority. Edit: confirmed that a template is not a high priority and not necessary for this PR.
- test-cmd tests to exercise the option

@aweiteka @legionus @miminar @soltysh