
Conversation

@Miciah
Contributor

@Miciah Miciah commented Nov 22, 2018

Import openshift/api/config/v1

pkg/asset: Add asset for ingress config

  • data/data/manifests/bootkube/cluster-ingress-01-crd.yaml: New file containing the ingresses.config.openshift.io CRD.
  • pkg/asset/manifests/ingress.go (Ingress): New asset.
    (Name): New method.
    (Dependencies): New method. Depend on InstallConfig.
    (Generate): New method. Generate files for the ingress config CRD and for a default ingress config CR with an ingress domain that is based on the cluster name and base domain from InstallConfig.
    (Files, Load): New methods.
  • pkg/asset/manifests/operators.go (Dependencies): Add dependency on the ingress asset.
    (Generate): Add the ingress asset's files to the file list.

Related to NE-100.


Depends on openshift/api#139.

@Miciah
Contributor Author

Miciah commented Nov 22, 2018

/hold
No sense testing till the openshift/api change is in.

@openshift-ci-robot openshift-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 22, 2018
@Miciah Miciah force-pushed the pkg-slash-asset-add-asset-for-ingress-config branch 2 times, most recently from d45c751 to 1a4b9da Compare November 26, 2018 16:30
@abhinavdahiya
Contributor

Read the ingress domain interactively or from the OPENSHIFT_INSTALL_INGRESS_DOMAIN environment variable.

This is a no-go. Why should the user be prompted for this? Why are the cluster name and base domain not enough? Also, any other customization should be supported by running `create manifests`, editing this asset on disk, then `create cluster`.

@Miciah
Contributor Author

Miciah commented Nov 26, 2018

Thanks! The latest push removes the interactive prompt for the ingress domain, removes OPENSHIFT_INSTALL_INGRESS_DOMAIN, and simply generates a manifest based on the cluster name and base domain.

@Miciah Miciah force-pushed the pkg-slash-asset-add-asset-for-ingress-config branch 4 times, most recently from d7d50df to d55a24f Compare November 26, 2018 22:44
@Miciah
Contributor Author

Miciah commented Nov 26, 2018

openshift/api#139 merged, so
/hold cancel

@openshift-ci-robot openshift-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 26, 2018
@abhinavdahiya
Contributor

/approve

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 26, 2018
@Miciah Miciah force-pushed the pkg-slash-asset-add-asset-for-ingress-config branch from d55a24f to 8898ab2 Compare November 27, 2018 01:32
@Miciah
Contributor Author

Miciah commented Nov 27, 2018

Latest push moves the ingresses.config.openshift.io CRD to data/data/manifests/bootkube/cluster-ingress-01-crd.yaml.

@Miciah Miciah force-pushed the pkg-slash-asset-add-asset-for-ingress-config branch from 8898ab2 to 0fd36f8 Compare November 27, 2018 01:36
@Miciah
Contributor Author

Miciah commented Nov 27, 2018

Fixed yaml indentation.

@wking
Member

wking commented Nov 27, 2018

0fd36f8 looks good to me. But stepping back, does this need to be in the installer code? Could the ingress operator use the usual approach to push the CRD and then push the config object when the operator comes up? If it's easier, I'm fine landing this as it stands and potentially pivoting to that approach later.

@wking
Member

wking commented Nov 27, 2018

e2e-aws included:

Nov 27 02:43:43.957: INFO: Warning: Failed to add role to e2e service account: Get https://nested-api.openshift.testing:6443/apis/rbac.authorization.k8s.io/v1/namespaces/e2e-tests-watch-q45zt/rolebindings: dial tcp 192.168.126.11:6443: connect: connection refused

/retest

@Miciah
Contributor Author

Miciah commented Nov 27, 2018

does this need to be in the installer code? Could the ingress operator use the usual approach to push the CRD and then push the config object when the operator comes up? If it's easier, I'm fine landing this as it stands and potentially pivoting to that approach later.

I'm not sure there is a usual approach to what we're doing in this PR:

  • This is apparently the first ever manifest for a resource in config.openshift.io.
  • This manifest has configuration that was set during installation and handled by openshift-ansible in OCP 3.x.
  • This manifest is not for a specific operator; it is configuration that will be read by cluster-openshift-apiserver-operator, cluster-ingress-operator, and possibly more.
  • The configuration includes a default value that is derived from other installer configuration, and the documentation you linked doesn't address non-static manifest files.

To this last point, I do see some .template.yaml files under data/data/manifests/ and code in pkg/asset/manifests/operators.go to instantiate these templates. Would generateBootKubeManifests in operators.go be the appropriate place to handle the manifests that this PR adds? Maybe the documentation to which you linked has fallen behind the installer's capabilities. On the other hand, operators.go looks like it has perhaps accidentally become a catch-all—at least the name makes me think I should be looking elsewhere for a home for a config.openshift.io manifest.

@wking
Member

wking commented Nov 27, 2018

  • This manifest is not for a specific operator; it is configuration that will be read by cluster-openshift-apiserver-operator, cluster-ingress-operator, and possibly more.

That's a fair point. But it seems to be in the cluster-ingress-operator wheelhouse. The OpenShift API server should be able to operate without ingress configured, but I don't see the ingress operator running without a configuration.

  • The configuration includes a default value that is derived from other installer configuration, and the documentation you linked doesn't address non-static manifest files.

I should push an update there. But does this help? Basically, the install-config you're using here is going to live in the cluster as a resource. So you could have your install-config -> ingress config logic live in your operator itself. When the operator comes up it would:

  1. Check for an existing ingress config.
    • If found, use that configuration.
    • If not found:
      1. Pull the install-config from cluster-config-v1 in the kube-system namespace.
      2. Generate the ingress config using the logic you have in this PR.
      3. Push that ingress config into the cluster for future ingress-operator runs, the OpenShift API server, and others who may be interested.

@wking
Member

wking commented Nov 27, 2018

I should push an update there.

Done with openshift/cluster-version-operator#59.

Also, with #731 landed, this needs a rebase to resolve vendoring conflicts.

@abhinavdahiya
Contributor

abhinavdahiya commented Nov 27, 2018

  • This manifest is not for a specific operator; it is configuration that will be read by cluster-openshift-apiserver-operator, cluster-ingress-operator, and possibly more.

That's a fair point. But it seems to be in the cluster-ingress-operator wheelhouse. The OpenShift API server should be able to operate without ingress configured, but I don't see the ingress operator running without a configuration.

  • The configuration includes a default value that is derived from other installer configuration, and the documentation you linked doesn't address non-static manifest files.

I should push an update there. But does this help? Basically, the install-config you're using here is going to live in the cluster as a resource. So you could have your install-config -> ingress config logic live in your operator itself. When the operator comes up it would:

  1. Check for an existing ingress config.

    • If found, use that configuration.

    • If not found:

      1. Pull the install-config from cluster-config-v1 in the kube-system namespace.
      2. Generate the ingress config using the logic you have in this PR.
      3. Push that ingress config into the cluster for future ingress-operator runs, the OpenShift API server, and others who may be interested.

The installer team has been planning to drop the install-config in the cluster as a source of configuration for operators. And I think we agreed that the installer will break the install-config into config.openshift.io objects that then represent the cluster configuration and push them into the cluster.

On the topic of reconciling the config.openshift.io objects, I think these are user-driven and only reconciled in special situations during update. @deads2k can add more, as @smarterclayton and he have been thinking more about it.

@wking
Member

wking commented Nov 27, 2018

And I think we agreed that the installer will break the install-config into config.openshift.io objects that then represent the cluster configuration and push them into the cluster.

I'm fine with this, although I'm not clear on the motivation. Maybe just that it's one less thing for in-cluster consumers to have to worry about when those consumers need to share configuration beyond what lives in the install config. Anyhow, this PR looks good to me as a step towards that goal; it just needs the rebase and a new `dep ensure` to clear the Gopkg.lock contention.

@Miciah Miciah force-pushed the pkg-slash-asset-add-asset-for-ingress-config branch from 0fd36f8 to 63a161c Compare November 27, 2018 17:37
@Miciah
Contributor Author

Miciah commented Nov 27, 2018

Rebased.

@wking
Member

wking commented Nov 27, 2018

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Nov 27, 2018
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abhinavdahiya, Miciah, wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details Needs approval from an approver in each of these files:
  • OWNERS [abhinavdahiya,wking]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot
Contributor

@Miciah: The following test failed, say /retest to rerun them all:

Test name Commit Details Rerun command
ci/prow/e2e-libvirt 63a161c link /test e2e-libvirt

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-merge-robot openshift-merge-robot merged commit a4b758d into openshift:master Nov 27, 2018
@ironcladlou
Contributor

I think something didn't go as planned with this PR. Here's what I'm seeing in new clusters:

$ oc get customresourcedefinitions | grep config.openshift.io
builds.config.openshift.io                                                                   2018-11-27T22:22:21Z
clusteroperators.config.openshift.io                                                         2018-11-27T22:21:55Z
clusterversions.config.openshift.io                                                          2018-11-27T22:21:55Z
images.config.openshift.io                                                                   2018-11-27T22:22:37Z
samplesresources.samplesoperator.config.openshift.io                                         2018-11-27T22:31:19Z
servicecertsigneroperatorconfigs.servicecertsigner.config.openshift.io                       2018-11-27T22:22:14Z

@wking
Member

wking commented Nov 27, 2018

I think something didn't go as planned with this PR...

Here's what I'm seeing:

$ jq -r .ignition_bootstrap wking/terraform.tfvars | jq '.storage.files[] | .path' | grep ingress
"/opt/tectonic/templates/cluster-ingress-01-crd.yaml"
"/opt/tectonic/manifests/cluster-ingress-02-config.yml"

Maybe the issue is /opt/tectonic/templates/cluster-ingress-01-crd.yaml -> /opt/tectonic/manifests/cluster-ingress-01-crd.yaml?

@Miciah
Contributor Author

Miciah commented Nov 27, 2018

Yeah, I found the same and am trying an install with that path corrected.
