Conversation

@dinhxuanvu (Member) commented Oct 10, 2022

User-facing Configuration API Discussion Kickoff

Signed-off-by: Vu Dinh vudinh@outlook.com

Which issue(s) this PR addresses:

Closes #

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress label (indicates that a PR should not merge because it is a work in progress) Oct 10, 2022
@openshift-ci openshift-ci bot requested review from copejon and pmtk October 10, 2022 17:04
openshift-ci bot (Contributor) commented Oct 10, 2022

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: dinhxuanvu
Once this PR has been reviewed and has the lgtm label, please assign fzdarsky for approval by writing /assign @fzdarsky in a comment. For more information, see The Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@dinhxuanvu (Member Author):

/cc @dhellmann Please take a look.

@dinhxuanvu dinhxuanvu changed the title from "[WIP] Introduce new config spec" to "[WIP] User-facing Configuration API Discussion Kickoff" Oct 10, 2022
Signed-off-by: Vu Dinh <vudinh@outlook.com>
openshift-ci bot (Contributor) commented Oct 10, 2022

@dinhxuanvu: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/e2e-openshift-conformance-sig-network | 9ebb35f | link | true | /test e2e-openshift-conformance-sig-network |
| ci/prow/e2e-openshift-conformance-sig-instrumentation | 9ebb35f | link | true | /test e2e-openshift-conformance-sig-instrumentation |
| ci/prow/e2e-openshift-conformance-sig-arch | 9ebb35f | link | true | /test e2e-openshift-conformance-sig-arch |
| ci/prow/e2e-openshift-conformance-sig-storage | 9ebb35f | link | true | /test e2e-openshift-conformance-sig-storage |
| ci/prow/e2e-openshift-conformance-sig-cli | 9ebb35f | link | true | /test e2e-openshift-conformance-sig-cli |
| ci/prow/images | 9ebb35f | link | true | /test images |
| ci/prow/periodics-images | 9ebb35f | link | true | /test periodics-images |
| ci/prow/e2e-openshift-conformance-sig-scheduling | 9ebb35f | link | true | /test e2e-openshift-conformance-sig-scheduling |
| ci/prow/e2e-rpm-install | 9ebb35f | link | true | /test e2e-rpm-install |
| ci/prow/e2e-openshift-conformance-sig-api-machinery | 9ebb35f | link | true | /test e2e-openshift-conformance-sig-api-machinery |
| ci/prow/e2e-openshift-conformance-sig-auth | 9ebb35f | link | true | /test e2e-openshift-conformance-sig-auth |
| ci/prow/test-unit | 9ebb35f | link | true | /test test-unit |
| ci/prow/e2e-openshift-conformance-sig-apps | 9ebb35f | link | true | /test e2e-openshift-conformance-sig-apps |
| ci/prow/periodic-ocp-4.12-images | 9ebb35f | link | true | /test periodic-ocp-4.12-images |
| ci/prow/e2e-reboot | 9ebb35f | link | true | /test e2e-reboot |
| ci/prow/e2e-openshift-conformance-sig-node | 9ebb35f | link | true | /test e2e-openshift-conformance-sig-node |

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@fzdarsky (Contributor) left a comment:

It's a good idea to weed out config params that are no longer needed or can be better auto-configured, and to rename for better UX. Just wondering what motivated the proposed structure / naming? To what extent does this align with OCP?

Components Components `json:"components"`
}

type Components struct {
Contributor:

We're currently using "Components" for the service components that we either embed into the binary (apiserver, kubelet, etc.) or host on the cluster (service-ca, openshift-dns, etc.), so this is overloading the term.

On the other hand, we'll need the ability to disable hosted components (the default CNI, CSI, Ingress, etc. implementations) so users can BYO, so we may want to consider this requirement from the start; a sketch follows below.
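
As a sketch only, per-component toggles might look like this; all field names here are hypothetical and not part of the PR:

```go
// HostedComponents sketches user-facing switches for the hosted
// components mentioned above; all names are hypothetical.
type HostedComponents struct {
	// Setting a field to false would disable MicroShift's default
	// implementation so users can bring their own.
	CNI     bool `json:"cni"`
	CSI     bool `json:"csi"`
	Ingress bool `json:"ingress"`
}
```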

Member Author:

We certainly don't have to use the Components term; it's something we picked arbitrarily. We can change it to Inputs to differentiate it from the other components.

Contributor:

Obviously you're grouping config params by some criterion, and the name should reflect that criterion. I would have expected you to group them by the fact that they pertain to the cluster (--> "ClusterConfig"?) rather than, say, being config parameters for the MicroShift instance/process.
Or is the intention to signal "this group is user-configurable" and not auto-configured?

Member Author:

I think the intention is "this is a group of user-configurable parameters".

// IP address pool for services.
// Currently, we only support a single entry here.
// This field is immutable after installation.
ServiceNetwork []string `json:"serviceNetwork"`
Contributor:

Why are ServiceNetwork and ClusterNetwork being modelled differently?

Member Author:

This is modelled after the Network type in OpenShift config v1.

Contributor:

Ok, I see ClusterNetworkEntry may also take a HostPrefix there (shape paraphrased below). Wondering when this would be relevant and whether it'd be useful for MicroShift, too?
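
For context, a paraphrase of the relevant shape in OCP config v1, abbreviated from openshift/api:

```go
// Paraphrased from the openshift/api config v1 Network type to show
// the shape under discussion; doc comments abbreviated.
type ClusterNetworkEntry struct {
	// CIDR is the IP block address pool for pods.
	CIDR string `json:"cidr"`
	// HostPrefix is the prefix size allocated to each node from the
	// CIDR. In multi-node OCP it bounds per-node pod IPs; whether it
	// helps a single-node MicroShift is the open question above.
	HostPrefix uint32 `json:"hostPrefix,omitempty"`
}

type NetworkSpec struct {
	ClusterNetwork []ClusterNetworkEntry `json:"clusterNetwork"`
	ServiceNetwork []string              `json:"serviceNetwork"`
}
```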

Contributor:

Both of these network fields are marked as immutable. Do we have that restriction for MicroShift, or do we need to allow them to change in case a host switches networks and ends up on one that conflicts with older settings here?

Contributor:

If you change something like the serviceNetwork, you will end up with a cluster that has services persisted in it that are not in a valid network range. Protection like this is part of the reason to unify the types.

Contributor:

We have 2 reasons we can't say these fields are immutable:

  • We can't keep people from editing the configuration file.
  • MicroShift hosts are portable and may transit between networks, so we're going to eventually need to protect against the case where the host lands on a network that conflicts with these internal networks.

That use case isn't a priority right now, but we can't ignore it.

// ServiceNodePortRange string `json:"serviceNodePortRange"`
// DNS string `json:"dns"`
// Domain string `json:"domain"`
// MTU string `json:"mtu"`
Contributor:

We'd need to add auto-discovery of this parameter if we don't want it to be user-configurable.

Member Author:

The default MTU is currently set to 1400, so we can keep that as the default.

Contributor:

@zshi-redhat @mangelajo does MTU have to be user-configurable or can we auto-detect it somehow?

Contributor:

@fzdarsky Until path MTU discovery is verified to work in ovn-kubernetes, we need the MTU to be configurable.

Contributor:

@benluddy @dgrisonnet MTU is stored on NetworkStatus. Are you sure you don't want the entire config objects as opposed to just the spec?

Contributor:

Sometimes PMTU discovery won't work because something on a remote network is broken and out of the customer's control. In this case, they need to be able to limit the MTU locally to work around the issue.

Member:

Using the NetworkStatus, which is supposed to be read-only in OCP, to specify configuration in MicroShift is not ideal and kind of confusing. I'd rather diverge from the existing config spec and do a conversion later on than bring in the status.

Contributor:

I agree with @dgrisonnet. MTU is an example of a way that MicroShift needs to be configurable that (right now) OCP does not.

DNS string `json:"dns"`
Domain string `json:"domain"`
MTU string `json:"mtu"`
// URL string `json:"url"`
Contributor:

This parameter was meant to be used to point to the Kube API service (for other instances to connect to), but is instead used to specify the IP and port this instance's API server binds to.

Member Author:

@benluddy Any comments on this?

Member:

but is instead used to specify the IP and port this instance's API server binds to.

I don't see why we would want that to be configurable; the address the API server binds to should always be the same.

The URL of the API server will also be present in the kubeconfig, so other instances can use the kubeconfig to connect to the server.

Contributor:

We can't assume that network addresses stay the same. MicroShift hosts can be autonomous devices that move between networks.

Hosts may have multiple NICs and multiple IPs.

Given those constraints, how do we ensure that the correct API service IP is used?

Member:

Then we need DNS if the IPs are dynamic and we can't trust them. Adding a route to reach the API server doesn't sound too bad if we need to support scenarios where hosts have multiple NICs and IPs and still need an immutable way to reach the API server.

Also, can you confirm that these constraints are coming from the initial customers of MicroShift? We are trying to come up with a minimal version of the config to avoid making options configurable that we don't want to support in the future but can't remove because some of the initial customers may already depend on them.

// ClusterCIDR string `json:"clusterCIDR"`
// ServiceCIDR string `json:"serviceCIDR"`
// ServiceNodePortRange string `json:"serviceNodePortRange"`
// DNS string `json:"dns"`
Contributor:

The IP of the ClusterDNS probably doesn't need to be user-configurable (it's hardcoded as the 10th IP of the ServiceNetwork), but we may want to keep it as an auto-configured value? (A sketch of that convention follows.)
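
A minimal sketch of that auto-configuration, assuming an IPv4 service network and a hypothetical helper name:

```go
package main

import (
	"fmt"
	"net"
)

// clusterDNSIP returns the conventional cluster DNS address: the 10th
// IP of the service network. Hypothetical helper for illustration;
// assumes an IPv4 CIDR wide enough that adding 10 doesn't overflow.
func clusterDNSIP(serviceCIDR string) (string, error) {
	_, ipnet, err := net.ParseCIDR(serviceCIDR)
	if err != nil {
		return "", err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return "", fmt.Errorf("IPv4 service network expected in this sketch")
	}
	dns := make(net.IP, len(ip))
	copy(dns, ip)
	dns[3] += 10 // 10th address in the range
	return dns.String(), nil
}

func main() {
	ip, _ := clusterDNSIP("10.43.0.0/16")
	fmt.Println(ip) // 10.43.0.10
}
```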

// DataDir string `json:"dataDir"`
//
// AuditLogDir string `json:"auditLogDir"`
// LogVLevel int `json:"logVLevel"`
Contributor:

How else would we configure the verbosity level?

Member Author:

Would it be acceptable to use a default value that is sufficient for most use cases, such as level 3 or 4?

Contributor:

Logging means overhead, and disk space may be very limited, so I'd assume some users will dial verbosity down by default and will want to dynamically increase it again when they need to troubleshoot.

Member:

We need to have a top-level log-level parameter for convenience purposes. In OCP we usually have those defined at the operator level, but in the case of MicroShift since we don't have any operator, I think such an option should be handled at the MicroShift level directly.

Would it be acceptable to use a default value that is sufficient for most use cases, such as level 3 or 4?

I don't think so; unless you are debugging, a verbosity level of 0 is enough for users. Also, in OCP the default is 0, so I think we should stick to that.

With regards to the actual name of this option as well as the values it can take, I think we should reuse the existing mechanism we have for OCP operators (abridged below) to keep it consistent with the existing products: https://github.com/openshift/api/blob/622889ac07cf3dd8787679af0d3176bf292b3185/operator/v1/types.go#L55-L62
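
For reference, the linked operator v1 type, abridged from openshift/api:

```go
// Abridged from openshift/api operator/v1 (types.go); MicroShift could
// reuse the same value set for consistency with OCP operators.
type LogLevel string

const (
	// Normal is the default: a terse log of what is happening.
	Normal LogLevel = "Normal"
	// Debug adds detail useful for debugging typical problems.
	Debug LogLevel = "Debug"
	// Trace adds verbose tracing of inner workings.
	Trace LogLevel = "Trace"
	// TraceAll turns on the maximum available verbosity.
	TraceAll LogLevel = "TraceAll"
)
```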

Contributor:

The log level is going to need to be configurable in MicroShift.

// AuditLogDir string `json:"auditLogDir"`
// LogVLevel int `json:"logVLevel"`
//
// Roles []string `json:"roles"`
Contributor:

We'll need this once we split control plane and node roles again, but until then this doesn't have to be user-configurable.

// ConfigFile string `json:"configFile"`
// DataDir string `json:"dataDir"`
//
// AuditLogDir string `json:"auditLogDir"`
Contributor:

Is the proposal to log these to stdout instead?

Contributor:

I don't think so. Can we get by with a well-known fixed location?

Contributor:

We can probably put it somewhere under /var/log, of course, assuming it's the user's responsibility to do log rotation and to ensure /var/log is on a dedicated partition. We'd at least want to allow users to disable logging, though.

Contributor:

Keeping the on-disk layout consistent with OCP reduces our documentation, training, and tooling load. It's not configurable on OCP; let's use the same locations.

Log rotation is handled in-process by the kube-apiserver.

When the config.openshift.io types are brought in, there's an option for audit logging levels. Disabling audit logging is accompanied by a warning that "if something fails, it's quite likely we won't be able to find it without audit logs".

//
// Roles []string `json:"roles"`
//
// NodeName string `json:"nodeName"`
Contributor:

We'll eventually need this override as host names cannot be guaranteed to be unique or stable.

Contributor:

It could be selected arbitrarily (even randomly) and persisted to address uniqueness and stability. Is uniqueness an important consideration for MicroShift? My understanding is that there would never be more than one node, so even a fixed node name would be fine. What happens if a user reconfigures the node name after first start?

@fzdarsky (Contributor) commented Oct 12, 2022:

We're starting with single node, because that's where 80% of the use cases are, but expect to need 3-node (HA) eventually. I guess we can cross that bridge when we get there, though.

Right now, I believe we need to address mainly the following scenarios:
a) A user has resource manifests selecting on node (name) on OCP that shall run unmodified on MicroShift, too. This could be handled by defaulting node name == host name (current behaviour).
b) The host changes its name during runtime, either as a result of a DHCP lease renewal or the device owner / device agent deliberately changing the name from the image defaults for better host identification after MicroShift booted for the first time.

Contributor:

Uniqueness will be important for fleet management, but do we have a separate cluster ID for that? Can we have MicroShift set the node name to a hardware device ID of some sort instead of asking users to configure it?

Contributor:

If we agree we can use a hardware ID, we'll want to leave the field in place for now and file a ticket to do the work to change it later.
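
A sketch of the idea, assuming /etc/machine-id as the hardware-ish ID and a hypothetical helper name; this is not MicroShift's current behavior:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodeName derives a stable node identity: an explicit config override
// wins; otherwise fall back to the machine ID so the name survives
// hostname changes. Hypothetical sketch only.
func nodeName(override string) (string, error) {
	if override != "" {
		return override, nil
	}
	id, err := os.ReadFile("/etc/machine-id")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(id)), nil
}

func main() {
	name, err := nodeName("")
	if err != nil {
		fmt.Println("fallback failed:", err)
		return
	}
	fmt.Println("node name:", name)
}
```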

//
// Cluster ClusterConfig `json:"cluster"`
//
// Manifests []string `json:"manifests"`
Contributor:

This is where the bootstrap / workload manifests live. Probably doesn't have to be user-configurable, but we'll need well-known locations (in /etc, /var) for different use cases.

Contributor:

Are the defaults, /etc/microshift/manifests and /usr/lib/microshift/manifests (https://github.com/openshift/microshift/pull/1011/files#diff-a3d824da3c42420cd5cbb0a4a2c0e7b5bfddd819652788a0596d195dc6e31fa5R29-R32), sufficient?

Contributor:

To start with, I think so, yes.
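
As a small sketch, the two well-known locations could be scanned in a fixed order; the precedence shown is an assumption, not something the PR specifies:

```go
// Manifest locations from the thread; precedence (later entries
// overriding earlier ones) is assumed here for illustration.
var defaultManifestDirs = []string{
	"/usr/lib/microshift/manifests", // image-provided defaults
	"/etc/microshift/manifests",     // host/admin additions
}
```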

// This parameter can be updated after the cluster is
// installed.
// +kubebuilder:validation:Pattern=`^([0-9]{1,4}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])-([0-9]{1,4}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])$`
ServiceNodePortRange string `json:"serviceNodePortRange,omitempty"`
@atiratree (Member) commented Oct 11, 2022.

@dinhxuanvu (Member Author) commented Oct 11, 2022:

We model the Network part of the config on the Network type in OCP config v1. That type's spec does have an ExternalIP config, which we can use if there is a need to support this feature in the future.
For now, we just rework the existing fields in the config into a new config that is aligned with the OCP config v1 API.

Member:

Ok. @fzdarsky @dhellmann, just note that if this is not required / does not get used in the future, we can disable the "openshift.io/ingress-ip" controller altogether in the config.

Contributor:

@zshi-redhat or @mangelajo do either of you have an opinion about @atiratree's question about ExternalIPNetworkCIDR?

Contributor:

@dhellmann sorry I missed this question.

Yes, ovn-kubernetes supports the k8s ExternalIP service type, and it can be used today with the prerequisite that the network infrastructure (outside the MicroShift cluster) routes traffic for the external IP address to the MicroShift cluster. However, we probably don't need ExternalIPNetworkCIDRs or AutoAssignCIDRs, since these two API fields are exposed by OpenShift operators, which are not supposed to run in a regular MicroShift deployment.
So we don't have external IP pool management in MicroShift due to the lack of operator support, but users can attach a selected external IP manually when creating/patching the k8s service (see the sketch below).
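
As an illustration of attaching an external IP by hand, a hedged sketch using the core v1 API; the address, selector, and function name are examples, not part of the PR:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// exampleService builds a Service spec carrying a manually selected
// external IP; routing that address to the MicroShift host is left to
// the surrounding network infrastructure.
func exampleService() corev1.ServiceSpec {
	return corev1.ServiceSpec{
		Selector:    map[string]string{"app": "example"}, // hypothetical label
		Ports:       []corev1.ServicePort{{Port: 80}},
		ExternalIPs: []string{"192.0.2.10"}, // documentation-range address
	}
}

func main() {
	fmt.Println(exampleService().ExternalIPs)
}
```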

Member:

In that case, I have created a PR that disables the controller so it doesn't have to run: #1058

// Currently, we only support a single entry here.
// This field is immutable after installation.
ServiceNetwork []string `json:"serviceNetwork"`

Contributor:

We need to add the MTU field here. It's a feature we added for one of our testing customers; sometimes, the other side of a remote network cannot do PMTU, and reducing the Pod's MTU helps as a workaround.
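
A sketch of what the requested field could look like in this draft's style; the field name, type, and placement here are assumptions:

```go
// NetworkConfig sketch with the requested MTU override (hypothetical
// placement and type).
type NetworkConfig struct {
	ServiceNetwork []string `json:"serviceNetwork"`
	// MTU caps the pod network MTU; lowering it works around remote
	// networks where path MTU discovery is broken.
	MTU uint32 `json:"mtu,omitempty"`
}
```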

@dhellmann (Contributor) left a comment:

I did not originally realize that this proposal was tied to moving the schema definition out of openshift/microshift and into openshift/api. We cannot do that, for several reasons including the fact that we intend to take this project back upstream and we don't want a community project bound to reviews in the OpenShift repo. So, let's focus on organizing the schema to align with OCP where it makes sense, and then bring the implementation of those changes into this repository.

type DebugConfig struct {
Pprof bool `json:"pprof"`
// Pprof bool `json:"pprof"`
Contributor:

What's the benefit of having profiling on by default? Can we quantify the overhead?
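
One way to keep the cost at zero unless asked for, sketched with the standard library; the wiring and function name are hypothetical:

```go
package main

import (
	"net/http"
	"net/http/pprof"
)

type DebugConfig struct {
	Pprof bool `json:"pprof"`
}

// registerDebugHandlers leaves profiling endpoints unregistered unless
// explicitly enabled, so the default configuration carries no overhead.
func registerDebugHandlers(mux *http.ServeMux, cfg DebugConfig) {
	if !cfg.Pprof {
		return
	}
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
}

func main() {
	mux := http.NewServeMux()
	registerDebugHandlers(mux, DebugConfig{Pprof: true})
	_ = http.ListenAndServe("127.0.0.1:6060", mux)
}
```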

@deads2k (Contributor) commented Oct 14, 2022:

I did not originally realize that this proposal was tied to moving the schema definition out of openshift/microshift and into openshift/api. We cannot do that, for several reasons including the fact that we intend to take this project back upstream and we don't want a community project bound to reviews in the OpenShift repo.

I do not agree. openshift/api is where the OpenShift API contracts that ship on an OCP cadence are managed and reviewed for consistency and maintainability over time. Simply wishing the project weren't subject to an external API review for maintainability isn't a compelling reason to avoid it. Even in this review we've seen cases where the current configuration for MicroShift doesn't actually reflect the reality of configuration changes that can be made over time. This should be discussed with staff-eng; I'll add it to the agenda. API reviews are not a barrier to getting things done. They provide consistency and longevity for project direction over time.

@deads2k (Contributor) commented Oct 14, 2022:

We have 2 reasons we can't say these fields are immutable:

  • We can't keep people from editing the configuration file.
  • MicroShift hosts are portable and may transit between networks, so we're going to eventually need to protect against the case where the host lands on a network that conflicts with these internal networks.

That use case isn't a priority right now, but we can't ignore it.

I don't think you understand. That's not a choice you get to make. Changing the service network after it has been established causes existing services to exist in the wrong network range. This in turn creates a situation where the cluster doesn't function when you start it again. You can pretend that it's possible to change it, but it's simply not something you can do and still maintain a functional cluster.
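
To make the failure mode concrete, a small sketch (hypothetical helper) of how persisted ClusterIPs end up stranded outside a changed service network:

```go
package main

import (
	"fmt"
	"net"
)

// orphanedServiceIPs returns persisted ClusterIPs that fall outside a
// new service network; any non-empty result means broken services.
func orphanedServiceIPs(newServiceCIDR string, clusterIPs []string) ([]string, error) {
	_, ipnet, err := net.ParseCIDR(newServiceCIDR)
	if err != nil {
		return nil, err
	}
	var orphaned []string
	for _, s := range clusterIPs {
		if ip := net.ParseIP(s); ip != nil && !ipnet.Contains(ip) {
			orphaned = append(orphaned, s)
		}
	}
	return orphaned, nil
}

func main() {
	// Services allocated from the old 10.43.0.0/16 range survive the
	// config edit but are invalid against a new 10.66.0.0/16 range.
	bad, _ := orphanedServiceIPs("10.66.0.0/16", []string{"10.43.0.1", "10.43.12.7"})
	fmt.Println(bad) // [10.43.0.1 10.43.12.7]
}
```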

@dinhxuanvu (Member Author):

Hey folks, thank you all for the valuable input and feedback. This PR has served its purpose of kicking off the conversation regarding the MicroShift config changes we wish to pursue. It's time to close it out and focus on the spinoff PRs.
