pkg/generator: Work in my namespace, not default #227

Merged
fanminshi merged 1 commit into operator-framework:master from eparis:any-namespace
May 11, 2018

Conversation

@eparis
Contributor

@eparis eparis commented May 4, 2018

This will make it, by default, watch for CRs in the same namespace as the
operator and it will create the busybox pods in that namespace as well.

Resolves #187

@fanminshi
Contributor

fanminshi commented May 4, 2018

@eparis In the issue #187, I'd suggest that we set the pod's namespace via an environment variable instead of via mounting a volume, e.g.:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: <your-operator>
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: <your-operator>
    spec:
      containers:
      - name: <your-operator>
        image: <your-operator-image>
        env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

In main.go:

namespace := os.Getenv("MY_POD_NAMESPACE")
...

As you can see, this approach generates less code than the volume-mount approach, and you don't have to read the namespace value from a local file, which is also susceptible to I/O errors; hence setting and getting the namespace from an env var seems simpler.

@fanminshi
Contributor

@eparis kindly ping?

@eparis
Contributor Author

eparis commented May 7, 2018

I'm at summit, but I hope to get this re-pushed today/tomorrow (along with the other 4 teeny tiny PRs I created yesterday on the plane).

@fanminshi
Contributor

@eparis sounds good!

@eparis
Contributor Author

eparis commented May 8, 2018

FYI, I pushed it with ENV

@fanminshi
Contributor

fanminshi commented May 9, 2018

After discussing this with the rest of the operator-sdk team, we decided that it is probably best to wrap the namespace lookup in an operator-sdk util function.

The reason behind that is to provide an opinionated way of getting the namespace, so that the user doesn't have to figure out the best way to get it.

Hence, I think it is best to have the following function MustGetNamespace inside operator-sdk/pkg/util/k8sutil.go:

// MustGetNamespace gets the namespace of the pod and panics if namespace is not found.
func MustGetNamespace() string {
	pn := os.Getenv(PodNameSpaceEnvVar)
	if len(pn) == 0 {
		panic("namespace cannot be empty")
	}
	return pn
}

Note: We don't want the namespace to be empty, because sdk.Watch() should watch changes from a single namespace. Also, we want to panic inside MustGetNamespace when the namespace is invalid, because the operator shouldn't proceed if the namespace is undetermined.

and the PodNameSpaceEnvVar in operator-sdk/pkg/util/constants.go:

// PodNameSpaceEnvVar is the constant for env variable POD_NAMESPACE
// which is the namespace that the pod is currently running in.
PodNameSpaceEnvVar = "POD_NAMESPACE"

And in <your-operator>/main.go:

namespace := k8sutil.MustGetNamespace()
...

Also use POD_NAMESPACE instead in the operator.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: <your-operator>
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: <your-operator>
    spec:
      containers:
      - name: <your-operator>
        image: <your-operator-image>
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

cc/ @hasbro17 @spahl @eparis

@eparis
Contributor Author

eparis commented May 10, 2018

pkg/generator/olm_catalog_tmpl.go uses MY_POD_NAMESPACE; should we standardize the env vars?

Is there a reason to use panic instead of just returning an error? main.go can handle the error and bail with a useful/meaningful message instead of a backtrace that is hostile to the user.

Should the description for PodNameSpaceEnvVar describe what it really does? A user "could" manually set the env var in the operator deployment and have the operator watch for CRs in a different namespace. Are you hoping to force users to always run the operator in the target ns or are we hoping to default it to do so while explaining to more sophisticated users how to configure things?

This will make it, by default, watch for CRs in the same namespace as the
operator and it will create the busybox pods in that namespace as well.

The watch namespace can be set by setting the WATCH_NAMESPACE env explicitly
in the operator deployment.

Resolves operator-framework#187
@fanminshi
Contributor

pkg/generator/olm_catalog_tmpl.go uses MY_POD_NAMESPACE should we standardize the ENV vars?

Yes, we should standardize the env vars. I realized that MY_POD_NAMESPACE is a bit informal compared to POD_NAMESPACE, so I think POD_NAMESPACE is a better naming choice. Good catch on this!

Is there a reason to use panic instead of just returning an error?

The reason behind the panic is to let the operator-sdk util dictate how the namespace is returned and handled. In this case, we want the operator to panic because it can't continue without a valid namespace. There shouldn't be a reason to try to recover from this case.

main.go can handle the error and bail with a useful/meaninful message instead of a backtrace that is hostile to the user.

If MustGetNamespace returned an error, the user might not know how to handle it. The user could log and bail as you described, which is what we want. However, the user might instead do something else, such as try to recover from the error, which is something we don't want. That's why I think we should panic inside MustGetNamespace instead of returning an error.

Should the description for PodNameSpaceEnvVar describe what it really does?

Yeah, feel free to modify the comment to whatever you think is best.

Are you hoping to force users to always run the operator in the target ns or are we hoping to default it to do so while explaining to more sophisticated users how to configure things?

We want to force the operator to watch only one ns, and this ns defaults to the namespace that the operator pod is running in. We probably won't explain the unrecommended modification to users for now, until we have a use case. The goal of operator-sdk is to provide an opinionated way of building operators.

Some references on above decision:

An operator and the resources it operates must be restricted to one namespace. This is the only reasonable way to manage a multi-tenant cluster and enforce RBAC and chargeback on operator resources.

ref: https://github.com/operator-framework/operator-lifecycle-manager/blob/master/Documentation/design/philosopy.md#design

The opinion that we have developed over time is that installing Operators in each namespace is the best choice, as it enables decoupled upgrades, namespace isolation for failure/monitoring, and differing API definitions.

By using Operator Lifecycle Management the intention is to simplify the management of these per-namespace Operators.

ref: #236 (comment)

@spahl
Contributor

spahl commented May 10, 2018

I think we can merge this as is; it is better than the current behavior. We can implement MustGetNamespace in a follow-up PR, as both functions are useful.

@fanminshi
Contributor

@eparis agreed with @spahl. We can merge it as is and improve upon it later. However, do you want to update the description of WatchNamespaceEnvVar to whatever you think is best? Then we can merge it afterward.

@fanminshi
Contributor

Merging. Let's add any improvements in future PRs if deemed necessary.

@fanminshi fanminshi merged commit 977a4b4 into operator-framework:master May 11, 2018
@eparis eparis deleted the any-namespace branch May 14, 2018 18:15