This repository was archived by the owner on Mar 3, 2025. It is now read-only.

(makefile) Polish targets #26

Merged
anik120 merged 1 commit into operator-framework:main from anik120:cleanup-makefile, Apr 11, 2023

Conversation

anik120 (Member) commented Apr 7, 2023

Closes #9

Comment thread: Makefile, lines +50 to +52

```makefile
generate: controller-gen ## Generate code and manifests.
	$(CONTROLLER_GEN) object:headerFile="hack/boilerplate.go.txt" paths="./..."
	$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
```
Member:

The reason these are separate is that one generates YAML and the other generates Go source code. I don't have an opinion either way, but I just want to make sure it's understood that the build and test targets will unnecessarily re-generate YAML, and the kustomize-based targets will unnecessarily re-generate Go code.

I think I combined these in my Taskfile experiment, just for the sake of simpler targets, but wanted to raise this in case you all care about the speed of the higher-level tasks.
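A sketch of the split being described (the target names here are illustrative, not the PR's actual ones): keeping Go generation and YAML generation in separate targets lets build/test depend only on the code side, and the kustomize-based targets only on the manifest side:

```makefile
# Hypothetical split; target names are illustrative, not taken from this PR.
.PHONY: generate-code generate-manifests generate

generate-code: controller-gen ## Generate Go source (deepcopy helpers) only.
	$(CONTROLLER_GEN) object:headerFile="hack/boilerplate.go.txt" paths="./..."

generate-manifests: controller-gen ## Generate YAML (CRDs, RBAC, webhooks) only.
	$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases

generate: generate-code generate-manifests ## Convenience target for both.
```

With this arrangement, build/test would depend on generate-code and the deploy targets on generate-manifests, avoiding the cross re-generation the comment mentions.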

anik120 (Member, Author):

I think we're all independently coming to the conclusion that

> build and test targets will unnecessarily re-generate YAML and the kustomize-based targets will unnecessarily re-generate Go code.

isn't that expensive when weighed against the benefit of a less cluttered Makefile (fewer targets).

At least that was my intention behind combining the two.

Collaborator:

I don't have a strong preference here, but I would lean toward having two distinct Makefile targets so the higher-level tasks stay fast. That may just be because I'm used to having the two distinct targets, but I am open to trying something new :)

Member:

Just a plug again for Taskfile, which totally solves this with its concept of internal tasks. If we adopt that, we could pretty easily split these up and maintain sanity.

everettraven (Collaborator) left a comment:

First round: a couple of changes I'd like to see, but I'm open to discussion on them.

Overall looks good!

Comment thread: Makefile
```makefile
ignore-not-found = true
endif
.PHONY: uninstall
uninstall: ## Undeploy CatalogSource controller and ApiServer from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
```
Collaborator:

If install creates a kind cluster, I would expect uninstall to clean up that cluster. That may mean we move the actual kubectl delete ... step into an intermediate target and make uninstall pretty much be:

Suggested change:

```diff
-uninstall: ## Undeploy CatalogSource controller and ApiServer from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
+uninstall: <intermediate> kind-cluster-cleanup ## Undeploy CatalogSource controller and ApiServer from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
```

anik120 (Member, Author):

I'm torn about this one. I'm imagining myself installing catalogd and another project, say rukpak, in the same cluster, but wanting to uninstall just one of them. I could modify the target to drop kind-cluster-cleanup for myself, but I'm not sure that's a good experience for others wanting to use the target.

Rukpak might not be the best example, because with oppy being the gathering place, all of these multi-project installations/uninstallations would be handled in oppy's Makefile. But for what it's worth, the install target runs kind-cluster-cleanup so that the old cluster is deleted before a new one is created.

Collaborator:

For me this is all about local development, so all the clusters I'm running at any point should be easy to blow away and recreate, and I shouldn't be relying on any long-running clusters.

I liked @joelanford's suggestion of adding a target that does a sort of re-deploy without blowing anything away, since that could be useful, but I would personally rather blow the entire kind cluster away and start fresh.

Another thing is, and maybe I'm being a bit naive, I don't think there would be much reason to run any process other than catalogd in the local dev cluster when we are developing for catalogd.

Using the example of oppy: if I'm testing stuff for oppy, I should be using oppy's Makefile and targets for local development. If I see an issue that occurs in catalogd, I would switch over, fix it, and verify it using the catalogd Makefile targets in a more isolated environment. Once it's fixed, I could hop back over to oppy's Makefile targets and pull in my catalogd changes to verify that nothing else needs to change in oppy to fix the issue.

In my head it's all about keeping individual components that don't depend on each other as isolated as possible, to speed up overall development time (I don't want to have to fiddle with getting rukpak or oppy to work if all I'm doing at the moment is making changes in catalogd).

All that being said, if you are strongly against this I am fine keeping it the way it is.

anik120 (Member, Author):

You know what, let's just blow it away during uninstall. Let's see, as we start using these targets, whether there's any wish for a different flavor :)

Updated the targets to have an undeploy and an uninstall, where uninstall is undeploy + kind-cluster-cleanup.

joelanford (Member):

+1 to @everettraven's suggestions on simplifying all the deploy, install, uninstall, undeploy stuff.

I personally never use those. My go-to is pretty much make run.

As a follow-on, we could also think about coming up with a re-deploy sort of make invocation, something that essentially does this:

```shell
## Rebuild the manifests/code, load the images into the existing cluster, re-apply the k8s manifests
make manifests build-containers kind-load install deploy

## Force re-creation of pods to pick up the newly re-built images.
kubectl delete pod -n catalog-system -l <selectorThatMatchesAllOurPods>
```

The nice part here is that it doesn't incur the cost of kind cluster recycling or the cert-manager reinstall/wait.

I've often written non-checked-in scripts in repos that do this sort of thing to speed up my cycle, but it would be good to make this available to everyone.

anik120 force-pushed the cleanup-makefile branch from 6e96d6a to 37979bd, April 7, 2023 19:01
anik120 (Member, Author) commented Apr 7, 2023

@joelanford @everettraven updated the commit to simplify the targets. This is what it looks like now:

```console
$ make help
Usage:
  make <target>

General
  help             Display this help.

Development
  clean            Remove binaries and test artifacts
  generate         Generate code and manifests.
  fmt              Run go fmt against code.
  vet              Run go vet against code.
  test-unit        Run tests.
  tidy             Update dependencies
  verify           Verify the current code generation and lint

Build
  build-controller  Build manager binary.
  build-server     Build api-server binary.
  run              Run a controller from your host.
  docker-build-controller  Build docker image with the controller manager.
  docker-push-controller  Push docker image with the controller manager.
  docker-build-server  Build docker image with the apiserver.
  docker-push-server  Push docker image with the apiserver.

Deploy
  kind-cluster     Standup a kind cluster
  kind-cluster-cleanup  Delete the kind cluster
  kind-load        Load the built images onto the local cluster
  install          Install local catalogd
  deploy           Deploy CatalogSource controller and ApiServer to the K8s cluster specified in ~/.kube/config.
  uninstall        Undeploy CatalogSource controller and ApiServer from the K8s cluster specified in ~/.kube/config.
  cert-manager     Deploy cert-manager on the cluster

Release
  release          Runs goreleaser for catalogd. By default, this will run only as a snapshot and will not publish any artifacts unless it is run with different arguments. To override the arguments, run with "GORELEASER_ARGS=...". When run as a github action from a tag, this target will publish a full release.

hack/tools:
  kind             Build a local copy of kind
  controller-gen   Build a local copy of controller-gen
  goreleaser       Build a local copy of goreleaser
  setup-envtest    Build a local copy of envtest
  kustomize        Build a local copy of kustomize
```

I'd started down the path of adding a re-install target, but realized the labels for our workloads need some reworking (what's currently there are generic/inconsistent labels like controller-manager: manager from the bootstrap). So it's probably best to leave that as a follow-on, like Joe said.

everettraven (Collaborator) left a comment:

Just some minor changes (removing the kustomize build config/crd ...)


anik120 force-pushed the cleanup-makefile branch 3 times, most recently from c00959d to db07910, April 7, 2023 20:34
anik120 (Member, Author) commented Apr 7, 2023

> Just some minor changes (removing the kustomize build config/crd ...)

@everettraven done.

everettraven (Collaborator) left a comment:

Typo: other than that it looks good to me! Once the typo is fixed I'll throw an approval on here.

Comment thread: Makefile (outdated)

```makefile
	$(KUSTOMIZE) build config/default | kubectl delete --ignore-not-found=true -f -

.PHONY: uninstall
uninstall: uninstall kind-cluster-cleanup ## Uninstall local catalogd
```
Collaborator:

Suggested change:

```diff
-uninstall: uninstall kind-cluster-cleanup ## Uninstall local catalogd
+uninstall: undeploy kind-cluster-cleanup ## Uninstall local catalogd
```

anik120 (Member, Author):

As soon as I saw this comment, my first question was "wait, how did I not catch this when I was testing these changes myself?", and then I realized the cluster was getting blown away anyway, so the undeploy there is redundant, lol.

And that highlights why I wasn't sure about blowing away the cluster in uninstall: at that point, uninstall and kind-cluster-cleanup are essentially the same target.

Collaborator:

Ah, good point. I think it makes sense to just leave it as kind-cluster-cleanup then.

As an aside, I do personally like the idea of it being install/uninstall from a UX perspective. It feels a bit weird to do make install to bring everything up and then make kind-cluster-cleanup to bring everything down (it's not the target I would have assumed in my head). Now that I think about it, (un)install doesn't even sound quite right, since we aren't technically just "installing" it but rather standing everything up. Maybe something like make up paired with make down makes more sense from a UX perspective? (This is all totally an aside and shouldn't prevent this from merging.)

anik120 (Member, Author):

Now that I'm thinking about it more, I feel like the ideal set of targets for me is:

```
deploy:    # deploy stuff
install:   docker-build-server docker-build-controller kind-load cert-manager deploy wait
undeploy:  # undeploy stuff
uninstall: undeploy, then wait for the namespace to be deleted
```

That keeps kind cluster creation and deletion separate from the install/uninstall workflow: you'd run make kind-cluster once, make install/make uninstall multiple times during your dev workflow, and make kind-cluster-cleanup once you're done.
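A minimal Makefile sketch of that proposal (the recipe bodies and the namespace wait are illustrative assumptions, not the PR's actual implementation):

```makefile
# Hypothetical sketch of the proposed install/uninstall split.
# Recipes and the namespace-wait step are assumptions, not from this PR.
.PHONY: deploy undeploy install uninstall

deploy: ## Apply the generated manifests to the current cluster.
	$(KUSTOMIZE) build config/default | kubectl apply -f -

undeploy: ## Delete the generated manifests from the current cluster.
	$(KUSTOMIZE) build config/default | kubectl delete --ignore-not-found=true -f -

install: docker-build-server docker-build-controller kind-load cert-manager deploy wait ## Full local install.

uninstall: undeploy ## Undeploy, then wait for the namespace to be deleted.
	kubectl wait --for=delete namespace/catalog-system --timeout=60s

# kind-cluster and kind-cluster-cleanup stay separate, so the cluster
# survives repeated install/uninstall cycles.
```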

anik120 (Member, Author):

And we'd add make reinstall as a follow-up to do what Joe does in his dev workflow.

Collaborator:

That all sounds good to me

anik120 (Member, Author):

@everettraven did that ☝🏽 and realized that it essentially gets us the reinstall workflow with make uninstall + make install :) I didn't add the target in this PR though, in case that conversation becomes larger than "sure, `reinstall: uninstall install`, lgtm", so I'll leave that for the follow-up.

Member:

I think this design might get blown up by competing interests in the future (e.g. generating a release manifest and install script, and then actually using those in dev and e2e so that we know they'll work when we cut a release).

But this seems good and useful for now. I will call out that this uses different, potentially conflicting terminology from what rukpak and operator-controller use (no judgement on which is right or wrong), but it's one of those inconsistencies to be aware of.

anik120 (Member, Author):

I'm guessing it's okay for us to reassess the terminology for the targets when those competing interests force us to refactor the design. (make help should help new contributors till then 🤞🏽)

everettraven (Collaborator) left a comment:

/lgtm

openshift-ci bot added the lgtm label (indicates that a PR is ready to be merged), Apr 10, 2023
openshift-ci bot removed the lgtm label, Apr 10, 2023
everettraven (Collaborator) left a comment:

/lgtm

openshift-ci bot added the lgtm label, Apr 10, 2023
Comment thread: Makefile
```makefile
KIND_VERSION ?= v0.15.0
CONTROLLER_TOOLS_VERSION ?= v0.10.0
GORELEASER_VERSION ?= v1.16.2
ENVTEST_VERSION ?= latest
```
Member:

Probably fine for now if CI is passing, but we likely want this to be pinned using our client-go dependency version. Something like this: https://github.com/operator-framework/rukpak/blob/218217cdcfb190acafa8e0f4fb8ccec56b22faaf/Makefile#L98

Member:

Oh, I see. We're using this both for the setup-envtest version and for the cluster version we ask setup-envtest to use. We should not do that. Using latest for setup-envtest is probably fine, but we should pin the server version like I linked above.
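The kind of mapping the linked rukpak Makefile performs can be sketched in shell (the sed pattern below is an assumption modeled on that approach, not copied verbatim): a client-go module version like v0.26.1 corresponds to the Kubernetes 1.26.x server series.

```shell
# Hypothetical sketch: map a k8s.io/client-go module version to the matching
# Kubernetes server series for setup-envtest. In a Makefile this would be
# wrapped in something like $(shell go list -m k8s.io/client-go | cut -d' ' -f2 | sed ...).
client_go_version="v0.26.1"  # assumed example value, not from this PR
server_version="$(printf '%s\n' "$client_go_version" | sed 's/^v0\.\([0-9]*\)\.[0-9]*$/1.\1.x/')"
echo "$server_version"  # the derived server series
```

Pinning the server version this way keeps envtest in lockstep with the client libraries without a second hand-maintained version variable.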

anik120 (Member, Author):

Introduced an ENVTEST_SERVER_VERSION variable.

openshift-ci bot removed the lgtm label, Apr 10, 2023
joelanford (Member):

/lgtm
/approve

openshift-ci bot added the lgtm label, Apr 11, 2023
anik120 merged commit bc7778c into operator-framework:main, Apr 11, 2023
Labels: lgtm (indicates that a PR is ready to be merged)

Successfully merging this pull request may close these issues: Makefile clean up

3 participants