(makefile) Polish targets #26
Conversation
| generate: controller-gen ## Generate code and manifests. | ||
| $(CONTROLLER_GEN) object:headerFile="hack/boilerplate.go.txt" paths="./..." | ||
| $(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases |
The reason these are separate is that one generates YAML and the other generates Go source code. I don't have an opinion either way, but I just want to make sure it's understood that the build and test targets will unnecessarily re-generate YAML and the kustomize-based targets will unnecessarily re-generate Go code.
I think I combined these in my Taskfile experiment stuff just for the sake of simpler targets, but I wanted to raise this in case you all care about the speed of the higher-level tasks.
I think we're all independently coming to the conclusion that "build and test targets will unnecessarily re-generate YAML and the kustomize-based targets will unnecessarily re-generate Go code" isn't that expensive when you weigh it against the pro of a less cluttered Makefile (fewer targets). At least, that was my intention behind combining the two.
I don't have a strong preference here, but I would lean more in the direction of having two distinct Makefile targets to keep the higher-level tasks fast. That may just be because I am used to having the two distinct targets, but I am open to trying something new :)
Just a plug again for Taskfile, which totally solves this with the concept of internal tasks. If we adopt that, we could pretty easily split these up and maintain sanity.
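For reference, the split layout being weighed against the combined target would look roughly like this. This is a sketch following the usual kubebuilder convention; the separate manifests target name is an assumption here, not what this PR merges:

.PHONY: generate
generate: controller-gen ## Generate Go source code (deepcopy methods).
	$(CONTROLLER_GEN) object:headerFile="hack/boilerplate.go.txt" paths="./..."

.PHONY: manifests
manifests: controller-gen ## Generate YAML manifests (CRDs, RBAC, webhooks).
	$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases

With the split, build/test targets would depend only on generate, and the kustomize-based targets only on manifests, which is the speed argument above.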
everettraven left a comment:
First round - a couple of changes I would like to see, but I am open to discussion on them. Overall looks good!
| ignore-not-found = true | ||
| endif | ||
| .PHONY: uninstall | ||
| uninstall: ## Undeploy CatalogSource controller and ApiServer from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion. |
If install creates a kind cluster, I would expect uninstall to clean up that cluster. That may mean we move the actual kubectl delete ... step into an intermediate target and make uninstall pretty much be:
| uninstall: ## Undeploy CatalogSource controller and ApiServer from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion. | |
| uninstall: <intermediate> kind-cluster-cleanup ## Undeploy CatalogSource controller and ApiServer from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion. |
I'm torn about this one. I'm imagining myself installing catalogd and another project, say rukpak, in the same cluster, but wanting to uninstall just one of those. I could modify the target to remove kind-cluster-cleanup for myself, but I'm not sure that's a good experience for others wanting to use that target.
Rukpak might not be the best example, because with oppy being the gathering place, all of these multi-project installations/uninstallations would be handled in oppy's Makefile. But for what it's worth, the install target has kind-cluster-cleanup so that the old cluster is deleted before a new one is created.
For me this is all about local development, so all the clusters I am running at any point should be easy to blow away and recreate, and I should not be relying on any long-running clusters.
I liked @joelanford's suggestion of adding a target that does a sort of re-deploy without blowing anything away, since that could be useful, but I would personally rather just blow the entire kind cluster away and start fresh.
Another thing is (and maybe I am being a bit naive): I don't think there would be much reason to run another process in the local dev cluster other than catalogd when we are developing for catalogd.
Using the example of oppy: if I am testing stuff for oppy, I should be using oppy's Makefile and targets for local development. If I see an issue that is occurring in catalogd, I would switch over, fix it, and verify it using the catalogd Makefile targets for a more isolated environment. Once it is fixed, I could hop back over to using oppy's Makefile targets and pull in my catalogd changes to verify that no changes need to be made in oppy to fix the issue.
In my head, it is all about keeping individual components that don't have dependencies on each other as isolated as possible to speed up overall development time (I don't want to have to fiddle with getting rukpak or oppy to work if all I am doing at the moment is making changes in catalogd).
All that being said, if you are strongly against this, I am fine keeping it that way.
You know what, let's just blow it out during uninstall. Let's see, as we start using these targets, whether there's any wish for a different flavor :)
Updated the targets to have an undeploy and an uninstall that is undeploy + kind-cluster-cleanup.
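Concretely, the split described above looks roughly like this (a sketch reconstructed from the targets discussed in this thread and the kustomize delete step quoted later in the diff):

.PHONY: undeploy
undeploy: ## Undeploy CatalogSource controller and ApiServer from the cluster
	$(KUSTOMIZE) build config/default | kubectl delete --ignore-not-found=true -f -

.PHONY: uninstall
uninstall: undeploy kind-cluster-cleanup ## Uninstall local catalogd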
+1 to @everettraven's suggestions on simplifying all the deploy, install, uninstall, undeploy stuff. I personally never use those. My go-to is pretty much the script below. As a follow-on, we could also think about coming up with a re-deploy sort of make invocation, something that essentially does this:

## Rebuild the manifests/code, load the images into the existing cluster, re-apply the k8s manifests
make manifests build-containers kind-load install deploy

## Force re-creation of pods to pick up newly re-built images.
kubectl delete pod -n catalog-system -l <selectorThatMatchesAllOurPods>

The nice part here is that it doesn't incur the cost of kind cluster recycling or cert-manager reinstall/wait. I've often written non-checked-in scripts in repos that do this sort of thing to speed up my cycle, but it would be good to make this available to everyone.
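Wrapped in a Makefile target, that workflow might look like the sketch below. The redeploy target name and the pod selector label are placeholders (the prerequisite target names come from the snippet above, not from the final Makefile in this PR):

## Hypothetical convenience target: rebuild, reload, redeploy, and bounce pods.
## The selector label is a placeholder; substitute one that matches all catalogd pods.
.PHONY: redeploy
redeploy: manifests build-containers kind-load install deploy
	kubectl delete pod -n catalog-system -l app.kubernetes.io/part-of=catalogd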
@joelanford @everettraven updated the commit to simplify the targets. This is what it looks like now:

$ make help
Usage:
make <target>
General
help Display this help.
Development
clean Remove binaries and test artifacts
generate Generate code and manifests.
fmt Run go fmt against code.
vet Run go vet against code.
test-unit Run tests.
tidy Update dependencies
verify Verify the current code generation and lint
Build
build-controller Build manager binary.
build-server Build api-server binary.
run Run a controller from your host.
docker-build-controller Build docker image with the controller manager.
docker-push-controller Push docker image with the controller manager.
docker-build-server Build docker image with the apiserver.
docker-push-server Push docker image with the apiserver.
Deploy
kind-cluster Standup a kind cluster
kind-cluster-cleanup Delete the kind cluster
kind-load Load the built images onto the local cluster
install Install local catalogd
deploy Deploy CatalogSource controller and ApiServer to the K8s cluster specified in ~/.kube/config.
uninstall Undeploy CatalogSource controller and ApiServer from the K8s cluster specified in ~/.kube/config.
cert-manager Deploy cert-manager on the cluster
Release
release Runs goreleaser for catalogd. By default, this will run only as a snapshot and will not publish any artifacts unless it is run with different arguments. To override the arguments, run with "GORELEASER_ARGS=...". When run as a github action from a tag, this target will publish a full release.
hack/tools:
kind Build a local copy of kind
controller-gen Build a local copy of controller-gen
goreleaser Build a local copy of goreleaser
setup-envtest Build a local copy of envtest
kustomize Build a local copy of kustomize

I'd started down the path of adding a …
everettraven left a comment:
Just some minor changes (removing the kustomize build config/crd ...).
@everettraven done.
everettraven left a comment:
Typo - other than that it looks good to me! Once the typo is fixed, I'll throw an approval on here.
| $(KUSTOMIZE) build config/default | kubectl delete --ignore-not-found=true -f - | ||
| .PHONY: uninstall | ||
| uninstall: uninstall kind-cluster-cleanup ## Uninstall local catalogd |
| uninstall: uninstall kind-cluster-cleanup ## Uninstall local catalogd | |
| uninstall: undeploy kind-cluster-cleanup ## Uninstall local catalogd |
As soon as I saw this comment, the first question that came to my head was "wait, how did I not catch this when I was testing these changes myself?" Then I realized the cluster was getting blown up anyway, so the undeploy there is redundant lol.
And that highlights why I wasn't sure about blowing up the cluster in uninstall: uninstall and kind-cluster-cleanup are essentially the same target at that point.
Ah, good point. I think it makes sense to just leave it as kind-cluster-cleanup then.
As an aside, I do personally like the idea of it being install/uninstall from a UX perspective. It feels a bit weird to do make install to bring everything up and then make kind-cluster-cleanup to bring everything down (it's not the target I would have assumed in my head). Now that I think about it, (un)install doesn't even sound quite right, since we aren't technically just "installing" it but rather standing everything up. Maybe something like make up paired with make down makes more sense from a UX perspective? (This is all totally an aside and shouldn't prevent this from merging.)
Now that I'm thinking about it more, I feel like the ideal set of targets for me is:

deploy: ## deploy stuff
install: docker-build-server docker-build-controller kind-load cert-manager deploy wait
undeploy: ## undeploy stuff
uninstall: undeploy ## and then wait for the namespace to be deleted

That way, kind cluster creation and deletion stay separate from the install/uninstall workflow. You'd make kind-cluster once, make install/make uninstall multiple times during your dev workflow, and make kind-cluster-cleanup once you're done.
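In shell terms, the dev loop described above would be something like this (a sketch, using the target names as proposed here):

make kind-cluster           # once, at the start of a dev session
make install                # build images, load them, deploy; repeat while iterating
make uninstall              # remove catalogd but keep the cluster running
make kind-cluster-cleanup   # once, when you're done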
And we'd add make reinstall as a follow-up to do what Joe does in his dev workflow.
That all sounds good to me.
@everettraven did that ☝🏽 and realized that it essentially gets us the reinstall workflow with make uninstall + make install :) I didn't add the target in this PR, though, in case that conversation becomes larger than "sure, reinstall: uninstall install, lgtm", so I'll leave that for the follow-up.
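The follow-up target floated here would be a one-liner, assuming the target names above (a sketch, not part of this PR):

.PHONY: reinstall
reinstall: uninstall install ## Tear catalogd down and stand it back up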
I think this design might get blown up by competing interests in the future (e.g. generate a release manifest and install script, and then actually use those in dev and e2e so that we know they'll work when we cut a release).
But this seems good and useful for now. I will call out that this is different and potentially conflicting terminology compared to what rukpak and operator-controller use (no judgement on either being right or wrong), but it's one of those inconsistency things to be aware of.
I'm guessing that it's okay for us to reassess the terminology for the targets when those competing interests force us to refactor the design. (make help should help new contributors till then 🤞🏽)
| KIND_VERSION ?= v0.15.0 | ||
| CONTROLLER_TOOLS_VERSION ?= v0.10.0 | ||
| GORELEASER_VERSION ?= v1.16.2 | ||
| ENVTEST_VERSION ?= latest |
Probably fine for now if CI is passing, but we likely want this to be pinned using our client-go dependency version. Something like this: https://github.com/operator-framework/rukpak/blob/218217cdcfb190acafa8e0f4fb8ccec56b22faaf/Makefile#L98
Oh, I see. We're using this both for the setup-envtest version and for the cluster version we ask setup-envtest to use. We should not do that. Using latest for setup-envtest is probably fine, but we should pin the server version like I linked above.
Introduced an ENVTEST_SERVER_VERSION variable.
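A sketch of what that pinning could look like, modeled on the linked rukpak pattern; the exact shell pipeline here is an approximation, not the code merged in this PR:

## Derive the envtest server version from the pinned client-go dependency,
## e.g. "k8s.io/client-go v0.26.1" -> "1.26.1".
ENVTEST_SERVER_VERSION := $(shell go list -m k8s.io/client-go | cut -d' ' -f2 | sed 's/^v0\./1./')

test-unit: setup-envtest
	KUBEBUILDER_ASSETS="$(shell $(SETUP_ENVTEST) use $(ENVTEST_SERVER_VERSION) -p path)" go test ./...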
/lgtm
Closes #9