diff --git a/.github/workflows/sanity.yaml b/.github/workflows/sanity.yaml
new file mode 100644
index 00000000..808a9295
--- /dev/null
+++ b/.github/workflows/sanity.yaml
@@ -0,0 +1,32 @@
+name: sanity
+
+on:
+  workflow_dispatch:
+  pull_request:
+  push:
+    branches:
+      - main
+
+jobs:
+  verify:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+      - uses: actions/setup-go@v4
+        with:
+          go-version-file: "go.mod"
+      - name: Run verification checks
+        run: make verify
+  markdown:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v3
+
+      - name: Lint markdown files
+        uses: github/super-linter/slim@v4
+        env:
+          VALIDATE_ALL_CODEBASE: true
+          DEFAULT_BRANCH: main
+          # only runs the markdown linter
+          VALIDATE_MARKDOWN: true
\ No newline at end of file
diff --git a/Makefile b/Makefile
index 449cec18..e99ff837 100644
--- a/Makefile
+++ b/Makefile
@@ -66,7 +66,6 @@ test-unit: generate fmt vet setup-envtest ## Run tests.
 .PHONY: tidy
 tidy: ## Update dependencies
 	go mod tidy
-	(cd $(TOOLS_DIR) && go mod tidy)
 
 .PHONY: verify
 verify: tidy fmt generate ## Verify the current code generation and lint
diff --git a/README.md b/README.md
index a39349b9..fe345fa3 100644
--- a/README.md
+++ b/README.md
@@ -1,26 +1,20 @@
-# catalogd
+# Catalogd
 
-This repository is a prototype for a custom apiserver that uses a (dedicated ectd instance)[configs/etcd] to serve [FBC](https://olm.operatorframework.io/docs/reference/file-based-catalogs/#docs) content on cluster in a Kubernetes native way on cluster.
+Catalogd runs in a Kubernetes cluster and serves the content of [FBCs](https://olm.operatorframework.io/docs/reference/file-based-catalogs/) to clients.
 
+## Quickstart
-
-## Enhacement
-
-https://hackmd.io/@i2YBW1rSQ8GcKcTIHn9CCA/B1cMe1kHj
-
-## Quickstart. 
-
-```
-$ kind create cluster
-$ kubectl apply -f https://github.com/operator-framework/catalogd/config/crd/bases/
-$ kubectl apply -f https://github.com/operator-framework/catalogd/config/
-$ kubectl create ns test
-$ kubectl apply -f config/samples/catalogsource.yaml
+```bash
+$ make kind-cluster; make install; kubectl apply -f config/samples/core_v1beta1_catalogsource.yaml
+.
+.
+.
 
-$ kubectl get catalogsource -n test
+$ kubectl get catalogsource
 NAME                   AGE
 catalogsource-sample   98s
 
-$ kubectl get bundlemetadata -n test
+$ kubectl get bundlemetadata
 NAME                               AGE
 3scale-community-operator.v0.7.0   28s
 3scale-community-operator.v0.8.2   28s
@@ -40,7 +34,7 @@ flux.v0.15.3                                       1s
 .
 .
 
-$ kubectl get packages -n test
+$ kubectl get packages
 NAME                                     AGE
 3scale-community-operator                77m
 ack-apigatewayv2-controller              77m
@@ -62,6 +56,7 @@ ack-opensearchservice-controller         77m
 ```
 
 ## Contributing
+
 Thanks for your interest in contributing to `catalogd`!
 
 `catalogd` is in the very early stages of development and a more in depth contributing guide will come in the near future.
@@ -69,47 +64,39 @@ Thanks for your interest in contributing to `catalogd`!
 In the mean time, it is assumed you know how to make contributions to open source projects in general and this guide will only focus on how to manually test your changes (no automated testing yet).
 
 If you have any questions, feel free to reach out to us on the Kubernetes Slack channel [#olm-dev](https://kubernetes.slack.com/archives/C0181L6JYQ2) or [create an issue](https://github.com/operator-framework/catalogd/issues/new)
+
 ### Testing Local Changes
+
 **Prerequisites**
+
 - [Install kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
 
 **Local (not on cluster)**
+
 > **Note**: This will work *only* for the controller
+
 - Create a cluster:
+
 ```sh
-kind create cluster
+make kind-cluster
 ```
-- Install CRDs and run the controller locally:
+
+- Install CRDs and run the controller locally
+
 ```sh
 kubectl apply -f config/crd/bases/ && make run
 ```
 
 **On Cluster**
-- Build the images locally:
-```sh
-make docker-build-controller && make docker-build-server
-```
+
 - Create a cluster:
+
 ```sh
-kind create cluster
-```
-- Load the images onto the cluster:
-```sh
-kind load docker-image quay.io/operator-framework/catalogd-controller:latest && kind load docker-image quay.io/operator-framework/catalogd-server:latest
-```
-- Install cert-manager:
-```sh
- make cert-manager
-```
-- Install the CRDs
-```sh
-kubectl apply -f config/crd/bases/
-```
-- Deploy the apiserver, etcd, and controller:
-```sh
-kubectl apply -f config/
+make kind-cluster
 ```
-- Create the sample CatalogSource (this will trigger the reconciliation loop):
+
+- Install catalogd on cluster
+
 ```sh
-kubectl apply -f config/samples/catalogsource.yaml
+make install
 ```
diff --git a/config/crd/bases/catalogd.operatorframework.io_catalogsources.yaml b/config/crd/bases/catalogd.operatorframework.io_catalogsources.yaml
index 9b5c723f..a3638ddb 100644
--- a/config/crd/bases/catalogd.operatorframework.io_catalogsources.yaml
+++ b/config/crd/bases/catalogd.operatorframework.io_catalogsources.yaml
@@ -52,14 +52,79 @@ spec:
           status:
             description: CatalogSourceStatus defines the observed state of CatalogSource
             properties:
-              latestImagePoll:
-                description: The last time the image has been polled to ensure the
-                  image is up-to-date
-                format: date-time
-                type: string
-            required:
-            - latestImagePoll
+              conditions:
+                description: Conditions store the status conditions of the CatalogSource
+                  instances
+                items:
+                  description: "Condition contains details for one aspect of the current
+                    state of this API Resource. --- This struct is intended for direct
+                    use as an array at the field path .status.conditions. For example,
+                    \n type FooStatus struct{ // Represents the observations of a
+                    foo's current state. // Known .status.conditions.type are: \"Available\",
+                    \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge
+                    // +listType=map // +listMapKey=type Conditions []metav1.Condition
+                    `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\"
+                    protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }"
+                  properties:
+                    lastTransitionTime:
+                      description: lastTransitionTime is the last time the condition
+                        transitioned from one status to another. This should be when
+                        the underlying condition changed. If that is not known, then
+                        using the time when the API field changed is acceptable.
+                      format: date-time
+                      type: string
+                    message:
+                      description: message is a human readable message indicating
+                        details about the transition. This may be an empty string.
+                      maxLength: 32768
+                      type: string
+                    observedGeneration:
+                      description: observedGeneration represents the .metadata.generation
+                        that the condition was set based upon. For instance, if .metadata.generation
+                        is currently 12, but the .status.conditions[x].observedGeneration
+                        is 9, the condition is out of date with respect to the current
+                        state of the instance.
+                      format: int64
+                      minimum: 0
+                      type: integer
+                    reason:
+                      description: reason contains a programmatic identifier indicating
+                        the reason for the condition's last transition. Producers
+                        of specific condition types may define expected values and
+                        meanings for this field, and whether the values are considered
+                        a guaranteed API. The value should be a CamelCase string.
+                        This field may not be empty.
+                      maxLength: 1024
+                      minLength: 1
+                      pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
+                      type: string
+                    status:
+                      description: status of the condition, one of True, False, Unknown.
+                      enum:
+                      - "True"
+                      - "False"
+                      - Unknown
+                      type: string
+                    type:
+                      description: type of condition in CamelCase or in foo.example.com/CamelCase.
+                        --- Many .condition.type values are consistent across resources
+                        like Available, but because arbitrary conditions can be useful
+                        (see .node.status.conditions), the ability to deconflict is
+                        important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
+                      maxLength: 316
+                      pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
+                      type: string
+                  required:
+                  - lastTransitionTime
+                  - message
+                  - reason
+                  - status
+                  - type
+                  type: object
+                type: array
             type: object
         type: object
     served: true
     storage: true
+    subresources:
+      status: {}
diff --git a/pprof/README.md b/pprof/README.md
index 88d76435..ffb0c170 100644
--- a/pprof/README.md
+++ b/pprof/README.md
@@ -1,11 +1,11 @@
-## pprof
+# pprof
 
 This folder contains some profiles that can be read using [pprof](https://github.com/google/pprof) to show how the core kubernetes apiserver and the custom catalogd apiserver CPU & Memory utilization is affected by the creation and reconciliation of the sample `CatalogSource` CR found at `../config/samples/catalogsource.yaml`.
 
 Instead of providing static screenshots and losing the interactivity associated with these `pprof` profiles, each of the files with the extension `.pb` can be used to view the profiles that were the result of running `pprof` against the live processes.
 
 To view the `pprof` profiles in the most interactive way (or if you have no prior `pprof`experience) it is recommended to run:
-```
+```bash
 go tool pprof -http=localhost: somefile.pb
 ```
 
@@ -38,7 +38,7 @@ In this section, we will break down the differences between how the core kube-ap
 | cpu | 1.72s / 30s (5.73%) | 1.99s / 30.06s (6.62%) | 1720ms / 60.06s (2.86%) |
 
 The `Normalized Difference` Metric was evaluated by running:
-```
+```bash
 go tool pprof -http=localhost:6060 -diff_base=pprof/kubeapiserver_alone_cpu_profile.pb -normalize pprof/kubeapiserver_cpu_profile.pb
 ```
 This command will normalize the profiles to better compare the differences. In its simplest form this difference was calculated by `pprof/kubeapiserver_alone_cpu_profile.pb - pprof/kubeapiserver_cpu_profile.pb`
@@ -55,7 +55,7 @@ According to the `Normalized Difference`, there appears to be little to no diffe
 | alloc_objects | 19717785 | 33134306 | 102, 0.00052% of 19717785 total |
 
 The `Normalized Difference` Metric was evaluated by running:
-```
+```bash
 go tool pprof -http=localhost:6060 -diff_base=pprof/kubeapiserver_alone_heap_profile.pb -normalize pprof/kubeapiserver_heap_profile.pb
 ```
 This command will normalize the profiles to better compare the differences. In its simplest form this difference was calculated by `pprof/kubeapiserver_alone_heap_profile.pb - pprof/kubeapiserver_heap_profile.pb`
@@ -80,7 +80,7 @@ This section is being added as the pprof metrics don't necessarily show the whol
 **TLDR**: CPU utilization spike of 0.156 cores and settles ~0.011 cores above prior utilization. Memory consumption increase of 22Mi.
 
 This image shows the spike in CPU utilization and the increase in Memory consumption. In this scenario, the command:
-```
+```bash
 kubectl apply -f config/samples/catalogsource.yaml
 ```
 was run right at 1:44 PM.
@@ -90,7 +90,7 @@ The CPU spike lasted ~3 minutes and the values were:
 - 1:45PM (PEAK) - 0.223 cores
 - 1:47PM - 0.078 cores
 
-With this, we can see that without the catalogd apiserver the core kube-apiserver had a CPU utilization spike of 0.156 cores and then settled at ~0.011 cores above what the utilization was prior to the reconciliation of the sample `CatalogSource` CR. 
+With this, we can see that without the catalogd apiserver the core kube-apiserver had a CPU utilization spike of 0.156 cores and then settled at ~0.011 cores above what the utilization was prior to the reconciliation of the sample `CatalogSource` CR.
 
 The memory consumption increased over the span of ~3 minutes and then stabilized. The values were:
 - 1:44PM - 289Mi
@@ -101,13 +101,13 @@ With this, we can see that without the catalogd apiserve
 
 ### Core kube-apiserver with catalogd apiserver
 
-#### kube-apiserver:
+#### kube-apiserver
 ![kube-apiserver CPU and mem metric graph with custom apiserver](images/kubeapiserver_metrics.png)
 
 **TLDR**: CPU utilization spike of 0.125 cores and settles ~0.001 cores above prior utilization. Memory consumption increase of ~26Mi.
 
 This image shows the spike in CPU utilization and the increase in Memory consumption. In this scenario, the command:
-```
+```bash
 kubectl apply -f config/samples/catalogsource.yaml
 ```
 was run right at 3:06 PM
 
@@ -118,7 +118,7 @@ The CPU spike lasted ~3 minutes and the values were:
 - 3:08 PM (PEAK) - 0.215 cores
 - 3:09 PM - 0.091 cores
 
-With this, we can see that with the catalogd apiserver the core kube-apiserver had a CPU utilization spike of 0.125 cores and then settled at ~0.001 cores above what the utilization was prior to the reconciliation of the sample `CatalogSource` CR. 
+With this, we can see that with the catalogd apiserver the core kube-apiserver had a CPU utilization spike of 0.125 cores and then settled at ~0.001 cores above what the utilization was prior to the reconciliation of the sample `CatalogSource` CR.
 
 The memory consumption increased over the span of ~3 minutes and then stabilized. The values were:
 - 3:06PM - 337Mi
@@ -134,7 +134,7 @@ With this, we can see that with the catalogd apiserver h
 **TLDR**: potential increase of ~0.012 cores, but more likely ~0.002 cores. Memory consumption increase of ~0.1Mi
 
 This image shows the spike in CPU utilization and the increase in Memory consumption. In this scenario, the command:
-```
+```bash
 kubectl apply -f config/samples/catalogsource.yaml
 ```
 was run right at 3:06 PM
@@ -169,7 +169,7 @@ Overall, when running both the kube-apiserver and the catalogd apiserver the tot
 **TLDR**: CPU spike of 0.288 cores, settling ~0.003 cores above the previous consumption. Memory consumption of ~232.2Mi.
 
 This image shows the spike in CPU utilization and the increase in Memory consumption. In this scenario, the command:
-```
+```bash
 kubectl apply -f config/samples/catalogsource.yaml
 ```
 was run right at 3:06 PM
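Reviewer note (not part of the patch): the "spike"/"settles above prior" figures quoted in the `pprof/README.md` hunks above follow from simple subtraction against the pre-spike baseline. A minimal sketch of that arithmetic, under the assumption that the baselines (0.067 and 0.090 cores) are reconstructed as peak minus spike, since the baseline samples fall outside the hunks shown:

```python
# Sanity check of the spike/settle arithmetic in pprof/README.md.
# The baseline values are hypothetical reconstructions (peak - spike);
# the peak and settled samples are taken verbatim from the README hunks.

def spike_and_settle(baseline, peak, settled):
    """Return (spike, settle_delta) in cores, relative to the baseline."""
    return round(peak - baseline, 3), round(settled - baseline, 3)

# kube-apiserver alone (1:44 PM run): peak 0.223, settled 0.078
assert spike_and_settle(0.067, 0.223, 0.078) == (0.156, 0.011)

# kube-apiserver with catalogd (3:06 PM run): peak 0.215, settled 0.091
assert spike_and_settle(0.090, 0.215, 0.091) == (0.125, 0.001)
```

Both results match the TLDR figures quoted in the README (0.156/0.011 and 0.125/0.001 cores).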