This repository was archived by the owner on Mar 3, 2025. It is now read-only.
Closed
32 changes: 32 additions & 0 deletions .github/workflows/sanity.yaml
@@ -0,0 +1,32 @@
name: sanity

on:
  workflow_dispatch:
  pull_request:
  push:
    branches:
      - main

jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version-file: "go.mod"
      - name: Run verification checks
        run: make verify
  markdown:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Lint markdown files
        uses: github/super-linter/slim@v4
Member:
Repeating a question from the original PR, and adding another:

  1. Is this something that could easily be run locally (ala our other build tools that we pull/run under make targets)? If so, I'd prefer we follow that pattern to make our CI environment reproducible locally. (based on this, I don't see us being cool with npm and node as a local build tool dependency)
  2. What are we trying to catch with this? It's hard to tell from the markdownlint readme what value it brings. Reason I ask is because SDK's markdown linting is flaky (due to link checking) and is sometimes helpful but sometimes actively in the way.

Collaborator:

Is this something that could easily be run locally (ala our other build tools that we pull/run under make targets)? If so, I'd prefer we follow that pattern to make our CI environment reproducible locally. (based on this, I don't see us being cool with npm and node as a local build tool dependency)

+1 on not adding npm+node as a dependency. If we felt we needed this we could instead use the docker image like:

```makefile
$(CONTAINER_TOOL) run -v $(PWD):/workdir ghcr.io/igorshubovych/markdownlint-cli:latest "*.md"
```

where `CONTAINER_TOOL` is another Makefile variable. This should be fine since we are already requiring Docker usage in the Makefile at the moment. As an aside, it might be nice, as another follow-up, to allow the use of either Podman or Docker in our Makefile (not pressing, just something to think about).
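A sketch of how that could be wired into the Makefile — the `lint-docs` target name and the `CONTAINER_TOOL` default are illustrative assumptions, not part of this PR:

```makefile
# Illustrative sketch, not part of this PR.
# The container tool is overridable, e.g.: make lint-docs CONTAINER_TOOL=podman
CONTAINER_TOOL ?= docker

.PHONY: lint-docs
lint-docs: ## Lint markdown files in a container; no npm/node needed on the host
	$(CONTAINER_TOOL) run --rm -v $(PWD):/workdir ghcr.io/igorshubovych/markdownlint-cli:latest "*.md"
```

Because the tool runs in a container, CI and local runs use the exact same linter version, which is the reproducibility property asked about above.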

What are we trying to catch with this? It's hard to tell from the markdownlint readme what value it brings. Reason I ask is because SDK's markdown linting is flaky (due to link checking) and is sometimes helpful but sometimes actively in the way.

From what I can tell, rukpak enabled it to make the documentation more uniform by running the linter against it. I personally think that might be a bit of overkill for us right now, especially because we have no catalogd documentation at the moment. If we notice that inconsistent docs formatting is causing us major headaches down the road, it might be worth adding this, but until then I would vote for leaving it out.

Member Author:

It feels right to have at least some guard rails around the documents we add, though. In OLM we had a different experience without any guard rails being present. It's very easy for docs to get out of date; at a minimum, if a PR author is made aware that the changes introduced in the PR have broken a link mentioned in a doc, it forces the PR author to also update the doc with the PR.

The other fixes I had to make for this PR, for example, were all related to healthy doc-writing habits (spaces between sections, etc.) that make docs readable, so I'm +1 for them.

(Holding off on any changes to this PR until we're in agreement about adding/not adding this check. +1 on making sure the binary is downloadable and runnable via a make target so that PR authors can run it locally; I can look into that if we decide to move ahead with this.)

Member:

Using a docker image is definitely preferable to local npm+node, but it also feels a little dirty, tbh. @anik120 Do you know of any alternative markdown linters written in Go that we could use, following the same pattern as our other tools? That would be ideal.

Re: what does this get us, I think I saw:

  1. link checking
  2. some sort of "best practice" opinions on markdown formatting.

Not necessarily opposed, but it would be nice to enumerate what we're hoping to get out of this so that we aren't constantly chasing some linter rules over time that we don't actually care about.
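If the team does adopt it, the rule set is tunable per repo; a hypothetical `.markdownlint.yaml` scoped to the formatting concerns raised in this thread (the file name and rule selection here are illustrative, not part of this PR) might look like:

```yaml
# Illustrative sketch of a .markdownlint.yaml, not part of this PR.
default: true  # start from all default rules
MD013: false   # line length: frequently noisy, disable if it gets in the way
MD022: true    # headings surrounded by blank lines ("space between sections")
MD012: true    # no multiple consecutive blank lines
```

As far as I can tell, markdownlint itself focuses on formatting rules and does not do link checking, so the link-check flakiness mentioned above would come from a separate tool and wouldn't be inherited by adopting markdownlint alone.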

        env:
          VALIDATE_ALL_CODEBASE: true
          DEFAULT_BRANCH: main
          # only runs the markdown linter
          VALIDATE_MARKDOWN: true
1 change: 0 additions & 1 deletion Makefile
@@ -66,7 +66,6 @@ test-unit: generate fmt vet setup-envtest ## Run tests.
.PHONY: tidy
tidy: ## Update dependencies
	go mod tidy
	(cd $(TOOLS_DIR) && go mod tidy)

.PHONY: verify
verify: tidy fmt generate ## Verify the current code generation and lint
71 changes: 29 additions & 42 deletions README.md
@@ -1,26 +1,20 @@
# catalogd
# Catalogd

This repository is a prototype for a custom apiserver that uses a (dedicated ectd instance)[configs/etcd] to serve [FBC](https://olm.operatorframework.io/docs/reference/file-based-catalogs/#docs) content on cluster in a Kubernetes native way on cluster.
Catalogd runs in a Kubernetes cluster and serves the content of [FBCs](https://olm.operatorframework.io/docs/reference/file-based-catalogs/) to clients.

## Quickstart

## Enhacement

https://hackmd.io/@i2YBW1rSQ8GcKcTIHn9CCA/B1cMe1kHj

## Quickstart.

```
$ kind create cluster
$ kubectl apply -f https://github.com/operator-framework/catalogd/config/crd/bases/
$ kubectl apply -f https://github.com/operator-framework/catalogd/config/
$ kubectl create ns test
$ kubectl apply -f config/samples/catalogsource.yaml
```bash
$ make kind-cluster; make install; kubectl apply -f config/samples/core_v1beta1_catalogsource.yaml
.
.
.

$ kubectl get catalogsource -n test
$ kubectl get catalogsource
NAME AGE
catalogsource-sample 98s

$ kubectl get bundlemetadata -n test
$ kubectl get bundlemetadata
NAME AGE
3scale-community-operator.v0.7.0 28s
3scale-community-operator.v0.8.2 28s
@@ -40,7 +34,7 @@ flux.v0.15.3 1s
.
.

$ kubectl get packages -n test
$ kubectl get packages
NAME AGE
3scale-community-operator 77m
ack-apigatewayv2-controller 77m
@@ -62,54 +56,47 @@ ack-opensearchservice-controller 77m
```

## Contributing

Thanks for your interest in contributing to `catalogd`!

`catalogd` is in the very early stages of development, and a more in-depth contributing guide will come in the near future.

In the meantime, it is assumed you know how to contribute to open source projects in general, and this guide focuses only on how to manually test your changes (there is no automated testing yet).

If you have any questions, feel free to reach out to us on the Kubernetes Slack channel [#olm-dev](https://kubernetes.slack.com/archives/C0181L6JYQ2) or [create an issue](https://github.com/operator-framework/catalogd/issues/new).

### Testing Local Changes

**Prerequisites**

- [Install kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)

**Local (not on cluster)**

> **Note**: This will work *only* for the controller

- Create a cluster:

```sh
kind create cluster
make kind-cluster
```
- Install CRDs and run the controller locally:

- Install CRDs and run the controller locally

```sh
kubectl apply -f config/crd/bases/ && make run
```

**On Cluster**
- Build the images locally:
```sh
make docker-build-controller && make docker-build-server
```

- Create a cluster:

```sh
kind create cluster
```
- Load the images onto the cluster:
```sh
kind load docker-image quay.io/operator-framework/catalogd-controller:latest && kind load docker-image quay.io/operator-framework/catalogd-server:latest
```
- Install cert-manager:
```sh
make cert-manager
```
- Install the CRDs
```sh
kubectl apply -f config/crd/bases/
```
- Deploy the apiserver, etcd, and controller:
```sh
kubectl apply -f config/
make kind-cluster
```
- Create the sample CatalogSource (this will trigger the reconciliation loop):

- Install catalogd on cluster

```sh
kubectl apply -f config/samples/catalogsource.yaml
make install
```
@@ -52,14 +52,79 @@ spec:
          status:
            description: CatalogSourceStatus defines the observed state of CatalogSource
            properties:
              latestImagePoll:
                description: The last time the image has been polled to ensure the
                  image is up-to-date
                format: date-time
                type: string
            required:
            - latestImagePoll
              conditions:
                description: Conditions store the status conditions of the CatalogSource
                  instances
                items:
                  description: "Condition contains details for one aspect of the current
                    state of this API Resource. --- This struct is intended for direct
                    use as an array at the field path .status.conditions. For example,
                    \n type FooStatus struct{ // Represents the observations of a
                    foo's current state. // Known .status.conditions.type are: \"Available\",
                    \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge
                    // +listType=map // +listMapKey=type Conditions []metav1.Condition
                    `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\"
                    protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }"
                  properties:
                    lastTransitionTime:
                      description: lastTransitionTime is the last time the condition
                        transitioned from one status to another. This should be when
                        the underlying condition changed. If that is not known, then
                        using the time when the API field changed is acceptable.
                      format: date-time
                      type: string
                    message:
                      description: message is a human readable message indicating
                        details about the transition. This may be an empty string.
                      maxLength: 32768
                      type: string
                    observedGeneration:
                      description: observedGeneration represents the .metadata.generation
                        that the condition was set based upon. For instance, if .metadata.generation
                        is currently 12, but the .status.conditions[x].observedGeneration
                        is 9, the condition is out of date with respect to the current
                        state of the instance.
                      format: int64
                      minimum: 0
                      type: integer
                    reason:
                      description: reason contains a programmatic identifier indicating
                        the reason for the condition's last transition. Producers
                        of specific condition types may define expected values and
                        meanings for this field, and whether the values are considered
                        a guaranteed API. The value should be a CamelCase string.
                        This field may not be empty.
                      maxLength: 1024
                      minLength: 1
                      pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
                      type: string
                    status:
                      description: status of the condition, one of True, False, Unknown.
                      enum:
                      - "True"
                      - "False"
                      - Unknown
                      type: string
                    type:
                      description: type of condition in CamelCase or in foo.example.com/CamelCase.
                        --- Many .condition.type values are consistent across resources
                        like Available, but because arbitrary conditions can be useful
                        (see .node.status.conditions), the ability to deconflict is
                        important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
                      maxLength: 316
                      pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
                      type: string
                  required:
                  - lastTransitionTime
                  - message
                  - reason
                  - status
                  - type
                  type: object
                type: array
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}
22 changes: 11 additions & 11 deletions pprof/README.md
@@ -1,11 +1,11 @@
## pprof
# pprof

This folder contains some profiles that can be read using [pprof](https://github.com/google/pprof) to show how the CPU & memory utilization of the core Kubernetes apiserver and the custom catalogd apiserver is affected by the creation and reconciliation of the sample `CatalogSource` CR found at `../config/samples/catalogsource.yaml`.

Instead of providing static screenshots and losing the interactivity associated with these `pprof` profiles, each of the files with the extension `.pb` can be used to view the profiles that were the result of running `pprof` against the live processes.

To view the `pprof` profiles in the most interactive way (or if you have no prior `pprof` experience), it is recommended to run:
```
```bash
go tool pprof -http=localhost:<port> somefile.pb
```

@@ -38,7 +38,7 @@ In this section, we will break down the differences between how the core kube-ap
| cpu | 1.72s / 30s (5.73%) | 1.99s / 30.06s (6.62%) | 1720ms / 60.06s (2.86%) |

The `Normalized Difference` Metric was evaluated by running:
```
```bash
go tool pprof -http=localhost:6060 -diff_base=pprof/kubeapiserver_alone_cpu_profile.pb -normalize pprof/kubeapiserver_cpu_profile.pb
```
This command will normalize the profiles to better compare the differences. In its simplest form, this difference was calculated by `pprof/kubeapiserver_cpu_profile.pb - pprof/kubeapiserver_alone_cpu_profile.pb`
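What `-normalize` buys can be sketched numerically. The toy function below is an assumption about the behavior, not pprof's actual implementation: the profile is scaled so its total matches the `-diff_base` profile's total before subtraction, so only *relative* shifts between functions survive (the function and sample names are made up for illustration):

```python
def normalized_diff(profile: dict, base: dict) -> dict:
    """Toy sketch of `pprof -diff_base=... -normalize` (assumed behavior):
    scale `profile` so its total matches `base`, then subtract `base`
    per function, leaving only relative differences."""
    scale = sum(base.values()) / sum(profile.values())
    return {
        fn: profile.get(fn, 0.0) * scale - base.get(fn, 0.0)
        for fn in set(profile) | set(base)
    }

# Fake CPU samples (seconds spent per function); names are illustrative.
alone = {"serveHTTP": 0.50, "etcdSync": 0.25}          # 0.75s total
with_catalogd = {"serveHTTP": 1.00, "etcdSync": 0.50}  # 1.50s total

# Totals doubled, but the *shape* is identical, so every entry normalizes to 0:
print(normalized_diff(with_catalogd, alone))
```

Under this reading, a near-zero `Normalized Difference` in the table above means the apiserver's per-function CPU shape barely changed, even where absolute totals differ.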
@@ -55,7 +55,7 @@ According to the `Normalized Difference`, there appears to be little to no diffe
| alloc_objects | 19717785 | 33134306 | 102, 0.00052% of 19717785 total |

The `Normalized Difference` Metric was evaluated by running:
```
```bash
go tool pprof -http=localhost:6060 -diff_base=pprof/kubeapiserver_alone_heap_profile.pb -normalize pprof/kubeapiserver_heap_profile.pb
```
This command will normalize the profiles to better compare the differences. In its simplest form, this difference was calculated by `pprof/kubeapiserver_heap_profile.pb - pprof/kubeapiserver_alone_heap_profile.pb`
@@ -80,7 +80,7 @@ This section is being added as the pprof metrics don't necessarily show the whol
**TLDR**: CPU utilization spike of 0.156 cores and settles ~0.011 cores above prior utilization. Memory consumption increase of 22Mi.

This image shows the spike in CPU utilization and the increase in Memory consumption. In this scenario, the command:
```
```bash
kubectl apply -f config/samples/catalogsource.yaml
```
was run right at 1:44 PM.
@@ -90,7 +90,7 @@ The CPU spike lasted ~3 minutes and the values were:
- 1:45PM (PEAK) - 0.223 cores
- 1:47PM - 0.078 cores

With this, we can see that without the catalogd apiserver the core kube-apiserver had a CPU utilization spike of 0.156 cores and then settled at ~0.011 cores above what the utilization was prior to the reconciliation of the sample `CatalogSource` CR.
With this, we can see that without the catalogd apiserver the core kube-apiserver had a CPU utilization spike of 0.156 cores and then settled at ~0.011 cores above what the utilization was prior to the reconciliation of the sample `CatalogSource` CR.

The memory consumption increased over the span of ~3 minutes and then stabilized. The values were:
- 1:44PM - 289Mi
@@ -101,13 +101,13 @@ With this, we can see that without the catalogd apiserver the core kube-apiserve

### Core kube-apiserver with catalogd apiserver

#### kube-apiserver:
#### kube-apiserver
![kube-apiserver CPU and mem metric graph with custom apiserver](images/kubeapiserver_metrics.png)

**TLDR**: CPU utilization spike of 0.125 cores and settles ~0.001 cores above prior utilization. Memory consumption increase of ~26Mi.

This image shows the spike in CPU utilization and the increase in Memory consumption. In this scenario, the command:
```
```bash
kubectl apply -f config/samples/catalogsource.yaml
```
was run right at 3:06 PM
@@ -118,7 +118,7 @@ The CPU spike lasted ~3 minutes and the values were:
- 3:08 PM (PEAK) - 0.215 cores
- 3:09 PM - 0.091 cores

With this, we can see that with the catalogd apiserver the core kube-apiserver had a CPU utilization spike of 0.125 cores and then settled at ~0.001 cores above what the utilization was prior to the reconciliation of the sample `CatalogSource` CR.
With this, we can see that with the catalogd apiserver the core kube-apiserver had a CPU utilization spike of 0.125 cores and then settled at ~0.001 cores above what the utilization was prior to the reconciliation of the sample `CatalogSource` CR.

The memory consumption increased over the span of ~3 minutes and then stabilized. The values were:
- 3:06PM - 337Mi
@@ -134,7 +134,7 @@ With this, we can see that with the catalogd apiserver the core kube-apiserver h
**TLDR**: potential increase of ~0.012 cores, but more likely ~0.002 cores. Memory consumption increase of ~0.1Mi

This image shows the spike in CPU utilization and the increase in Memory consumption. In this scenario, the command:
```
```bash
kubectl apply -f config/samples/catalogsource.yaml
```
was run right at 3:06 PM
@@ -169,7 +169,7 @@ Overall, when running both the kube-apiserver and the catalogd apiserver the tot
**TLDR**: CPU spike of 0.288 cores, settling ~0.003 cores above the previous consumption. Memory consumption of ~232.2Mi.

This image shows the spike in CPU utilization and the increase in Memory consumption. In this scenario, the command:
```
```bash
kubectl apply -f config/samples/catalogsource.yaml
```
was run right at 3:06 PM