13 changes: 6 additions & 7 deletions DEVELOPMENT.md
@@ -18,13 +18,13 @@ Next install our CAPMS provider into the cluster.
make push-to-capi-lab
```

Before creating a cluster, some manual steps are required: you need to allocate a node network.
Before creating a cluster, the control plane IP needs to be created:

```bash
make -C capi-lab node-network control-plane-ip
make -C capi-lab control-plane-ip
```
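
You can verify the allocation afterwards (the Makefile registers the IP under the name `$CLUSTER_NAME-vip`; the command below assumes `CLUSTER_NAME` is set accordingly in your shell):

```bash
# show the VIP that was just created
metalctl network ip list --name "$CLUSTER_NAME-vip"
```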

A basic cluster configuration that relies on `config/clusterctl-templates/cluster-template.yaml` and uses the aforementioned node network can be generated and applied to the management cluster using a make target.
A basic cluster configuration that relies on `config/clusterctl-templates/cluster-template-calico.yaml` and uses the aforementioned IP can be generated and applied to the management cluster using a make target.

```bash
make -C capi-lab apply-sample-cluster
@@ -84,7 +84,8 @@ If you want to test the local changes you made to the provider, run:

```bash
unset E2E_KUBECONFIG # ensure a new kind cluster is created
make docker-build-e2e test-e2e
# skip move tests as they won't have access to the docker image on your local machine
make docker-build-e2e test-e2e E2E_LABEL_FILTER="\!move"
```

This will automatically build and load your image.
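
The label filter accepts standard Ginkgo label expressions, so other subsets can be selected the same way. For instance, assuming the move tests carry a `move` label (as the `\!move` filter above suggests) and the image has been pushed to a registry the cluster can pull from, they could be run in isolation:

```bash
# run only the move tests (sketch; requires the image to be accessible to the cluster)
make test-e2e E2E_LABEL_FILTER="move"
```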
@@ -201,7 +202,6 @@ export METAL_API_URL=

export METAL_PARTITION=
export METAL_PROJECT_ID=
export METAL_NODE_NETWORK_ID=
export CONTROL_PLANE_IP=

export FIREWALL_MACHINE_IMAGE=
@@ -225,11 +225,10 @@ export project_name=
export tenant_name=
```

Create the project, node network, and control plane IP if needed:
Create the project and control plane IP if needed:

```bash
metalctl project create --name $project_name --tenant $tenant_name --description "Cluster API test project"
metalctl network allocate --description "Node network for $CLUSTER_NAME" --name $CLUSTER_NAME --project $METAL_PROJECT_ID --partition $METAL_PARTITION
metalctl network ip create --network internet --project $METAL_PROJECT_ID --name "$CLUSTER_NAME-vip" --type static -o template --template "{{ .ipaddress }}"
```
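
The allocated VIP can then be exported for the next steps; this uses the same template query as the capi-lab Makefile:

```bash
# capture the VIP for use as the control plane IP
export CONTROL_PLANE_IP=$(metalctl network ip list --name "$CLUSTER_NAME-vip" -o template --template '{{ .ipaddress }}')
```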

62 changes: 5 additions & 57 deletions README.md
@@ -17,7 +17,6 @@ Currently, we provide the following custom resources:

We plan to cover more resources in the future:

- Node Networks
- Complete Firewall Deployments using the [Firewall Controller Manager](https://github.com/metal-stack/firewall-controller-manager)
- Improved configuration suggestion of CNIs

@@ -62,28 +61,21 @@ clusterctl init --infrastructure metal-stack
> **Manual steps needed:**
> Due to the early development stage, manual actions are needed for the cluster to operate. Some metal-stack resources need to be created manually.

A node network needs to be created.
Allocate a VIP for the control plane.

```bash
export CLUSTER_NAME=<cluster-name>
export METAL_PARTITION=<partition>
export METAL_PROJECT_ID=<project-id>
metalctl network allocate --description "Node network for $CLUSTER_NAME" --name $CLUSTER_NAME --project $METAL_PROJECT_ID --partition $METAL_PARTITION

# export environment variable for use in the next steps
export METAL_NODE_NETWORK_ID=$(metalctl network list --name $CLUSTER_NAME -o template --template '{{ .id }}')
```

Allocate a VIP for the control plane.

```bash
export CONTROL_PLANE_IP=$(metalctl network ip create --network internet --project $METAL_PROJECT_ID --name "$CLUSTER_NAME-vip" --type static -o template --template "{{ .ipaddress }}")
```
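
A quick sanity check that the allocation succeeded (the variable is empty if the `metalctl` call failed):

```bash
[ -n "$CONTROL_PLANE_IP" ] && echo "control plane VIP: $CONTROL_PLANE_IP"
```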

For your first cluster, it is advised to start with our generated template. Ensure that the namespaced cluster name is unique within the metal-stack project.

```bash
# display required environment variables
clusterctl generate cluster $CLUSTER_NAME --infrastructure metal-stack --list-variables
clusterctl generate cluster $CLUSTER_NAME --infrastructure metal-stack --list-variables --flavor calico

# set additional environment variables
export CONTROL_PLANE_MACHINE_IMAGE=<machine-image>
@@ -94,7 +86,7 @@ export FIREWALL_MACHINE_IMAGE=
export FIREWALL_MACHINE_SIZE=<machine-size>

# generate manifest
clusterctl generate cluster $CLUSTER_NAME --kubernetes-version v1.32.9 --infrastructure metal-stack
clusterctl generate cluster $CLUSTER_NAME --kubernetes-version v1.32.9 --infrastructure metal-stack --flavor calico
```
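
To keep the manifest around for inspection or later edits, redirect the output to a file:

```bash
clusterctl generate cluster $CLUSTER_NAME --kubernetes-version v1.32.9 --infrastructure metal-stack --flavor calico > "$CLUSTER_NAME.yaml"
```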

Apply the generated manifest from the `clusterctl` output.
@@ -103,51 +95,7 @@ Apply the generated manifest from the `clusterctl` output.
kubectl apply -f <manifest>
```

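Provisioning takes a while. Progress can be watched with the standard Cluster API tooling (a sketch, assuming your kubeconfig points at the management cluster):

```bash
# inspect the cluster topology and its conditions
clusterctl describe cluster $CLUSTER_NAME

# watch the machines being provisioned
kubectl get machines -w
```
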
Once your control plane and worker machines have been provisioned, you need to install a CNI of your choice into the created cluster. This is required because Cluster API does not deploy a CNI itself. An example is provided below:

```bash
# get the kubeconfig
clusterctl get kubeconfig metal-test > capms-cluster.kubeconfig

# install the calico operator
kubectl --kubeconfig=capms-cluster.kubeconfig create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml

# install the calico CNI
cat <<EOF | kubectl --kubeconfig=capms-cluster.kubeconfig create -f -
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    bgp: Disabled
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 10.240.0.0/12
      encapsulation: None
    mtu: 1440
  cni:
    ipam:
      type: HostLocal
    type: Calico
EOF
```

Meanwhile, the `metal-ccm` has to be deployed for the machines to reach the `Running` phase. For this, use the [`config/target-cluster/metal-ccm.yaml` template](config/target-cluster/metal-ccm.yaml) and fill in the required variables.

```bash
export NAMESPACE=<namespace>
export CLUSTER_NAME=<cluster name>
cat config/target-cluster/metal-ccm.yaml | envsubst | kubectl --kubeconfig capms-cluster.kubeconfig apply -f -
```

If you want to provide services of type `LoadBalancer` through MetalLB managed by the `metal-ccm`, you need to deploy MetalLB:

```bash
kubectl --kubeconfig capms-cluster.kubeconfig apply --kustomize capi-lab/metallb
```
That's it!

## Frequently Asked Questions

7 changes: 5 additions & 2 deletions api/v1alpha1/metalstackcluster_types.go
@@ -36,6 +36,7 @@ const (
ClusterControlPlaneEndpointDefaultPort = 443

ClusterControlPlaneIPEnsured = "ClusterControlPlaneIPEnsured"
ClusterNodeNetworkEnsured = "ClusterNodeNetworkEnsured"
)

var (
@@ -55,7 +56,9 @@ type MetalStackClusterSpec struct {
ProjectID string `json:"projectID"`

// NodeNetworkID is the network ID in metal-stack in which the worker nodes and the firewall of the cluster are placed.
NodeNetworkID string `json:"nodeNetworkID"`
// If not provided, this will automatically be acquired during reconcile.
// +optional
NodeNetworkID *string `json:"nodeNetworkID,omitempty"`

// ControlPlaneIP is the ip address in metal-stack on which the control plane will be exposed.
// If this ip and the control plane endpoint are not provided, an ephemeral ip will automatically be acquired during reconcile.
@@ -165,6 +168,6 @@ func (c *MetalStackCluster) SetConditions(conditions []metav1.Condition) {
c.Status.Conditions = conditions
}

func (c *MetalStackCluster) GetClusterID() string {
func (c *MetalStackCluster) GetClusterName() string {
return fmt.Sprintf("%s.%s", c.GetNamespace(), c.GetName())
}
4 changes: 2 additions & 2 deletions api/v1alpha1/metalstackcluster_types_test.go
@@ -17,7 +17,7 @@ var _ = Describe("MetalStackCluster", func() {
},
}

clusterID := cluster.GetClusterID()
clusterID := cluster.GetClusterName()
Expect(utilvalidation.IsValidLabelValue(clusterID)).To(BeEmpty())
})

@@ -29,7 +29,7 @@
},
}

clusterID := cluster.GetClusterID()
clusterID := cluster.GetClusterName()
Expect(clusterID).To(Equal("some-namespace.my-cluster"))
})
})
5 changes: 5 additions & 0 deletions api/v1alpha1/zz_generated.deepcopy.go

Some generated files are not rendered by default.

6 changes: 0 additions & 6 deletions capi-lab/Makefile
@@ -59,17 +59,12 @@ controller:
kubectl --kubeconfig=$(KUBECONFIG) patch deployments.apps -n cap-metal-stack metal-stack-controller-manager --patch='{"spec":{"template":{"spec":{"containers":[{"name": "manager","imagePullPolicy":"IfNotPresent","image":"$(IMG)"}]}}}}'
kubectl --kubeconfig=$(KUBECONFIG) delete pod -n cap-metal-stack -l control-plane=metal-stack-controller-manager

.PHONY: node-network
node-network:
metalctl network allocate --description "node network for $(CLUSTER_NAME) cluster" --name $(CLUSTER_NAME) --project 00000000-0000-0000-0000-000000000001 --partition mini-lab

.PHONY: control-plane-ip
control-plane-ip:
metalctl network ip create --network internet-mini-lab --project $(METAL_PROJECT_ID) --name "$(CLUSTER_NAME)-vip" --type static -o template --template "{{ .ipaddress }}"

.PHONY: apply-sample-cluster
apply-sample-cluster:
$(eval METAL_NODE_NETWORK_ID = $(shell metalctl network list --name $(CLUSTER_NAME) -o template --template '{{ .id }}'))
$(eval CONTROL_PLANE_IP = $(shell metalctl network ip list --name "$(CLUSTER_NAME)-vip" -o template --template '{{ .ipaddress }}'))
echo $(CLUSTER_NAME)
clusterctl generate cluster $(CLUSTER_NAME) \
@@ -82,7 +77,6 @@

.PHONY: delete-sample-cluster
delete-sample-cluster:
$(eval METAL_NODE_NETWORK_ID = $(shell metalctl network list --name $(CLUSTER_NAME) -o template --template '{{ .id }}'))
$(eval CONTROL_PLANE_IP = $(shell metalctl network ip list --name "$(CLUSTER_NAME)-vip" -o template --template '{{ .ipaddress }}'))
clusterctl generate cluster $(CLUSTER_NAME) \
--kubeconfig=$(KUBECONFIG) \
@@ -31,7 +31,7 @@ metadata:
spec:
projectID: ${METAL_PROJECT_ID}
partition: ${METAL_PARTITION}
nodeNetworkID: ${METAL_NODE_NETWORK_ID}
nodeNetworkID: ${METAL_NODE_NETWORK_ID:=null}
controlPlaneIP: ${CONTROL_PLANE_IP}
firewallDeploymentRef:
name: ${CLUSTER_NAME}
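The `:=null` suffix relies on `clusterctl`'s bash-style default substitution: when `METAL_NODE_NETWORK_ID` is unset, the field renders as `null` and the controller allocates a node network itself during reconcile. A sketch of both modes (assuming the remaining required variables are exported):

```bash
# reuse an existing node network
export METAL_NODE_NETWORK_ID=<network-id>
clusterctl generate cluster $CLUSTER_NAME --infrastructure metal-stack

# or leave it unset and let the controller allocate one
unset METAL_NODE_NETWORK_ID
clusterctl generate cluster $CLUSTER_NAME --infrastructure metal-stack
```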
2 changes: 1 addition & 1 deletion config/clusterctl-templates/cluster-template-calico.yaml
@@ -29,7 +29,7 @@ metadata:
spec:
projectID: ${METAL_PROJECT_ID}
partition: ${METAL_PARTITION}
nodeNetworkID: ${METAL_NODE_NETWORK_ID}
nodeNetworkID: ${METAL_NODE_NETWORK_ID:=null}
controlPlaneIP: ${CONTROL_PLANE_IP}
firewallDeploymentRef:
name: ${CLUSTER_NAME}
@@ -25,7 +25,7 @@ metadata:
spec:
projectID: ${METAL_PROJECT_ID}
partition: ${METAL_PARTITION}
nodeNetworkID: ${METAL_NODE_NETWORK_ID}
nodeNetworkID: ${METAL_NODE_NETWORK_ID:=null}
controlPlaneIP: ${CONTROL_PLANE_IP}
firewallDeploymentRef:
name: ${CLUSTER_NAME}
2 changes: 1 addition & 1 deletion config/clusterctl-templates/cluster-template.yaml
@@ -25,7 +25,7 @@ metadata:
spec:
projectID: ${METAL_PROJECT_ID}
partition: ${METAL_PARTITION}
nodeNetworkID: ${METAL_NODE_NETWORK_ID}
nodeNetworkID: ${METAL_NODE_NETWORK_ID:=null}
controlPlaneIP: ${CONTROL_PLANE_IP}
firewallDeploymentRef:
name: ${CLUSTER_NAME}
@@ -104,8 +104,9 @@ spec:
- name
type: object
nodeNetworkID:
description: NodeNetworkID is the network ID in metal-stack in which
the worker nodes and the firewall of the cluster are placed.
description: |-
NodeNetworkID is the network ID in metal-stack in which the worker nodes and the firewall of the cluster are placed.
If not provided, this will automatically be acquired during reconcile.
type: string
partition:
description: Partition is the data center partition in which the resources
@@ -116,7 +117,6 @@
in which the associated metal-stack resources are created.
type: string
required:
- nodeNetworkID
- partition
- projectID
type: object
65 changes: 63 additions & 2 deletions internal/controller/metalstackcluster_controller.go
@@ -309,6 +309,26 @@ func (r *MetalStackClusterReconciler) metalStackFirewallToMetalStackCluster(log
}

func (r *clusterReconciler) reconcile() error {
nodeNetworkID, err := r.ensureNodeNetwork()
if err != nil {
conditions.Set(r.infraCluster, metav1.Condition{
Type: v1alpha1.ClusterNodeNetworkEnsured,
Status: metav1.ConditionFalse,
Reason: "InternalError",
Message: err.Error(),
})
return fmt.Errorf("unable to ensure node network: %w", err)
}
conditions.Set(r.infraCluster, metav1.Condition{
Type: v1alpha1.ClusterNodeNetworkEnsured,
Status: metav1.ConditionTrue,
Reason: "ClusterNodeNetworkEnsured",
Message: "Node network ensured",
})
r.infraCluster.Spec.NodeNetworkID = &nodeNetworkID

r.log.Info("reconciled node network", "network-id", nodeNetworkID)

if r.infraCluster.Spec.FirewallDeploymentRef != nil {
err := r.ensureFirewallDeployment()
if err != nil {
@@ -390,12 +410,53 @@ func (r *clusterReconciler) delete() error {
}
r.infraCluster.Spec.ControlPlaneIP = nil

err = r.deleteNodeNetwork()
if err != nil {
return fmt.Errorf("unable to delete node network: %w", err)
}
r.infraCluster.Spec.NodeNetworkID = nil

r.log.Info("deletion finished, removing finalizer")
controllerutil.RemoveFinalizer(r.infraCluster, v1alpha1.ClusterFinalizer)

return err
}

// ensureNodeNetwork returns the node network ID from the spec if it is already set,
// otherwise it allocates a new node network in the cluster's project and partition.
func (r *clusterReconciler) ensureNodeNetwork() (string, error) {
if r.infraCluster.Spec.NodeNetworkID != nil {
return *r.infraCluster.Spec.NodeNetworkID, nil
}

resp, err := r.metalClient.Network().AllocateNetwork(network.NewAllocateNetworkParams().WithBody(&models.V1NetworkAllocateRequest{
Projectid: r.infraCluster.Spec.ProjectID,
Partitionid: r.infraCluster.Spec.Partition,
Name: r.infraCluster.GetName(),
Description: fmt.Sprintf("%s/%s", r.infraCluster.GetNamespace(), r.infraCluster.GetName()),
Labels: map[string]string{
tag.ClusterID: r.infraCluster.GetClusterName(),
},
}).WithContext(r.ctx), nil)
if err != nil {
return "", fmt.Errorf("error creating node network: %w", err)
}

return *resp.Payload.ID, nil
}

// deleteNodeNetwork frees the node network referenced in the spec, if any.
func (r *clusterReconciler) deleteNodeNetwork() error {
if r.infraCluster.Spec.NodeNetworkID == nil {
return nil
}

_, err := r.metalClient.Network().FreeNetwork(network.NewFreeNetworkParams().WithID(*r.infraCluster.Spec.NodeNetworkID).WithContext(r.ctx), nil)
if err != nil {
return err
}
r.log.Info("deleted node network")

return nil
}

func (r *clusterReconciler) ensureFirewallDeployment() error {
fwdeploy := &v1alpha1.MetalStackFirewallDeployment{}
err := r.client.Get(r.ctx, types.NamespacedName{
@@ -445,12 +506,12 @@ func (r *clusterReconciler) ensureControlPlaneIP() (string, error) {

defaultNetwork := nwResp.Payload[0]
resp, err := r.metalClient.IP().AllocateIP(ipmodels.NewAllocateIPParams().WithBody(&models.V1IPAllocateRequest{
Description: fmt.Sprintf("%s control plane ip", r.infraCluster.GetClusterID()),
Description: fmt.Sprintf("%s control plane ip", r.infraCluster.GetClusterName()),
Name: r.infraCluster.GetName() + "-control-plane",
Networkid: defaultNetwork.ID,
Projectid: &r.infraCluster.Spec.ProjectID,
Tags: []string{
tag.New(tag.ClusterID, r.infraCluster.GetClusterID()),
tag.New(tag.ClusterID, r.infraCluster.GetClusterName()),
v1alpha1.TagControlPlanePurpose,
},
Type: ptr.To(models.V1IPBaseTypeEphemeral),