Merged
7 changes: 6 additions & 1 deletion .github/workflows/e2e-tests.yaml
Original file line number Diff line number Diff line change
@@ -79,6 +79,8 @@ jobs:
# the safest way to avoid race conditions between apps and infra, e.g. vault.

kubectl apply -f k8s/base
- name: "Wait for JAD infrastructure to be ready"
run: |-
kubectl wait --namespace edc-v \
--for=condition=ready pod \
--selector=type=edcv-infra \
@@ -93,9 +95,12 @@
sed -i "s/imagePullPolicy:.*Always/imagePullPolicy: Never/g" k8s/apps/identityhub.yaml

kubectl apply -f k8s/apps


- name: "Wait for JAD applications to be ready"
run: |-
# wait until all init jobs are done
kubectl wait --namespace edc-v \
--selector=type=edcv-job \
--for=condition=complete job --all \
--timeout=90s

85 changes: 69 additions & 16 deletions README.md
@@ -45,15 +45,29 @@ kind create cluster -n edcv --config kind.config.yaml --kubeconfig ~/.kube/edcv-
ln -sf ~/.kube/edcv-kind.conf ~/.kube/config # to use KinD's kubeconfig
```

#### 1.1 Option 1: Use pre-built images
Next, deploy the NGINX ingress controller:

There are pre-built images for all JAD apps available from [GHCR](https://github.com/Metaform/jad/packages). Those are
tested and we strongly recommend using them.
```shell
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/deploy-ingress-nginx.yaml
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=90s
```

### 2. Deploy applications

#### 2.1 Option 1: Use pre-built images

There are pre-built images for all JAD apps available from [GHCR](https://github.com/Metaform/jad/packages) and the
Connector Fabric Manager images are available from
the [CFM GitHub Repository](https://github.com/Metaform/connector-fabric-manager/packages). Those are tested and we
strongly recommend using them.

#### 1.2 Option 2: Build images from source
#### 2.2 Option 2: Build images from source

However, for the adventurous among us who want to build them from source, for example, because they've modified the code
and now want to see it in action, please follow the following steps:
and now want to see it in action, follow these steps to build and load the JAD apps:

- build Docker images:

@@ -62,14 +76,15 @@
```

This will build the Docker images for all components and store them in the local Docker registry. JAD requires a
special version of PostgreSQL,n particular, it installs the `wal2json` extension. You can create this special Postgres
special version of PostgreSQL; in particular, it installs the `wal2json` extension. You can create this special Postgres
version by running

```shell
docker buildx build -f launchers/postgres/Dockerfile --platform linux/amd64,linux/arm64 -t ghcr.io/metaform/jad/postgres:wal2json launchers/postgres
```

this will create the image `postgres:wal2json` for both amd64 and arm64 (e.g., Apple Silicon) architectures Add
this will create the image `postgres:wal2json` for both amd64 and arm64 (e.g., Apple Silicon) architectures. Add
platforms as needed.

- load images into KinD: KinD has no access to the host's docker context, so we need to load the images into KinD. Note
@@ -88,13 +103,39 @@ and now want to see it in action, please follow the following steps:
or if you're a bash God:

```shell
kind load docker-image -n edcv $(docker images --format "{{.Repository}}:{{.Tag}}" | grep '^ghcr.io/metaform/jad/')
kind load docker-image -n edcv $(docker images --format "{{.Repository}}:{{.Tag}}" | grep '^ghcr.io/metaform/jad.*:latest')
```
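The same one-liner can also be written with a pipe into `xargs`, which copes better when the image list is long or empty. This is a sketch, not part of the original guide; `xargs -r` (skip the command on empty input) is a GNU extension:

```shell
# Load all locally built JAD images into the KinD cluster named "edcv"
docker images --format "{{.Repository}}:{{.Tag}}" \
  | grep '^ghcr.io/metaform/jad.*:latest' \
  | xargs -r kind load docker-image -n edcv
```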

- modify the deployment manifests `controlplane.yaml`, `dataplane.yaml`, `identityhub.yaml`, `issuerservice.yaml` and
`postgres.yaml` and set `imagePullPolicy: Never` to force KinD to use the local images.
- build CFM docker images locally:
```shell
cd /path/to/cfm/
make load-into-kind
```
This builds all CFM components' Docker images and loads them into your KinD cluster, assuming the cluster is named
`edcv`. If not, pass the cluster name to the Makefile accordingly:
```
cd /path/to/cfm/
make load-into-kind KIND_CLUSTER_NAME=your_cluster_name
```
Note that individual `make` targets for all CFM components exist, for example `make load-into-kind-pmanager`.

### 2. Deploy the services
- modify the deployment manifests of the components you want to load locally by setting `imagePullPolicy: Never`,
which forces KinD to rely on local images rather than pulling them. This can be done with search-and-replace in your
favorite editor, or from the command line by running
```shell
sed -i "s/imagePullPolicy:.*Always/imagePullPolicy: Never/g" <FILENAME>
```
**CAUTION Mac users**: this requires GNU sed. By default, macOS ships with a BSD version of `sed`, so you will have
to [install GNU sed first](https://medium.com/@bramblexu/install-gnu-sed-on-mac-os-and-set-it-as-default-7c17ef1b8f64)
- For the EDC-V components, the relevant files are `controlplane.yaml`, `dataplane.yaml`, `identityhub.yaml` and
`issuerservice.yaml`
- as a simplification, and to modify the image pull policy of both EDC-V _and_ CFM components, run:
```shell
grep -rlZ "imagePullPolicy: Always" k8s/apps | xargs sed -i "s/imagePullPolicy:.*Always/imagePullPolicy: Never/g"
```
For this, both the EDC-V and CFM Docker images must be built locally.
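Before running the replacement across `k8s/apps`, you can sanity-check the `sed` expression on a throwaway file (GNU sed assumed, as noted above; the file content below is only an illustrative snippet):

```shell
# Create a throwaway manifest snippet and apply the same sed expression to it
tmp=$(mktemp)
printf 'image: ghcr.io/metaform/jad/controlplane:latest\nimagePullPolicy: Always\n' > "$tmp"
sed -i "s/imagePullPolicy:.*Always/imagePullPolicy: Never/g" "$tmp"
# Confirm the pull policy was flipped before touching the real manifests
grep -q "imagePullPolicy: Never" "$tmp" && echo "replacement OK"
rm -f "$tmp"
```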

### 3. Deploy the services

JAD uses plain Kubernetes manifests to deploy the services. All the manifests are located in the [k8s](./k8s) folder.
While it is possible to just use the Kustomize plugin and run `kubectl apply -k k8s/`, you may experience nasty race
@@ -123,7 +164,7 @@ kubectl wait --namespace edc-v \
Here's a copy-and-pasteable command to delete and redeploy everything:

```shell
kubectl delete -k k8s/ && \
kubectl delete -k k8s/; \
kubectl apply -f k8s/base && \
kubectl wait --namespace edc-v \
--for=condition=ready pod \
@@ -135,13 +176,16 @@ kubectl wait --namespace edc-v \
--timeout=90s
```

_Note: the `";"` after `kubectl delete -k k8s/` is intentional: it lets the chain continue even when the delete fails
because no resources are deployed yet._
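The behavioral difference between `;` and `&&` is easy to demonstrate: `;` runs the next command regardless of the previous exit code, while `&&` short-circuits on failure. A minimal illustration:

```shell
# ';' ignores the previous command's exit status:
false ; echo "after ';': this still runs"
# '&&' short-circuits, so its echo is skipped when the first command fails:
false && echo "after '&&': never printed" || echo "after '&&': echo was skipped"
```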

This deploys all the services in the correct order. The services are deployed in the `edc-v` namespace. Please verify
that everything was deployed correctly by running `kubectl get deployments -n edc-v`. This should output something like:

```text
NAME READY UP-TO-DATE AVAILABLE AGE
cfm-agents 1/1 1 1 117m
cfm-participant-manager 1/1 1 1 117m
cfm-provision-manager 1/1 1 1 117m
cfm-tenant-manager 1/1 1 1 117m
controlplane 1/1 1 1 117m
dataplane 1/1 1 1 117m
@@ -153,7 +197,7 @@ postgres 1/1 1 1 110m
vault 1/1 1 1 110m
```

### 3. Inspect your deployment
### 4. Inspect your deployment

- database: the PostgreSQL database is accessible from outside the cluster via
`jdbc:postgresql://postgres.localhost/controlplane`, username `cp`, password `cp`.
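For a quick connectivity check from the host, the JDBC URL above translates to a libpq-style URL that `psql` understands. This is a sketch: the uncommented lines only assemble and print the URL; the actual connection (commented out) requires `psql` to be installed and the cluster to be running:

```shell
# libpq-style equivalent of the JDBC URL above (cp/cp are the documented credentials)
DB_URL="postgresql://cp:cp@postgres.localhost/controlplane"
echo "$DB_URL"
# With psql installed and the cluster up, connect interactively:
# psql "$DB_URL"
```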
@@ -167,13 +211,13 @@ In addition, you should see the following Kubernetes jobs (`kubectl get jobs -n edc-v`):
```text
NAME STATUS COMPLETIONS DURATION AGE
issuerservice-seed Complete 1/1 13s 119m
participant-manager-seed Complete 1/1 15s 119m
provision-manager-seed Complete 1/1 15s 119m
vault-bootstrap Complete 1/1 19s 120m
```

Those are needed to populate the databases and the vault with initial data.

### 4. Prepare the data space
### 5. Prepare the data space

In addition to the initial seed data, a few bits and pieces are required for it to become fully operational. These can
be put in place by running the REST requests in the `CFM - Provision Consumer` folder and in the
@@ -274,6 +318,15 @@ To remove the deployment, run:
kubectl delete -k k8s/
```

## Troubleshooting

If errors related to authentication or authorization occur, we recommend deleting and re-deploying the entire base and
all apps.

For example, if a participant onboarding only completed halfway, a clean-slate redeployment is the safest option.

In some cases, even deleting and re-creating the KinD cluster may be required.
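A full reset can be scripted. The sketch below is not part of the original guide: it wraps each step in a hypothetical `run` helper with a dry-run guard so the commands can be previewed first, and it assumes the cluster name `edcv` and the file names used throughout this guide:

```shell
# DRY_RUN=1 (the default) only prints the commands; set DRY_RUN=0 to execute them
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run kubectl delete -k k8s/
run kind delete cluster -n edcv
run kind create cluster -n edcv --config kind.config.yaml --kubeconfig ~/.kube/edcv-kind.conf
run kubectl apply -f k8s/base
```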

## Deploying JAD on a bare-metal/cloud-hosted Kubernetes

KinD is geared towards local development and testing. For example, it comes with a bunch of useful defaults, such as
2 changes: 2 additions & 0 deletions k8s/apps/controlplane-config.yaml
@@ -72,3 +72,5 @@ data:
edc.iam.trusted-issuer.issuer.id: "did:web:issuerservice.edc-v.svc.cluster.local%3A10016:issuer"

JAVA_TOOL_OPTIONS: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=1044"

edc.encryption.strict: "false"
4 changes: 3 additions & 1 deletion k8s/apps/identityhub-config.yaml
@@ -51,4 +51,6 @@ data:
# proxy_set_header Host $host;
# proxy_set_header X-Forwarded-Proto $scheme;
edc.iam.oauth2.issuer: "http://keycloak.edc-v.svc.cluster.local:8080/realms/edcv"
edc.iam.oauth2.jwks.url: "http://keycloak.edc-v.svc.cluster.local:8080/realms/edcv/protocol/openid-connect/certs"
edc.iam.oauth2.jwks.url: "http://keycloak.edc-v.svc.cluster.local:8080/realms/edcv/protocol/openid-connect/certs"

edc.encryption.strict: "false"
1 change: 1 addition & 0 deletions k8s/apps/issuerservice-config.yaml
@@ -65,3 +65,4 @@ data:
edc.datasource.membership.user: "issuer"
edc.datasource.membership.password: "issuer"

edc.encryption.strict: "false"
@@ -14,49 +14,49 @@
apiVersion: batch/v1
kind: Job
metadata:
name: participant-manager-seed
name: provision-manager-seed
namespace: edc-v
labels:
app: participant-manager-seed
app: provision-manager-seed
platform: edcv
type: edcv-job
spec:
backoffLimit: 5
template:
metadata:
labels:
app: participant-manager-seed
app: provision-manager-seed
platform: edcv
type: edcv-job
spec:
restartPolicy: OnFailure
initContainers:
# Wait for participant-manager to be ready
- name: wait-for-participant-manager
# Wait for provision-manager to be ready
- name: wait-for-provision-manager
image: curlimages/curl:latest
command:
- sh
- -c
- |
until curl -sf http://participant-manager.edc-v.svc.cluster.local:8080/api/v1alpha1/activity-definitions; do
echo "Waiting for participant-manager to be ready..."
until curl -sf http://provision-manager.edc-v.svc.cluster.local:8080/api/v1alpha1/activity-definitions; do
echo "Waiting for provision-manager to be ready..."
sleep 5
done
echo "Participant Manager is ready!"
echo "Provision Manager is ready!"
containers:
- name: seed-participant-manager
- name: seed-provision-manager
image: curlimages/curl:latest
env:
- name: PM_BASE_URL
value: "http://participant-manager.edc-v.svc.cluster.local:8080"
value: "http://provision-manager.edc-v.svc.cluster.local:8080"
command:
- sh
- -c
- |
set -e

echo "================================================"
echo "ParticipantManager Seeding"
echo "ProvisionManager Seeding"
echo "================================================"

echo ""
@@ -187,5 +187,5 @@ spec:

echo ""
echo "================================================"
echo "ParticipantManager seeding completed!"
echo "ProvisionManager seeding completed!"
echo "================================================"
@@ -14,26 +14,26 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: cfm-participant-manager
name: cfm-provision-manager
namespace: edc-v
labels:
app: participant-manager
app: provision-manager
platform: edcv
type: edcv-app
spec:
replicas: 1
selector:
matchLabels:
app: participant-manager
app: provision-manager
template:
metadata:
labels:
app: participant-manager
app: provision-manager
platform: edcv
type: edcv-app
spec:
containers:
- name: participant-manager
- name: provision-manager
image: ghcr.io/metaform/connector-fabric-manager/pmanager:latest
imagePullPolicy: Always
ports:
@@ -52,11 +52,11 @@ spec:
apiVersion: v1
kind: Service
metadata:
name: participant-manager
name: provision-manager
namespace: edc-v
spec:
selector:
app: participant-manager
app: provision-manager
ports:
- port: 8080
targetPort: 8080
@@ -66,7 +66,7 @@ spec:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: participant-manager
name: provision-manager
namespace: edc-v
spec:
rules:
@@ -77,6 +77,6 @@ spec:
pathType: Prefix
backend:
service:
name: participant-manager
name: provision-manager
port:
number: 8080
2 changes: 1 addition & 1 deletion k8s/base/vault.yaml
@@ -66,7 +66,7 @@ spec:
template:
metadata:
labels:
type: edcv-infra
type: edcv-job
spec:
serviceAccountName: vault-auth
containers:
4 changes: 2 additions & 2 deletions k8s/kustomization.yml
@@ -27,8 +27,8 @@ resources:
- apps/identityhub.yaml
- apps/edcv-agent-config.yaml
- apps/keycloak-agent-config.yaml
- apps/participant-manager.yaml
- apps/participant-manager-config.yaml
- apps/provision-manager.yaml
- apps/provision-manager-config.yaml
- apps/tenant-manager.yaml
- apps/tenant-manager-config.yaml
- apps/cfm-agents.yaml
@@ -14,10 +14,16 @@

package org.eclipse.edc.jad.tests;

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.restassured.RestAssured;
import io.restassured.config.ObjectMapperConfig;
import io.restassured.config.RestAssuredConfig;
import org.eclipse.edc.jad.tests.model.CatalogResponse;
import org.eclipse.edc.jad.tests.model.ClientCredentials;
import org.eclipse.edc.junit.annotations.EndToEndTest;
import org.eclipse.edc.spi.monitor.ConsoleMonitor;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import java.io.IOException;
@@ -54,6 +60,18 @@ static String loadResourceFile(String resourceName) {
}
}

@BeforeAll
static void prepare() {
// globally disable failing on unknown properties for RestAssured
RestAssured.config = RestAssuredConfig.config().objectMapperConfig(new ObjectMapperConfig().jackson2ObjectMapperFactory(
(cls, charset) -> {
ObjectMapper om = new ObjectMapper().findAndRegisterModules();
om.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
return om;
}
));
}

@Test
void testDataTransfer() {
var monitor = new ConsoleMonitor(ConsoleMonitor.Level.DEBUG, true);