SPIKE: Maestro integration to adapter framework #82

base: main
Conversation
Walkthrough

This PR removes two legacy adapter configuration templates and adds new Maestro-focused configuration, business-logic templates, and design documentation. Added files include an AdapterConfig deployment template with Maestro client and TLS settings, an adapter business-logic template (preconditions, resource creation, CEL post-processing, and status postActions), and two Maestro integration design docs. The deleted files are an older AdapterConfig template and a multi-environment ConfigMap template.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Adapter
    participant KubernetesAPI as "Kubernetes API"
    participant Maestro as "Maestro Transport"
    participant HyperFleetAPI as "HyperFleet API"
    Adapter->>KubernetesAPI: GET Cluster resource (precondition)
    KubernetesAPI-->>Adapter: Cluster status, metadata
    Adapter->>Maestro: Create/Update Namespace resources (agentNamespace, clusterNamespace)
    Maestro-->>KubernetesAPI: Apply manifests to target clusters
    KubernetesAPI-->>Maestro: Apply results / resource status
    Maestro-->>Adapter: Delivery result / observed state
    Adapter->>Adapter: Build clusterStatusPayload (CEL expressions)
    Adapter->>HyperFleetAPI: POST status payload (with timeout & retries)
    HyperFleetAPI-->>Adapter: Acknowledge / response
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In
`@hyperfleet/components/adapter/framework/configs/adapter-deployment-config-template.yaml`:
- Around line 38-50: The maestro configuration block is incorrectly nested under
spec.adapter (the symbol "maestro" appears inside "adapter"); move the "maestro"
section so it is a direct child of "spec" (i.e., use spec.maestro instead of
spec.adapter.maestro), ensuring any comments/indentation reflect the new
top-level placement and that any references to "maestro" in templates or docs
match spec.maestro.
🧹 Nitpick comments (7)
hyperfleet/components/adapter/maestro-integration/maestro-architecture.md (1)

114-162: Documentation content is comprehensive and well-structured. The event flow documentation provides valuable insights into Maestro's architecture. One minor improvement: the code evidence section (lines 158-161) references specific line numbers without specifying the source file path.

Consider adding the file path for clarity:

```diff
 **Code Evidence:**
-- Line 93: `PageList(ctx, m.logger, m.apiClient, ...)` - REST API for initial state
-- Line 43: `cloudEventsClient.Subscribe(ctx, watcherStore.HandleReceivedResource)` - CloudEvents subscription
-- Line 117: `HandleReceivedResource` processes incoming work updates
+- `watcher.go` Line 93: `PageList(ctx, m.logger, m.apiClient, ...)` - REST API for initial state
+- `watcher.go` Line 43: `cloudEventsClient.Subscribe(ctx, watcherStore.HandleReceivedResource)` - CloudEvents subscription
+- `watcher.go` Line 117: `HandleReceivedResource` processes incoming work updates
```

hyperfleet/components/adapter/framework/configs/adapter-business-logic-template-MVP.yaml (2)
86-91: Consider documenting the expected API response structure. The deeply nested field path `status.conditions.placement.data.clusterName` assumes a specific API response structure. If this path doesn't exist, the capture will silently fail or return null. Consider either:
- Adding a comment with the expected API response schema
- Using a default value mechanism similar to CEL's `.orValue()`

```yaml
- name: "placementClusterName"
  field: "status.conditions.placement.data.clusterName"
  default: ""  # Optional: provide default if path doesn't exist
```
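For illustration, a hypothetical response shape that would satisfy this capture path (the structure is an assumption, not taken from the actual HyperFleet API):

```yaml
status:
  conditions:
    placement:
      data:
        clusterName: "example-cluster-east-1"
```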
233-246: Consider adding error handling guidance for postActions. The postActions section defines retry behavior but doesn't specify what happens on final failure after all retries are exhausted. Consider documenting the expected behavior:

```yaml
postActions:
  - name: "reportClusterStatus"
    apiCall:
      # ... existing config ...
      retryAttempts: 3
      retryBackoff: "exponential"
      # Optional: Document or add failure handling
      # onFailure: "log"  # Options: log, skip, fail
```

hyperfleet/components/adapter/framework/configs/adapter-deployment-config-template.yaml (2)
62-91: Clarify that TLS and token configs are mutually exclusive. Both `tlsConfig` and `tokenConfig` are defined, but only one should be active based on `auth.type`. Consider adding comments or restructuring to make this clearer.

♻️ Suggested clarification

```diff
 # Authentication configuration
 auth:
   # Authentication type: "tls" (certificate-based) or "token" (bearer token)
+  # NOTE: Configure ONLY the section that matches your auth type
   type: "tls"

-  # TLS certificate configuration
+  # TLS certificate configuration (used when type: "tls")
   tlsConfig:
     # ...

-  # Alternative: Bearer token configuration (for token-based authentication)
+  # Bearer token configuration (used when type: "token")
+  # NOTE: Comment out or remove this section if using TLS authentication
   tokenConfig:
```
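For contrast, a minimal token-based variant under the same scheme (a sketch; the tokenFile field name is assumed to mirror the template's tokenConfig section):

```yaml
auth:
  type: "token"
  tokenConfig:
    # Bearer token mounted from a Secret; path is illustrative
    tokenFile: "/var/run/secrets/maestro/token"
```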
106-112: Consider increasing the `hyperfleetApi.timeout` value. A 2-second timeout for HTTP API calls may be too aggressive for production environments, especially during network congestion or when the API is under load. The business logic template uses `10s` for precondition API calls and `30s` for postActions.

♻️ Suggested change

```diff
 hyperfleetApi:
   # HTTP client timeout for API calls
-  timeout: 2s
+  timeout: 10s
   # Number of retry attempts for failed API calls
   retryAttempts: 3
```

hyperfleet/components/adapter/maestro-integration/SPIKE-maestro-adapter-integration.md (2)
34-46: Add language specifier to fenced code block. The ASCII diagram code block lacks a language specifier, which triggers markdown lint warning MD040.

♻️ Suggested fix

````diff
-```
+```text
 ┌─────────────────┐     ┌─────────────────┐     ┌──────────────────-┐
 │   Sentinel      │────▶│   HyperFleet    │────▶│ Deployment Cluster│
````
488-493: Add language specifier to fenced code block. The event flow diagram code block lacks a language specifier.

♻️ Suggested fix

````diff
-```
+```text
 Maestro Events → Adapter (Event Processor) → HyperFleet Broker → Sentinel → HyperFleet API
````
```yaml
spec:
  # Adapter Information
  adapter:
    # Adapter version
    version: "0.2.0"

    # ============================================================================
    # NEW: Transport Configuration
    # ============================================================================
    # Defines how resources are managed (direct K8s API vs Maestro transport)

    # Maestro-specific configuration this is optional and only used if the adapter uses Maestro transport
    maestro:
```
Inconsistent structure placement for maestro configuration.

The `maestro` section is nested under `spec.adapter` (lines 49-50), but the SPIKE document (SPIKE-maestro-adapter-integration.md lines 81-117) shows it at the `spec.maestro` level.
♻️ Suggested structure to match SPIKE document

```diff
 spec:
   # Adapter Information
   adapter:
     # Adapter version
     version: "0.2.0"

-    # Maestro-specific configuration this is optional and only used if the adapter uses Maestro transport
-    maestro:
+  # Maestro-specific configuration - optional, only used if adapter uses Maestro transport
+  maestro:
     # Maestro gRPC server connection
     serverAddress: "maestro-grpc.aro-hcp.example.com:8090"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
spec:
  # Adapter Information
  adapter:
    # Adapter version
    version: "0.2.0"

  # ============================================================================
  # NEW: Transport Configuration
  # ============================================================================
  # Defines how resources are managed (direct K8s API vs Maestro transport)

  # Maestro-specific configuration - optional, only used if adapter uses Maestro transport
  maestro:
```
```yaml
  # ==========================================================================
  - name: "clusterNamespace"
    transport:
      type: "direct"
```
nitpick: is "direct" meaningful enough? should it be called kubernetes, or kubectl?
I'm struggling a bit here; kubernetes may cause confusion about which Kubernetes, or Kubernetes on which cluster. Let me ask AI to generate a better name.
AI gave me:

My Recommendation: Use Direct. It provides the cleanest contrast. When someone looks at your Job CR or Epic, seeing Direct vs Maestro immediately explains the networking and security posture of that specific task without needing further documentation.
How about in-cluster?
It's the second recommendation by AI. If there are no objections, I think we can use it.
What if we call the property client instead of type? I mean, now it is transport.type; I am proposing transport.client.
Would it be clearer that client: kubernetes refers to client access rather than to a cluster?
To me, in-cluster suggests the manifest is going to be applied in the same cluster the adapter is running in, similar to "in-this-cluster".
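A quick sketch of the two naming options under discussion (values illustrative only):

```yaml
# Current proposal
transport:
  type: "direct"        # or "maestro"

# Suggested alternative
transport:
  client: "kubernetes"  # or "maestro"
```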
```yaml
    # Maestro-specific configuration this is optional and only used if the adapter uses Maestro transport
    maestro:
      # Maestro gRPC server connection
      serverAddress: "maestro-grpc.aro-hcp.example.com:8090"
```
Do we also need the HTTP address to query Maestro? In the other document we say:

> Key Finding: Watch uses hybrid approach - HTTP REST API for initial state + CloudEvents for live updates!
Good catch. We need it!
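If we add it, the config could carry both endpoints, roughly as in the later review suggestion (the grpcServerAddress/httpServerAddress key names come from that suggestion, not from the current template):

```yaml
maestro:
  # gRPC endpoint for the CloudEvents stream (live updates)
  grpcServerAddress: "maestro-grpc.aro-hcp.example.com:8090"
  # HTTP endpoint for the REST API (initial state)
  httpServerAddress: "maestro-http.aro-hcp.example.com:8091"
```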
```yaml
- name: "clusterName"
  field: "name"
- name: "clusterPhase"
  field: "status.phase"
```
Would it be better to use the latest structure, based on Angel's recent changes to cluster status, to avoid user confusion? The same applies to the conditions below.
```yaml
        hyperfleet.io/managed-by: "{{ .metadata.name }}"
        hyperfleet.io/resource-type: "namespace"
      annotations:
        hyperfleet.io/created-by: "hyperfleet-adapter"
```
Would it be better to have this added uniformly by the adapter framework? If it’s not needed, can it be removed?
```yaml
      annotations:
        hyperfleet.io/created-by: "hyperfleet-adapter"
        hyperfleet.io/generation: "{{ .generationId }}"
  discovery:
```
Maestro supports submitting and configuring multiple Kubernetes resources in a single request. If we continue using it, does that mean we need to split the Kubernetes resources and submit them to Maestro in multiple requests?
Regarding resource discovery, are there any ideas or patterns we can take from Maestro? (See the sketch below.)
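For context, Maestro delivers resources via the ManifestWork API referenced elsewhere in this PR, where a single bundle carries a list of manifests; a minimal sketch (names illustrative):

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: cluster-namespaces
  namespace: target-cluster   # the consumer (target cluster) name
spec:
  workload:
    manifests:
      # Multiple Kubernetes resources submitted in one request
      - apiVersion: v1
        kind: Namespace
        metadata:
          name: agent-namespace
      - apiVersion: v1
        kind: Namespace
        metadata:
          name: cluster-namespace
```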
```yaml
#
# TRANSPORT MODES:
# ================
# - direct: Traditional direct Kubernetes API access (existing behavior)
```
Is it better to remove "(existing behavior)"?
Good catch.
```markdown
- **Capacity**: 10,000+ events per second
- **Architecture**: Direct broker subscription with event transformation

### Connection Management
```
Is this section a little duplicated with Key Finding? They both outline the details for gRPC Mode and MQTT/Pub-Sub Mode.
Force-pushed 8585dd9 to f1f6d3a
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In
`@hyperfleet/components/adapter/framework/configs/adapter-deployment-config-template.yaml`:
- Around line 93-109: The httpApi and kubernetes blocks are currently indented
under clients (yielding spec.clients.httpApi and spec.clients.kubernetes) but
the schema expects spec.httpApi and spec.kubernetes; move/dedent the httpApi:
and kubernetes: sections (including their child keys like timeout,
retryAttempts, retryBackoff, apiVersion, kubeconfig) so they are direct children
of spec (remove the extra clients indentation) to restore the correct top-level
keys used by the schema and validation; ensure references to httpApi and
kubernetes in any templates or code (e.g., places reading spec.httpApi or
spec.kubernetes) remain unchanged.
In
`@hyperfleet/components/adapter/maestro-integration/maestro-architecture-introduction.md`:
- Around line 195-223: The doc currently contradicts supported gRPC
authentication methods: the "Direct gRPC Streaming" section lists token-based
auth while "Watch Connection Management" (see
workClient.ManifestWorks(consumerName).Watch(...)) and "Security Architecture"
emphasize TLS/mTLS only; update these sections to consistently describe the
exact supported auth options (e.g., TLS/mTLS only or TLS/mTLS plus token-based),
clarify where token-based authentication is available or not, and add a short
note in "Watch Connection Management" and the connection details bullet that
explains which auth modes apply to the gRPC CloudEvents stream and any
exceptions for initial REST calls.
- Around line 118-121: The fenced diagram blocks in the "Event Production and
Consumption" section (the blocks containing "Maestro Server ←→ gRPC CloudEvents
Stream ←→ Client (Watch API)" and the other three similar diagrams) lack a
language tag and trigger markdownlint MD040; update each opening fence from ```
to ```text (or ```txt) so the four diagram fences (the ones around the diagram
strings) become fenced code blocks with a language identifier.
- Around line 228-232: The line stating "CloudEvents provide sequencing and
delivery guarantees" is misleading; update the "Event Characteristics" section
(the "Status Updates / Generation Tracking / Event Ordering" block) to say that
CloudEvents defines the envelope only and that ordering/delivery guarantees are
provided by the underlying transport or broker and application patterns (for
example Kafka partitioning, Pub/Sub ordering keys, Knative eventing settings, or
retries/ack semantics), so replace or reword the "Event Ordering" bullet to
attribute sequencing/delivery guarantees to the chosen transport and any
app-level strategies rather than the CloudEvents spec.
In
`@hyperfleet/components/adapter/maestro-integration/SPIKE-maestro-adapter-integration.md`:
- Around line 350-353: The "Fallback mode" entry in the Connection Failures
section incorrectly implies direct API access is available by default; update
SPIKE-maestro-adapter-integration.md to either remove the fallback mention or
make it explicit opt-in by adding a clear prerequisite and configuration flag
(e.g., a documented fallbackToDirectApi config) and state that the default is
disabled, include required prerequisites (direct API credentials/permissions and
justification about kubeconfig distribution) and an example config snippet plus
a security note under the "Fallback mode" heading so readers know it only
applies when fallbackToDirectApi is explicitly enabled.
- Around line 32-41: Fix the spacing and wording around the polling cadence in
the "CloudEvent Processing" bullet and make cadence phrasing consistent: change
"Every 30 minutes(This is configurable when deploy sentinel)" to "every 30
minutes (configurable when deploying Sentinel)" in the CloudEvent Processing
line, and standardize other occurrences under "Resource Reporting Cadence" and
related sections to use "30-minute" (e.g., "Ready Polling: Sentinel posts
CloudEvents every 30 minutes" -> "Ready Polling: Sentinel posts CloudEvents on a
30-minute cycle" or "every 30 minutes (30-minute cycle)") so spacing,
capitalization of Sentinel, and hyphenation are consistent.
- Around line 91-140: The example AdapterConfig uses spec.maestro,
serverAddress, and hyperfleetApi but the loader expects spec.clients.maestro
with grpcServerAddress/httpServerAddress and spec.httpApi; update the example to
match the template schema by moving maestro config under spec.clients.maestro,
rename serverAddress to grpcServerAddress (or include both grpcServerAddress and
httpServerAddress where appropriate), and rename hyperfleetApi to httpApi with
the same timeout/retry fields; locate the example block (symbols: AdapterConfig,
spec.maestro, serverAddress, hyperfleetApi) and adjust keys and structure to
exactly mirror the template (spec.clients.maestro,
grpcServerAddress/httpServerAddress, spec.httpApi) so the loader will recognize
it.
````markdown
#### **Event Production and Consumption**
```
Maestro Server ←→ gRPC CloudEvents Stream ←→ Client (Watch API)
```
````
Add language identifiers to diagram fences to satisfy markdownlint.

MD040 flags fenced blocks without a language in this new section. Please add `text` (or `txt`) to these diagram fences (Line 119, 139, 159, 214) to keep lint clean.
✏️ Example fix

````diff
-```
+```text
 Maestro Server ←→ gRPC CloudEvents Stream ←→ Client (Watch API)
````
Maestro Server ←→ gRPC CloudEvents Stream ←→ Client (Watch API)🤖 Prompt for AI Agents
In
`@hyperfleet/components/adapter/maestro-integration/maestro-architecture-introduction.md`
around lines 118 - 121, The fenced diagram blocks in the "Event Production and
Consumption" section (the blocks containing "Maestro Server ←→ gRPC CloudEvents
Stream ←→ Client (Watch API)" and the other three similar diagrams) lack a
language tag and trigger markdownlint MD040; update each opening fence from ```
to ```text (or ```txt) so the four diagram fences (the ones around the diagram
strings) become fenced code blocks with a language identifier.
````markdown
**Direct gRPC Streaming:**
- **One gRPC connection per client** to Maestro server
- **SourceID-based filtering**: Each client gets events filtered by its sourceID
- **No message competition**: Each client receives independent event stream
- **Connection authentication**: TLS/mTLS or token-based authentication

**Multiple Clients:**
- Each client establishes separate gRPC connection
- Events filtered by sourceID parameter (set during client creation)
- No shared subscriptions or competing consumers
- Safe for multiple clients with different sourceIDs

#### **MQTT/Pub-Sub Mode (Alternative)**
- **Topic-based routing**: Events routed via broker topics
- **Subscription management**: Requires pre-configured topics and subscriptions
- **Connection overhead**: Separate connections to message broker
- **Multiple subscriber challenges**: Potential message competition depending on topic design

#### **Watch Connection Management**
```
workClient.ManifestWorks(consumerName).Watch(ctx, metav1.ListOptions{})
```

**Connection Details (Based on Code Analysis):**
- **Protocol**: gRPC CloudEvents streaming (not HTTP polling)
- **Authentication**: TLS/mTLS certificates
- **Initial Data**: HTTP REST API call to `/api/maestro/v1/resource-bundles`
- **Live Updates**: gRPC CloudEvents stream for real-time changes
- **Filtering**: By consumerName (target cluster) and sourceID (client identifier)
````
Align gRPC auth options across sections.
This section lists token-based auth, while “Watch Connection Management” and “Security Architecture” later emphasize TLS/mTLS only. Please reconcile the supported auth methods so readers don’t get conflicting guidance.
```markdown
#### **Event Characteristics**
- **Status Updates**: Frequent condition changes generate multiple events
- **Generation Tracking**: API generation correlation for conflict resolution
- **Event Ordering**: CloudEvents provide sequencing and delivery guarantees
```
🧩 Analysis chain
🌐 Web query:
CloudEvents specification ordering delivery guarantees
💡 Result:
CloudEvents does not define delivery guarantees (at-most-once/at-least-once/exactly-once) or ordering guarantees.
- The CloudEvents core spec is about the event “envelope” (structure/metadata) for interoperability, not message delivery semantics. [1]
- Even protocol/transport bindings are explicit that they only define how to represent/map CloudEvents onto a transport, and do not prescribe transfer/settlement (ack/retry) rules—those are owned by the underlying messaging system and your application. [2]
- Practically, ordering and delivery behavior comes from the chosen transport/broker (e.g., Kafka partition ordering, Pub/Sub ordering keys) and any eventing layer on top (e.g., Knative/Kafka “ordered” vs “unordered” delivery options), not from CloudEvents itself. [3]
Implication: If you need ordering or specific delivery semantics, you must rely on (and document) the guarantees of the transport/broker and implement patterns like idempotent consumers, deduplication, retries, and partitioning/ordering keys outside the CloudEvents spec. [2]
Sources:
[1] CloudEvents spec repo overview (CloudEvents describes event data formats for interoperability) (github.com)
[2] CloudEvents transport binding statement (doesn’t prescribe transfer/settlement rules; only message expression/mapping) (gist.github.com)
[3] Example of ordering being an eventing-layer/broker configuration (Knative Kafka delivery order) (docs.redhat.com)
Citations:
- 1: https://github.com/cloudevents/spec?utm_source=openai
- 2: https://gist.github.com/sijie/082c59e66b9ed175c1bb48466f3629f0?utm_source=openai
- 3: https://docs.redhat.com/documentation/en-us/red_hat_openshift_serverless/1.33/html-single/eventing/?utm_source=openai
🏁 Script executed:

```bash
# First, let's locate and examine the actual file to see the full context
find . -name "maestro-architecture-introduction.md" -type f
```

Repository: openshift-hyperfleet/architecture
Length of output: 313
Clarify ordering/delivery guarantees as transport-specific, not CloudEvents spec-level.
"CloudEvents provide sequencing and delivery guarantees" is misleading. The CloudEvents specification defines only the event envelope structure for interoperability and does not prescribe ordering or delivery semantics. These guarantees come from the underlying transport/broker (e.g., Kafka partition ordering, Pub/Sub ordering keys, Knative eventing configuration). Revise to clarify that ordering and delivery depend on your chosen transport and application-level patterns, not CloudEvents itself.
```markdown
### Existing Adapter Framework Architecture
- **CloudEvent Processing**: Adapters consume CloudEvents from Sentinel every 30 minutes(This is configurable when deploy sentinel) when clusters are ready
- **Direct K8s API**: Current adapters directly manage resources on local/accessible clusters
- **Status Reporting**: Adapters report status back to HyperFleet API via HTTP REST calls
- **DSL-Based Configuration**: Declarative YAML configuration drives adapter behavior

### Resource Reporting Cadence
- **Pre-Ready Polling**: Sentinel posts CloudEvents every 10 seconds before cluster becomes ready
- **Ready Polling**: Sentinel posts CloudEvents every 30 minutes when cluster is ready
- **Implications**: Status updates frequency depends on cluster readiness state
```
Fix spacing/wording around polling cadence.

There's a missing space and awkward phrasing in "Every 30 minutes(This is configurable when deploy sentinel)". Also consider using "30 min/30-minute" consistently in later sections (Reporting Cycle, Alternatives).

✏️ Example tweak

```diff
-**CloudEvent Processing**: Adapters consume CloudEvents from Sentinel every 30 minutes(This is configurable when deploy sentinel) when clusters are ready
+**CloudEvent Processing**: Adapters consume CloudEvents from Sentinel every 30 minutes (configurable during Sentinel deployment) when clusters are ready
```

🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
39-39: Fenced code blocks should have a language specified (MD040, fenced-code-language)
#### Deployment Configuration (adapter-deployment-config-template.yaml)

Infrastructure settings for Maestro connection. This section is **OPTIONAL** - only configure when maestro transport is used in business logic.

```yaml
apiVersion: hyperfleet.redhat.com/v1alpha1
kind: AdapterConfig
metadata:
  name: aro-hcp-adapter
  namespace: hyperfleet-system
  labels:
    hyperfleet.io/adapter-type: aro-hcp
    hyperfleet.io/component: adapter
spec:
  adapter:
    version: "0.2.0"

  # OPTIONAL: Maestro configuration (only if adapter uses maestro transport)
  # NOTE: consumerName is NOT configured here - resolved dynamically at runtime
  maestro:
    # Maestro gRPC server connection
    serverAddress: "maestro-grpc.aro-hcp.example.com:8090"

    # Source identifier for CloudEvents routing
    sourceId: "hyperfleet-{{ .metadata.name }}"

    # Authentication configuration
    auth:
      type: "tls"
      tlsConfig:
        caFile: "/etc/maestro/certs/ca.crt"
        certFile: "/etc/maestro/certs/hyperfleet-client.crt"
        keyFile: "/etc/maestro/certs/hyperfleet-client.key"
        serverName: "maestro-grpc.aro-hcp.example.com"

    # Connection settings
    timeout: "30s"
    retryAttempts: 3
    retryBackoff: "exponential"
    keepalive:
      time: "30s"
      timeout: "10s"
      permitWithoutStream: true

  # HyperFleet API Configuration
  hyperfleetApi:
    timeout: 2s
    retryAttempts: 3
    retryBackoff: exponential
```
Align the AdapterConfig example with the template schema.
This example uses spec.maestro, serverAddress, and hyperfleetApi, while the template in adapter-deployment-config-template.yaml defines spec.clients.maestro with grpcServerAddress/httpServerAddress and spec.httpApi. As written, readers will copy a schema the loader won’t recognize. Please sync the example (or the template) so they match.
♻️ Example alignment to template schema

```diff
 spec:
   adapter:
     version: "0.2.0"

-  # OPTIONAL: Maestro configuration (only if adapter uses maestro transport)
-  # NOTE: consumerName is NOT configured here - resolved dynamically at runtime
-  maestro:
-    # Maestro gRPC server connection
-    serverAddress: "maestro-grpc.aro-hcp.example.com:8090"
+  # OPTIONAL: Maestro configuration (only if adapter uses maestro transport)
+  # NOTE: consumerName is NOT configured here - resolved dynamically at runtime
+  clients:
+    maestro:
+      # Maestro gRPC server connection
+      grpcServerAddress: "maestro-grpc.aro-hcp.example.com:8090"
+      # Maestro HTTPS server connection
+      httpServerAddress: "maestro-http.aro-hcp.example.com:8091"
     # Source identifier for CloudEvents routing
     sourceId: "hyperfleet-{{ .metadata.name }}"
@@
-  hyperfleetApi:
+  httpApi:
     timeout: 2s
     retryAttempts: 3
     retryBackoff: exponential
```

🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
95-95: Fenced code blocks should have a language specified (MD040, fenced-code-language)
119-119: Fenced code blocks should have a language specified (MD040, fenced-code-language)
139-139: Fenced code blocks should have a language specified (MD040, fenced-code-language)
```markdown
#### Connection Failures
- **Retry with backoff**: Configurable max retries and exponential backoff
- **Fallback mode**: Optional fallback to direct API if Maestro connection fails
- **Health checks**: Periodic connection health verification
```
Clarify prerequisites for “fallback to direct API”.
The spike’s goal is to avoid distributing kubeconfig to remote clusters. A fallback to direct API implies direct access is already configured; otherwise it undercuts the security objective. Please scope this as opt‑in only or remove it.
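If the fallback is kept, an explicit opt-in flag could look like this (the fallbackToDirectApi name is taken from the review prompt and is hypothetical; default disabled):

```yaml
maestro:
  # Opt-in fallback to the direct Kubernetes API when the Maestro
  # connection fails. Disabled by default; enabling it requires direct
  # API credentials (kubeconfig) already distributed to the adapter.
  fallbackToDirectApi: false
```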
Force-pushed f1f6d3a to e63b4e6