y-cluster: initial Go binary and serve feature #1
Merged
Conversation
- Graceful skipping: build tags gate compilation, runtime checks gate execution (no QEMU failures on Mac)
- Coverage matrix: track which provider was tested per CI run
- Remote runtime rule: never mount local dirs, never use local Docker socket for node containerd. Images go through registry or piped over SSH/exec. Matches production behavior.
- Multipass provider added to test matrix
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Abstracts the local container runtime (Docker, Podman, Colima) for creating kwok and k3s-in-Docker test clusters. No direct Docker client dependency in test code.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
e2e tests use github.com/moby/moby/client directly — the same API as the k3s-in-Docker provisioner. No separate test abstraction. Podman works via Docker-compatible socket.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
7 e2e tests using a kwok cluster in Docker:
- Namespace: basic apply + wait check
- Idempotent: re-apply succeeds
- DependencyOrdering: transitive chain (namespace→configmap→dependent)
- IndirectChecks: checks aggregated from base via traversal
- NamespaceEnvVar: $NAMESPACE exported to exec checks
- PrintDeps: dependency resolution without cluster
- ChecksOnly: verify without re-applying
Self-contained testdata/ with CUE module, verify schema, and 5 test bases. No ystack or checkit dependency.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…olution
Replace namespace-based test bases with a three-tier application model: db (foundation), backend (depends on db), frontend (depends on backend). Each tier has base/ and optionally qa/ overlay.
Tests cover:
- CUE ordering: db before backend before frontend
- Customization: qa overlay aggregates checks from base
- Overlay deps: qa inherits base's CUE dependencies via traversal
Fix Run() to resolve dependencies from all CUE files in the kustomize tree, not just the target dir. An overlay (backend/qa) that wraps a base (backend/base) now inherits the base's CUE dependencies (db).
Add namespace caution note to README.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
db's check creates a ConfigMap marker. backend's check reads it, proving that db was fully converged (applied AND checked) before backend's apply started. This would fail if both were bundled into one atomic kustomize apply.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
pkg/provision/qemu/ provides:
- Provision: cloud image download, disk creation, cloud-init seed, VM start, SSH wait
- Teardown: VM stop with process wait, disk keep/delete
- ExportVMDK/ImportVMDK: appliance export/import via qemu-img
- Configurable port forwards, SSH key management
CLI subcommands:
- y-cluster provision (--name, --disk-size, --memory, --cpus, --ssh-port)
- y-cluster teardown (--keep-disk)
- y-cluster export <output.vmdk>
- y-cluster import <input.vmdk>
e2e tests (//go:build e2e && kvm):
- TestQemu_ProvisionTeardown: full lifecycle + SSH verification
- TestQemu_TeardownKeepDisk: disk preservation on teardown
- TestQemu_ExportImport: VMDK round-trip
7 unit tests for config, pid detection, teardown modes, error cases.
Known issue: re-provision from preserved disk hangs on SSH (cloud-init state on Ubuntu Noble needs investigation).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
pkg/kubeconfig/ provides:
- New: validates KUBECONFIG env, records context and cluster names
- CleanupStale: removes stale context/cluster/user entries
- Import: rename default→named entries, merge into existing kubeconfig
- CleanupTeardown: remove context + fix null→[] for kubie compatibility
Integrated into QEMU provisioner:
- Init kubeconfig manager early in Provision
- CleanupStale before provision (handles failed previous runs)
- CleanupTeardown in TeardownConfig
8 unit tests covering: env validation, null fix, import new/merge, cleanup without error when entries don't exist.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Capture spec/implementation divergence, weak spots, and open scope questions so maintainers can answer inline (as diffs) and we can reconcile SPEC.md / TESTING.md / CI.md with intent before editing code.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…tation
Every answer draws from real experience building and testing y-cluster against ystack and checkit acceptance tests.
Key decisions documented:
- Q1-Q3: kubectl subprocess is deliberate, not TODO. Spec should reflect reality.
- Q5: Envoy Gateway validated by experiment, implementation gated on provisioner.
- Q9: Hardcoded module path is a shortcut — fix by reading cue.mod/module.cue.
- Q11-Q12: kubectl output discard and traverse error silence are regressions. Fix.
- Q13: Dep aggregation across traverse tree is load-bearing. Document in README.
- Q14: --checks-only should propagate. Current behavior is wrong.
- Q15-Q17: TESTING.md is stale, CI.md is deferred, Phase 0 was a miss.
- Q18: Delete legacy binary, promote root main.go to cmd/kustomize-traverse/.
Requests section identifies next priorities for the handover.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Scope-confined plan to implement SERVE_FEATURE.md. All new code lives under pkg/serve/ and cmd/y-cluster/serve.go — no changes to yconverge, kustomize, or provision packages. Initial release validates the y-kustomize-local backend against a GitHub release artifact on linux and macos.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 0 per SERVE_PLAN.md -- e2e harness lands before runtime code so
the test suite drives the implementation, not the other way round.
Tests fail with 'unknown command "serve"' until pkg/serve exists.
Fixture mirrors the ystack y-converge-checks-dag two-base layout: one
y-cluster-serve.yaml pointing at two sources, each with its own
y-kustomize-bases/{group}/{name}/ tree. A second fixture prepares a
duplicate-route scenario for the scan-level error path.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
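To make the two-source fixture shape concrete, a config along these lines could be imagined. The field names below are illustrative guesses only, not the authoritative schema (which ships as a JSON schema next to pkg/serve's config loader):

```yaml
# Hypothetical y-cluster-serve.yaml fixture; field names are guesses
# for illustration, not the real schema.
port: 8080
sources:
  - type: y-kustomize-local
    path: ./source-a   # its own y-kustomize-bases/{group}/{name}/ tree
  - type: y-kustomize-local
    path: ./source-b   # a route that also exists in source-a is a startup error
```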
Implements SERVE_FEATURE.md initial scope per SERVE_PLAN.md. All new
Go code is confined to pkg/serve/ and cmd/y-cluster/serve.go, honoring
the 'minor part of this tool' constraint -- no changes to yconverge,
kustomize, or provision packages.
pkg/serve (new):
- config.go + schema: y-cluster-serve.yaml loader, strict YAML,
validation, deterministic digest for `ensure` comparison
- state.go: per-OS state dir (XDG / Library / LocalAppData), pidfile
- http.go: weak-ETag via FNV-1a, Cache-Control no-cache force-revalidate,
If-None-Match 304, yaml MIME override (application/yaml per Q-S3)
- openapi.go: OpenAPI 3.1 snapshot per port, served at /openapi.yaml
- health.go: /health JSON endpoint per port
- ykustomizelocal.go: scan y-kustomize-bases/{group}/{name}/{file},
error on duplicate routes across sources, GET/HEAD handler
- serve.go: Run / Ensure / Stop / Logs public API; refuses UID 0;
--foreground uses console zap, background uses JSON zap (Q-S1);
Ensure waits for /health on every port before returning (Q-S2)
- process.go + spawn_unix.go: setsid-based re-exec for background,
SIGTERM/SIGINT graceful shutdown with 10s deadline
- static.go: schema placeholder only; runtime not in first release
cmd/y-cluster/serve.go (new):
- `serve`, `serve ensure`, `serve stop`, `serve logs`
- Thin cobra adapter; all logic lives in pkg/serve
Unit tests cover config, state, http middleware, health, y-kustomize
backend, openapi snapshot, and the Run/Ensure/Stop/Logs flow via an
injected spawnFn that runs the daemon in-process. The background
re-exec path itself is covered by the e2e test.
The Phase 0 e2e test now passes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phases 6 and 7 of SERVE_PLAN.md.
ci.yaml: add golangci-lint (errcheck, govet, staticcheck, unused) and an e2e-serve job that builds cmd/y-cluster and runs the serve e2e against the fresh build. Partial payback on Q16 (CI.md drift).
.goreleaser.yaml: add a y-cluster build entry alongside the existing kustomize-traverse one; rename the project to y-cluster; attach the y-cluster-serve JSON schema as a release artifact under schema/.
e2e-release.yaml (new) + scripts/e2e-serve-against-binary.sh (new): on release publication (and manually), download the tagged release archive on ubuntu-latest and macos-latest, then run a bash-level lifecycle -- ensure → GET /health, /v1/*, /openapi.yaml with ETag + 304 assertion → stop → stop again -- against the shipped binary. This is the 'first use case validated based on a GitHub release' gate the maintainer asked for.
Local serve maps routes by on-disk filename. If a kustomization.yaml uses rename syntax (key=path), the served path would silently differ from the in-cluster path. Detect this at startup and fail with guidance to rename the source file.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Follow-up to the tester's no-rename guardrail.
- Replace the hand-rolled kustomizationFile struct with traverse.LoadKustomization, which returns types.Kustomization from sigs.k8s.io/kustomize/api/types. This gives us both generator kinds for free (SecretArgs and ConfigMapArgs both embed GeneratorArgs and its FileSources) and supports kustomization.yml / Kustomization filename fallbacks.
- Extend the check to configMapGenerator[].files; the same route-skew trap applies.
- Error message now identifies which generator the offending entry belongs to.
- New tests: ConfigMapGenerator rejection, malformed kustomization parse error names the file, alternate filenames (kustomization.yml, Kustomization) still trigger the check.
- SERVE_PLAN.md documents the contract under the y-kustomize-local backend, credited to the ystack maintainer.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Follows the pattern from Yolean/ystack@81c49c8 which produced ghcr.io/yolean/y-kustomize:<sha> using distroless/static:nonroot as the base and turbokube/contain to assemble the image without a Dockerfile.
cmd/y-cluster/contain.yaml: single amd64 binary layered onto the pinned distroless base at /usr/local/bin/y-cluster. Nonroot UID 65532 matches pkg/serve's refuse-root check.
.github/workflows/image.yaml: builds and pushes on push-to-main and on v* tag pushes, plus workflow_dispatch. Tags with github.sha always; release tag additionally on v* pushes.
The `if: github.repository_owner == 'Yolean'` guard is belt-and-braces -- GitHub Actions is not enabled under YoleanAgents, and the human owner mirrors the repo manually when ready to go open source. The guard keeps the workflow a no-op even if Actions is ever turned on there.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Answers the maintainer's question "is main→tag artifact reuse useless?" (largely yes), proposes merging the three workflow files into one, and lays out three options for multi-arch image builds (crane recommended). Five decision points at the end for sign-off before implementation. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Also refine the trigger matrix note to spell out that vX.Y.Z image tags are produced only on release runs, not on main pushes.
…ontain
D3: hand-roll release publishing, but skip compression entirely.
Publish each binary under goreleaser's naming convention
(y-cluster_vX.Y.Z_<os>_<arch>) plus a goreleaser-format checksums.txt
so downstream tooling that already understands that layout keeps
working. Schema file moves from release asset to tagged-source URL.
D4: no targets beyond {linux,darwin}×{amd64,arm64} for v0.2.0.
D5: defer. Keep cmd/y-cluster/contain.yaml + image.yaml until the
filed contain feature request (per-arch localFile.path) is
accepted or declined upstream.
D2 is likewise blocked on that upstream decision. Feature request
written at ~/Yolean/contain/FEATURE_REQUEST_PER_ARCH_LOCALFILE.md
(outside this repo) for the maintainer to review.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…release assets
Replaces ci.yaml + image.yaml + release.yaml + .goreleaser.yaml with
a single workflow that implements the design in RELEASE_PIPELINE_PLAN.md.
Binaries are built once per run by a `build` matrix over
{linux,darwin} × {amd64,arm64} and uploaded as artifacts. Every
downstream job (e2e-serve, image, release-assets) consumes from those
artifacts -- so the bytes shipped in the image and the bytes attached
to the GitHub release are byte-for-byte the same binaries CI tested.
Image job uses turbokube/contain v0.9.0 (pinned by sha256 per
ystack's y-bin runner) with the per-arch localFile.pathPerPlatform
feature we requested and verified against a test build
(ghcr.io/yolean/y-cluster:<sha256:501a9815…> in a local OCI layout).
contain.yaml now declares platforms linux/amd64 and linux/arm64/v8
with a matching pathPerPlatform mapping. The :<sha> tag is always
published; :<vX.Y.Z> is published only on tag pushes.
Release-assets job runs on v* tag push and publishes raw
(uncompressed) binaries with goreleaser naming
(y-cluster_<tag>_<os>_<arch>) plus a matching checksums.txt. No
tarballs, no .goreleaser.yaml needed -- gh CLI idempotently creates
the release and uploads.
e2e-release.yaml updated to consume the raw-binary layout and verify
the sha256 before running scripts/e2e-serve-against-binary.sh.
image + release-assets jobs are guarded on
`github.repository_owner == 'Yolean'` so the workflow stays a no-op
if ever enabled under YoleanAgents. Human owner mirrors manually
when ready to open-source.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
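The release-assets step's checksums.txt can be sketched as follows: the two-space `sha256sum` format that goreleaser also emits, so `sha256sum -c` and existing download tooling keep working. The in-memory inputs stand in for the built binaries; the real job hashes artifact files on disk:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// checksumsTxt renders one "<sha256-hex>  <filename>" line per asset,
// sorted by name so the file is deterministic across runs. Sketch of the
// release-assets job's output format, not its actual implementation.
func checksumsTxt(assets map[string][]byte) string {
	names := make([]string, 0, len(assets))
	for name := range assets {
		names = append(names, name)
	}
	sort.Strings(names)

	out := ""
	for _, name := range names {
		out += fmt.Sprintf("%x  %s\n", sha256.Sum256(assets[name]), name)
	}
	return out
}

func main() {
	fmt.Print(checksumsTxt(map[string][]byte{
		"y-cluster_v0.2.0_linux_amd64":  []byte("elf bytes"),
		"y-cluster_v0.2.0_darwin_arm64": []byte("macho bytes"),
	}))
}
```

Consumers like e2e-release.yaml can then verify a downloaded binary with `grep <name> checksums.txt | sha256sum -c`.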
Replaces the inline curl + hardcoded sha256 + chmod block with solsson/setup-contain@v1 (pinned to commit 49cc4cc). The action downloads the requested release, verifies its sha256 against the .sha256 asset published next to the binary, handles runner OS and arch, and exports the resolved tag as an output. With this change the workflow no longer has to track contain release checksums -- upstream does, via the .sha256 asset the action fetches on every run.
The previous pin d3f86a106a0bac45b6d05aa4d0c4b1c1d5d3c6c1 was wrong (same leading 8 hex chars as the real SHA, but hallucinated suffix). The real commit SHA for v4.3.0 is d3f86a106a0bac45b974a628896c90dbdf5c8093, confirmed via https://api.github.com/repos/actions/download-artifact/git/refs/tags/v4.3.0. Fixes the e2e-serve, image, and release-assets jobs failing with 'Unable to resolve action'. Other action pins (checkout, setup-go, upload-artifact, golangci-lint-action, docker/login-action, solsson/setup-contain) were re-verified against the same API and are correct.
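The re-verification step amounts to decoding the git refs API response and comparing `object.sha` against the pinned value. A minimal sketch, using a hand-written sample body rather than a live API call (the function name is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// resolveTagSHA decodes the body of
// GET /repos/{owner}/{repo}/git/refs/tags/{tag} and returns the SHA the
// tag points at, for comparison against the workflow's pinned commit.
func resolveTagSHA(body []byte) (string, error) {
	var ref struct {
		Ref    string `json:"ref"`
		Object struct {
			SHA  string `json:"sha"`
			Type string `json:"type"`
		} `json:"object"`
	}
	if err := json.Unmarshal(body, &ref); err != nil {
		return "", err
	}
	return ref.Object.SHA, nil
}

func main() {
	// Hand-written stand-in for the API response, not live data.
	sample := []byte(`{
	  "ref": "refs/tags/v4.3.0",
	  "object": {"sha": "d3f86a106a0bac45b974a628896c90dbdf5c8093", "type": "commit"}
	}`)
	sha, err := resolveTagSHA(sample)
	fmt.Println(sha, err == nil)
}
```

One caveat: for annotated tags, `object.type` is `tag` rather than `commit`, and the returned SHA must be dereferenced once more to reach the commit; the sketch assumes a lightweight tag as in the v4.3.0 case above.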
The CI lint job on run 24902871277 failed with 37 errcheck issues.
They fall into two categories:
1. Noise patterns where the returned error is idiomatically ignored:
fmt.Fprint*, io.Closer.Close, *os.File.Close, os.Setenv/Unsetenv,
os.Remove/RemoveAll. Added these to errcheck.exclude-functions in
.golangci.yaml so the signal-to-noise ratio stays useful.
2. Real operations where the error should be propagated:
- pkg/kubeconfig/kubeconfig.go cmd.Run: already commented "ignore
errors", now explicitly `_ = cmd.Run()`.
- pkg/kubeconfig/kubeconfig.go fixNullLists: best-effort rewrite,
explicit `_ = os.WriteFile` with a comment explaining why the
failure is recoverable (original file remains valid kubectl YAML).
- Test helpers calling os.WriteFile / f.WriteString / f.Close in
pkg/kubeconfig, pkg/provision/qemu, pkg/serve: wrapped with
`if err := ...; err != nil { t.Fatal(err) }` so a seed-file
failure fails the test loudly instead of letting a later
assertion report a confusing downstream symptom.
`golangci-lint run --timeout=5m` now reports 0 issues and `go test
./...` stays green.
Addresses the lint failure observed on run 24903224830. The action's default install-mode fetched the prebuilt golangci-lint v2.5.0 binary, which was compiled with Go 1.25 and refused to load our config with:
  can't load config: the Go language version (go1.25) used to build golangci-lint is lower than the targeted Go version (1.26.1)
Two coupled upgrades:
1. golangci-lint-action v8.0.0 -> v9.2.0. v9 runs on node24, which eliminates the Node.js 20 deprecation warnings GitHub currently prints on every job (Node 20 becomes unsupported June 2026). Same input surface as v8; no other config changes needed.
2. golangci-lint v2.5.0 -> v2.11.4. v2.11.4 is the latest release (March 2026). Its release binary is built with Go 1.26 per the upstream release.yml (GO_VERSION: "1.26" + goreleaser), so the default prebuilt binary install works with our go 1.26.1 module. No install-mode: goinstall workaround needed.
Drop-through fix in pkg/serve/openapi.go: v2.11.4's staticcheck enables QF1012, which flags b.WriteString(fmt.Sprintf(...)) as replaceable by fmt.Fprintf(&b, ...). Three call sites rewritten; behavior identical.
If go.mod is ever bumped ahead of what golangci-lint's latest release supports, re-add "install-mode: goinstall" on the lint step to rebuild golangci-lint from source with the runner's Go.
Run e62cb0c's green build still emits Node.js 20 deprecation warnings because the runtimes of our other action pins are still node20. On June 2nd, 2026 the runner starts forcing node24 by default, and node20 leaves the runners entirely on September 16th, 2026. Getting ahead of both dates so the warnings vanish and the pinned SHAs stay valid after the flip.
Version bumps (all pinned by commit SHA, same convention as before):
- actions/checkout v4.3.1 -> v6.0.2
- actions/setup-go v5.5.0 -> v6.4.0
- actions/upload-artifact v4.6.2 -> v7.0.1
- actions/download-artifact v4.3.0 -> v8.0.1
- docker/login-action v3.4.0 -> v4.1.0
Each of the new versions declares `runs.using: node24`, verified by fetching action.yml at the corresponding tag. solsson/setup-contain is a composite action and does not have a node runtime concern. Input surfaces are unchanged for the way we call these actions (go-version-file, cache, name, path, pattern, if-no-files-found, registry/username/password). No behavior change expected.
Two backends ship together because they shared the runtime plumbing
(dynamic health, dynamic openapi, context-driven backend lifetimes)
and because the ystack migration is the first production user of
the in-cluster backend.
y-kustomize-in-cluster (new):
- Watches Kubernetes Secrets named y-kustomize.{group}.{name} via a
SharedInformer and serves each data key at /v1/{group}/{name}/{key}.
Matches the ystack y-kustomize convention exactly so consumers need
no changes.
- Content-Type is application/yaml (RFC 9512, per Q-S3), replacing
ystack's legacy application/x-yaml.
- Label selector defaults to yolean.se/module-part=y-kustomize
(ystack convention) and is overridable via inCluster.labelSelector.
- Namespace resolution: explicit config -> pod's service account
namespace file -> kubeconfig current-context namespace -> "default".
- openapi.yaml is re-rendered on every request so it adapts to the
current watch state; /health likewise reports live routes+ns+selector.
- client-go dep (k8s.io/client-go v0.35.4) is pulled in; the fake
clientset drives all unit tests for watch semantics.
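The namespace resolution chain above can be sketched with a plain fallback function. In a pod, the service account namespace lives at /var/run/secrets/kubernetes.io/serviceaccount/namespace; the function shape and parameterization here are illustrative, not pkg/serve's actual code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// resolveNamespace walks the fallback chain: explicit config wins, then
// the in-pod service account namespace file, then a caller-supplied
// kubeconfig current-context lookup, then "default". Sketch only.
func resolveNamespace(explicit, saNamespaceFile string, kubeconfigNS func() string) string {
	if explicit != "" {
		return explicit
	}
	if b, err := os.ReadFile(saNamespaceFile); err == nil {
		if ns := strings.TrimSpace(string(b)); ns != "" {
			return ns
		}
	}
	if ns := kubeconfigNS(); ns != "" {
		return ns
	}
	return "default"
}

func main() {
	none := func() string { return "" }
	fmt.Println(resolveNamespace("explicit-ns", "/nonexistent", none))
	fmt.Println(resolveNamespace("", "/nonexistent", func() string { return "from-kubeconfig" }))
	fmt.Println(resolveNamespace("", "/nonexistent", none))
}
```

Injecting the kubeconfig lookup as a function keeps the chain unit-testable without a real cluster, matching the fake-clientset testing approach mentioned above.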
static (was a stub, now implemented):
- Snapshots the configured dir at startup for the openapi spec.
- Path traversal is gated twice: URL path cleaning + an absolute-path
prefix check against the resolved dir.
- dirTrailingSlash "redirect" emits 302 to the trailing-slash form
(query string preserved); target still 404s -- no listing.
- yamlToJson transforms application/yaml responses to application/json
only when the override is on. Minification via sigs.k8s.io/yaml +
re-marshal; Content-Length and ETag are computed on the transformed
body; HEAD runs the transform too so headers agree. 500 on parse
failure. Off by default.
- The openapi spec advertises application/json for .yaml routes when
yamlToJson is enabled.
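The double-gated path traversal check can be sketched as follows; the function is illustrative, not the static backend's actual code. Note that cleaning the URL path already neutralizes `..` segments, so the prefix check is defense-in-depth for inputs that evade cleaning rather than the primary gate:

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
	"strings"
)

// safeJoin maps a request path into root. Gate 1 cleans the URL path in
// slash space (rooting it first so ".." cannot climb above "/"); gate 2
// rejects any joined result that escapes the resolved root. Sketch only.
func safeJoin(root, urlPath string) (string, bool) {
	cleaned := path.Clean("/" + urlPath) // gate 1: collapse ../ in URL space
	full := filepath.Join(root, filepath.FromSlash(cleaned))
	rootSlash := strings.TrimSuffix(root, string(filepath.Separator)) + string(filepath.Separator)
	if full != root && !strings.HasPrefix(full, rootSlash) { // gate 2: prefix check
		return "", false
	}
	return full, true
}

func main() {
	root := "/srv/static"
	for _, p := range []string{"app/config.yaml", "../../etc/passwd", "a/../b"} {
		full, ok := safeJoin(root, p)
		fmt.Println(p, "->", full, ok)
	}
}
```

A traversal attempt like `../../etc/passwd` cleans to `/etc/passwd` and lands harmlessly inside the root as `/srv/static/etc/passwd`, which then simply 404s.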
Runtime plumbing:
- buildServers now takes ctx so backends with background goroutines
(informer) can tie their lifetime to SIGTERM.
- health.HealthHandlerFunc and openapi.OpenAPIHandlerFunc invoke
their provider on every request; the existing HealthHandler now
snapshots its map once to keep backward compatibility.
- http.WriteAssetAs factors out the Content-Type-overriding path so
yamlToJson can reuse ETag/Cache-Control/If-None-Match logic.
E2E additions:
- testdata/serve-static/ -- a worked example with yamlToJson enabled
and dirTrailingSlash=redirect. TestServe_Static drives the binary
end-to-end.
- testdata/serve-ykustomize-incluster/ -- config + a Secret manifest
matching the ystack shape. TestServe_InCluster runs against the
shared kwok cluster from yconverge e2e tests: apply, serve, patch,
delete, assert the watch propagates at each step.
Tests: 0 golangci-lint issues, all go test -count=1 ./... green,
all go test -tags e2e ./e2e/ green including the new in-cluster
and static scenarios.
Squashed into a single commit f7fa70b
What this is
Bootstraps `y-cluster` as a single Go binary (CLI + library + container image), with the first end-user feature `y-cluster serve` ready for a `v0.2.0` pre-release.
Scope
- `y-cluster` binary (`y-cluster_vX.Y.Z_{linux,darwin}_{amd64,arm64}`)
- `ghcr.io/yolean/y-cluster:<sha>` / `:<vX.Y.Z>` multi-arch distroless image (linux/amd64 + linux/arm64, each with the native binary) on the `:nonroot` base
- `kubectl-yconverge` for kubectl plugin use
Subcommands
`y-cluster yconverge -k <base>` — idempotent Kubernetes convergence with CUE-declared dependency ordering and post-apply checks. Replaces the `y-cluster-converge-ystack` and `kubectl-yconverge` bash scripts in ystack. Dependencies are declared as CUE imports between convergeable bases; checks are aggregated across the kustomize tree so overlays inherit the checks of their bases.
`y-cluster provision/teardown/export/import` — QEMU-based local cluster lifecycle, with a VMDK appliance round-trip so a senior developer can hand a prepared cluster to a new teammate without re-running provision.
`y-cluster serve -c <dir> [-c <dir> ...]` — new, first-release feature. HTTP server for config assets. One port per config, no name-based vhosting. Default background daemon with per-user state dir, pidfile, single-instance rule; `--foreground` opts out. Graceful SIGTERM/SIGINT shutdown, drain in-flight, remove pidfile, exit zero. `serve ensure` is idempotent and blocks on per-port `/health` before returning. Initial backend is `y-kustomize-local`, which reads `y-kustomize-bases/{group}/{name}/{file}` trees and serves them at `/v1/{group}/{name}/{file}`, emulating the in-cluster y-kustomize service for local development. Multi-source merge; duplicate routes across sources are a startup error. Responses force revalidation via weak ETag + `Cache-Control: no-cache, must-revalidate`. Kustomization files using secretGenerator / configMapGenerator rename syntax (`key=path`) are rejected at startup because they would cause the local route to silently diverge from the in-cluster route.
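The multi-source merge rule (duplicate routes across sources are a startup error, rather than one source silently shadowing another) can be sketched as follows; the types and function name are illustrative:

```go
package main

import "fmt"

// mergeRoutes combines the /v1/{group}/{name}/{file} routes contributed
// by each configured source. The same route appearing in two sources
// aborts startup with an error naming both. Sketch, not pkg/serve code.
func mergeRoutes(sources map[string][]string) (map[string]string, error) {
	routeToSource := map[string]string{}
	for src, routes := range sources {
		for _, route := range routes {
			if prev, dup := routeToSource[route]; dup {
				return nil, fmt.Errorf("duplicate route %s in %s and %s", route, prev, src)
			}
			routeToSource[route] = src
		}
	}
	return routeToSource, nil
}

func main() {
	// Conflicting sources: startup error.
	_, err := mergeRoutes(map[string][]string{
		"source-a": {"/v1/demo/app/app.yaml"},
		"source-b": {"/v1/demo/app/app.yaml"},
	})
	fmt.Println(err != nil)

	// Disjoint sources: both routes served.
	routes, err := mergeRoutes(map[string][]string{
		"source-a": {"/v1/demo/app/app.yaml"},
		"source-b": {"/v1/demo/db/db.yaml"},
	})
	fmt.Println(len(routes), err)
}
```

Failing at scan time rather than at request time keeps the single-instance daemon's route table unambiguous for its whole lifetime.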
Packages (importable as a library)
All packages have sibling test files; unit coverage is > 90% for the serve package and the kustomize-traverse package. End-to-end tests live under `e2e/` behind the `e2e` build tag (kwok cluster + real binary).
Testing
- `go test ./...` — unit tests for every package plus the CLI layer
- `go test -tags e2e ./e2e/` — spins up kwok, drives convergence and check orchestration; also builds the `y-cluster` binary and runs `scripts/e2e-serve-against-binary.sh` against it
- `scripts/e2e-serve-against-binary.sh` — standalone bash e2e that exercises `serve ensure → GET → stop` against any `y-cluster` binary
- `.github/workflows/e2e-release.yaml` — fires on `release: published`, downloads the published binary on ubuntu-latest and macos-latest, verifies the sha256 from `checksums.txt`, and runs the same bash e2e against the shipped bytes
Release pipeline
One workflow (`.github/workflows/ci.yaml`) is the single source of truth for build, test, image, and release:
- `build` matrix produces every arch binary once per run; downstream jobs consume via artifacts so the image and the release archive ship the exact same bytes that CI tested
- `image` uses `solsson/setup-contain@v1` to install contain v0.9.0, then `contain build --push` with per-arch `pathPerPlatform` so every variant of the multi-arch image carries the right native binary
- `release-assets` (tag pushes only) publishes raw uncompressed binaries using goreleaser naming (`y-cluster_vX.Y.Z_<os>_<arch>`) alongside a `checksums.txt`
- image and release-assets jobs are guarded on `github.repository_owner == 'Yolean'` so they stay a no-op outside the public repo; the human owner mirrors from the internal automation repo when ready
Known gaps / follow-ups (post-merge)
- `type: static` backend for `serve` is schema-declared but returns "not implemented" at runtime (deferred to a later release)
- `y-kustomize-in-cluster` backend (Kubernetes informer against Secret objects) not yet implemented
- Two regressions called out in the Q&A doc are accepted here and scheduled for follow-up rather than blocking this PR: kubectl output is currently swallowed by the check runner, and `yconverge.Run` silently ignores traverse errors
- Hardcoded `yolean.se/ystack/module` prefix; the cleanup to read the module name from `cue.mod/module.cue` is small but not in scope for this PR