
Draft: GitLab Code Review Support #13

Closed
dennismeister93 wants to merge 11 commits into Kilo-Org:main from dennismeister93:feature/gitlab-code-review-support-new

Conversation

@dennismeister93 (Contributor)

Just a draft PR

@dennismeister93 dennismeister93 closed this by deleting the head repository Feb 4, 2026
jrf0110 pushed a commit that referenced this pull request Mar 10, 2026
…r landing (#962) (#13)

Primary fix: In completeReviewWithResult(), after closeBead() on the source bead,
explicitly call updateConvoyProgress() with the source bead ID. The polecat already
closed the source bead before gt_done, so closeBead is a no-op (guard in
updateBeadStatus short-circuits on same status). Calling updateConvoyProgress
directly ensures the convoy recounts after the MR bead transitions to 'closed',
allowing the source bead to pass the NOT EXISTS guard and count toward closedCount.
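
The interaction above can be sketched with toy in-memory stand-ins. The function names (completeReviewWithResult, closeBead, updateBeadStatus, updateConvoyProgress) come from the commit message; the store shape and status values are illustrative assumptions, not the real implementation.

```typescript
// Toy model of the primary fix. Assumed shapes; only the call ordering
// and the same-status guard are taken from the commit message.
type Bead = { id: string; status: "open" | "closed" };

const beads = new Map<string, Bead>();
let convoyRecounts = 0;

function updateBeadStatus(id: string, status: Bead["status"]): boolean {
  const bead = beads.get(id)!;
  if (bead.status === status) return false; // guard: same-status update short-circuits
  bead.status = status;
  return true;
}

function closeBead(id: string): void {
  // If the polecat already closed the bead, this is a no-op, and any
  // recount hanging off the status transition never fires.
  updateBeadStatus(id, "closed");
}

function updateConvoyProgress(sourceBeadId: string): void {
  // Recount closed beads for the convoy tracking sourceBeadId (stubbed).
  convoyRecounts++;
}

function completeReviewWithResult(sourceBeadId: string): void {
  closeBead(sourceBeadId);
  // Primary fix: recount explicitly, even when closeBead was a no-op.
  updateConvoyProgress(sourceBeadId);
}

// The source bead was already closed by the polecat before gt_done:
beads.set("bead-1", { id: "bead-1", status: "closed" });
completeReviewWithResult("bead-1");
```

With the explicit call, the convoy recounts exactly once even though the close itself was short-circuited.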

Secondary fix: Fix getConvoyForBead() to handle the case where the bead IS the
convoy itself. When processConvoyLandings() creates the final landing MR, it passes
convoyId as the source bead. The old lookup (find 'tracks' edge from bead) returns
null for convoy beads. Now also checks for convoy_metadata presence so the landing
MR receives the correct convoy context (merge mode, isIntermediateStep=false) and
the refinery sees the 'Final Landing' section in its system prompt.
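
The two lookup paths can be sketched as follows. The 'tracks' edge and the convoy_metadata presence check come from the commit message; the in-memory bead and edge shapes are assumptions for illustration.

```typescript
// Hedged sketch of the getConvoyForBead() fix: fall back to treating the
// bead as its own convoy when it carries convoy_metadata.
type ConvoyBead = { id: string; convoy_metadata?: { mergeMode: string } };
type Edge = { type: "tracks"; from: string; to: string };

function getConvoyForBead(
  beadId: string,
  beadsById: Map<string, ConvoyBead>,
  edges: Edge[],
): string | null {
  // Old lookup: follow the 'tracks' edge from the bead to its convoy.
  const edge = edges.find((e) => e.type === "tracks" && e.from === beadId);
  if (edge) return edge.to;
  // New case: the bead IS the convoy. processConvoyLandings() passes
  // convoyId as the source bead for the final landing MR.
  const bead = beadsById.get(beadId);
  if (bead?.convoy_metadata) return bead.id;
  return null;
}

const beadsById = new Map<string, ConvoyBead>([
  ["bead-7", { id: "bead-7" }],
  ["convoy-1", { id: "convoy-1", convoy_metadata: { mergeMode: "squash" } }],
]);
const edges: Edge[] = [{ type: "tracks", from: "bead-7", to: "convoy-1" }];
```

A regular bead resolves through its 'tracks' edge; the convoy bead now resolves to itself instead of null.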

Co-authored-by: Maple (gastown) <Maple@gastown.local>
jrf0110 pushed a commit that referenced this pull request Mar 10, 2026
…r landing (#962) (#13)

jrf0110 pushed a commit that referenced this pull request Mar 11, 2026
…r landing (#962) (#13)

jrf0110 added a commit that referenced this pull request Mar 11, 2026
* fix(convoy): call updateConvoyProgress after MR bead closes to trigger landing (#962) (#13)

Primary fix: In completeReviewWithResult(), after closeBead() on the source bead,
explicitly call updateConvoyProgress() with the source bead ID. The polecat already
closed the source bead before gt_done, so closeBead is a no-op (guard in
updateBeadStatus short-circuits on same status). Calling updateConvoyProgress
directly ensures the convoy recounts after the MR bead transitions to 'closed',
allowing the source bead to pass the NOT EXISTS guard and count toward closedCount.

Secondary fix: Fix getConvoyForBead() to handle the case where the bead IS the
convoy itself. When processConvoyLandings() creates the final landing MR, it passes
convoyId as the source bead. The old lookup (find 'tracks' edge from bead) returns
null for convoy beads. Now also checks for convoy_metadata presence so the landing
MR receives the correct convoy context (merge mode, isIntermediateStep=false) and
the refinery sees the 'Final Landing' section in its system prompt.

Co-authored-by: Maple (gastown) <Maple@gastown.local>

* fix(patrol): break triage agent feedback loop (#965) (#12)

- detectCrashLoops: exclude agents hooked to triage beads via NOT EXISTS
  subquery so triage failures don't create new crash-loop triage requests
- createTriageRequest: add global cap (MAX_OPEN_TRIAGE_REQUESTS=5) to
  prevent unbounded accumulation during feedback loops
- maybeDispatchTriageAgent: pass role='triage' to skip git clone in
  container; apply DISPATCH_COOLDOWN_MS on failure via last_activity_at
- agent-runner: handle role='triage' with createLightweightWorkspace
  (no git clone); refactor createMayorWorkspace to share the same helper
- Add 'triage' to AgentRole enum in both worker and container type files
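
The two guards can be sketched together. MAX_OPEN_TRIAGE_REQUESTS and the NOT EXISTS shape are from the commit message; the table and column names in the query are assumptions for illustration only.

```typescript
// Global cap from createTriageRequest: refuse new triage requests once
// the cap is hit, so a feedback loop cannot accumulate them unboundedly.
const MAX_OPEN_TRIAGE_REQUESTS = 5;

function canCreateTriageRequest(openTriageCount: number): boolean {
  return openTriageCount < MAX_OPEN_TRIAGE_REQUESTS;
}

// detectCrashLoops-style exclusion: skip agents already hooked to an open
// triage bead, so triage failures don't spawn further triage requests.
// Schema names here are hypothetical.
const crashLoopQuery = `
  SELECT a.id
  FROM agents a
  WHERE a.recent_crash_count >= 3
    AND NOT EXISTS (
      SELECT 1 FROM beads b
      WHERE b.hooked_agent_id = a.id
        AND b.type = 'triage'
        AND b.status = 'open'
    )`;
```

The cap check is deliberately a pure predicate: the caller counts open triage beads, and the decision stays trivially testable.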

Co-authored-by: Birch (gastown) <Birch@gastown.local>

* fix(gastown): replace per-agent JWTs with per-container HMAC secrets (#923)

Agent JWTs (8h expiry) caused 401s for persistent agents (Mayor)
running longer than 8 hours. Instead of adding token refresh complexity,
replace the auth model with HMAC-based container secrets that never
expire — they live as long as the container does. When the container
sleeps and wakes, a new secret is minted automatically.

Container secret design:
- Format: townId:nonce:hmac (HMAC-SHA256 signed with GASTOWN_JWT_SECRET)
- No expiry — lives as long as the container process
- Stateless verification — no DO lookup needed, just HMAC check
- Town-scoped — cross-town access prevented by HMAC input binding
- Agent identity via X-Gastown-Agent-Id/Rig-Id headers, trusted
  because the container secret proves the request origin
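
The scheme above can be sketched in a few lines. The townId:nonce:hmac format and HMAC-SHA256 keyed by GASTOWN_JWT_SECRET are from the commit message; the exact HMAC input string is an assumption.

```typescript
import { createHmac, randomBytes, timingSafeEqual } from "node:crypto";

// Hedged sketch of the container-secret scheme: mint on container boot,
// verify statelessly on each request (no DO lookup, just an HMAC check).
function mintContainerSecret(townId: string, key: string): string {
  const nonce = randomBytes(16).toString("hex");
  const mac = createHmac("sha256", key).update(`${townId}:${nonce}`).digest("hex");
  return `${townId}:${nonce}:${mac}`;
}

function verifyContainerSecret(secret: string, key: string): string | null {
  const parts = secret.split(":");
  if (parts.length !== 3) return null;
  const [townId, nonce, mac] = parts;
  const expected = createHmac("sha256", key).update(`${townId}:${nonce}`).digest("hex");
  const a = Buffer.from(mac, "hex");
  const b = Buffer.from(expected, "hex");
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  // Town-scoped: townId is bound into the HMAC input, so a secret minted
  // for one town cannot authenticate requests for another.
  return townId;
}

const secret = mintContainerSecret("town-42", "test-key");
```

Because verification only recomputes the HMAC, it stays stateless; no expiry field exists, so the secret lives exactly as long as the container process holds it.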

Backwards compatible:
- Auth middleware accepts both container secrets AND legacy JWTs
- Container code prefers GASTOWN_CONTAINER_SECRET, falls back to
  GASTOWN_SESSION_TOKEN
- Legacy JWT minting retained (marked deprecated) for rollout safety

Closes #923

* fix(gastown): send userId header with container secret auth

The mayor's gt_list_rigs tool requires a userId (to look up rigs via
GastownUserDO). With JWT auth, userId was embedded in the token payload.
With container secrets, it was missing — causing 401 'Missing userId
in token'.

Fix: inject GASTOWN_USER_ID as an env var in startAgentInContainer,
propagate it through buildAgentEnv, and send it as X-Gastown-User-Id
header alongside the container secret.

* refactor(gastown): replace HMAC secrets with per-container JWT

Replaces the HMAC-based container secret approach with a simpler
per-container JWT. The container JWT:

- Carries { townId, userId, scope: 'container' } — same JWT format
  the auth middleware already understands
- Has 8h expiry (same as legacy agent JWTs) but is proactively
  refreshed hourly by the TownDO alarm
- Is shared by all agents in the container (one token per town)
- Eliminates the X-Gastown-* identity headers — userId lives in the
  JWT, agentId/rigId come from route params

Removes container-secret.util.ts entirely. The auth middleware tries
container JWT verification first (scope: 'container'), falls back to
legacy agent JWT verification.

The TownDO alarm calls refreshContainerToken() once per hour,
which mints a fresh JWT and pushes it to the TownContainerDO via
setEnvVar('GASTOWN_CONTAINER_TOKEN', ...).
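
A minimal HS256 sketch of the container JWT described above, with the { townId, userId, scope: 'container' } claims and 8h expiry. The real code may use a JWT library; this hand-rolled version only illustrates the claims and the scope check.

```typescript
import { createHmac } from "node:crypto";

// base64url without padding, as used in compact JWT serialization.
const b64url = (buf: Buffer) =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

function mintContainerJWT(townId: string, userId: string, key: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const payload = b64url(Buffer.from(JSON.stringify({
    townId,
    userId,
    scope: "container",
    exp: Math.floor(Date.now() / 1000) + 8 * 3600, // 8h; refreshed hourly by the alarm
  })));
  const sig = b64url(createHmac("sha256", key).update(`${header}.${payload}`).digest());
  return `${header}.${payload}.${sig}`;
}

function verifyContainerJWT(token: string, key: string): any | null {
  const [header, payload, sig] = token.split(".");
  const expected = b64url(createHmac("sha256", key).update(`${header}.${payload}`).digest());
  if (sig !== expected) return null;
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString());
  // Container JWTs are tried first; anything without scope 'container'
  // falls through to the legacy per-agent JWT path.
  if (claims.scope !== "container") return null;
  if (claims.exp < Math.floor(Date.now() / 1000)) return null;
  return claims;
}
```

One token per town is enough here because agentId/rigId come from route params rather than the token, which is what lets the identity headers be dropped.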

* fix(gastown): address PR review comments

- Add auth guard to startMergeInContainer (missing null check for tokens)
- Add POST /refresh-token endpoint to container control server so the
  alarm-based refresh actually updates process.env on the running Bun
  process (setEnvVar only takes effect on next boot)
- Plugin clients read process.env.GASTOWN_CONTAINER_TOKEN on each request
  to pick up refreshed tokens without needing to restart
- dispatch.refreshContainerToken() pushes fresh JWT to both the
  ContainerDO (setEnvVar for next boot) and the running container
  (POST /refresh-token for current process)
- Clarify auth middleware comments re: intentional no-op checks for
  container JWTs (town-scoped, rig/agent identity from route params)

* fix(gastown): address second round of PR review comments

- broadcastEvent: read live token from process.env instead of cached
  ManagedAgent field, so event persistence uses refreshed tokens
- heartbeat: read process.env.GASTOWN_CONTAINER_TOKEN on each tick
  instead of the module-level cached token from startHeartbeat()
- completion-reporter: same pattern — prefer live process.env token
- Strip gastownContainerToken from /agents/start response to prevent
  leaking the town-wide bearer token to dashboard callers

* fix(gastown): address third round of PR review comments

- ensureContainerToken now pushes to running container via POST
  /refresh-token (not just setEnvVar), so existing agents pick up
  the fresh token immediately on every agent start — no gap until
  the next alarm-based refresh
- refreshContainerToken is now an alias for ensureContainerToken
  since both do the same thing (setEnvVar + POST /refresh-token)
- Move throttle timestamp update to after successful refresh so
  failed refreshes are retried on the next alarm tick instead of
  being throttled away for an hour

* fix(gastown): populate agentId from header for routes without :agentId param

Container JWTs don't carry agentId, and routes like /triage/resolve
and /mail don't have :agentId in the URL. This left agentId as ''
in the auth payload, breaking handleResolveTriage (which requires a
non-empty agentId) and weakening ownership checks in other handlers.

Fix: tryContainerJWTAuth falls back to X-Gastown-Agent-Id and
X-Gastown-Rig-Id headers when route params are absent. Both plugin
clients (GastownClient, MayorGastownClient) now send these headers
when using a container-scoped JWT.
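
The fallback order can be sketched as a small helper. The header names are from the commit message; the function shape is an assumed simplification of tryContainerJWTAuth.

```typescript
// Route params win when present; otherwise fall back to the identity
// headers sent by the plugin clients alongside the container JWT.
function resolveAgentIdentity(
  routeParams: { agentId?: string; rigId?: string },
  headers: Map<string, string>,
): { agentId: string; rigId: string } {
  return {
    agentId: routeParams.agentId ?? headers.get("x-gastown-agent-id") ?? "",
    rigId: routeParams.rigId ?? headers.get("x-gastown-rig-id") ?? "",
  };
}

const headers = new Map([
  ["x-gastown-agent-id", "agent-1"],
  ["x-gastown-rig-id", "rig-1"],
]);
```

Without the fallback, routes like /triage/resolve that lack an :agentId param would leave agentId as '', which is exactly the failure mode this commit fixes.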

* fix(gastown): send agent identity header in broadcastEvent

The /agent-events route doesn't have :agentId in the URL, so with a
container JWT the getEnforcedAgentId() ownership check became a no-op.
Add X-Gastown-Agent-Id/Rig-Id headers when using the container token
so the handler can still verify agent_id ownership.

* fix(gastown): propagate non-2xx refresh-token responses as failures

container.fetch() only throws on transport errors. A 4xx/5xx from
/refresh-token was silently swallowed, causing the alarm throttle
to advance even though the container never accepted the new token.

Now check resp.ok and throw on non-2xx so the error propagates to
refreshContainerToken() in Town.do.ts, which only advances
lastContainerTokenRefreshAt after success.
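
The check can be sketched as below. containerFetch is a hypothetical stand-in for container.fetch(); only the throw-on-non-2xx behavior is taken from the commit message.

```typescript
type MinimalResponse = { ok: boolean; status: number };

async function pushRefreshToken(
  containerFetch: (path: string, init: { method: string; body: string }) => Promise<MinimalResponse>,
  token: string,
): Promise<void> {
  const resp = await containerFetch("/refresh-token", {
    method: "POST",
    body: JSON.stringify({ token }),
  });
  // fetch-style APIs only reject on transport errors; a 4xx/5xx resolves
  // normally, so it must be converted into a failure explicitly, or the
  // caller's throttle advances even though the token was never accepted.
  if (!resp.ok) throw new Error(`refresh-token failed: ${resp.status}`);
}
```

The caller then only records a successful refresh timestamp when the promise resolves, which is what makes failed refreshes retryable on the next alarm tick.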

* chore: remove accidentally committed plans document

* fix(gastown): use townId fallback for empty userId in startMergeInContainer

owner_user_id is optional in TownConfigSchema, so the fallback was
minting a container JWT with userId: '' which broke resolveUserId()
in mayor tool handlers. Match the pattern used in refreshContainerToken
by falling back to townId.

---------

Co-authored-by: kilo-code-bot[bot] <240665456+kilo-code-bot[bot]@users.noreply.github.com>
Co-authored-by: Maple (gastown) <Maple@gastown.local>
Co-authored-by: Birch (gastown) <Birch@gastown.local>