feat: add cache control middleware and improve response handling #2565
Conversation
📝 Walkthrough

Adds route-level cache metadata and a cache config registry, a cache-control middleware and global compression, memoization on provider/deployment methods with providerMap optimizations, 10s HTTP client timeouts, and an indexer migration plus a batched backfill script.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant App as HonoApp
    participant Compress as CompressMiddleware
    participant CacheMW as CacheControlMiddleware
    participant Registry as CacheConfigRegistry
    participant Auth as AuthCheck
    participant Handler as RouteHandler
    Client->>App: GET /v1/...
    App->>Compress: run compress middleware
    Compress-->>App: pass-through
    App->>CacheMW: invoke cache-control middleware
    CacheMW->>Registry: getCacheConfig(req.routePath)
    Registry-->>CacheMW: CacheConfig | undefined
    CacheMW->>Auth: determine authentication (header / JWT)
    Auth-->>CacheMW: authenticated? true|false
    CacheMW->>Handler: call downstream handler
    Handler-->>CacheMW: Response (200)
    CacheMW-->>App: attach/modify Cache-Control header (public/private, max-age, swr)
    App-->>Client: Response (compressed, with Cache-Control)
```
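The flow above corresponds to a fairly standard Hono setup: compression registered globally, then the cache-control middleware ahead of the versioned routes. A minimal sketch of that wiring, assuming the middleware's import path and the example `/v1/providers` route (both are illustrative, not the PR's actual code):

```typescript
import { Hono } from "hono";
import { compress } from "hono/compress";
// Import path for the PR's new middleware is an assumption for this sketch.
import { cacheControlMiddleware } from "./middlewares/cacheControlMiddleware";

const app = new Hono();

// Order mirrors the diagram: compression first, then cache-control,
// then the route handlers.
app.use("*", compress());
app.use("/v1/*", cacheControlMiddleware);

// Illustrative route; the real API registers routes through its own helpers.
app.get("/v1/providers", c => c.json({ providers: [] }));

export default app;
```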
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Codecov Report
❌ Patch coverage is
❌ Your project status has failed because the head coverage (79.38%) is below the target coverage (80.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main    #2565      +/-   ##
==========================================
+ Coverage   50.78%   50.86%   +0.07%
==========================================
  Files        1069     1070       +1
  Lines       29735    29793      +58
  Branches     6565     6580      +15
==========================================
+ Hits        15102    15154      +52
- Misses      14221    14239      +18
+ Partials      412      400      -12
```
*This pull request uses carry forward flags.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@apps/api/src/middlewares/cacheControlMiddleware.ts`:
- Around line 27-44: The cacheControlMiddleware currently marks matched routes
as "public" and unconditionally overwrites Cache-Control, which can expose
authenticated responses; update cacheControlMiddleware to first check if a
Cache-Control header already exists on c.res and leave it untouched if present,
and when deciding directives for a matched config, detect whether the request is
authenticated (inspect the request/auth context used by your auth layer—e.g.,
presence of c.req.headers.authorization, c.req.user, or the same auth flag your
handlers rely on) and if authenticated, do not include "public" (use "private"
or "no-store" based on sensitivity) so user-specific responses are not cached
publicly; keep using CACHE_CONFIGS for maxAge and staleWhileRevalidate but guard
composition of directives accordingly and only call c.header("Cache-Control",
...) when you have a safe header to set.
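A minimal sketch of the guarded middleware described above, assuming Hono's middleware API; the `CacheConfig` shape, the inline `CACHE_CONFIGS` map, and the authorization-header check are stand-ins for the PR's registry and auth signal:

```typescript
import type { MiddlewareHandler } from "hono";

// Stand-in for the PR's cache config registry; the shape is an assumption.
interface CacheConfig {
  maxAge: number;
  staleWhileRevalidate?: number;
}

const CACHE_CONFIGS: Record<string, CacheConfig> = {
  "/v1/providers": { maxAge: 60, staleWhileRevalidate: 300 }
};

export const cacheControlMiddleware: MiddlewareHandler = async (c, next) => {
  await next();

  // Respect a Cache-Control header the handler already set.
  if (c.res.headers.get("Cache-Control")) return;

  const config = CACHE_CONFIGS[c.req.routePath];
  if (!config || c.res.status !== 200) return;

  // Adjust this check to whatever auth signal the handlers actually rely on
  // (Authorization header, JWT context, etc.).
  const isAuthenticated = Boolean(c.req.header("authorization"));
  const directives = [isAuthenticated ? "private" : "public", `max-age=${config.maxAge}`];
  if (config.staleWhileRevalidate) {
    directives.push(`stale-while-revalidate=${config.staleWhileRevalidate}`);
  }
  c.header("Cache-Control", directives.join(", "));
};
```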
In `@apps/api/src/provider/services/provider/provider.service.ts`:
- Around line 139-141: The Memoize cache is colliding because getProviderList({
trial }) accepts an object and the Memoize helper only hashes primitives; ensure
the cache key includes the trial boolean to avoid returning wrong results. Fix
by either changing the method signature to accept a primitive (e.g.,
getProviderList(trial = false): Promise<ProviderList[]>) so Memoize receives a
simple boolean, or update the Memoize usage to provide an explicit
cache-key/resolver that serializes the { trial } object (or returns `trial`),
keeping the call to providerRepository.getWithAttributesAndAuditors({ trial })
unchanged.
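A sketch of the first option (a primitive parameter), with the repository interface and the Memoize decorator declared only as stand-ins for the project's own helpers:

```typescript
// ProviderList, ProviderRepository, and Memoize are assumptions for illustration.
type ProviderList = { owner: string };

interface ProviderRepository {
  getWithAttributesAndAuditors(filter: { trial: boolean }): Promise<ProviderList[]>;
}

declare function Memoize(): MethodDecorator;

export class ProviderService {
  constructor(private readonly providerRepository: ProviderRepository) {}

  // A primitive parameter makes the memoization key unambiguous:
  // trial=true and trial=false no longer share a cache entry.
  @Memoize()
  async getProviderList(trial = false): Promise<ProviderList[]> {
    return this.providerRepository.getWithAttributesAndAuditors({ trial });
  }
}
```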
🧹 Nitpick comments (1)
packages/http-sdk/src/lease/lease-http.service.ts (1)
75-86: Consider a shared timeout constant. Inline timeouts are harder to tune and keep consistent across services. A shared constant (or an HttpClient default) makes future adjustments safer.
♻️ Example refactor
```diff
+const DEFAULT_HTTP_TIMEOUT_MS = 10000;
+
 export class LeaseHttpService {
   constructor(private readonly httpClient: HttpClient) {}

   public async list({ owner, dseq, state, pagination }: LeaseListParams): Promise<RestAkashLeaseListResponse> {
     return extractData(
       await this.httpClient.get<RestAkashLeaseListResponse>("/akash/market/v1beta5/leases/list", {
         params: {
           "filters.owner": owner,
           "filters.dseq": dseq,
           "filters.state": state,
           "pagination.limit": pagination?.limit,
           "pagination.key": pagination?.key
         },
-        timeout: 10000
+        timeout: DEFAULT_HTTP_TIMEOUT_MS
       })
     );
   }
```
…tes with cache settings
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/indexer/scripts/backfill-address-reference-height.sql`:
- Around line 8-43: The DO $$ anonymous block contains COMMIT (invalid inside
DO), so convert the anonymous block into a stored procedure (e.g., CREATE
PROCEDURE backfill_address_reference_height() LANGUAGE plpgsql AS $$ ...) and
move the loop and COMMIT into that procedure where transaction control is
allowed; keep the UPDATE logic that targets "addressReference" ar and joins
"transaction" t to set ar.height from t.height for NULLs in batches, retain the
GET DIAGNOSTICS rows_updated handling and RAISE NOTICE lines, then CREATE the
procedure and invoke it with CALL backfill_address_reference_height(); this
removes the invalid COMMIT-in-DO error while preserving the batching behavior
for addressReference.height.
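A sketch of that conversion; the batch size and the ar."transactionId" = t.id join column are assumptions here (the real script defines both), but the procedure/CALL structure is what allows COMMIT inside the loop:

```sql
CREATE PROCEDURE backfill_address_reference_height()
LANGUAGE plpgsql
AS $$
DECLARE
  batch_size   integer := 10000;  -- illustrative value
  rows_updated integer;
BEGIN
  LOOP
    UPDATE "addressReference" ar
    SET height = t.height
    FROM "transaction" t
    WHERE ar."transactionId" = t.id   -- join column is an assumption
      AND ar.id IN (
        SELECT ar2.id
        FROM "addressReference" ar2
        WHERE ar2.height IS NULL
        LIMIT batch_size
      );

    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    RAISE NOTICE 'Backfilled % rows', rows_updated;

    EXIT WHEN rows_updated = 0;
    COMMIT;  -- valid inside a procedure, unlike a DO block
  END LOOP;
END;
$$;

CALL backfill_address_reference_height();
```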
🧹 Nitpick comments (1)
apps/indexer/scripts/backfill-address-reference-height.sql (1)
23-27: Consider adding `ORDER BY` for deterministic batch selection. The subquery selecting batch IDs has no `ORDER BY`, so row selection order depends on physical storage. While functionally correct, adding an explicit order (e.g., by `id`) makes behavior predictable and debugging easier.
♻️ Suggested improvement
```diff
         SELECT ar2.id FROM "addressReference" ar2
         WHERE ar2.height IS NULL
+        ORDER BY ar2.id
         LIMIT batch_size
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/indexer/scripts/backfill-address-reference-height.sql`:
- Around line 20-43: The loop can spin forever if some addressReference rows
have no matching transaction; modify the UPDATE in the backfill loop (affecting
addressReference and transaction logic) to only target rows with an existing
join (e.g., use EXISTS or an INNER JOIN against "transaction" t in the WHERE so
orphan rows are not selected by the subquery), and add a safety guard by
introducing a max_iterations counter (e.g., max_iterations and batch_count) and
change the EXIT condition to EXIT WHEN rows_updated = 0 OR batch_count >=
max_iterations so the loop terminates even if orphans remain; reference symbols:
addressReference, transaction, rows_updated, batch_size, batch_count,
max_iterations.
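Building on the procedure sketched earlier, the two guards could look like this; max_iterations, the batch size, and the "transactionId" join column are illustrative values, not the script's actual ones:

```sql
CREATE OR REPLACE PROCEDURE backfill_address_reference_height()
LANGUAGE plpgsql
AS $$
DECLARE
  batch_size     integer := 10000;   -- illustrative value
  rows_updated   integer;
  batch_count    integer := 0;
  max_iterations integer := 10000;   -- illustrative safety cap
BEGIN
  LOOP
    UPDATE "addressReference" ar
    SET height = t.height
    FROM "transaction" t
    WHERE ar."transactionId" = t.id   -- join column is an assumption
      AND ar.id IN (
        SELECT ar2.id
        FROM "addressReference" ar2
        WHERE ar2.height IS NULL
          -- skip orphan rows that have no matching transaction
          AND EXISTS (
            SELECT 1 FROM "transaction" t2 WHERE t2.id = ar2."transactionId"
          )
        ORDER BY ar2.id
        LIMIT batch_size
      );

    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    batch_count := batch_count + 1;
    RAISE NOTICE 'Batch %: backfilled % rows', batch_count, rows_updated;

    -- terminate even if orphans remain or the table is larger than expected
    EXIT WHEN rows_updated = 0 OR batch_count >= max_iterations;
    COMMIT;
  END LOOP;
END;
$$;

CALL backfill_address_reference_height();
```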
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/api/src/core/lib/create-route/create-route.spec.ts`:
- Line 3: Change the root test suite to use the function's runtime name for
refactor safety: replace the hardcoded string "create-route" with
createRoute.name in the top-level describe call (the suite covering createRoute
tests) so the suite title updates automatically when the function is renamed.
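The change is a one-liner; a sketch assuming the spec imports createRoute from the sibling module and runs under Jest (the assertion is illustrative, not the suite's real tests):

```typescript
import { createRoute } from "./create-route";

// Using the runtime name keeps the suite title in sync if the function is renamed.
describe(createRoute.name, () => {
  it("is exported as a function", () => {
    expect(typeof createRoute).toBe("function");
  });
});
```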
Summary by CodeRabbit
- New Features
- Performance
- Reliability
- Chores / Data
- Tests