Conversation
📝 Walkthrough

This PR migrates the caching infrastructure from the memory-cache library to lru-cache, updating package dependencies, the MemoryCacheEngine implementation, and affected services to use LRU eviction policies with bounded memory limits. A minor health check initialization simplification is also included.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Codecov Report: ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main    #2762      +/-   ##
==========================================
- Coverage   53.64%   53.63%   -0.02%
==========================================
  Files        1019     1019
  Lines       23591    23578      -13
  Branches     5754     5752       -2
==========================================
- Hits        12655    12645      -10
+ Misses       9531     9528       -3
  Partials     1405     1405
```
*This pull request uses carry forward flags.*
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/src/caching/memoryCacheEngine.ts`:
- Around line 3-4: Replace the explicit any used in the LRUCache instantiation
with unknown and remove the eslint-disable comment: change the cache declaration
from LRUCache<string, any> to LRUCache<string, unknown> so the MemoryCacheEngine
wrapper’s generic get/set methods maintain type safety without suppressing the
`@typescript-eslint/no-explicit-any` rule (look for the cache variable and
MemoryCacheEngine usages to update).
- Around line 25-26: The TTL unit is mismatched: make storeInCache treat its
duration parameter as seconds (to match cacheResponse callers) and convert to
milliseconds before passing to lru-cache; inside storeInCache(key: string, data:
T, duration?: number) compute const ttl = duration ? duration * 1000 : undefined
and call cache.set(key, data, ttl ? { ttl } : undefined). Update any references
to storeInCache (and leave cacheResponse(seconds) calls unchanged) so callers
can continue passing seconds without breaking behavior.
```typescript
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const cache = new LRUCache<string, any>({ max: 500 });
```
🧩 Analysis chain

🏁 Scripts executed:

```shell
cd apps/api/src/caching && cat -n memoryCacheEngine.ts
cd apps/api/src/caching && rg "cache\.(set|get|del)" -A 2 -B 2
rg "lru-cache" --type ts --type json -i
cat package.json | grep -A 5 '"dependencies"'
```
Remove `any` from cache typing.

The `any` type at line 4 violates the coding guideline requiring proper TypeScript types. Since the MemoryCacheEngine wrapper is designed to handle values of any type through its generic methods, use `unknown` instead to maintain type safety while making the eslint-disable comment unnecessary.
Proposed fix:

```diff
-// eslint-disable-next-line @typescript-eslint/no-explicit-any
-const cache = new LRUCache<string, any>({ max: 500 });
+const cache = new LRUCache<string, unknown>({ max: 500 });
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
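As a rough sketch of the pattern this suggestion describes — a plain `Map` stands in for `LRUCache` here, since the typing pattern is identical — the wrapper keeps its generic API while nothing internal is `any`:

```typescript
// Stand-in store typed with `unknown` instead of `any`; the real code
// would be `new LRUCache<string, unknown>({ max: 500 })`.
const store = new Map<string, unknown>();

class MemoryCacheEngine {
  // Callers still work with their own T; the cast is confined to one place.
  storeInCache<T>(key: string, data: T): void {
    store.set(key, data);
  }

  getFromCache<T>(key: string): T | undefined {
    // `unknown` forces this explicit, localized cast instead of letting
    // `any` silently leak through the whole engine.
    return store.get(key) as T | undefined;
  }
}
```

With `unknown`, omitting the cast becomes a compile error, so the blanket eslint-disable is no longer needed.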
```diff
 storeInCache<T>(key: string, data: T, duration?: number) {
-  mcache.put(key, data, duration);
+  cache.set(key, data, duration ? { ttl: duration } : undefined);
```
🧩 Analysis chain

🏁 Scripts executed:

```shell
#!/bin/bash
# First, examine the memoryCacheEngine.ts file to understand the cache library
head -50 apps/api/src/caching/memoryCacheEngine.ts

# Find all call sites of storeInCache to see what durations are passed
rg -n --type=ts -C 3 "storeInCache\s*\("

# Search for cache library imports and initialization in the caching directory
rg -n "import.*cache|new.*Cache|require.*cache" apps/api/src/caching/ --type=ts

# Check package.json for lru-cache version
cat apps/api/package.json | grep -A 2 -B 2 "lru-cache"

# Search for any TODOs, comments, or code related to duration/ttl
rg -n "duration|ttl|millisecond|second" apps/api/src/caching/ --type=ts -C 2

# Check if there are any calls to storeInCache with a duration parameter
rg -n "storeInCache.*\d" apps/api/src --type=ts

# Check if cacheResponse actually passes the seconds parameter to storeInCache
cat -n apps/api/src/caching/helpers.ts | head -120 | tail -90

# Look at the full cacheResponse function to see if/where duration is passed to storeInCache
sed -n '40,110p' apps/api/src/caching/helpers.ts
```
🌐 Web query: lru-cache npm package — is ttl in milliseconds or seconds?

💡 Result: For the popular lru-cache npm package, ttl is in milliseconds (must be a positive integer if provided). The README/docs describe it as "Time to live in milliseconds," and related options like ttlResolution are also in ms. [1], [2]

So: `ttl: 1000` = 1 second; `ttl: 60000` = 60 seconds / 1 minute. [1], [2]
Fix TTL unit mismatch: duration parameter must be in milliseconds, not seconds.
The lru-cache library expects TTL in milliseconds, but the duration parameter name and context suggest it should be in seconds. Currently, no callers pass a duration value, so this latent bug hasn't manifested. However, the method signature invites incorrect usage. The cacheResponse function receives a seconds parameter that should be converted to milliseconds before being passed to storeInCache (lines 67, 99), or the parameter should be documented and renamed to clarify it expects milliseconds.
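A minimal sketch of the conversion the comment proposes — the `{ ttl }` option shape matches lru-cache's `set()`, but the helper name here is ours:

```typescript
// Hypothetical helper: cacheResponse callers pass seconds, while
// lru-cache's `ttl` option is in milliseconds.
function toTtlOptions(durationSeconds?: number): { ttl: number } | undefined {
  return durationSeconds ? { ttl: durationSeconds * 1000 } : undefined;
}

// storeInCache would then forward it:
//   cache.set(key, data, toTtlOptions(duration));
```

Passing no duration (or `0`) yields `undefined`, so such entries fall back to the cache-wide default rather than getting a zero TTL.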
Although this is not introducing radical changes, I think the cache engine should be reworked as an interface utilising various storages. E.g. in local dev LRUCache is totally fine; in prod we should use a distributed storage like redis. @stalniy wdyt?
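One possible shape for such an abstraction — the names are illustrative, not from the codebase, and the Redis variant is only mentioned, not implemented:

```typescript
// Illustrative cache-engine abstraction: one interface, multiple storages.
interface CacheEngine {
  get<T>(key: string): Promise<T | undefined>;
  set<T>(key: string, value: T, ttlMs?: number): Promise<void>;
  delete(key: string): Promise<void>;
}

// Local dev: in-process store (a Map stands in for LRUCache here).
class InMemoryCacheEngine implements CacheEngine {
  private store = new Map<string, unknown>();

  async get<T>(key: string): Promise<T | undefined> {
    return this.store.get(key) as T | undefined;
  }

  async set<T>(key: string, value: T): Promise<void> {
    this.store.set(key, value);
  }

  async delete(key: string): Promise<void> {
    this.store.delete(key);
  }
}

// Production could register a RedisCacheEngine implementing the same
// interface, selected via config at startup.
```

The async signatures are the key design choice: an in-memory store doesn't need them, but a distributed backend does, so callers stay backend-agnostic.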
```typescript
import { LRUCache } from "lru-cache";

// eslint-disable-next-line @typescript-eslint/no-explicit-any
const cache = new LRUCache<string, any>({ max: 500 });
```
Suggested change:

```typescript
export type CacheValue = NonNullable<unknown>;
const cache = new LRUCache<string, CacheValue>({ max: 500 });
// ...
storeInCache<T extends CacheValue>(key: string, data: T, duration?: number) {
```
```typescript
if (cachedBody !== undefined) {
  return cachedBody as T;
}
return false;
```
question: `false` is a valid value to store. Any idea why we return it as a fallback?
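A sketch of the alternative the question hints at: return `undefined` on a miss so that `false` (and other falsy values) remain cacheable — a plain `Map` stands in for the LRU store:

```typescript
const lruStore = new Map<string, unknown>();

// Returning `undefined` for a miss keeps `false`, `0`, and `""` usable as
// cached values. lru-cache itself treats setting `undefined` as a delete,
// so this sentinel can never collide with real stored data.
function getFromCache<T>(key: string): T | undefined {
  return lruStore.get(key) as T | undefined;
}
```

Callers then distinguish a cached `false` from a miss with an explicit `=== undefined` check.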
I actually like this one because it's minus 1 dependency. And this is the thing which I wanted to do by myself for a long time :) I think that we should make a discussion for this and consider other options like https://bentocache.dev/docs/introduction |
The healthz service was returning unhealthy immediately on the first failed check after startup (when isFailed was null), bypassing the failure-tolerance logic. This could cause liveness probe failures during pod startup, when DB connections aren't yet established.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
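The fix described above can be sketched like this — `HealthTracker` and `failureThreshold` are illustrative names, not the service's actual API:

```typescript
// Illustrative failure-tolerance tracker: early failures after startup
// stay within tolerance instead of immediately reporting unhealthy.
class HealthTracker {
  private consecutiveFailures = 0;

  constructor(private failureThreshold = 3) {}

  report(checkPassed: boolean): "healthy" | "unhealthy" {
    if (checkPassed) {
      this.consecutiveFailures = 0;
      return "healthy";
    }
    this.consecutiveFailures += 1;
    // Only flip to unhealthy once the threshold is reached, so a DB
    // connection that isn't established yet during pod startup doesn't
    // fail the liveness probe on the very first check.
    return this.consecutiveFailures >= this.failureThreshold ? "unhealthy" : "healthy";
  }
}
```

The point is that the "no result yet" state counts toward tolerance like any other failure rather than short-circuiting straight to unhealthy.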
Force-pushed f9d164b to 238392c
Why

The memory-cache library doesn't enforce any size limit, so the cache can grow without bound.

What

Replace it with lru-cache's LRUCache, which caps the number of entries and evicts the least recently used ones.
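To illustrate the bound, here is a toy LRU built on `Map` insertion order; the real PR relies on lru-cache's `max` option doing exactly this eviction:

```typescript
// Toy LRU with a hard entry cap, mimicking `new LRUCache({ max: N })`:
// once `max` entries exist, inserting a new key evicts the least
// recently used one, so memory stays bounded.
class TinyLru<V> {
  private map = new Map<string, V>();

  constructor(private max: number) {}

  get(key: string): V | undefined {
    const v = this.map.get(key);
    if (v !== undefined) {
      // Re-insert to mark the key as most recently used.
      this.map.delete(key);
      this.map.set(key, v);
    }
    return v;
  }

  set(key: string, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // Map iterates in insertion order; the first key is the LRU one.
      const oldest = this.map.keys().next().value as string;
      this.map.delete(oldest);
    }
  }

  get size(): number {
    return this.map.size;
  }
}
```

This is the property memory-cache lacked: however many distinct keys are written, the cache never holds more than `max` entries.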
Summary by CodeRabbit
Improvements
Chores